id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
2,747,566 | https://en.wikipedia.org/wiki/Gramicidin%20S | Gramicidin S or Gramicidin Soviet is an antibiotic that is effective against some gram-positive and gram-negative bacteria as well as some fungi.
It is a derivative of gramicidin, produced by the gram-positive bacterium Brevibacillus brevis. Gramicidin S is a cyclodecapeptide, constructed as two identical pentapeptides joined head to tail, formally written as cyclo(-Val-Orn-Leu-D-Phe-Pro-)2. That is to say, it forms a ring structure composed of five different amino acids, each used twice within the ring. Notably, it utilizes two amino acids uncommon in peptides: ornithine and the atypical D stereoisomer of phenylalanine. It is synthesized by gramicidin S synthetase.
Biosynthesis
The gramicidin S biosynthetic pathway consists of two nonribosomal peptide synthetases (NRPSs), gramicidin S synthetase I (GrsA) and gramicidin S synthetase II (GrsB), which together yield a cyclic decapeptide. Within the biosynthetic pathway there are a total of five modules that specifically recognize, activate, and condense the amino acids into gramicidin S. The starting module, GrsA, consists of three domains: an adenylation (A) domain, which incorporates the amino acid and activates it by adenylation using ATP; a thiolation (T) domain, or peptidyl carrier protein (PCP), in which the adenylated amino acid becomes covalently attached to the 4´-phosphopantetheine group loaded onto a conserved serine of the T domain; and an epimerization (E) domain, which epimerizes the L-amino acid to the D-amino acid. The starting module GrsA loads D-Phe onto the system.
The second enzyme, GrsB, contains four modules, each containing condensation (C), adenylation (A), and thiolation (T) domains, with a thioesterase (TE) domain at the end. The C domain forms a peptide bond between two amino acids, here D-Phe and L-Pro. L-Val, L-Orn, and L-Leu are incorporated sequentially by the next three modules of GrsB. After the whole assembly has run a second time, the TE domain joins the two pentapeptides head to tail, cyclizes them, and releases the final product.
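The module logic described above can be summarized in a short, purely illustrative Python sketch; the module and domain lists merely restate the text, and the data structures and function names are hypothetical, not a model of the real enzymology:

```python
# Schematic sketch of gramicidin S assembly by the GrsA/GrsB NRPS modules
# described in the text.  Illustrative only; domain tuples are labels, not chemistry.

GRSA = [("D-Phe", ("A", "T", "E"))]          # loading module; E domain yields D-Phe
GRSB = [("L-Pro", ("C", "A", "T")),
        ("L-Val", ("C", "A", "T")),
        ("L-Orn", ("C", "A", "T")),
        ("L-Leu", ("C", "A", "T", "TE"))]    # last module carries the thioesterase

def run_assembly_line():
    """Return the pentapeptide built by one pass through GrsA + GrsB."""
    chain = []
    for residue, _domains in GRSA + GRSB:
        chain.append(residue)                 # each condensation extends the chain by one residue
    return chain

# Two pentapeptides are made; the TE domain joins them head to tail and cyclizes.
penta1 = run_assembly_line()
penta2 = run_assembly_line()
gramicidin_s = "cyclo(" + "-".join(penta1 + penta2) + ")"
print(gramicidin_s)
# cyclo(D-Phe-L-Pro-L-Val-L-Orn-L-Leu-D-Phe-L-Pro-L-Val-L-Orn-L-Leu)
```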
History
Gramicidin S was discovered by Russian microbiologist Georgyi Frantsevitch Gause and his wife Maria Brazhnikova in 1942. Within the year Gramicidin S was being used in Soviet military hospitals to treat infection and eventually found usage at the front lines of combat by 1946. Gause was awarded the Stalin Prize for Medicine for his discovery in 1946. In 1944, Gramicidin S was sent by the Ministry of Health of the USSR to Great Britain via the International Red Cross in a collaborative effort to establish the exact structure. English chemist Richard Synge proved that the compound was an original antibiotic and a polypeptide using paper chromatography. He would later go on to receive the Nobel Prize for his work in chromatography. The crystal structure was finally established by Dorothy Hodgkin and Gerhard Schmidt; Margaret Thatcher worked for a term in 1947 with Gerhard Schmidt on the antibiotic Gramicidin S, as an undergraduate research project. The importance of Gramicidin S and antibiotic research in general was so great that Gause was not persecuted during the period of Lysenkoism in the USSR, while many of his colleagues were. Indeed, it was his need for developing new strains to mass-produce antibiotics that allowed politically sanctioned collaborations with geneticists like Joseph Rapoport and Alexander Malinovsky, who would both actively participate in the downfall of Lysenkoism.
Structure and pharmacological effect
Gramicidin S differs from other gramicidin types in that it is a cationic cyclic decapeptide with the structure of an anti-parallel beta-sheet. The gramicidin S molecule is amphiphilic, combining hydrophobic side chains (D-Phe, Val, Leu) with the charged amino acid L-Orn. It exhibits strong antibiotic activity towards Gram-negative and Gram-positive bacteria and even several pathogenic fungi. The mode of action is not entirely agreed upon, but it is generally accepted to involve disruption of the lipid membrane and enhancement of the permeability of the bacterial cytoplasmic membrane. Because it is hemolytic even at low concentrations, gramicidin S is at present used only in topical applications. Additionally, gramicidin S has been employed as a spermicide and as a therapeutic for genital ulcers caused by sexually transmitted disease.
References
Antibiotics
Cyclic peptides
Soviet inventions
Health in the Soviet Union | Gramicidin S | [
"Biology"
] | 1,037 | [
"Antibiotics",
"Biocides",
"Biotechnology products"
] |
2,748,227 | https://en.wikipedia.org/wiki/Synergetics%20%28Haken%29 | Synergetics is an interdisciplinary science explaining the formation and self-organization of patterns and structures in open systems far from thermodynamic equilibrium. It was founded by Hermann Haken, inspired by laser theory. Haken's interpretation of the laser principles as self-organization of non-equilibrium systems paved the way at the end of the 1960s for the development of synergetics. One of his successful popular books is Erfolgsgeheimnisse der Natur, translated into English as The Science of Structure: Synergetics.
Self-organization requires a 'macroscopic' system, consisting of many nonlinearly interacting subsystems. Depending on the external control parameters (environment, energy fluxes) self-organization takes place.
Order-parameter concept
Essential in synergetics is the order-parameter concept which was originally introduced in the Ginzburg–Landau theory in order to describe phase transitions in thermodynamics. The order parameter concept is generalized by Haken to the "enslaving-principle" saying that the dynamics of fast-relaxing (stable) modes is completely determined by the 'slow' dynamics of, as a rule, only a few 'order-parameters' (unstable modes). The order parameters can be interpreted as the amplitudes of the unstable modes determining the macroscopic pattern.
As a consequence, self-organization means an enormous reduction of degrees of freedom (entropy) of the system which macroscopically reveals an increase of 'order' (pattern-formation). This far-reaching macroscopic order is independent of the details of the microscopic interactions of the subsystems. This supposedly explains the self-organization of patterns in so many different systems in physics, chemistry and biology.
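A minimal numerical sketch of the enslaving principle, using an assumed textbook-style toy model (not taken from Haken's books): a fast, stable mode s relaxes so quickly that it becomes a function of the slow order parameter u, so the two-variable system collapses onto a one-variable order-parameter equation:

```python
# Toy demonstration of adiabatic elimination ("enslaving"): assumed model, illustrative only.
eps, gamma = 0.1, 10.0        # slow growth rate vs. fast relaxation rate (gamma >> eps)
dt, steps = 1e-3, 200_000

u, s = 0.01, 0.0              # small initial fluctuation of the unstable (order-parameter) mode
u_red = 0.01                  # reduced, order-parameter-only description

for _ in range(steps):
    du = eps * u - u * s              # unstable slow mode, saturated by s
    ds = -gamma * s + u**2            # fast stable mode, enslaved by u
    u, s = u + dt * du, s + dt * ds
    # adiabatic elimination: s ~= u^2 / gamma  =>  du/dt ~= eps*u - u^3/gamma
    u_red += dt * (eps * u_red - u_red**3 / gamma)

print(f"full model:    u = {u:.4f}, s = {s:.4f}")
print(f"reduced model: u = {u_red:.4f}  (enslaved mode u^2/gamma = {u**2 / gamma:.4f})")
```

Both descriptions converge to the same macroscopic state, illustrating how the many "stable" degrees of freedom are determined by a few order parameters.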
See also
Effective field theory
Josiah Willard Gibbs
Phase rule
Free energy principle
Fokker–Planck equation
Ginzburg–Landau theory
Buckminster Fuller
Alexander Bogdanov
Abiogenesis
References
A. S. Mikhailov: Foundations of Synergetics I. Distributed Active Systems (2nd rev. ed. 1994). Springer Verlag, Berlin, 1990.
A. S. Mikhailov, A. Yu. Loskutov: Foundations of Synergetics II. Chaos and Noise (2nd revised and enlarged edition; first edition 1991). Springer Series in Synergetics, Springer, Berlin, Heidelberg, 1996.
Norbert Niemeier: Organisatorischer Wandel aus der Sicht der Synergetik [Organizational change from the perspective of synergetics]. Deutscher Universitätsverlag, Wiesbaden, 2000.
D. S. Chernavskii: Синергетика и информация: Динамическая теория информации [Synergetics and Information: Dynamical Theory of Information], 2nd ed. Moscow: URSS.
External links
Homepage of the former Institute for Theoretical Physics and Synergetics (IFTPUS)
Holism
Cybernetics
Thermodynamics | Synergetics (Haken) | [
"Physics",
"Chemistry",
"Mathematics"
] | 631 | [
"Thermodynamics",
"Dynamical systems"
] |
2,749,048 | https://en.wikipedia.org/wiki/Quasi-invariant%20measure | In mathematics, a quasi-invariant measure μ with respect to a transformation T, from a measure space X to itself, is a measure which, roughly speaking, is multiplied by a numerical function of T. An important class of examples occurs when X is a smooth manifold M, T is a diffeomorphism of M, and μ is any measure that locally is a measure with base the Lebesgue measure on Euclidean space. Then the effect of T on μ is locally expressible as multiplication by the Jacobian determinant of the derivative (pushforward) of T.
To express this idea more formally in measure theory terms, the idea is that the Radon–Nikodym derivative of the transformed measure μ′ with respect to μ should exist everywhere; or that the two measures should be equivalent (i.e. mutually absolutely continuous), so that μ′(A) = 0 if and only if μ(A) = 0 for every measurable set A.
That means, in other words, that T preserves the concept of a set of measure zero. Considering the whole equivalence class of measures ν, equivalent to μ, it is also the same to say that T preserves the class as a whole, mapping any such measure to another such. Therefore, the concept of quasi-invariant measure is the same as invariant measure class.
In general, the 'freedom' of moving within a measure class by multiplication gives rise to cocycles, when transformations are composed.
As an example, Gaussian measure on Euclidean space Rn is not invariant under translation (like Lebesgue measure is), but is quasi-invariant under all translations.
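For the standard Gaussian measure $\gamma$ on $\mathbf{R}^n$ this quasi-invariance can be made explicit; the following Cameron–Martin-type density is a standard computation, added here only as an illustration:

$$\frac{d(T_h)_{*}\gamma}{d\gamma}(x) \;=\; \exp\!\Big(\langle h, x\rangle - \tfrac{1}{2}\|h\|^2\Big), \qquad T_h(x) = x + h,$$

so the translated measure $(T_h)_{*}\gamma$ is equivalent to $\gamma$, with an everywhere positive and finite Radon–Nikodym derivative, even though it is not equal to $\gamma$.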
It can be shown that if E is a separable Banach space and μ is a locally finite Borel measure on E that is quasi-invariant under all translations by elements of E, then either dim(E) < +∞ or μ is the trivial measure μ ≡ 0.
See also
References
Measures (measure theory)
Dynamical systems | Quasi-invariant measure | [
"Physics",
"Mathematics"
] | 378 | [
"Physical quantities",
"Measures (measure theory)",
"Quantity",
"Size",
"Mechanics",
"Dynamical systems"
] |
2,749,404 | https://en.wikipedia.org/wiki/W%C3%B6hler%20synthesis | The Wöhler synthesis is the conversion of ammonium cyanate into urea. This chemical reaction was described in 1828 by Friedrich Wöhler. It is often cited as the starting point of modern organic chemistry. Although the Wöhler reaction concerns the conversion of ammonium cyanate, this salt appears only as an (unstable) intermediate. Wöhler demonstrated the reaction in his original publication with different sets of reactants: a combination of cyanic acid and ammonia, a combination of silver cyanate and ammonium chloride, a combination of lead cyanate and ammonia, and finally a combination of mercury cyanate and cyanatic ammonia (which is again cyanic acid with ammonia).
Modified versions of the Wöhler synthesis
The reaction can be demonstrated by starting with solutions of potassium cyanate and ammonium chloride which are mixed, heated and cooled again. An additional proof of the chemical transformation is obtained by adding a solution of oxalic acid which forms urea oxalate as a white precipitate.
Alternatively the reaction can be carried out with lead cyanate and ammonia. The actual reaction taking place is a double displacement reaction to form ammonium cyanate, as sketched below.
Ammonium cyanate decomposes to ammonia and cyanic acid, which in turn react to produce urea.
Complexation with oxalic acid drives this chemical equilibrium to completion.
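A minimal sketch of the two routes described above, written as conventional equations; the lead-containing by-product (lead hydroxide) is an assumption inferred from the double displacement described, not stated in the text:

$$\mathrm{KOCN + NH_4Cl \longrightarrow NH_4OCN + KCl}$$
$$\mathrm{Pb(OCN)_2 + 2\,NH_3 + 2\,H_2O \longrightarrow 2\,NH_4OCN + Pb(OH)_2}$$
$$\mathrm{NH_4OCN \;\rightleftharpoons\; NH_3 + HNCO \longrightarrow CO(NH_2)_2\ \text{(urea)}}$$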
Debate
It is disputed whether Wöhler's synthesis sparked the downfall of the theory of vitalism, which held that organic matter possessed a certain vital force common to all living things. Prior to the Wöhler synthesis, the work of John Dalton and Jöns Jacob Berzelius had already convinced chemists that organic and inorganic matter obey the same chemical laws. It was not until 1845, when Kolbe reported another inorganic-to-organic conversion (of carbon disulfide to acetic acid), that vitalism began to lose support. Wöhler also did not, as some textbooks have claimed, act as a "crusader" against vitalism. A 2000 survey by historian Peter Ramberg found that 90% of chemistry textbooks repeat some version of the Wöhler myth.
References
Organic reactions
Inorganic reactions
Name reactions | Wöhler synthesis | [
"Chemistry"
] | 438 | [
"Name reactions",
"Inorganic reactions",
"Organic reactions"
] |
2,750,200 | https://en.wikipedia.org/wiki/Hooshang%20Heshmat | Hooshang Heshmat was the CEO (1994–2022) and co-founder of Mohawk Innovative Technology. The company researches and develops green technology for integration into turbomachinery. Heshmat is a fellow of both the American Society of Mechanical Engineers (ASME) and the Society of Tribologists and Lubrication Engineers. In 2007, Heshmat received the Mayo D. Hersey Award, in recognition of his "contributions over a substantial period of time to the advancement of the science and engineering of tribology". In 2008, Heshmat received the International Award from the Society of Tribologists and Lubrication Engineers.
Published works
References
American mechanical engineers
Living people
Year of birth missing (living people)
American technology chief executives
Tribologists | Hooshang Heshmat | [
"Materials_science"
] | 163 | [
"Tribology",
"Tribologists"
] |
2,751,642 | https://en.wikipedia.org/wiki/Dielectric%20complex%20reluctance | Dielectric complex reluctance is a scalar measurement of a passive dielectric circuit (or element within that circuit), dependent on sinusoidal voltage and sinusoidal electric induction flux, and is determined by the ratio of their complex effective amplitudes. The unit of dielectric complex reluctance is the inverse farad (see daraf) [Ref. 1-3].
Dielectric complex reluctance is a phasor, represented here as $Z_\epsilon$:

$$Z_\epsilon = \frac{\dot U}{\dot Q} = \frac{U e^{j\varphi_u}}{Q e^{j\varphi_q}} = z_\epsilon e^{j\varphi}$$

where:
$\dot U$ and $U$ represent the voltage (complex effective amplitude and effective value)
$\dot Q$ and $Q$ represent the electric induction flux (complex effective amplitude and effective value)
$z_\epsilon = U/Q$, lowercase z epsilon, is the modulus of the dielectric complex reluctance
The "lossless" dielectric reluctance, lowercase z epsilon, is equal to the absolute value (modulus) of the dielectric complex reluctance. The argument distinguishing the "lossy" dielectric complex reluctance from the "lossless" dielectric reluctance is the exponential factor

$$e^{j\varphi} = e^{j(\varphi_u - \varphi_q)}$$

where:
$j$ is the imaginary unit
$\varphi_u$ is the phase of the voltage
$\varphi_q$ is the phase of the electric induction flux
$\varphi = \varphi_u - \varphi_q$ is the phase difference
The "lossy" dielectric complex reluctance represents a dielectric circuit element's resistance to not only electric induction flux but also to changes in electric induction flux. When applied to harmonic regimes, this formality is similar to Ohm's Law in ideal AC circuits. In dielectric circuits, a dielectric material has a dielectric complex reluctance equal to:

$$Z_\epsilon = \frac{l}{\dot\epsilon S}$$

where:
$l$ is the length of the circuit element
$S$ is the cross-section of the circuit element
$\dot\epsilon$ is the complex dielectric permeability (absolute complex permittivity)
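A small numeric sketch of the phasor arithmetic above (illustrative values; the variable names are ad hoc, and the ratio simply follows the definition $Z_\epsilon = \dot U / \dot Q$):

```python
# Computing a dielectric complex reluctance from assumed voltage and flux phasors.
import cmath

U, phi_u = 10.0, 0.30        # effective voltage amplitude [V] and its phase [rad]
Q, phi_q = 2.0e-6, 0.10      # effective electric induction flux amplitude [C] and its phase [rad]

U_phasor = U * cmath.exp(1j * phi_u)
Q_phasor = Q * cmath.exp(1j * phi_q)

Z_eps = U_phasor / Q_phasor              # "lossy" dielectric complex reluctance [1/F]
z_eps = abs(Z_eps)                       # "lossless" dielectric reluctance (modulus)
phi = cmath.phase(Z_eps)                 # phase difference phi_u - phi_q

print(f"Z_eps = {Z_eps:.3e} 1/F, |Z_eps| = {z_eps:.3e} 1/F, phase = {phi:.3f} rad")
```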
See also
Dielectric
Dielectric reluctance — Special definition of dielectric reluctance that does not account for energy loss
References
Hippel A. R. Dielectrics and Waves. – N.Y.: JOHN WILEY, 1954.
Popov V. P. The Principles of Theory of Circuits. – M.: Higher School, 1985, 496 p. (In Russian).
Küpfmüller K. Einführung in die theoretische Elektrotechnik, Springer-Verlag, 1959.
Electric and magnetic fields in matter | Dielectric complex reluctance | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 448 | [
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
2,753,131 | https://en.wikipedia.org/wiki/Sulfur-reducing%20bacteria | Sulfur-reducing bacteria are microorganisms able to reduce elemental sulfur (S0) to hydrogen sulfide (H2S). These microbes use inorganic sulfur compounds as electron acceptors to sustain several activities such as respiration, conserving energy and growth, in the absence of oxygen. The final product of these processes, sulfide, has a considerable influence on the chemistry of the environment and, in addition, is used as an electron donor for a large variety of microbial metabolisms. Several types of bacteria and many non-methanogenic archaea can reduce sulfur. Microbial sulfur reduction was already shown in early studies, which highlighted the first proof of S0 reduction in a vibrioid bacterium from mud, with sulfur as electron acceptor and hydrogen as electron donor. The first pure cultured species of sulfur-reducing bacteria, Desulfuromonas acetoxidans, was discovered in 1976 and described by Norbert Pfennig and Hanno Biebel as an anaerobic sulfur-reducing and acetate-oxidizing bacterium, unable to reduce sulfate. Only a few taxa are true sulfur-reducing bacteria, using sulfur reduction as the only or main catabolic reaction. Normally, they couple this reaction with the oxidation of acetate, succinate or other organic compounds. In general, sulfate-reducing bacteria are able to use both sulfate and elemental sulfur as electron acceptors. Thanks to its abundance and thermodynamic stability, sulfate is the most studied electron acceptor for anaerobic respiration that involves sulfur compounds. Elemental sulfur, however, is very abundant and important, especially in deep-sea hydrothermal vents, hot springs and other extreme environments, making its isolation more difficult. Some bacteria, such as Proteus, Campylobacter, Pseudomonas and Salmonella, have the ability to reduce sulfur, but can also use oxygen and other terminal electron acceptors.
Taxonomy
Sulfur reducers are known to cover about 74 genera within the Bacteria domain. Several types of sulfur-reducing bacteria have been discovered in different habitats like deep and shallow sea hydrothermal vents, freshwater, volcanic acidic hot springs and others. Many sulfur reducers belong to the phylum Thermodesulfobacteriota (Desulfuromonas, Pelobacter, Desulfurella, Geobacter), the class Gammaproteobacteria, and the phylum Campylobacterota according to GTDB classification. Other phyla that present sulfur-reducing bacteria are: Bacillota (Desulfitobacterium, Ammonifex, and Carboxydothermus), Aquificota (Desulfurobacterium and Aquifex), Synergistota (Dethiosulfovibrio), Deferribacterota (Geovibrio), Thermodesulfobacteriota, Spirochaetota, and Chrysiogenota.
Metabolism
Sulfur reduction metabolism is an ancient process, found in the deep branches of the phylogenetic tree. Sulfur reduction uses elemental sulfur (S0) and generates hydrogen sulfide (H2S) as the main end product. This metabolism is largely present in extreme environments where, especially in recent years, many microorganisms have been isolated, bringing new and important data on the subject.
Many sulfur-reducing bacteria are able to produce ATP through lithotrophic sulfur respiration, using zero-valence sulfur as electron acceptor, for instance the genera Wolinella, Ammonifex, Desulfuromonas and Desulfurobacterium. On the other hand, there are obligate fermenters able to reduce elemental sulfur, for example Thermotoga, Thermosipho and Fervidobacterium. Among these fermenters there are species, such as Thermotoga maritima, that are not dependent on sulfur reduction and utilize it as a supplementary electron sink. Some researchers propose the hypothesis that polysulfide could be an intermediate of sulfur respiration, because elemental sulfur is converted into polysulfide in sulfide solutions.
Pseudomonadota
The Pseudomonadota are a major phylum of gram-negative bacteria. There is a wide range of metabolisms. Most members are facultative or obligately anaerobic, chemoautotrophs and heterotrophs. Many are able to move using flagella, others are nonmotile. They are currently divided into several classes, referred to by Greek letters, based on rRNA sequences: Alphaproteobacteria, Betaproteobacteria, Gammaproteobacteria, Zetaproteobacteria, etc.
Class Gammaproteobacteria
The Gammaproteobacteria class includes several medically, ecologically and scientifically important groups of bacteria. They are major organisms in diverse marine ecosystems and even extreme environments. This class encompasses a huge variety of taxonomic and metabolic diversity, including aerobic and anaerobic species, chemolithoautotrophic, chemoorganotrophic and phototrophic species, and also free-living species, biofilm formers, commensals and symbionts.
Acidithiobacillus spp.
Acidithiobacillus are chemolithoautotrophic, Gram-negative, rod-shaped bacteria that use energy from the oxidation of iron- and sulfur-containing minerals for growth. They are able to live at extremely low pH (pH 1–2) and fix both carbon and nitrogen from the atmosphere. They solubilize copper and other metals from rocks and play an important role in nutrient and metal biogeochemical cycling in acid environments. Acidithiobacillus ferrooxidans is abundant in natural environments associated with pyritic ore bodies, coal deposits, and their acidified drainages. It obtains energy by the oxidation of reduced sulfur compounds and can also reduce ferric ion and elemental sulfur, thus promoting the recycling of iron and sulfur compounds under anaerobic conditions. It can also fix CO2 and nitrogen and be a primary producer of carbon and nitrogen in acidic environments.
Shewanella spp.
Shewanella are Gram-negative, motile bacilli. The first species described, in 1931, was Shewanella putrefaciens, a non-fermentative bacillus with a single polar flagellum that grows well on conventional solid media. This species is pathogenic for humans, although infections are rare and are reported especially in geographic areas characterized by warm climates.
Pseudomonas spp.
Pseudomonas are Gram-negative chemoorganotrophic Gammaproteobacteria, straight or slightly curved rod-shaped. They are able to move thanks to one or several polar flagella; rarely nonmotile. Aerobic, having a strictly respiratory type of metabolism with oxygen as the terminal electron acceptor; in some cases, allowing growth anaerobically, nitrate can be used as an alternate electron acceptor. Almost all the species fail to grow under acid conditions (pH 4.5 or lower). Pseudomonas are widely distributed in nature. Some species are pathogenic for humans, animals, or plants. Type species: Pseudomonas mendocina.
Phylum Thermodesulfobacteriota
The Thermodesulfobacteriota phylum comprises several morphologically different groups of Gram-negative, non-spore-forming bacteria that exhibit either anaerobic or aerobic growth. They are ubiquitous in marine sediments and the phylum contains most of the known sulfur-reducing bacteria (e.g. Desulfuromonas spp.). The aerobic representatives are able to digest other bacteria, and several of these members are important constituents of the microflora in soil and waters.
Desulfuromusa spp.
The Desulfuromusa genus includes obligately anaerobic bacteria that use sulfur as an electron acceptor and short-chain fatty acids, dicarboxylic acids, and amino acids as electron donors, which are oxidized completely to CO2. They are gram-negative, complete-oxidizer bacteria; their cells are motile and slightly curved or rod-shaped. Three sulfur-reducing species are known: Desulfuromusa kysingii, Desulfuromusa bakii and Desulfuromusa succinoxidans.
Desulfurella spp.
Desulfurella are short rod-shaped, gram-negative cells, motile thanks to a single polar flagellum or nonmotile, non-sporeforming. Obligately anaerobic, moderate thermophilic, they generally occur in warm sediments and in thermally heated cyanobacterial or bacterial communities that are rich in organic compounds and elemental sulfur. Type species: Desulfurella acetivorans.
Hippea spp.
Hippea species are moderate thermophiles neutrophiles to moderate acidophiles, obligate anaerobes sulfur-reducing bacteria with gram-negative rod-shaped cells. They are able to grow lithotrophically with hydrogen and sulfur, and oxidize completely volatile fatty acids, fatty acids and alcohols. They inhabit submarine hot vents. The type species is Hippea maritima.
Desulfuromonas spp.
Desulfuromonas species are gram-negative, mesophilic, obligately anaerobic and complete oxidizers sulfur-reducing bacteria. They are able to grow on acetate as sole organic substrate and reduce elemental sulfur or polysulfide to sulfide. Currently known species of the genus Desulfuromonas are Desulfuromonas acetoxidans, Desulfuromonas acetexigens, the marine organism Desulfuromonas palmitates and Desulfuromonas thiophila.
Desulfuromonas thiophila is an obligately anaerobic bacterium that uses sulfur as its only electron acceptor. It multiplies by binary fission and its cells are motile thanks to polar flagella. It lives in the anoxic mud of freshwater sulfur springs, at temperatures from 26 to 30 °C and pH 6.9 to 7.9.
Geobacter spp.
Geobacter species have a respiratory metabolism with Fe(III) serving as the common terminal electron acceptor in all species.
Geobacter sulfurreducens was isolated from a drainage ditch in Norman, Oklahoma. It is rod-shaped, gram-negative, non-motile and non-spore-forming. The optimum temperature range is 30 to 35 °C. Metabolically, it is a strictly anaerobic chemoorganotroph that oxidizes acetate with Fe(III), S, Co(III), fumarate, or malate as the electron acceptor. Hydrogen is also used as an electron donor for Fe(III) reduction, whereas other carboxylic acids, sugars, alcohols, amino acids, yeast extract, phenol, and benzoate are not. C-type cytochromes were found in the cells.
Pelobacter spp.
Pelobacter is a unique group of fermentative microorganisms belonging to the phylum Thermodesulfobacteriota. They fermentatively consume alcohols such as 2,3-butanediol, acetoin and ethanol, but not sugars, with acetate plus ethanol and/or hydrogen as the end products.
Pelobacter carbinolicus, isolated from anoxic mud, belongs to the family Desulfuromonadaceae. This bacterial species grows by fermentation, syntrophic hydrogen/formate transfer, or electron transfer to sulfur from short-chain alcohols, hydrogen or formate, but does not oxidize acetate. There is no recent information about sugar fermentation or autotrophic growth. Sequencing analysis of the genome demonstrated the expression of c-type cytochromes and the utilization of Fe(III) as a terminal acceptor, with the indirect reduction of elemental sulfur acting as a shuttle for electron transfer to Fe(III). A recent study suggested that this electron transfer involves two periplasmic thioredoxins (Pcar_0426, Pcar_0427), an outer membrane protein (Pcar_0428), and a cytoplasmic oxidoreductase (Pcar_0429) encoded by the most highly upregulated genes.
Thermodesulfobacteriota
Thermodesulfobacteriota are Gram-negative, rod-shaped cells that occur singly, in pairs, or in chains in young cultures. They do not form spores. Usually nonmotile, but motility might be observed in some species. Thermophilic, strictly anaerobic chemoheterotrophs.
Phylum Campylobacterota
The phylum Campylobacterota presents many known sulfur-oxidizing species that have recently been recognized as also able to reduce elemental sulfur, in some cases even preferring this pathway, coupled with hydrogen oxidation. Below is a list of the species able to reduce elemental sulfur. The mechanism used to reduce sulfur is still unclear for some of these species.
Wolinella
Wolinella is a sulfur-reducing genus of bacteria and an incomplete oxidizer that cannot use acetate as an electron donor. One species is publicly known, Wolinella succinogenes.
Wolinella succinogenes is a well-known non-vent sulfur-reducing bacterium, found in the cattle rumen, that utilizes a [Ni-Fe] hydrogenase to oxidize hydrogen and a single periplasmic polysulfide reductase (PsrABC) bound to the inner membrane to reduce elemental sulfur. PsrA is responsible for polysulfide reduction to sulfide at a molybdopterin active site, PsrB is an [FeS] electron transfer protein, and PsrC is a quinone-containing membrane anchor.
Sulfurospirillum
Sulfurospirillum species are sulfur-reducing bacteria and incomplete oxidizers that use either hydrogen or formate as electron donor, but not acetate.
Sulfurovum
Sulfurovum sp. NCB37-1 has been given the hypothesis in which a polysulfide reductase (PsrABC) is involved in its sulfur reduction.
Sulfurimonas
Sulfurimonas species were previously considered to be chemolithoautotrophic sulfur-oxidizing bacteria (SOB), and there was only genetic evidence supporting a possible sulfur-reducing metabolism, but it has now been shown that sulfur reduction also occurs in this genus. The mechanism and the enzymes involved in this process have also been deduced, using Sulfurimonas sp. NW10 as a representative. In particular, the presence of both a cytoplasmic and a periplasmic polysulfide reductase has been detected, which reduce cyclooctasulfur, the most common form of elemental sulfur in vent environments.
Sulfurimonas sp. NW10 shows an over-expression of the gene clusters coding for the two reductases while reducing sulfur. These clusters were also found in other Sulfurimonas species isolated from hydrothermal vents, meaning that sulfur reduction is common in Sulfurimonas spp.
Further genetic analysis revealed that the polysulfide reductases from Sulfurimonas sp. NW10 share less than 40% sequence similarity with those from W. succinogenes. This means that through time there has been a significant genetic differentiation between the two bacteria, most likely due to their different environments. Furthermore, the cytoplasmic sulfur reduction performed by Sulfurimonas sp. NW10 is nowadays considered unique, being the only example among all the mesophilic sulfur-reducing bacteria. Before this discovery, only two hyperthermophilic bacteria were known to be able to perform cytoplasmic sulfur reduction, Aquifex aeolicus and Thermovibrio ammonificans.
Nautilia
Nautilia species are anaerobic, neutrophilic, thermophilic sulfur-reducing bacteria, first discovered and isolated from a polychaete worm inhabiting deep-sea hydrothermal vents, Alvinella pompejana. They are very short, gram-negative, motile, rod-shaped cells with a single polar flagellum. They grow chemolithoautotrophically on molecular hydrogen, elemental sulfur and CO2. Sugars, peptides, organic acids or alcohols are not required in either the absence or the presence of sulfur. They rarely use sulfite and colloidal sulfur as electron acceptors. Sulfate, thiosulfate, nitrate, fumarate and ferric iron are not used. Four species have been found: Nautilia lithotrophica, Nautilia profundicola, Nautilia nitratireducens and Nautilia abyssi. The type species is Nautilia lithotrophica.
Nautilia abyssi is a gram-negative sulfur-reducing bacterium that lives in anaerobic conditions at hydrothermal vents. It was first discovered living on a chimney at a depth of 2620 meters on the East Pacific Rise. It grows at temperatures between 33 and 65 °C, and within a pH range of 5.0–8.0. Under optimal conditions (60 °C, pH 6.0–6.5), the generation time is 120 minutes. Like other species in this genus, cells move by means of a single polar flagellum. For its metabolism, N. abyssi uses H2 as an electron donor, elemental sulfur as an electron acceptor and CO2 as the carbon source.
Caminibacter
Caminibacter mediatlanticus was first isolated from a deep-sea hydrothermal vent on the Mid-Atlantic Ridge. It is a thermophilic, chemolithoautotrophic, H2-oxidizing marine bacterium that uses nitrate or elemental sulfur as electron acceptors, producing ammonia or hydrogen sulfide, and it cannot use oxygen, thiosulfate, sulfite, selenate or arsenate. Its growth optimum is at 55 °C, and it seems to be inhibited by acetate, formate, lactate and peptone.
Aquificota
The Aquificota phylum comprises rod-shaped, motile cells. It includes chemoorganotrophs, some of which are able to reduce elemental sulfur. Growth has been observed between pH 6.0 and 8.0.
Aquifex
Aquifex are rod-shaped, Gram-negative, nonsporulating cells with rounded ends. Wedge-shaped refractile areas in the cells are formed during growth. Type species: Aquifex pyrophilus.
Desulfurobacterium
Desulfurobacterium are rod-shaped, Gram-negative cells. Type species: Desulfurobacterium thermolithotrophum.
Thermovibrio ammonificans
Thermovibrio ammonificans is a gram-negative sulfur-reducing bacterium found in deep-sea hydrothermal vent chimneys. It is a chemolithoautotroph that grows in the presence of H2 and CO2, using nitrate or elemental sulfur as electron acceptors with concomitant formation of ammonium or hydrogen sulfide, respectively. Thiosulfate, sulfite and oxygen are not used as electron acceptors. Cells are short rods, motile thanks to polar flagellation. Their growth temperature range is from 60 °C to 80 °C and pH 5–7.
Thermosulfidibacter spp.
Thermosulfidibacter are gram-negative, anaerobic, thermophilic and neutrophilic bacteria. Strictly chemolithoautotrophic. The type species is Thermosulfidibacter takaii.
Thermosulfidibacter takaii are motile rods with a polar flagellum. Strictly anaerobic. Growth occurs at 55–78 °C (optimum, 70 °C), pH 5.0–7.5 (optimum, pH 5.5–6.0). They are sulfur-reducers.
Bacillota
Bacillota are mostly Gram-positive bacteria with some Gram-negative exceptions.
Ammonifex
These bacteria are Gram-negative, extremely thermophilic, strictly anaerobic, facultative chemolithoautotrophs. Type species: Ammonifex degensii.
Carboxydothermus
Carboxydothermus pertinax differs from other members of its genus by its ability to grow chemolithoautotrophically with reduction of elemental sulfur or thiosulfate coupled to CO oxidation. Other electron acceptors are ferric citrate, amorphous iron(III) oxide and 9,10-anthraquinone 2,6-disulfonate. Hydrogen is used as an energy source and CO2 as a carbon source. Cells are rod-shaped with peritrichous flagella and grow at 65 °C.
Chrysiogenota
Chrysiogenota are Gram-negative bacteria, motile thanks to a single polar flagellum, curved, rod-shaped cells. They are mesophilic, exhibiting anaerobic respiration in which arsenate serves as the electron acceptor. Strictly anaerobic, these bacteria are grown at 25–30 °C.
Desulfurispirillum spp.
Desulfurispirillum species are gram-negative, motile spirilla, obligately anaerobic with respiratory metabolism. Use elemental sulfur and nitrate as electron acceptors, and short-chain fatty acids and hydrogen as electron donors. Alkaliphilic and slightly halophilic.
Desulfurispirillum alkaliphilum is an obligately anaerobic and heterotrophic bacterium, motile by single bipolar flagella. It uses elemental sulfur, polysulfide, nitrate and fumarate as electron acceptors. The final products are sulfide and ammonium. It utilizes short-chain fatty acids and H2 as electron donors and carbon source. It is moderately alkaliphilic, with a pH range for growth between 8.0 and 10.2 and an optimum at pH 9.0, and slightly halophilic, with a salt range from 0.1 to 2.5 M Na+. It is mesophilic, with a maximum temperature for growth of 45 °C and an optimum of 35 °C.
Spirochaetota
Spirochaetes are free-living, gram-negative, helical-shaped and motile bacteria, often protist or animal-associated. They are obligate and facultative anaerobes. Among this phylum, two species are recognized as sulfur-reducing bacteria, Spirochaeta perfilievii and Spirochaeta smaragdinae.
Spirochaeta perfilievii are gram-negative, helical bacteria. Their size range varies from 10 to 200 μm. The shortest cells are those grown in extremely anaerobic environments. They are mesophilic with a temperature range 4–32 °C (optimum at 28–30 °C). Grows at pH 6.5–8.5 (optimum pH 7.0–7.5). Obligate, moderate halophile. Under anaerobic conditions, sulfur and thiosulfate are reduced to sulfide.
Spirochaeta smaragdinae are gram-negative, chemoorganotrophic, obligately anaerobic and halophilic bacteria. They are able to reduce sulfur to sulfide. Their temperature range is from 20 to 40 °C (optimum 37 °C), their pH range varies from 5.5 to 8.0 (optimum 7.0).
Synergistota
Dethiosulfovibrio spp.
Dethiosulfovibrio is a genus of Gram-negative sulfur-reducing bacteria that were isolated from "Thiodendron", bacterial sulfur mats obtained from different saline environments. Cells are curved or fibroid-like rods, motile thanks to flagella located on the concave side of the cells.
The temperature range is from 15 to 40 °C, at pH values between 5.5 and 8.0. They ferment proteins, peptides, some organic acids and amino acids such as serine, histidine, lysine, arginine, cysteine and threonine. Only in the presence of sulfur or thiosulfate can they use alanine, glutamate, isoleucine, leucine and valine; moreover, the presence of sulfur or thiosulfate increases the cell yield and the growth rate. They are obligately anaerobic and slightly halophilic. In the presence of fermentable substrates they are able to reduce elemental sulfur and thiosulfate, but not sulfate or sulfite, to sulfide. Growth did not occur with H2 as electron donor and carbon dioxide or acetate as carbon sources in the presence of thiosulfate or elemental sulfur as electron acceptor. They are unable to utilize carbohydrates, alcohols and some organic acids such as acetate or succinate. Four species were found: Dethiosulfovibrio russensis, Dethiosulfovibrio marinus, Dethiosulfovibrio peptidovorans and Dethiosulfovibrio acidaminovorans.
Thermanaerovibrio spp.
Thermophilic and neutrophilic Gram-negative bacteria. Motile thanks to lateral flagella located on the concave side of the cell. Non-spore-forming. Multiplication occurs by binary fission. Strictly anaerobic, with chemoorganotrophic growth on fermentable substrates or lithoheterotrophic growth with molecular hydrogen and elemental sulfur, reducing the sulfur to H2S. They inhabit granular methanogenic sludge and neutral hot springs. The type species is Thermanaerovibrio acidaminovorans.
Thermanaerovibrio velox is a Gram-negative bacterium that was isolated from a thermophilic cyanobacterial mat in the Uzon caldera, Kamchatka, Russia. Reproduction occurs by binary fission and they do not form spores. Growth temperature ranges from 45 to 70 °C, and pH from 4 to 8.
Thermotogota
Thermotoga spp. are gram-negative, rod-shaped, non-spore-forming, hyperthermophilic microorganisms, named for the presence of a sheath-like envelope called "toga". They are strict anaerobes and fermenters, catabolizing sugars or starch and producing lactate, acetate, CO2, and H2 as products, and can grow in a temperature range of 48–90 °C. High levels of H2 inhibit their growth, and they share many genetic similarities with Archaea, caused by horizontal gene transfer. They are also able to perform anaerobic respiration using H2 as electron donor and usually Fe(III) as electron acceptor. Species belonging to the genus Thermotoga were found in terrestrial hot springs and marine hydrothermal vents. The species able to reduce sulfur do not show an alteration of growth yield and stoichiometry of organic products, and no ATP production occurs. Furthermore, their tolerance to H2 increases during sulfur reduction, as they produce H2S and thereby overcome growth inhibition. The genome of Thermotoga spp. is widely used as a model for studying adaptation to high temperatures, microbial evolution and biotechnological opportunities, such as biohydrogen production and biocatalysis.
Thermotoga maritima is the type species for the genus Thermotoga; growth is observed between 55 °C and 90 °C, with the optimum at 80 °C. Each cell presents a unique sheath-like structure and a monotrichous flagellum. It was first isolated from a geothermally heated, shallow marine sediment at Vulcano, in Italy.
Thermotoga neapolitana is the second species isolated belonging to the genus Thermotoga. It was first found in a submarine thermal vent at Lucrino, near Naples, Italy, and has its optimum growth at 77 °C.
Ecology
Sulfur-reducing bacteria are mostly mesophilic and thermophilic. Growth has been observed in a temperature range of 37–95 °C, although the optimum differs among species (e.g. Thermotoga neapolitana optimum 77 °C, Nautilia lithotrophica optimum 53 °C). They have been reported in many different environments, such as anoxic marine sediments, brackish and freshwater sediments, anoxic muds, the bovine rumen, and hot waters from solfataras and volcanic areas. Many of these bacteria are found in hot vents, where elemental sulfur is an abundant sulfur species. This happens due to volcanic activity, in which hot vapours and elemental sulfur are released together through fractures of the Earth's crust. The ability to use zero-valence sulfur as either an electron donor or an acceptor allows Sulfurimonas spp. to spread widely among different habitats, from highly reducing to more oxidizing deep-sea environments. In some communities found at hydrothermal vents, their proliferation is enhanced thanks to the reactions carried out by thermophilic photo- or chemoautotrophs, which simultaneously produce elemental sulfur and organic matter, respectively the electron acceptor and the energy source for sulfur-reducing bacteria. Sulfur reducers of hydrothermal vents can be free-living organisms, or endosymbionts of animals such as shrimps and tube worms.
Symbiosis
Thiodendron latens is a symbiotic association of aerotolerant spirochaetes and anaerobic sulfidogens. The spirochaete species are the main structural and functional component of these mats and they may accumulate elemental sulfur in the intracellular space. This association of micro-organisms inhabits sulfide-rich habitats, where the chemical oxidation of sulfide by oxygen, manganese or ferric iron, or by the activity of sulfide-oxidizing bacteria, results in the formation of thiosulfate or elemental sulfur. The partly oxidized sulfur compounds can be either completely oxidized to sulfate by sulfur-oxidizing bacteria, if enough oxygen is present, or reduced to sulfide by sulfidogenic bacteria. In such places oxygen limitation is frequent, as indicated by micro-profile measurements from such habitats. This relationship may represent an effective shortcut in the sulfur cycle.
Syntrophy
Desulfuromonas acetooxidans is able to grow in coculture with green sulfur bacteria such as Chlorobium (vibrioforme and phaeovibrioides). The electron donor for the sulfur-reducing bacterium is acetate, coupled with the reduction of elemental sulfur to sulfide. The green sulfur bacterium regenerates elemental sulfur by re-oxidizing the sulfide previously produced, in the presence of light. During these coculture experiments no elemental sulfur appears in the medium because it is immediately reduced.
Sulfur cycle
The sulfur cycle is one of the major biogeochemical processes. The majority of sulfur on Earth is present in sediments and rocks, but the amount in the oceans represents the primary sulfate reservoir of the entire biosphere. Human activities, such as the burning of fossil fuels, also contribute to the cycle by introducing a significant amount of sulfur dioxide into the atmosphere. The earliest life forms on Earth were sustained by sulfur metabolism, and the enormous diversity of present-day microorganisms is still supported by the sulfur cycle. It also interacts with the biogeochemical cycles of numerous other elements such as carbon, oxygen, nitrogen and iron. Sulfur has diverse oxidation states ranging from +6 to −2, which permit sulfur compounds to be used as electron donors and electron acceptors in numerous microbial metabolisms, which transform organic and inorganic sulfur compounds, contributing to the physical, biological and chemical components of the biosphere.
The sulfur cycle follows several linked pathways.
Sulfate Reduction
Under anaerobic conditions, sulfate is reduced to sulfide by sulfate-reducing bacteria, such as Desulfovibrio and Desulfobacter.
Sulfide Oxidation
Under aerobic conditions, sulfide is oxidized to sulfur and then sulfate by sulfur oxidizing bacteria, such as Thiobacillus, Beggiatoa and many others. Under anaerobic conditions, sulfide can be oxidized to sulfur and then sulfate by Purple and Green sulfur bacteria.
Sulfur Oxidation
Sulfur can also be oxidized to sulfuric acid by chemolithotrophic bacteria, such as Thiobacillus and Acidithiobacillus.
Sulfur Reduction
Some bacteria are capable of reducing sulfur to sulfide, enacting a sort of anaerobic respiration. This process can be carried out by both sulfate-reducing bacteria and sulfur-reducing bacteria. Although they thrive in the same habitats, sulfur-reducing bacteria are incapable of sulfate reduction. Bacteria like Desulfuromonas acetoxidans are able to reduce sulfur at the expense of acetate. Some iron-reducing bacteria reduce sulfur to generate ATP.
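As a rough summary of the four inorganic pathways listed above (a standard textbook scheme of sulfur oxidation states, not taken from the cited sources):

$$\mathrm{SO_4^{2-}}\,(+6)\ \xrightarrow{\text{sulfate reduction}}\ \mathrm{H_2S}\,(-2), \qquad \mathrm{S^0}\,(0)\ \xrightarrow{\text{sulfur reduction}}\ \mathrm{H_2S}\,(-2)$$
$$\mathrm{H_2S}\,(-2)\ \xrightarrow{\text{sulfide oxidation}}\ \mathrm{S^0}\,(0)\ \xrightarrow{\text{sulfur oxidation}}\ \mathrm{SO_4^{2-}}\,(+6)$$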
These are the main inorganic processes involved in the sulfur cycle, but organic compounds can contribute to the cycle as well. The most abundant in nature is dimethyl sulfide ((CH3)2S), produced by the degradation of dimethylsulfoniopropionate. Many other organic sulfur compounds affect the global sulfur cycle, including methanethiol, dimethyl disulfide, and carbon disulfide.
Uses
Microorganisms that have a sulfur-based metabolism represent a great opportunity for industrial processes, in particular those that carry out sulfidogenesis (production of sulfide). For example, these types of bacteria can be used to generate hydrogen sulfide in order to obtain the selective precipitation and recovery of heavy metals in the metallurgical and mining industries.
Flue gases treatment
According to innovative Chinese research, the SCDD process used to desulfurize flue gases can be lowered in cost and environmental impact by using biological reduction of elemental sulfur to H2S, which serves as the reducing agent in this process. The electron donors would be organics from wastewater, such as acetate and glucose. The SCDD process revisited in this way would take three steps at determined conditions of pH, temperature and reagent concentration: the first, in which biological sulfur reduction occurs; the second, through which the sulfide dissolved in wastewater is stripped into hydrogen sulfide gas; and the third, which consists in the treatment of the flue gases, removing over 90% of SO2 and NO according to this study. Furthermore, 88% of the sulfur input would be recovered as octasulfur and then reutilized, representing both a chemical-saving and a profitable solution.
Treatment of arsenic-contaminated waters
Sulfur-reducing bacteria are used to remove arsenite from arsenic-contaminated waters, such as acid mine drainage (AMD), metallurgical industry effluents, soils, and surface and ground waters. The sulfidogenic process driven by sulfur-reducing bacteria (Desulfurella) takes place under acid conditions and produces sulfide, with which arsenite precipitates. Microbial sulfur reduction also produces protons that lower the pH in arsenic-contaminated water and prevent the formation of thioarsenite by-products with sulfide.
Treatment of mercury-contaminated waters
Wastewater deriving from industries that work on chlor-alkali and battery production contains high levels of mercury ions, threatening aquatic ecosystems. Recent studies demonstrate that the sulfidogenic process performed by sulfur-reducing bacteria can be a good technology for the treatment of mercury-contaminated waters.
References
Phototrophic bacteria
Hydrogen biology | Sulfur-reducing bacteria | [
"Chemistry",
"Biology"
] | 7,463 | [
"Bacteria",
"Photosynthesis",
"Phototrophic bacteria"
] |
2,754,080 | https://en.wikipedia.org/wiki/Homoeoid | A homoeoid is a shell (a bounded region) bounded by two concentric, similar ellipses (in 2D) or ellipsoids (in 3D).
When the thickness of the shell becomes negligible, it is called a thin homoeoid. The name homoeoid was coined by Lord Kelvin and Peter Tait.
Mathematical definition
If the outer shell is given by

$$\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1$$

with semiaxes $a$, $b$, $c$, the inner shell is given for $0 \le m \le 1$ by

$$\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = m^2 .$$

The thin homoeoid is then given by the limit $m \to 1$.
Physical meaning
A homoeoid can be used as a construction element of a matter or charge distribution. The gravitational or electromagnetic potential of a homoeoid homogeneously filled with matter or charge is constant inside the shell. This means that a test mass or charge will not feel any force inside the shell.
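The vanishing of the force inside the shell can be checked numerically; the following Monte Carlo sketch uses assumed semiaxes and units with G = 1, and only verifies that the net gravitational pull at an interior point is negligible compared with that at an exterior point:

```python
# Monte Carlo check of the "no force inside a homogeneous homoeoid" statement.
# Assumed semiaxes and test points; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 3.0, 2.0, 1.0          # semiaxes of the outer ellipsoid
m = 0.98                          # inner boundary: similar ellipsoid scaled by m

# Rejection-sample points uniformly in the shell m^2 <= x^2/a^2 + y^2/b^2 + z^2/c^2 <= 1.
pts = rng.uniform(-1.0, 1.0, size=(2_000_000, 3)) * np.array([a, b, c])
q = (pts[:, 0] / a) ** 2 + (pts[:, 1] / b) ** 2 + (pts[:, 2] / c) ** 2
shell = pts[(q >= m ** 2) & (q <= 1.0)]

def accel(p):
    """Gravitational acceleration at point p from equal point masses filling the shell."""
    d = shell - p
    r3 = (np.sum(d ** 2, axis=1) ** 1.5)[:, None]
    return np.sum(d / r3, axis=0) / len(shell)

inside = np.array([0.5, -0.3, 0.2])      # well inside the inner ellipsoid
outside = np.array([4.0, 0.0, 0.0])      # outside the outer ellipsoid
print("|g| inside :", np.linalg.norm(accel(inside)))    # ~0, up to Monte Carlo noise
print("|g| outside:", np.linalg.norm(accel(outside)))   # clearly non-zero
```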
See also
Focaloid
References
External links
Surfaces
Mathematical physics
Potentials | Homoeoid | [
"Physics",
"Mathematics"
] | 177 | [
"Functions and mappings",
"Equations of physics",
"Mathematical objects",
"Potential theory",
"Mathematical relations",
"Physics theorems"
] |
1,978,061 | https://en.wikipedia.org/wiki/Transfer-messenger%20RNA | Transfer-messenger RNA (abbreviated tmRNA, also known as 10Sa RNA and by its genetic name SsrA) is a bacterial RNA molecule with dual tRNA-like and messenger RNA-like properties. The tmRNA forms a ribonucleoprotein complex (tmRNP) together with Small Protein B (SmpB), Elongation Factor Tu (EF-Tu), and ribosomal protein S1. In trans-translation, tmRNA and its associated proteins bind to bacterial ribosomes which have stalled in the middle of protein biosynthesis, for example when reaching the end of a messenger RNA which has lost its stop codon. The tmRNA is remarkably versatile: it recycles the stalled ribosome, adds a proteolysis-inducing tag to the unfinished polypeptide, and facilitates the degradation of the aberrant messenger RNA. In the majority of bacteria these functions are carried out by standard one-piece tmRNAs. In other bacterial species, a permuted ssrA gene produces a two-piece tmRNA in which two separate RNA chains are joined by base-pairing.
Discovery and early work
tmRNA was first designated 10Sa RNA in 1979, after a mixed "10S" electrophoretic fraction of Escherichia coli RNA was further resolved into tmRNA and the similarly sized RNase P RNA (10Sb). The presence of pseudouridine in the mixed 10S RNA hinted that tmRNA has modified bases found also in tRNA. The similarity at the 3' end of tmRNA to the T stem-loop of tRNA was first recognized upon sequencing ssrA from Mycobacterium tuberculosis. Subsequent sequence comparison revealed the full tRNA-like domain (TLD) formed by the 5' and 3' ends of tmRNA, including the acceptor stem with elements like those in alanine tRNA that promote its aminoacylation by alanine-tRNA ligase. It also revealed differences from tRNA: the anticodon arm is missing in tmRNA, and the D arm region is a loop without base pairs.
Structure
Secondary structure of the standard one-piece tmRNAs
The complete E. coli tmRNA secondary structure was elucidated by comparative sequence analysis and structural probing. Watson-Crick and G-U base pairs were identified by comparing the bacterial tmRNA sequences using automated computational methods in combination with manual alignment procedures. The accompanying figure shows the base pairing pattern of this prototypical tmRNA, which is organized into 12 phylogenetically supported helices (also called pairings P1 to P12), some divided into helical segments.
A prominent feature of every tmRNA is the conserved tRNA-like domain (TLD), composed of helices 1, 12, and 2a (analogs of the tRNA acceptor stem, T-stem and variable stem, respectively), and containing the 5' monophosphate and alanylatable 3' CCA ends. The mRNA-like region (MLR) is in standard tmRNA a large loop containing pseudoknots and a coding sequence (CDS) for the tag peptide, marked by the resume codon and the stop codon. The encoded tag peptide (ANDENYALAA in E. coli) varies among bacteria, perhaps depending on the set of proteases and adaptors available.
tmRNAs typically contain four pseudoknots, one (pk1) upstream of the tag peptide CDS, and the other three pseudoknots (pk2 to pk4) downstream of the CDS. The pseudoknot regions, although generally conserved, are evolutionarily plastic. For example, in the (one-piece) tmRNAs of cyanobacteria, pk4 is substituted with two tandemly arranged smaller pseudoknots. This suggests that tmRNA folding outside the TLD can be important, yet the pseudoknot region lacks conserved residues and pseudoknots are among the first structures to be lost as ssrA sequences diverge in plastid and endosymbiont lineages. Base pairing in the three-pseudoknot region of E. coli tmRNA is disrupted during trans-translation.
Two-piece tmRNAs
Circularly permuted ssrA has been reported in three major lineages: i) all alphaproteobacteria and the primitive mitochondria of jakobid protists, ii) two disjoint groups of cyanobacteria (Gloeobacter and a clade containing Prochlorococcus and many Synechococcus), and iii) some members of the betaproteobacteria (Cupriavidus and some Rhodocyclales). All produce the same overall two-piece (acceptor and coding pieces) form, equivalent to the standard form nicked downstream of the reading frame. None retain more than two pseudoknots compared to the four (or more) of standard tmRNA.
Alphaproteobacteria have two signature sequences: replacement of the typical T-loop sequence TΨCRANY with GGCRGUA, and the sequence AACAGAA in the large loop of the 3´-terminal pseudoknot. In mitochondria, the MLR has been lost, and a remarkable re-permutation of mitochondrial ssrA results in a small one-piece product in Jakoba libera.
The cyanobacteria provide the most plausible case for evolution of a permuted gene from a standard gene, due to remarkable sequence similarities between the two gene types as they occur in different Synechococcus strains.
tmRNA processing
Most tmRNAs are transcribed as larger precursors which are processed much like tRNA. Cleavage at the 5´ end is by ribonuclease P. Multiple exonucleases can participate in the processing of the 3´ end of tmRNA, although RNase T and RNase PH are most effective. Depending on the bacterial species, the 3'-CCA is either encoded or added by tRNA nucleotidyltransferase.
Similar processing at internal sites of permuted precursor tmRNA explains its physical splitting into two pieces. The two-piece tmRNAs have two additional ends whose processing must be considered. For alphaproteobacteria, one 5´ end is the unprocessed start site of transcription. The far 3´ end may in some cases be the result of rho-independent termination.
Three-dimensional structures
High-resolution structures of the complete tmRNA molecules are currently unavailable and may be difficult to obtain due to the inherent flexibility of the MLR. In 2007, the crystal structure of the Thermus thermophilus TLD bound to the SmpB protein was obtained at 3 Å resolution. This structure shows that SmpB mimics the D stem and the anticodon of a canonical tRNA whereas helical section 2a of tmRNA corresponds to the variable arm of tRNA.
A cryo-electron microscopy study of tmRNA at an early stage of trans-translation shows the spatial relationship between the ribosome and the tmRNP (tmRNA bound to the EF-Tu protein). The TLD is located near the GTPase-associated center in the 50S ribosomal subunit; helix 5 and pseudoknots pk2 to pk4 form an arc around the beak of the 30S ribosomal subunit.
Trans-translation
Coding by tmRNA was discovered in 1995 when Simpson and coworkers overexpressed the mouse cytokine IL-6 in E. coli and found multiple truncated cytokine-derived peptides each tagged at the carboxyl termini with the same 11-amino acid residue extension (A)ANDENYALAA. With the exception of the N-terminal alanine, which comes from the 3' end of tmRNA itself, this tag sequence was traced to a short open reading frame in E. coli tmRNA. Keiler, et al., recognized that the tag peptide confers proteolysis and proposed the trans-translation model for tmRNA action.
While details of the trans-translation mechanism are under investigation it is generally agreed that tmRNA first occupies the empty A site of the stalled ribosome. Subsequently, the ribosome moves from the 3' end of the truncated messenger RNA onto the resume codon of the MLR, followed by a slippage-prone stage from where translation continues normally until the in-frame tmRNA stop codon is encountered. Trans-translation is essential in some bacterial species, whereas other bacteria require tmRNA to survive when subjected to stressful growth conditions. It is believed that tmRNA can help the cell with antibiotic resistance by rescuing the ribosomes stalled by antibiotics. Depending on the organism, the tag peptide may be recognized by a variety of proteases or protease adapters.
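As a purely schematic illustration of the tagging outcome (the truncated peptide below is hypothetical; only the E. coli tag sequence is taken from the text above):

```python
# Toy model of the peptide-tagging outcome of trans-translation; not a mechanistic simulation.
TMRNA_TAG = "AANDENYALAA"   # first Ala is donated by the alanyl-charged tmRNA itself

def trans_translate(truncated_peptide: str) -> str:
    """Return the tagged polypeptide released after trans-translation.

    In the cell the aberrant mRNA is also degraded and the ribosome recycled;
    here only the proteolysis-tagging outcome is modeled.
    """
    return truncated_peptide + TMRNA_TAG

print(trans_translate("MKTFFVILAV"))   # MKTFFVILAVAANDENYALAA (hypothetical truncated protein)
```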
Mobile genetic elements and the tmRNA gene
ssrA is both a target for some mobile DNAs and a passenger on others. It has been found interrupted by three types of mobile elements. By different strategies none of these disrupt gene function: group I introns remove themselves by self-splicing, rickettsial palindromic elements (RPEs) insert in innocuous sites, and integrase-encoding genomic islands split their target ssrA yet restore the split-off portion.
Non-chromosomal ssrA was first detected in a genomic survey of mycobacteriophages (in 10% of the phages). Other mobile elements including plasmids and genomic islands have been found bearing ssrA. One interesting case is Rhodobacter sphaeroides ATCC 17025, whose native tmRNA gene is disrupted by a genomic island; unlike all other genomic islands in tmRNA (or tRNA) genes this island has inactivated the native target gene without restoration, yet compensates by carrying its own tmRNA gene. A very unusual relative of ssrA is found in the lytic mycobacteriophage DS6A, that encodes little more than the TLD.
Mitochondrial tmRNAs (ssrA gene)
A mitochondrion-encoded, structurally reduced form of tmRNA (mt-tmRNA) was first postulated for the jakobid flagellate Reclinomonas americana. Subsequently, the presence of a mitochondrial gene (ssrA) coding for tmRNA, as well as transcription and RNA processing sites were confirmed for all but one member of jakobids. Functional evidence, i.e., mt-tmRNA Aminoacylation with alanine, is available for Jakoba libera.
More recently, ssrA was also identified in mitochondrial genomes of oomycetes. Like in α-Proteobacteria (the ancestors of mitochondria), mt-tmRNAs are circularly permuted, two-piece RNA molecules, except in Jakoba libera where the gene has reverted to encoding a one-piece tmRNA conformation.
Identification of ssrA in mitochondrial genomes
Mitochondrial tmRNA genes were initially recognized as short sequences that are conserved among jakobids and that have the potential to fold into a distinct tRNA-like secondary structure. With the availability of nine complete jakobid mtDNA sequences and a significantly improved covariance search tool (Infernal), a covariance model was developed based on jakobid mitochondrial tmRNAs, which identified mitochondrial ssrA genes in oomycetes as well. At present, a total of 34 oomycete mt-tmRNAs have been detected across six genera: Albugo, Bremia, Phytophthora, Pseudoperonospora, Pythium and Saprolegnia. A covariance model built with both jakobid and oomycete sequences is now available at Rfam under the name ‘mt-tmRNA’.
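As a rough sketch of how such a covariance-model search can be run, the snippet below drives the Infernal command-line tools from Python. Only the basic positional usage of cmbuild, cmcalibrate and cmsearch is assumed, and all file names (the seed alignment, the resulting model, and the target mitochondrial genome) are hypothetical placeholders.

```python
# Hypothetical sketch of building and using a covariance model with Infernal.
# File names are placeholders; only the basic positional usage of the tools is assumed.
import subprocess

def build_and_search(alignment_sto, model_cm, genome_fasta):
    # Build a covariance model from a structural alignment of known tmRNAs.
    subprocess.run(["cmbuild", model_cm, alignment_sto], check=True)
    # Calibrate the model so that searches can report E-values.
    subprocess.run(["cmcalibrate", model_cm], check=True)
    # Scan a mitochondrial genome for candidate ssrA (mt-tmRNA) genes.
    subprocess.run(["cmsearch", model_cm, genome_fasta], check=True)

if __name__ == "__main__":
    build_and_search("jakobid_mt_tmRNA.sto", "mt-tmRNA.cm", "oomycete_mtDNA.fasta")
```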
mt-tmRNA Structure
The standard bacterial tmRNA consists of a tRNA(Ala)-like domain (allowing addition of a non-encoded alanine to mRNAs that happen to lack a stop codon) and an mRNA-like domain coding for a protein tag that destines the polypeptide for proteolysis. The mRNA-like domain was lost in mt-tmRNAs.
Comparative sequence analysis indicates features typical for mt-tmRNAs.
Most conserved is the primary sequence of the aminoacyl acceptor stem. This portion of the molecule has an invariable A residue in the discriminator position and a G-U pair at position 3 (except in Seculamonas ecuadoriensis, which has a G-C pair); this position is the recognition site for alanyl-tRNA synthetase. P2 is a helix of variable length (3 to 10 base pairs) and corresponds to the anticodon stem of tRNAs, yet without an anticodon loop (which is not required for tmRNA function). P2 stabilizes the tRNA-like structure, but four nucleotides invariant across oomycetes and jakobids suggest an additional, currently unidentified function. P3 has five base pairs and corresponds to the T-arm of tRNAs, yet with different consensus nucleotides both in the paired region and the loop. The T-loop sequence is conserved across oomycetes and jakobids, with only a few deviations (e.g., Saprolegnia ferax). Finally, instead of the tRNA-like D-stem with a shortened three-nucleotide D-loop characteristic of bacterial tmRNAs, mitochondrial counterparts have a highly variable 5 to 14-nt long loop.
The intervening sequence (Int.) of two-piece mt-tmRNAs is A+U rich and of irregular length (4-34 nt). For secondary structure models of one- and two-piece mt-tmRNAs, see Figure 1.
mt-tmRNA processing and expression
RNA-Seq data of Phytophthora sojae show an expression level similar to that of neighboring mitochondrial tRNAs, and four major processing sites confirm the predicted termini of mature mt-tmRNA. The tmRNA precursor molecule is likely processed by RNase P and a tRNA 3’ processing endonuclease (see Figure 2); the latter activity is assumed to lead to the removal of the intervening sequence. Following the addition of CCA at the 3’ discriminator nucleotide, the tmRNA can be charged by alanyl-tRNA synthetase with alanine.
See also
CLPP
Ribosome
Messenger RNA
References
Further reading
External links
tmRDB: A database of tmRNA sequences
The tmRNA website
Rfam entry for tmRNA
RNA
Protein biosynthesis | Transfer-messenger RNA | [
"Chemistry"
] | 3,131 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
1,978,796 | https://en.wikipedia.org/wiki/Epitaxial%20wafer | An epitaxial wafer (also called epi wafer, epi-wafer, or epiwafer) is a wafer of semiconducting material made by epitaxial growth (epitaxy) for use in photonics, microelectronics, spintronics, or photovoltaics. The epi layer may be the same material as the substrate, typically monocrystalline silicon, or it may be silicon on insulator (SOI) or a more exotic material with specific desirable qualities. The purpose of epitaxy is to perfect the crystal structure over the bare substrate below and improve the wafer surface's electrical characteristics, making it suitable for highly complex microprocessors and memory devices.
History
Silicon epi wafers were first developed around 1966 and achieved commercial acceptance by the early 1980s. Methods for growing the epitaxial layer on monocrystalline silicon or other wafers include: various types of chemical vapor deposition (CVD) classified as Atmospheric pressure CVD (APCVD) or metal organic chemical vapor deposition (MOCVD), as well as molecular beam epitaxy (MBE).
Two "kerfless" methods (without abrasive sawing) for separating the epitaxial layer from the substrate are called "implant-cleave" and "stress liftoff". A method applicable when the epi-layer and substrate are the same material employs ion implantation to deposit a thin layer of crystal impurity atoms and resulting mechanical stress at the precise depth of the intended epi layer thickness. The induced localized stress provides a controlled path for crack propagation in the following cleavage step. In the dry stress lift-off process applicable when the epi-layer and substrate are suitably different materials, a controlled crack is driven by a temperature change at the epi/wafer interface purely by the thermal stresses due to the mismatch in thermal expansion between the epi layer and substrate, without the necessity for any external mechanical force or tool to aid crack propagation. It was reported that this process yields single atomic plane cleavage, reducing the need for post-lift-off polishing and allowing multiple substrate reuses up to 10 times.
Types
The epitaxial layers may consist of compounds with particular desirable features such as gallium nitride (GaN), gallium arsenide (GaAs), or some combination of the elements gallium, indium, aluminum, nitrogen, phosphorus or arsenic.
Photovoltaic research and development
Solar cells, or photovoltaic cells (PV) for producing electric power from sunlight can be grown as thick epi wafers on a monocrystalline silicon "seed" wafer by chemical vapor deposition (CVD), and then detached as self-supporting wafers of some standard thickness (e.g., 250 μm) that can be manipulated by hand, and directly substituted for wafer cells cut from monocrystalline silicon ingots. Solar cells with this technique can have efficiencies approaching wafer-cut cells but at appreciably lower costs if the CVD can be done at atmospheric pressure in a high-throughput inline process. In September 2015, the Fraunhofer Institute for Solar Energy Systems (Fraunhofer ISE) announced the achievement of efficiency above 20% for such cells. Optimizing the production chain was done in collaboration with NexWafe GmbH, a company spun off from Fraunhofer ISE to commercialize production. The surface of epitaxial wafers may be textured to enhance light absorption. In April 2016, the company Crystal Solar of Santa Clara, California, in collaboration with the European research institute IMEC announced that they achieved a 22.5% cell efficiency of an epitaxial silicon cell with an nPERT (n-type passivated emitter, rear totally-diffused) structure grown on 6-inch (150 mm) wafers. In September 2015 Hanwha Q Cells presented an achieved conversion efficiency of 21.4% (independently confirmed) for screen-printed solar cells made with Crystal Solar epitaxial wafers.
In June 2015, it was reported that heterojunction solar cells grown epitaxially on n-type monocrystalline silicon wafers had reached an efficiency of 22.5% over a total cell area of 243.4 cm2.
In 2016, a new approach was described for producing hybrid photovoltaic wafers combining the high efficiency of III-V multi-junction solar cells with the economies and wealth of experience associated with silicon. The technical complications involved in growing the III-V material on silicon at the required high temperatures, a subject of study for some 30 years, are avoided by epitaxial growth of silicon on GaAs at low temperature by plasma-enhanced chemical vapor deposition (PECVD).
References
Swinger, Patricia. Building on the Past, Ready for the Future: A Fiftieth Anniversary Celebration of MEMC Electronic Materials, Inc.. The Donning Company, 2009.
Notes
Semiconductor device fabrication | Epitaxial wafer | [
"Materials_science"
] | 1,053 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
1,978,904 | https://en.wikipedia.org/wiki/Latin%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering | Many letters of the Latin alphabet, both capital and small, are used in mathematics, science, and engineering to denote by convention specific or abstracted constants, variables of a certain type, units, multipliers, or physical entities. Certain letters, when combined with special formatting, take on special meaning.
Below is an alphabetical list of the letters of the alphabet with some of their uses. The field in which the convention applies is mathematics unless otherwise noted.
Typographical variation
Some common conventions:
Intensive quantities in physics are usually denoted with minuscules, while extensive quantities are denoted with capital letters.
Most symbols are written in italics.
Vectors can be denoted in boldface.
Sets of numbers are typically bold or blackboard bold.
Aa
A represents:
the first point of a triangle
the digit "ten" in hexadecimal and other positional numeral systems with a radix of 11 or greater
the unit ampere for electric current in physics
the area of a figure
the mass number or nucleon number of an element in chemistry
the Helmholtz free energy of a closed thermodynamic system of constant pressure and temperature
a vector potential, in electromagnetics it can refer to the magnetic vector potential
with a subscript, an alternating group on that many objects
an Abelian group in abstract algebra
the Glaisher–Kinkelin constant
atomic weight, denoted by Ar
work in classical mechanics
the pre-exponential factor in the Arrhenius Equation
electron affinity
𝔸 represents the algebraic numbers or affine space in algebraic geometry.
A blood type
A spectral type
a represents:
the first side of a triangle (opposite point A)
the scale factor of the expanding universe in cosmology
the acceleration in mechanics equations
the first constant in a linear equation
a constant in a polynomial
the unit are for area (100 m2)
the unit prefix atto (10−18)
the first term in a sequence or series
Reflectance
Bb
B represents:
the digit "11" in hexadecimal and other positional numeral systems with a radix of 12 or greater
the second point of a triangle
a ball (also denoted by ℬ)
a basis of a vector space or of a filter (both also denoted by ℬ)
in econometrics and time-series statistics it is often used for the backshift or lag operator, the formal parameter of the lag polynomial
the magnetic field, denoted B
B with various subscripts represents several variations of Brun's constant and Betti numbers; it can also be used to mean the Bernoulli numbers.
B meson
A blood type
Boron
Buoyancy
Bulk modulus
Luminance
A spectral type
b represents:
the second side of a triangle (opposite point B)
the impact parameter in nuclear scattering
the second constant in a linear equation
usually with an index, sometimes with an arrow over it, a basis vector
a breadth
the molality of a solution
Bottom quark
Barn (10−24 cm2)
Cc
C represents:
the third point of a triangle
the digit "12" in hexadecimal and other positional numeral systems with a radix of 13 or greater
the unit coulomb of electrical charge
capacitance in electrical theory
with indices denoting the number of combinations, a binomial coefficient
together with a degree symbol (°), the Celsius measurement of temperature = °C
the circumference of a circle or other closed curve
with a subscript, a cycle on that many vertices
with a subscript, a cyclic group of that order
the complement of a set (lowercase c and the symbol ∁ are also used)
an arbitrary category
the number concentration
Carbon
Heat capacity
The C programming language
Cunningham correction factor
ℂ represents the set of complex numbers.
A vertically elongated C with an integer subscript n sometimes denotes the n-th coefficient of a formal power series.
c represents:
the unit prefix centi (10−2)
the amount concentration in chemistry
the speed of light in vacuum
the third side of a triangle (opposite corner C)
Lowercase Fraktur 𝔠 denotes the cardinality of the set of real numbers (the "continuum"), or, equivalently, of the power set of natural numbers.
the third constant in a linear equation
a constant in a polynomial
Charm quark
Speed of sound
Specific heat capacity
Dd
D represents
the digit "13" in hexadecimal and other positional numeral systems with a radix of 14 or greater
diffusion coefficient or diffusivity in dimensions of [distance2/time]
the differential operator in Euler's calculus notation
with a subscript, a dihedral group of that order or a dihedral group on a regular polygon of that many sides, depending on the convention chosen
dissociation energy
Dimension
Deuterium
Electric displacement
D meson
Density
d represents
the differential operator
the unit day of time (86,400 s)
the difference in an arithmetic sequence
a metric operator/function
the diameter of a circle
the unit prefix deci (10−1)
a thickness
a distance
Down quark
Infinitesimal increment in calculus
Density
Ee
E represents:
the digit "14" in hexadecimal and other positional numeral systems with a radix of 15 or greater
an exponent in decimal numbers. For example, 1.2E3 is 1.2×103 or 1200
the set of edges in a graph or matroid
the unit prefix exa (1018)
energy in physics
the electric field, denoted E
electromotive force (measured in volts), refers to voltage
an event (as in P(E), which reads "the probability P of event E occurring")
in statistics, the expected value of a random variable
Ek represents kinetic energy
(Arrhenius) activation energy, denoted Ea or EA
ionization energy, denoted Ei
electron affinity, denoted Eea
dissociation energy, denoted Ed
e represents:
Euler's number, a transcendental number equal to 2.71828182845... which is used as the base for natural logarithms
a vector of unit length, especially in the direction of one of the coordinate axes
the elementary charge in physics
an electron, usually denoted e− to distinguish against a positron e+
the eccentricity of a conic section
the identity element in a group
In a Cartesian coordinate system, a unit vector written with a subscript indicating its axis or coordinate direction
Ff
F represents
the digit "15" in hexadecimal and other positional numeral systems with a radix of 16 or greater
the unit farad of electrical capacity
the Helmholtz free energy of a closed thermodynamic system of constant pressure and temperature
together with a degree symbol (°) represents the Fahrenheit measurement of temperature = °F
Fluorine
A spectral type
F represents
force in mechanics equations
pFq is a hypergeometric series
the probability distribution function in statistics
a Fibonacci number
an arbitrary functor
a field
an event space sigma-algebra as part of a probability space, often written as part of the triple (Ω, ℱ, P)
f represents:
the unit prefix femto (10−15)
f represents:
the generic designation of a function
Friction
Gg
G represents
an arbitrary graph, as in: G(V,E)
an arbitrary group
the unit prefix giga (109)
the Newtonian constant of gravitation
the Einstein tensor
the Gibbs energy
the centroid of a triangle
Catalan's constant
weight measured in newtons
Green's function
a spectral type
g represents:
the generic designation of a second function
the acceleration due to gravity on Earth
a unit of mass, the gramme
Gravitational field, denoted g
Metric tensor (general relativity)
Gluon
Hh
H represents:
an arbitrary subgraph
an arbitrary subgroup
a Hilbert space
the unit henry of magnetic inductance
the homology and cohomology functor
the enthalpy
the (Shannon) entropy of information
the orthocenter of a triangle
a partial sum of the harmonic series
Auxiliary magnetic field, denoted H
Hamiltonian in quantum mechanics
Hankel function
Heaviside step function
Higgs boson
Hydrogen
Set of quaternions
Hat matrix
H0 represents either the Hubble constant or the dimensionless Hubble parameter h, related by H0 = 100 h km·s−1·Mpc−1, where h absorbs the uncertainty in the measured value.
ℍ represents the quaternions (after William Rowan Hamilton).
ΔH‡ represents the standard enthalpy of activation in the Eyring equation.
ℋ represents the Hamiltonian in Hamiltonian mechanics.
h represents:
the class number in algebraic number theory
a small increment in the argument of a function
the unit hour for time (3600 s)
the Planck constant (6.626 070 15 × 10−34 J·s)
the unit prefix hecto (102)
the generic designation of a third function
the altitude of a triangle
a height
Spherical Hankel function
Ii
I represents:
the closed unit interval, which contains all real numbers from 0 to 1, inclusive
the identity matrix
the Irradiance
the moment of inertia
intensity in physics, typically the vector field I
Luminous intensity, typically Iv
the incenter of a triangle
the electric current
ionization energy, denoted I
I represents:
the index of an indexed family
Iodine
i represents:
the imaginary unit, a complex number that is the square root of −1
Imaginary quaternion unit
a subscript to denote the ith term (that is, a general term or index) in a sequence or list
the index to the elements of a vector, written as a subscript after the vector name
the index to the rows of a matrix, written as the first subscript after the matrix name
an index of summation using the sigma notation
the unit vector in Cartesian coordinates going in the x-direction, usually bold i
Jj
J represents:
the unit joule of energy
the current density in electromagnetism, denoted J
the radiosity in thermal mechanics
the moment of inertia
Total angular momentum quantum number
Bessel function of the first kind
Impulse
J represents:
the scheme of a diagram in category theory
j represents:
the index to the columns of a matrix, written as the second subscript after the matrix name
in electrical engineering, the square root of −1, instead of i
in electrical engineering, the principal cube root of 1
the unit vector in Cartesian coordinates going in the y-direction, usually bold j
Electrical current density
Spherical Bessel function of the first kind
Imaginary unit in electrical engineering (where i represents current)
Unit vector for the second imaginary dimension in the quaternion number system (bold j)
Kk
K represents:
the temperature unit kelvin
the functors of K-theory
an unspecified (real) constant
a field in algebra
with a subscript, a complete graph on that many vertices
the area of a polygon
kinetic energy
Kaon
Potassium
Sectional curvature
A spectral type
k represents
the unit prefix kilo- (103)
the Boltzmann constant, often represented as kB to avoid confusion
the angular wavenumber of the wave equation, the magnitude of the wave vector k
an integer, e.g. a dummy variable in summations, or an index of a matrix
an unspecified (real) constant
the spring constant of Hooke's law
the spacetime curvature from the Friedmann equations in cosmology
the rate constant (coefficient)
the unit vector in Cartesian coordinates going in the z-direction, usually bold k
Unit vector for the third dimension in the quaternion number system (bold k)
Unit vector in the z direction
Ll
L represents:
length, used often in quantum mechanics as the size of an infinite square well
angular momentum
the unit of volume the litre
the radiance
the space of all integrable real (or complex) functions
the space of linear maps, as in L(E,F) or L(E) = End(E)
the likelihood function
a formal language
the operator creating a line graph
the lag operator in statistics
a Lucas number
the Lagrange function
Inductance in electromagnetism (measured in henries)
A spectral type
l represents:
the unit of volume the litre (often avoided due to confusion with the number 1 and uppercase letter I)
the length of a side of a rectangle or a cuboid (e.g. V = lwh; A = lw)
the last term of a sequence or series (e.g. Sn = n(a+l)/2)
the orbital angular momentum quantum number
ℒ represents:
the Lagrangian (sometimes just L)
exposure (in particle physics)
ℓ represents:
Mean free path
Mm
M represents:
a manifold
a metric space
a matroid
the unit prefix mega- (106)
the Madelung constant for crystal structures held by ionic bonding
the moment of force
Torque when denoted as moment of force
molar mass
molar mass constant, denoted Mu
relative molecular mass, denoted Mr
Magnetization vector field M
A spectral type
m represents:
the number of rows in a matrix
atomic mass
atomic mass constant denoted mu
the slope in a linear regression or in any line
the mass in mechanics equations
the unit metre of length
the unit prefix milli (10−3)
a median of a triangle
the overall order of reaction
Magnitude
Minute (but the SI abbreviation is "min")
Slope
Magnetic moment in a magnetization field
Nn
N represents
the unit newton of force
the nine-point center of a triangle
Bessel function of the second kind (uncommon)
Nitrogen
Normal distribution
Normal vector
N represents
the neutron number
The number of particles of a thermodynamical system
NA represents the Avogadro constant
ℕ represents the natural numbers.
n represents
A neutron, which may be shown as n0 or n
the unit prefix nano (10−9)
n represents
the number of columns in a matrix
the "number of" in algebraic equations
the number density of particles in a volume
the index of the nth term of a sequence or series (e.g. tn = a + (n − 1)d)
the principal quantum number
the amount of a given substance
the number concentration
the overall order of reaction
Refractive index of a material
Spherical Bessel function of the second kind (uncommon)
An integer
Oo
O represents
the order of asymptotic behavior of a function (upper bound); see Big O notation
the origin of the coordinate system in Cartesian coordinates
the circumcenter of a triangle or other cyclic polygon, or more generally the center of a circle
A blood type
Oxygen
A spectral type
o represents
the order of asymptotic behavior of a function (strict upper bound); see Little o notation (also known as "small o notation")
the order of an element in a group
𝕆 represents
the octonions
Pp
P represents:
the pressure in physics equations
the unit prefix peta (1015)
probability in statistics and statistical mechanics
an arbitrary point in geometry
with a subscript, a path on that many vertices
power, measured in watts
Active power in electrical engineering
weight measured in newtons
Legendre polynomial
Phosphorus
Polarization
ℙ represents
the prime numbers
Projective space
Projection (linear algebra)
a probability (as in P(E), which reads "the probability P of event E happening")
p represents
a prime number
the numerator of a fraction
the unit prefix pico (10−12)
a proton, often p+ or p
the linear momentum in physics equations
the perimeter of a triangle or other polygon
generalized momentum
the pressure in physics equations
Sound pressure
Electric dipole moment
Qq
Q represents:
the unit prefix quetta- (1030)
heat energy
electroweak charge, denoted QW
Reactive power in electrical engineering
Volumetric flow rate
ℚ represents the rational numbers
q represents:
the unit prefix quecto- (10−30)
a second prime number
the denominator of a fraction
the quotient resulting from integer division
the deceleration parameter in cosmology
electric charge of a particle
a generalized coordinate
Quark
Rr
R represents:
the unit prefix ronna- (1027)
the Ricci tensor
the circumradius of a cyclic polygon such as a triangle
an arbitrary relation
Riemann curvature tensor
Electrical resistance
Molar gas constant
ℝ represents the set of real numbers and various algebraic structures built upon the set of real numbers.
r represents:
the unit prefix ronto- (10−27)
the radius of a circle or sphere
radial distance in a polar coordinate system or spherical coordinate system
the inradius of a triangle or other tangential polygon
the ratio of a geometric series (e.g. arn−1)
the remainder resulting from integer division
the separation of two objects, for example in Coulomb's law
a position vector
the rate of concentration change of B (due to chemical reaction) denoted rB
Ss
S represents
a sum
the unit siemens of electric conductance
the unit sphere (with superscript denoting dimension)
the scattering matrix
entropy
action in joule-seconds
Apparent power in electrical engineering
Area
Spin operator
Sulfur
Symmetric group
with a subscript, a symmetric group on that many objects
s represents:
an arclength
a path length
the displacement in mechanics equations
the unit second of time
a complex variable s = σ + i t in analytic number theory
the semiperimeter of a triangle or other polygon
Strange quark
Specific entropy
𝒮 represents a system's action in physics
𝕊 represents
the sedenions
Tt
T represents:
the top element of a lattice
a tree (a special kind of graph)
temperature in physics equations
the unit tesla of magnetic flux density
the unit prefix tera (1012)
the stress–energy tensor
tension in physics
an arbitrary monad
the time it takes for one oscillation
kinetic energy
Torque
A spectral type
Tritium
Period, the reciprocal of frequency
t represents:
time in graphs, functions or equations
a term in a sequence or series (e.g. tn = tn−1 + 5)
the imaginary part of the complex variable s = σ + it in analytic number theory
the sample statistic resulting from a Student's t-test
the half life of a quantity, denoted as t1⁄2
Top quark
𝕋 represents
the trigintaduonions
Uu
U represents:
a U-set which is a set of uniqueness
a unitary operator
in thermodynamics, the internal energy of a system
a forgetful functor
Potential energy
Uranium
U(n) represents the unitary group of degree n
∪ represents the union operator
u represents:
the initial velocity in mechanics equations
Up quark
Vv
V represents:
Vanadium
the unit volt of voltage
the set of vertices in a graph
a vector space
potential energy
molar volume denoted by Vm
v represents
the final velocity in mechanics equations
frequency, especially when referring to electromagnetic waves
a specific volume in classical mechanics
the rate of concentration change of B (due to chemical reaction) denoted vB
the rate of reaction based on amount concentration denoted v or vc
the rate of reaction based on number concentration denoted v or vC
Ww
W represents:
the unit watt of power
work, both mechanical and thermodynamical
in thermodynamics, the number of possible quantum states in Boltzmann's entropy formula
weight measured in newtons
Lambert's W function
Tungsten
W boson
Work function
Wiener process
w represents:
the coordinate on the fourth axis in four-dimensional space
work in classical mechanics
Width
Xx
X represents
a random variable
a triangle center
the first part of a bipartite graph
Ẋ represents
the rate of change of quantity X
x represents
a realized value of a random variable
an unknown variable, most often (but not always) from the set of real numbers, while a complex unknown would rather be called z, and an integer by a letter like m from the middle of the alphabet
the coordinate on the first or horizontal axis in a Cartesian coordinate system, or the viewport in a graph or window in computer graphics; the abscissa
Axis in the direction of travel of an aerospace vehicle (longitudinal axis)
a mole fraction
Variable to be determined in an algebraic equation
A vector in linear algebra
Yy
Y represents:
the unit prefix yotta- (1024)
Bessel function of the second kind
the second part of a bipartite graph
Yttrium
Gross domestic product
Y represents:
a second random variable
y represents:
the unit prefix yocto- (10−24)
a realized value of a second random variable
a second unknown variable
the coordinate on the second or vertical axis (backward axis in three dimensions) in a Cartesian coordinate system, or in the viewport of a graph or window in computer graphics; the ordinate
The port-starboard axis (transverse axis) of an aerospace vehicle
a mole fraction
Zz
Z represents:
the unit prefix zetta (1021)
the atomic number or proton number of an element in chemistry
a standardized normal random variable in probability theory and statistics
Partition function
in meteorology, the radar reflectivity factor
Electrical impedance
Z boson
Compressibility factor
ℤ represents the integers
z represents:
the unit prefix zepto (10−21)
the coordinate on the third or vertical axis in three dimensional space
The vertical axis or altitude in an aerospace vehicle
the view depth in computer graphics, see also "z-buffering"
the argument of a complex function, or any other variable used to represent a complex value
in astronomy, wavelength redshift
a third unknown variable
the collision frequency of A with A is denoted zA(A)
the collision frequency factor is denoted zAB
See also
Blackboard bold letters used in mathematics
Greek letters used in mathematics, science, and engineering
List of letters used in mathematics, science, and engineering
Mathematical Alphanumeric Symbols
Glossary of mathematical symbols
References
Elementary mathematics | Latin letters used in mathematics, science, and engineering | [
"Mathematics"
] | 4,402 | [
"Elementary mathematics",
"nan"
] |
1,979,061 | https://en.wikipedia.org/wiki/Two-fluid%20model | Two-fluid model is a macroscopic traffic flow model to represent traffic in a town/city or metropolitan area, put forward in the 1970s by Ilya Prigogine and Robert Herman.
There is also a two-fluid model which helps explain the behavior of superfluid helium. This model states that there will be two components in liquid helium below its lambda point (the temperature where superfluid forms). These components are a normal fluid and a superfluid component. Each liquid has a different density and together their sum makes the total density, which remains constant. The ratio of superfluid density to the total density increases as the temperature approaches absolute zero.
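The density decomposition described above can be illustrated with a minimal numerical sketch. The power law used below for the normal-fluid fraction, ρn/ρ ≈ (T/Tλ)^5.6, is only a commonly quoted empirical approximation for temperatures between roughly 1 K and the lambda point, not an exact result; it is assumed here purely for illustration.

```python
# Sketch of the two-fluid decomposition of helium II: rho = rho_n + rho_s.
# The (T/T_lambda)**5.6 power law for the normal-fluid fraction is an assumed
# empirical approximation, roughly valid for 1 K < T < T_lambda.
T_LAMBDA = 2.17  # K, lambda point of liquid helium-4 at saturated vapour pressure

def superfluid_fraction(T):
    """Approximate superfluid fraction rho_s / rho at temperature T (kelvin)."""
    if T >= T_LAMBDA:
        return 0.0                 # no superfluid component above the lambda point
    normal_fraction = (T / T_LAMBDA) ** 5.6
    return 1.0 - normal_fraction   # fractions sum to 1 because rho = rho_n + rho_s

for T in (2.17, 2.0, 1.5, 1.0, 0.5):
    print(f"T = {T:4.2f} K  ->  rho_s/rho = {superfluid_fraction(T):.3f}")
```

The printed fractions rise toward 1 as the temperature falls, matching the statement that the superfluid share of the total density grows as T approaches absolute zero.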
It is possible to solve the one-dimensional coupled Euler and Navier-Stokes equations with a self-similar ansatz as a simple model of a coupled inviscid and viscous fluid system.
See also
Superfluidity
External links
Two Fluid Model of Superfluid Helium
References
Mathematical modeling
Traffic flow
Superfluidity | Two-fluid model | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 204 | [
"Physical phenomena",
"Phase transitions",
"Mathematical modeling",
"Applied mathematics",
"Chemical engineering",
"Phases of matter",
"Superfluidity",
"Condensed matter physics",
"Exotic matter",
"Piping",
"Matter",
"Fluid dynamics"
] |
1,980,733 | https://en.wikipedia.org/wiki/Deep-level%20trap | Deep-level traps or deep-level defects are a generally undesirable type of electronic defect in semiconductors. They are "deep" in the sense that the energy required to remove an electron or hole from the trap to the valence or conduction band is much larger than the characteristic thermal energy kT, where k is the Boltzmann constant and T is the temperature. Deep traps interfere with more useful types of doping by compensating the dominant charge carrier type, annihilating either free electrons or electron holes depending on which is more prevalent. They also directly interfere with the operation of transistors, light-emitting diodes and other electronic and opto-electronic devices, by offering an intermediate state inside the band gap. Deep-level traps shorten the non-radiative lifetime of charge carriers and—through the Shockley–Read–Hall (SRH) process—facilitate recombination of minority carriers, with adverse effects on semiconductor device performance. Hence, deep-level traps are undesirable in many opto-electronic devices, as they can lead to poor efficiency and a significant delay in response.
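The role of a deep level in Shockley–Read–Hall recombination can be sketched with the standard SRH rate expression. The snippet below assumes a single trap level referenced to the intrinsic level and uses illustrative carrier lifetimes and densities (loosely silicon-like), not parameters of any particular device.

```python
# Sketch of the Shockley-Read-Hall (SRH) recombination rate through a single trap level.
# Lifetimes, densities and the trap position below are illustrative assumptions.
import math

def srh_rate(n, p, n_i, tau_n, tau_p, Et_minus_Ei, kT):
    """Net SRH recombination rate (cm^-3 s^-1) for carrier densities n and p."""
    n1 = n_i * math.exp(Et_minus_Ei / kT)    # electron density if E_F sat at the trap level
    p1 = n_i * math.exp(-Et_minus_Ei / kT)   # hole density if E_F sat at the trap level
    return (n * p - n_i**2) / (tau_p * (n + n1) + tau_n * (p + p1))

# Illustrative numbers, roughly silicon at room temperature (assumed values):
rate = srh_rate(n=1e16, p=1e12, n_i=1e10, tau_n=1e-6, tau_p=1e-6,
                Et_minus_Ei=0.0, kT=0.0259)  # a mid-gap trap maximises recombination
print(f"SRH recombination rate: {rate:.3e} cm^-3 s^-1")
```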
Common chemical elements that produce deep-level defects in silicon include iron, nickel, copper, gold, and silver. In general, transition metals produce this effect, while light metals such as aluminium do not.
Surface states and crystallographic defects in the crystal lattice can also act as deep-level traps.
References
Optoelectronics
Semiconductor properties
Semiconductor structures | Deep-level trap | [
"Physics",
"Materials_science"
] | 310 | [
"Semiconductor properties",
"Condensed matter physics"
] |
1,981,775 | https://en.wikipedia.org/wiki/Far-red%20light | Far-red light is a range of light at the extreme red end of the visible spectrum, just before infrared light. Usually regarded as the region between 700 and 750 nm wavelength, it is dimly visible to human eyes. It is largely reflected or transmitted by plants because of the absorbance spectrum of chlorophyll, and it is perceived by the plant photoreceptor phytochrome. However, some organisms can use it as a source of energy in photosynthesis. Far-red light also is used for vision by certain organisms such as some species of deep-sea fishes and mantis shrimp.
In horticulture
Plants perceive light through internal photoreceptors that absorb specific wavelengths and either trigger signaling (photomorphogenesis) or transfer the energy to a plant process (photosynthesis). In plants, the photoreceptors cryptochrome and phototropin absorb radiation in the blue spectrum (B: λ=400–500 nm) and regulate internal signaling such as hypocotyl inhibition, flowering time, and phototropism. Additional receptors called phytochromes absorb radiation in the red (R: λ=660–730 nm) and far-red (FR: λ>730 nm) spectra and influence many aspects of plant development such as germination, seedling etiolation, transition to flowering, shade avoidance, and tropisms. Phytochrome can interchange its conformation based on the quantity or quality of light it perceives, and does so via photoconversion from phytochrome red (Pr) to phytochrome far-red (Pfr). Pr is the inactive form of phytochrome, ready to perceive red light. In a high R:FR environment, Pr changes conformation to the active form Pfr. Once active, Pfr translocates to the cellular nucleus, binds to phytochrome interacting factors (PIFs), and targets the PIFs to the proteasome for degradation. Exposed to a low R:FR environment, Pfr absorbs FR and changes conformation back to the inactive Pr. The inactive conformation remains in the cytosol, allowing PIFs to reach their binding sites on the genome and induce expression (i.e. shade avoidance through cellular elongation). FR irradiation can lead to compromised plant immunity and increased pathogen susceptibility.
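The Pr/Pfr photoequilibrium can be sketched as a simple two-state kinetic model. The rate constants below are arbitrary illustrative values (real photoconversion cross sections are wavelength dependent), so the script only shows qualitatively how a high or low R:FR ratio shifts the fraction of phytochrome in the active Pfr form.

```python
# Toy two-state model of phytochrome photoconversion:
#   Pr  --(k_r * R)-->  Pfr   (red light activates)
#   Pfr --(k_fr * FR)-> Pr    (far-red light inactivates)
# k_r and k_fr are arbitrary illustrative rate constants, not measured values.
def pfr_fraction_at_equilibrium(R, FR, k_r=1.0, k_fr=1.0):
    """Steady-state fraction of phytochrome in the active Pfr form."""
    forward = k_r * R       # rate of Pr -> Pfr conversion
    backward = k_fr * FR    # rate of Pfr -> Pr conversion
    if forward + backward == 0:
        return 0.0
    return forward / (forward + backward)

# High R:FR (open sunlight) versus low R:FR (canopy shade), in arbitrary units:
print("sunlight-like R:FR = 1.2 ->", round(pfr_fraction_at_equilibrium(1.2, 1.0), 2))
print("shade-like    R:FR = 0.2 ->", round(pfr_fraction_at_equilibrium(0.2, 1.0), 2))
```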
FR has long been considered a minimal input in photosynthesis. In the early 1970s, the physicist and soil and crop sciences professor Keith J. McCree lobbied for a standard definition of photosynthetically active radiation (PAR: λ=400–700 nm), which did not include FR. More recently, scientists have provided evidence that a broader spectrum called photo-biologically active radiation (PBAR: λ=280–800 nm) is more applicable terminology. This range of wavelengths includes not only FR, but also UV-A and UV-B. The Emerson effect established that the rate of photosynthesis in red and green algae was higher under combined R and FR illumination than the sum of the rates under each individually. This research laid the groundwork for the elucidation of the dual photosystems in plants. Photosystem I (PSI) and photosystem II (PSII) work synergistically; through photochemical processes PSII transports electrons to PSI. Any imbalance between R and FR leads to unequal excitation between PSI and PSII, thereby reducing the efficiency of photochemistry.
See also
Crown shyness
References
Citations
General sources
Color
Optical spectrum | Far-red light | [
"Physics"
] | 753 | [
"Optical spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
1,982,403 | https://en.wikipedia.org/wiki/Effectiveness | Effectiveness or effectivity is the capability of producing a desired result or the ability to produce desired output. When something is deemed effective, it means it has an intended or expected outcome, or produces a deep, vivid impression.
Etymology
The word effective stems from the Latin word effectīvus, which means "creative, productive, or effective". It surfaced in Middle English between 1300 and 1400 AD.
Usage
Science and technology
Mathematics and logic
In mathematics and logic, effective is used to describe metalogical methods that fit the criteria of an effective procedure.
In group theory, a group element acts effectively (or faithfully) on a point, if that point is not fixed by the action.
Physics
In physics, an effective theory is, similar to a phenomenological theory, a framework intended to explain certain (observed) effects without the claim that the theory correctly models the underlying (unobserved) processes.
In heat transfer, effectiveness is a measure of the performance of a heat exchanger when using the NTU method.
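As a sketch of that usage, the effectiveness of a counter-flow heat exchanger can be computed from the number of transfer units (NTU) and the heat-capacity-rate ratio with the standard ε-NTU relation; the numbers in the example are illustrative only.

```python
# Effectiveness of a counter-flow heat exchanger from the epsilon-NTU relation.
# NTU = UA / C_min, Cr = C_min / C_max; the example values are illustrative.
import math

def counterflow_effectiveness(ntu, cr):
    """epsilon = q / q_max for a counter-flow heat exchanger."""
    if math.isclose(cr, 1.0):
        return ntu / (1.0 + ntu)            # limiting form for equal capacity rates
    x = math.exp(-ntu * (1.0 - cr))
    return (1.0 - x) / (1.0 - cr * x)

print(counterflow_effectiveness(ntu=2.0, cr=0.5))   # ~0.77
```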
Medicine
In medicine, effectiveness relates to how well a treatment works in practice, especially as shown in pragmatic clinical trials, as opposed to efficacy, which measures how well it works in explanatory clinical trials or research laboratory studies.
Humanities and social sciences
In management, effectiveness relates to getting the right things done. Peter Drucker reminds his readers that "effectiveness can and must be learned". The term "institutional effectiveness" has been widely adopted within higher education settings to assess "how well an institution is achieving its mission and goals". For example, Utica University in New York State holds that "an effective institution is characterized by a clearly defined mission that articulates who it serves, what it aspires to be, and what it values. Likewise, an effective institution has clear goals that are broadly communicated to its stakeholders". Pope Francis adopts the same term in a critique of governmental effectiveness when he refers to "a number of countries [with] a relatively low level of institutional effectiveness", which leads to "greater problems for their people while benefiting those who profit from this situation". He refers, for example, to countries whose laws are "well written" but not effectively enforced.
In human–computer interaction, effectiveness is defined as "the accuracy and completeness of users' tasks while using a system".
In military science, effectiveness is a criterion used to assess changes determined in the target system, in its behavior, capability, or assets, tied to the attainment of an end state, achievement of an objective, or creation of an effect, while combat effectiveness is: "...the readiness of a military unit to engage in combat based on behavioral, operational, and leadership considerations. Combat effectiveness measures the ability of a military force to accomplish its objective and is one component of overall military effectiveness."
Related terms
Efficacy, efficiency, and effectivity are terms that can, in some cases, be interchangeable with the term effectiveness. The word effective is sometimes used in a quantitative way, "being very effective or not very effective". However, neither "effectiveness", nor "effectively", inform about the direction (positive or negative) or gives a comparison to a standard of the given effect. Efficacy, on the other hand, is the extent to which a desired effect is achieved; the ability to produce a desired amount of the desired effect, or the success in achieving a given goal. Contrary to the term efficiency, the focus of efficacy is the achievement as such, not the resources spent in achieving the desired effect. Therefore, what is effective is not necessarily efficacious, and what is efficacious is not necessarily efficient.
Other synonyms for effectiveness include: clout, capability, success, weight, performance.
Antonyms for effectiveness include: uselessness, ineffectiveness.
Simply stated, effective means achieving an effect, and efficient means getting a task or job done with little waste. To illustrate: suppose you build 10 houses very fast and cheaply (efficient), but no one buys them. In contrast, you might build 5 houses with the same budget and time, but all 5 are sold and the buyers are happy (effective). You get the desired result: sold houses and happy customers (the effect).
See also
Effect (disambiguation)
References
Heat transfer
Goal | Effectiveness | [
"Physics",
"Chemistry"
] | 881 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics"
] |
1,982,496 | https://en.wikipedia.org/wiki/Nuclear%20force | The nuclear force (or nucleon–nucleon interaction, residual strong force, or, historically, strong nuclear force) is a force that acts between hadrons, most commonly observed between protons and neutrons of atoms. Neutrons and protons, both nucleons, are affected by the nuclear force almost identically. Since protons have charge +1 e, they experience an electric force that tends to push them apart, but at short range the attractive nuclear force is strong enough to overcome the electrostatic force. The nuclear force binds nucleons into atomic nuclei.
The nuclear force is powerfully attractive between nucleons at distances of about 0.8 femtometre (fm, or 10−15 m), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsion is responsible for the size of nuclei, since nucleons can come no closer than the force allows. (The size of an atom, of the order of angstroms (Å, or 10−10 m), is five orders of magnitude larger.) The nuclear force is not simple, though, as it depends on the nucleon spins, has a tensor component, and may depend on the relative momentum of the nucleons.
The nuclear force has an essential role in storing energy that is used in nuclear power and nuclear weapons. Work (energy) is required to bring charged protons together against their electric repulsion. This energy is stored when the protons and neutrons are bound together by the nuclear force to form a nucleus. The mass of a nucleus is less than the sum total of the individual masses of the protons and neutrons. The difference in masses is known as the mass defect, which can be expressed as an energy equivalent. Energy is released when a heavy nucleus breaks apart into two or more lighter nuclei. This energy is the internucleon potential energy that is released when the nuclear force no longer holds the charged nuclear fragments together.
A quantitative description of the nuclear force relies on equations that are partly empirical. These equations model the internucleon potential energies, or potentials. (Generally, forces within a system of particles can be more simply modelled by describing the system's potential energy; the negative gradient of a potential is equal to the vector force.) The constants for the equations are phenomenological, that is, determined by fitting the equations to experimental data. The internucleon potentials attempt to describe the properties of nucleon–nucleon interaction. Once determined, any given potential can be used in, e.g., the Schrödinger equation to determine the quantum mechanical properties of the nucleon system.
The discovery of the neutron in 1932 revealed that atomic nuclei were made of protons and neutrons, held together by an attractive force. By 1935 the nuclear force was conceived to be transmitted by particles called mesons. This theoretical development included a description of the Yukawa potential, an early example of a nuclear potential. Pions, fulfilling the prediction, were discovered experimentally in 1947. By the 1970s, the quark model had been developed, by which the mesons and nucleons were viewed as composed of quarks and gluons. By this new model, the nuclear force, resulting from the exchange of mesons between neighbouring nucleons, is a multiparticle interaction, the collective effect of the strong force acting on the underlying quark structure of the nucleons.
Description
While the nuclear force is usually associated with nucleons, more generally this force is felt between hadrons, or particles composed of quarks. At small separations between nucleons (less than ~ 0.7 fm between their centres, depending upon spin alignment) the force becomes repulsive, which keeps the nucleons at a certain average separation. For identical nucleons (such as two neutrons or two protons) this repulsion arises from the Pauli exclusion force. A Pauli repulsion also occurs between quarks of the same flavour from different nucleons (a proton and a neutron).
Field strength
At distances larger than 0.7 fm the force becomes attractive between spin-aligned nucleons, becoming maximal at a centre–centre distance of about 0.9 fm. Beyond this distance the force drops exponentially, until beyond about 2.0 fm separation, the force is negligible. Nucleons have a radius of about 0.8 fm.
At short distances (less than 1.7 fm or so), the attractive nuclear force is stronger than the repulsive Coulomb force between protons; it thus overcomes the repulsion of protons within the nucleus. However, the Coulomb force between protons has a much greater range, as it varies as the inverse square of the charge separation, and Coulomb repulsion thus becomes the only significant force between protons when their separation exceeds about 2.5 fm.
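To give a sense of the scale of the electrostatic term, the Coulomb potential energy between two protons can be evaluated directly. The short sketch below uses only the Coulomb constant in convenient nuclear units (e²/4πε₀ ≈ 1.44 MeV·fm) and says nothing about the nuclear potential itself.

```python
# Coulomb repulsion energy between two protons as a function of separation.
# e^2 / (4 * pi * eps0) ~ 1.44 MeV*fm in convenient nuclear units.
COULOMB_CONSTANT_MEV_FM = 1.44

def coulomb_energy_mev(separation_fm):
    """Electrostatic potential energy (MeV) of two protons a given distance apart."""
    return COULOMB_CONSTANT_MEV_FM / separation_fm

for r in (0.8, 1.0, 2.5, 10.0):
    print(f"r = {r:5.1f} fm  ->  U_Coulomb = {coulomb_energy_mev(r):.2f} MeV")
```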
The nuclear force has a spin-dependent component. The force is stronger for particles with their spins aligned than for those with their spins anti-aligned. If two particles are the same, such as two neutrons or two protons, the force is not enough to bind the particles, since the spin vectors of two particles of the same type must point in opposite directions when the particles are near each other and are (save for spin) in the same quantum state. This requirement for fermions stems from the Pauli exclusion principle. For fermion particles of different types, such as a proton and neutron, particles may be close to each other and have aligned spins without violating the Pauli exclusion principle, and the nuclear force may bind them (in this case, into a deuteron), since the nuclear force is much stronger for spin-aligned particles. But if the particles' spins are anti-aligned, the nuclear force is too weak to bind them, even if they are of different types.
The nuclear force also has a tensor component which depends on the interaction between the nucleon spins and the angular momentum of the nucleons, leading to deformation from a simple spherical shape.
Nuclear binding
To disassemble a nucleus into unbound protons and neutrons requires work against the nuclear force. Conversely, energy is released when a nucleus is created from free nucleons or other nuclei: the nuclear binding energy. Because of mass–energy equivalence (i.e. Einstein's formula ), releasing this energy causes the mass of the nucleus to be lower than the total mass of the individual nucleons, leading to the so-called "mass defect".
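As a concrete illustration of the mass defect, the binding energy of the deuteron can be obtained from the rest energies of its constituents. The sketch below uses standard particle rest energies in MeV (proton ≈ 938.272, neutron ≈ 939.565, deuteron ≈ 1875.613); the helper function is just a restatement of Δm·c².

```python
# Binding energy from the mass defect, using rest energies (m * c^2) in MeV.
PROTON_MEV = 938.272
NEUTRON_MEV = 939.565
DEUTERON_MEV = 1875.613

def binding_energy_mev(constituents_mev, nucleus_mev):
    """Energy released when free nucleons bind into a nucleus (Delta m * c^2)."""
    return sum(constituents_mev) - nucleus_mev

print(binding_energy_mev([PROTON_MEV, NEUTRON_MEV], DEUTERON_MEV))  # ~2.22 MeV
```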
The nuclear force is nearly independent of whether the nucleons are neutrons or protons. This property is called charge independence. The force depends on whether the spins of the nucleons are parallel or antiparallel, as it has a non-central or tensor component. This part of the force does not conserve orbital angular momentum, which under the action of central forces is conserved.
The symmetry resulting in the strong force, proposed by Werner Heisenberg, is that protons and neutrons are identical in every respect, other than their charge. This is not completely true, because neutrons are a tiny bit heavier, but it is an approximate symmetry. Protons and neutrons are therefore viewed as the same particle, but with different isospin quantum numbers; conventionally, the proton is isospin up, while the neutron is isospin down. The strong force is invariant under SU(2) isospin transformations, just as other interactions between particles are invariant under SU(2) transformations of intrinsic spin. In other words, both isospin and intrinsic spin transformations are isomorphic to the SU(2) symmetry group. There are only strong attractions when the total isospin of the set of interacting particles is 0, which is confirmed by experiment.
Our understanding of the nuclear force is obtained by scattering experiments and the binding energy of light nuclei.
The nuclear force occurs by the exchange of virtual light mesons, such as the virtual pions, as well as two types of virtual mesons with spin (vector mesons), the rho mesons and the omega mesons. The vector mesons account for the spin-dependence of the nuclear force in this "virtual meson" picture.
The nuclear force is distinct from what historically was known as the weak nuclear force. The weak interaction is one of the four fundamental interactions, and plays a role in processes such as beta decay. The weak force plays no role in the interaction of nucleons, though it is responsible for the decay of neutrons to protons and vice versa.
History
The nuclear force has been at the heart of nuclear physics ever since the field was born in 1932 with the discovery of the neutron by James Chadwick. The traditional goal of nuclear physics is to understand the properties of atomic nuclei in terms of the "bare" interaction between pairs of nucleons, or nucleon–nucleon forces (NN forces).
Within months after the discovery of the neutron, Werner Heisenberg and Dmitri Ivanenko had proposed proton–neutron models for the nucleus. Heisenberg approached the description of protons and neutrons in the nucleus through quantum mechanics, an approach that was not at all obvious at the time. Heisenberg's theory for protons and neutrons in the nucleus was a "major step toward understanding the nucleus as a quantum mechanical system". Heisenberg introduced the first theory of nuclear exchange forces that bind the nucleons. He considered protons and neutrons to be different quantum states of the same particle, i.e., nucleons distinguished by the value of their nuclear isospin quantum numbers.
One of the earliest models for the nucleus was the liquid-drop model developed in the 1930s. One property of nuclei is that the average binding energy per nucleon is approximately the same for all stable nuclei, which is similar to a liquid drop. The liquid-drop model treated the nucleus as a drop of incompressible nuclear fluid, with nucleons behaving like molecules in a liquid. The model was first proposed by George Gamow and then developed by Niels Bohr, Werner Heisenberg, and Carl Friedrich von Weizsäcker. This crude model did not explain all the properties of the nucleus, but it did explain the spherical shape of most nuclei. The model also gave good predictions for the binding energy of nuclei.
In 1934, Hideki Yukawa made the earliest attempt to explain the nature of the nuclear force. According to his theory, massive bosons (mesons) mediate the interaction between two nucleons. In light of quantum chromodynamics (QCD)—and, by extension, the Standard Model—meson theory is no longer perceived as fundamental. But the meson-exchange concept (where hadrons are treated as elementary particles) continues to represent the best working model for a quantitative NN potential. The Yukawa potential (also called a screened Coulomb potential) is a potential of the form
$$V_\text{Yukawa}(r) = -g^2\,\frac{e^{-\mu r}}{r},$$
where g is a magnitude scaling constant, i.e., the amplitude of the potential, μ is the Yukawa particle mass, and r is the radial distance to the particle. The potential is monotone increasing, implying that the force is always attractive. The constants are determined empirically. The Yukawa potential depends only on the distance r between particles, hence it models a central force.
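A numerical sketch of such a potential is shown below. The coupling strength and range parameter are illustrative round numbers, chosen only to show the characteristic exponential screening, not a fitted NN potential; μ ≈ 0.7 fm⁻¹ is used because it corresponds roughly to the pion Compton-wavelength scale.

```python
# Illustrative Yukawa (screened Coulomb) potential V(r) = -g^2 * exp(-mu*r) / r.
# g and mu are placeholder values chosen only to show the exponential screening.
import math

def yukawa_potential(r_fm, g2=1.0, mu_per_fm=0.7):
    """Yukawa potential (arbitrary units) at separation r in femtometres."""
    return -g2 * math.exp(-mu_per_fm * r_fm) / r_fm

for r in (0.5, 1.0, 2.0, 4.0):
    print(f"r = {r:.1f} fm  ->  V = {yukawa_potential(r):+.4f} (arb. units)")
```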
Throughout the 1930s a group at Columbia University led by I. I. Rabi developed magnetic-resonance techniques to determine the magnetic moments of nuclei. These measurements led to the discovery in 1939 that the deuteron also possessed an electric quadrupole moment. This electrical property of the deuteron had been interfering with the measurements by the Rabi group. The deuteron, composed of a proton and a neutron, is one of the simplest nuclear systems. The discovery meant that the physical shape of the deuteron was not symmetric, which provided valuable insight into the nature of the nuclear force binding nucleons. In particular, the result showed that the nuclear force was not a central force, but had a tensor character. Hans Bethe identified the discovery of the deuteron's quadrupole moment as one of the important events during the formative years of nuclear physics.
Historically, the task of describing the nuclear force phenomenologically was formidable. The first semi-empirical quantitative models came in the mid-1950s, such as the Woods–Saxon potential (1954). There was substantial progress in experiment and theory related to the nuclear force in the 1960s and 1970s. One influential model was the Reid potential (1968)
$$V_\text{Reid}(r) = -10.463\,\frac{e^{-\mu r}}{\mu r} - 1650.6\,\frac{e^{-4\mu r}}{\mu r} + 6484.2\,\frac{e^{-7\mu r}}{\mu r},$$
where μ = 0.7 fm−1 and where the potential is given in units of MeV. In recent years, experimenters have concentrated on the subtleties of the nuclear force, such as its charge dependence, the precise value of the πNN coupling constant, improved phase-shift analysis, high-precision NN data, high-precision NN potentials, NN scattering at intermediate and high energies, and attempts to derive the nuclear force from QCD.
As a residual of strong force
The nuclear force is a residual effect of the more fundamental strong force, or strong interaction. The strong interaction is the attractive force that binds the elementary particles called quarks together to form the nucleons (protons and neutrons) themselves. This more powerful force, one of the fundamental forces of nature, is mediated by particles called gluons. Gluons hold quarks together through colour charge which is analogous to electric charge, but far stronger. Quarks, gluons, and their dynamics are mostly confined within nucleons, but residual influences extend slightly beyond nucleon boundaries to give rise to the nuclear force.
The nuclear forces arising between nucleons are analogous to the forces in chemistry between neutral atoms or molecules called London dispersion forces. Such forces between atoms are much weaker than the attractive electrical forces that hold the atoms themselves together (i.e., that bind electrons to the nucleus), and their range between atoms is shorter, because they arise from small separation of charges inside the neutral atom. Similarly, even though nucleons are made of quarks in combinations which cancel most gluon forces (they are "colour neutral"), some combinations of quarks and gluons nevertheless leak away from nucleons, in the form of short-range nuclear force fields that extend from one nucleon to another nearby nucleon. These nuclear forces are very weak compared to direct gluon forces ("colour forces" or strong forces) inside nucleons, and the nuclear forces extend only over a few nuclear diameters, falling exponentially with distance. Nevertheless, they are strong enough to bind neutrons and protons over short distances, and overcome the electrical repulsion between protons in the nucleus.
Sometimes, the nuclear force is called the residual strong force, in contrast to the strong interactions which arise from QCD. This phrasing arose during the 1970s when QCD was being established. Before that time, the strong nuclear force referred to the inter-nucleon potential. After the verification of the quark model, strong interaction has come to mean QCD.
Nucleon–nucleon potentials
Two-nucleon systems such as the deuteron, the nucleus of a deuterium atom, as well as proton–proton or neutron–proton scattering are ideal for studying the NN force. Such systems can be described by attributing a potential (such as the Yukawa potential) to the nucleons and using the potentials in a Schrödinger equation. The form of the potential is derived phenomenologically (by measurement), although for the long-range interaction, meson-exchange theories help to construct the potential. The parameters of the potential are determined by fitting to experimental data such as the deuteron binding energy or NN elastic scattering cross sections (or, equivalently in this context, so-called NN phase shifts).
The most widely used NN potentials are the Paris potential, the Argonne AV18 potential, the CD-Bonn potential, and the Nijmegen potentials.
A more recent approach is to develop effective field theories for a consistent description of nucleon–nucleon and three-nucleon forces. Quantum hadrodynamics is an effective field theory of the nuclear force, comparable to QCD for colour interactions and QED for electromagnetic interactions. Additionally, chiral symmetry breaking can be analyzed in terms of an effective field theory (called chiral perturbation theory) which allows perturbative calculations of the interactions between nucleons with pions as exchange particles.
From nucleons to nuclei
The ultimate goal of nuclear physics would be to describe all nuclear interactions from the basic interactions between nucleons. This is called the microscopic or ab initio approach to nuclear physics. There are two major obstacles to overcome:
Calculations in many-body systems are difficult (because of multi-particle interactions) and require advanced computation techniques.
There is evidence that three-nucleon forces (and possibly higher multi-particle interactions) play a significant role. This means that three-nucleon potentials must be included into the model.
This is an active area of research with ongoing advances in computational techniques leading to better first-principles calculations of the nuclear shell structure. Two- and three-nucleon potentials have been implemented for nuclides up to A = 12.
Nuclear potentials
A successful way of describing nuclear interactions is to construct one potential for the whole nucleus instead of considering all its nucleon components. This is called the macroscopic approach. For example, scattering of neutrons from nuclei can be described by considering a plane wave in the potential of the nucleus, which comprises a real part and an imaginary part. This model is often called the optical model since it resembles the case of light scattered by an opaque glass sphere.
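Schematically (a simplified illustration, not a specific published parameterization), the optical-model potential is written as a complex function of the distance r from the centre of the nucleus:

```latex
U(r) = V(r) + i\,W(r)
```

The real part V(r) refracts the incident nucleon wave (elastic scattering) and the imaginary part W(r) absorbs flux into non-elastic channels, just as an opaque glass sphere both refracts and absorbs light. In practice both parts are usually given Woods–Saxon radial shapes with parameters fitted to scattering data.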
Nuclear potentials can be local or global: local potentials are limited to a narrow energy range and/or a narrow nuclear mass range, while global potentials, which have more parameters and are usually less accurate, are functions of the energy and the nuclear mass and can therefore be used in a wider range of applications.
See also
Nuclear binding energy
References
Bibliography
Gerald Edward Brown and A. D. Jackson (1976). The Nucleon–Nucleon Interaction. Amsterdam: North-Holland Publishing.
R. Machleidt and I. Slaus, "The nucleon–nucleon interaction", J. Phys. G 27 (May 2001) R69. (Topical review.)
E. A. Nersesov (1990). Fundamentals of atomic and nuclear physics. Moscow: Mir Publishers.
Further reading
Ruprecht Machleidt, "Nuclear Forces", Scholarpedia, 9(1):30710.
Quantum chromodynamics
Nuclear physics
Nuclear force | [
"Physics"
] | 3,926 | [
"Nuclear physics"
] |
1,982,854 | https://en.wikipedia.org/wiki/Push%20technology | Push technology, also known as server Push, refers to a communication method, where the communication is initiated by a server rather than a client. This approach is different from the "pull" method where the communication is initiated by a client.
In push technology, clients can express their preferences for certain types of information or data, typically through a process known as the publish–subscribe model. In this model, a client "subscribes" to specific information channels hosted by a server. When new content becomes available on these channels, the server automatically sends, or "pushes," this information to the subscribed client.
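To make the publish–subscribe idea concrete, the following is a minimal in-process sketch in Python (the Broker class, channel names and messages are hypothetical illustrations, not any particular push service):

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Minimal publish-subscribe broker: clients register callbacks per channel."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, channel: str, callback: Callable[[str], None]) -> None:
        # The client expresses interest in a channel; the broker remembers it.
        self._subscribers[channel].append(callback)

    def publish(self, channel: str, message: str) -> None:
        # When new content appears, it is pushed to every subscriber of the channel.
        for callback in self._subscribers[channel]:
            callback(message)

broker = Broker()
broker.subscribe("news", lambda msg: print("client received:", msg))
broker.publish("news", "breaking story")  # pushed by the server side, not polled
```

In a real push service the broker and the clients live on different machines, but the subscribe/publish contract is the same.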
Under certain conditions, such as restrictive security policies that block incoming HTTP requests, push technology is sometimes simulated using a technique called polling. In these cases, the client periodically checks with the server to see if new information is available, rather than receiving automatic updates.
General use
Synchronous conferencing and instant messaging are examples of push services. Chat messages and sometimes files are pushed to the user as soon as they are received by the messaging service. Both decentralized peer-to-peer programs (such as WASTE) and centralized programs (such as IRC or XMPP) allow pushing files, which means the sender initiates the data transfer rather than the recipient.
Email may also be a push system: SMTP is a push protocol (see Push e-mail). However, the last step—from mail server to desktop computer—typically uses a pull protocol like POP3 or IMAP. Modern e-mail clients make this step seem instantaneous by repeatedly polling the mail server, frequently checking it for new mail. The IMAP protocol includes the IDLE command, which allows the server to tell the client when new messages arrive. The original BlackBerry was the first popular example of push-email in a wireless context.
Another example is the PointCast Network, which was widely covered in the 1990s. It delivered news and stock market data as a screensaver. Both Netscape and Microsoft integrated push technology through the Channel Definition Format (CDF) into their software at the height of the browser wars, but it was never very popular. CDF faded away and was removed from the browsers of the time, replaced in the 2000s with RSS (a pull system).
Other uses of push-enabled web applications include software updates distribution ("push updates"), market data distribution (stock tickers), online chat/messaging systems (webchat), auctions, online betting and gaming, sport results, monitoring consoles, and sensor network monitoring.
Examples
Web push
The Web push proposal of the Internet Engineering Task Force is a simple protocol using HTTP version 2 to deliver real-time events, such as incoming calls or messages, which can be delivered (or "pushed") in a timely fashion. The protocol consolidates all real-time events into a single session which ensures more efficient use of network and radio resources. A single service consolidates all events, distributing those events to applications as they arrive. This requires just one session, avoiding duplicated overhead costs.
Web Notifications are part of the W3C standard and define an API for end-user notifications. A notification allows alerting the user of an event, such as the delivery of an email, outside the context of a web page. As part of this standard, the Push API is fully implemented in Chrome, Firefox, and Edge, and partially implemented in Safari.
HTTP server push
HTTP server push (also known as HTTP streaming) is a mechanism for sending unsolicited (asynchronous) data from a web server to a web browser. HTTP server push can be achieved through any of several mechanisms.
As a part of HTML5, the WebSocket API allows a web server and client to communicate over a full-duplex TCP connection.
Generally, the web server does not terminate a connection after response data has been served to a client. The web server leaves the connection open so that if an event occurs (for example, a change in internal data which needs to be reported to one or multiple clients), it can be sent out immediately; otherwise, the event would have to be queued until the client's next request is received. Most web servers offer this functionality via CGI (e.g., Non-Parsed Headers scripts on Apache HTTP Server). The underlying mechanism for this approach is chunked transfer encoding.
Another mechanism is related to a special MIME type called multipart/x-mixed-replace, which was introduced by Netscape in 1995. Web browsers interpret this as a document that changes whenever the server pushes a new version to the client. It is still supported by Firefox, Opera, and Safari today, but it is ignored by Internet Explorer and is only partially supported by Chrome. It can be applied to HTML documents, and also for streaming images in webcam applications.
The WHATWG Web Applications 1.0 proposal includes a mechanism to push content to the client. On September 1, 2006, the Opera web browser implemented this new experimental system in a feature called "Server-Sent Events". It is now part of the HTML5 standard.
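A minimal sketch of a Server-Sent Events endpoint using only the Python standard library (the port number and event payloads are arbitrary illustrations); a browser would consume this stream with `new EventSource("http://localhost:8000/")`:

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SSEHandler(BaseHTTPRequestHandler):
    """Streams a Server-Sent Events response: one 'data:' block per event."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        for i in range(5):  # in practice: loop until the client disconnects
            # Each event is a "data:" line terminated by a blank line.
            self.wfile.write(f"data: event number {i}\n\n".encode())
            self.wfile.flush()
            time.sleep(1)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SSEHandler).serve_forever()
```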
Pushlet
In this technique, the server takes advantage of persistent HTTP connections, leaving the response perpetually "open" (i.e., the server never terminates the response), effectively fooling the browser into remaining in "loading" mode after the initial page load could be considered complete. The server then periodically sends snippets of JavaScript to update the content of the page, thereby achieving push capability. By using this technique, the client doesn't need Java applets or other plug-ins in order to keep an open connection to the server; the client is automatically notified about new events, pushed by the server. One serious drawback to this method, however, is the lack of control the server has over the browser timing out; a page refresh is always necessary if a timeout occurs on the browser end.
Long polling
Long polling is itself not a true push; long polling is a variation of the traditional polling technique, but it allows emulating a push mechanism under circumstances where a real push is not possible, such as sites with security policies that require rejection of incoming HTTP requests.
With long polling, the client requests to get more information from the server exactly as in normal polling, but with the expectation that the server may not respond immediately. If the server has no new information for the client when the poll is received, then instead of sending an empty response, the server holds the request open and waits for response information to become available. Once it does have new information, the server immediately sends an HTTP response to the client, completing the open HTTP request. Upon receipt of the server response, the client often immediately issues another server request. In this way the usual response latency (the time between when the information first becomes available and the next client request) otherwise associated with polling clients is eliminated.
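A minimal sketch of the client side of long polling in Python (the endpoint URL is hypothetical): the server holds each request open until data is ready, and the client immediately reissues a request after every response or timeout.

```python
import urllib.request

POLL_URL = "http://example.com/updates"  # hypothetical endpoint, for illustration only

def handle_update(payload: str) -> None:
    print("received:", payload)

def long_poll() -> None:
    """Repeatedly issue requests that the server holds open until new data is ready."""
    while True:
        try:
            # The server delays its response until new information is available
            # (or until our 60-second timeout is reached).
            with urllib.request.urlopen(POLL_URL, timeout=60) as response:
                handle_update(response.read().decode())
        except OSError:
            # Timeout or transient network failure: immediately reissue the request.
            continue

# long_poll()  # uncomment to run; loops forever
```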
For example, BOSH is a popular, long-lived HTTP technique used as a long-polling alternative to a continuous TCP connection when such a connection is difficult or impossible to employ directly (e.g., in a web browser); it is also an underlying technology in XMPP, which Apple uses for its iCloud push support.
Flash XML Socket relays
This technique, used by chat applications, makes use of the XML Socket object in a single-pixel Adobe Flash movie. Under the control of JavaScript, the client establishes a TCP connection to a unidirectional relay on the server. The relay server does not read anything from this socket; instead, it immediately sends the client a unique identifier. Next, the client makes an HTTP request to the web server, including this identifier with it. The web application can then push messages addressed to the client to a local interface of the relay server, which relays them over the Flash socket. The advantage of this approach is that it appreciates the natural read-write asymmetry that is typical of many web applications, including chat, and as a consequence it offers high efficiency. Since it does not accept data on outgoing sockets, the relay server does not need to poll outgoing TCP connections at all, making it possible to hold open tens of thousands of concurrent connections. In this model, the limit to scale is the TCP stack of the underlying server operating system.
Reliable Group Data Delivery (RGDD)
In services such as cloud computing, data is usually pushed (replicated) to several machines to increase its reliability and availability. For example, the Hadoop Distributed File System (HDFS) makes two extra copies of any object stored. RGDD focuses on efficiently casting an object from one location to many while saving bandwidth by sending a minimal number of copies (only one in the best case) of the object over any link across the network. For example, Datacast is a scheme for delivery to many nodes inside data centers that relies on regular and structured topologies, and DCCast is a similar approach for delivery across data centers.
Push notification
A push notification is a message that is "pushed" from a back-end server or application to a user interface, e.g. mobile applications or desktop applications. Apple introduced push notifications for iPhone in 2009, and in 2010 Google released "Google Cloud to Device Messaging" (superseded by Google Cloud Messaging and then by Firebase Cloud Messaging).
In November 2015, Microsoft announced that the Windows Notification Service would be expanded to make use of the Universal Windows Platform architecture, allowing for push data to be sent to Windows 10, Windows 10 Mobile, Xbox, and other supported platforms using universal API calls and POST requests.
Push notifications are mainly divided into two approaches, local notifications and remote notifications. For local notifications, the application schedules the notification with the local device's OS. The application sets a timer in the application itself, provided it is able to continuously run in the background. When the event's scheduled time is reached, or the event's programmed condition is met, the message is displayed in the application's user interface.
Remote notifications are handled by a remote server. Under this scenario, the client application needs to be registered on the server with a unique key (e.g., a UUID). The server then fires the message against the unique key to deliver it to the client via an agreed client/server protocol such as HTTP or XMPP, and the client displays the message received. When the push notification arrives, it can transmit short notifications and messages, set badges on application icons, blink or continuously light up the notification LED, or play alert sounds to attract user's attention. Push notifications are usually used by applications to bring information to users' attention. The content of the messages can be classified in the following example categories:
Chat messages from a messaging application such as Facebook Messenger sent by other users.
Vendor special offers: A vendor may want to advertise their offers to customers.
Event reminders: Some applications may allow the customer to create a reminder or alert for a specific time.
Subscribed topic changes: Users may want to get updates regarding the weather in their location, or monitor a web page to track changes, for instance.
Real-time push notifications may raise privacy issues since they can be used to bind virtual identities of social network pseudonyms to the real identities of the smartphone owners. The use of unnecessary push notifications for promotional purposes has been criticized as an example of attention theft.
See also
BlazeDS
BOSH (protocol)
Channel Definition Format
Client–server model
Comet (programming)
File transfer
GraniteDS
HTTP/2
Lightstreamer
Notification LED
Pull technology
Push Access Protocol
Push email
SQL Server Notification Services
Streaming media
WebSocket
WebSub
References
External links
W3C Push Workshop. A 1997 workshop that discussed push technology and some early examples thereof
HTTP Streaming with Ajax A description of HTTP Streaming from the Ajax Patterns website
The Web Socket API candidate recommendation
HTML5 Server-Sent Events draft specification
Ajax (programming)
Internet terminology
Mobile technology
Web development | Push technology | [
"Technology",
"Engineering"
] | 2,470 | [
"Computing terminology",
"Internet terminology",
"Web development",
"Software engineering",
"nan"
] |
1,983,053 | https://en.wikipedia.org/wiki/Difference%20gel%20electrophoresis | Difference gel electrophoresis (DIGE) is a form of gel electrophoresis where up to three different protein samples can be labeled with size-matched, charge-matched spectrally resolvable fluorescent dyes (for example Cy3, Cy5, Cy2) prior to two dimensional gel electrophoresis.
Procedure
The three samples are mixed and loaded onto an IEF (isoelectric focusing) strip for the first dimension, and the strip is then transferred to an SDS-PAGE gel. After the gel electrophoresis, the gel is scanned at the excitation wavelength of each dye one after the other, so each sample can be seen separately (scanning the gel at the excitation wavelength of the Cy3 dye, for example, shows only the sample that was labeled with that dye). This technique is used to see changes in protein abundance (for example, between a sample from a healthy person and a sample from a person with disease), post-translational modifications, truncations and any modification that might change the size or isoelectric point of proteins. The binary shifts might be left to right (change in isoelectric point), vertical (change in size) or diagonal (change in both size and isoelectric point). Reciprocal labeling is done to make sure the changes seen are not due to dye-dependent interactions.
Advantages
It overcomes limitations in traditional 2D electrophoresis that are due to inter-gel variation. This can be considerable even with identical samples. Since the proteins from the different sample types (e.g. healthy/diseased, virulent/non-virulent) are run on the same gel they can be directly compared. To do this with traditional 2D electrophoresis requires large numbers of time-consuming repeats.
Standards
In experiments comprising several gels, a common technique is to include an internal standard in each gel. The internal standard is prepared by mixing together several or all of the samples in the experiment. This allows the measurement of the abundance of a protein in each sample relative to the internal standard. Since the amount of each protein in the internal standard is known to be the same in every gel, this method reduces inter-gel variation.
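The ratio calculation itself is simple. A sketch of normalizing each sample's spot volume to the pooled internal standard on the same gel (all numbers are made up for illustration):

```python
# Spot volumes for one protein spot on a single gel, scanned at each dye's wavelength.
spot_volumes = {"healthy": 1.8e6, "disease": 3.1e6, "internal_standard": 2.0e6}

# Standardised abundance: each sample relative to the internal standard run on the
# same gel, which cancels gel-to-gel variation when ratios are compared across gels.
standard = spot_volumes["internal_standard"]
ratios = {name: volume / standard
          for name, volume in spot_volumes.items() if name != "internal_standard"}
print(ratios)  # {'healthy': 0.9, 'disease': 1.55}
```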
See also
PROTOMAP
References
Electrophoresis | Difference gel electrophoresis | [
"Chemistry",
"Biology"
] | 470 | [
"Instrumental analysis",
"Molecular biology techniques",
"Electrophoresis",
"Biochemical separation processes"
] |
30,160,065 | https://en.wikipedia.org/wiki/Flory%E2%80%93Fox%20equation | In polymer chemistry and polymer physics, the Flory–Fox equation is a simple empirical formula that relates molecular weight to the glass transition temperature of a polymer system. The equation was first proposed in 1950 by Paul J. Flory and Thomas G. Fox while at Cornell University. Their work on the subject overturned the previously held theory that the glass transition temperature was the temperature at which viscosity reached a maximum. Instead, they demonstrated that the glass transition temperature is the temperature at which the free space available for molecular motions achieved a minimum value. While its accuracy is usually limited to samples of narrow range molecular weight distributions, it serves as a good starting point for more complex structure-property relationships.
Recent molecular simulations have demonstrated that, while the functional form of the Flory–Fox relation holds for a wide range of molecular architectures (linear-chain, bottlebrush, star, and ring polymers), the central free-volume argument of the relation does not hold: branched polymers, despite having more free ends, form materials of higher density, and their glass transition temperature increases.
Overview
The Flory–Fox equation relates the number-average molecular weight, Mn, to the glass transition temperature, Tg, as shown below:
Tg = Tg,∞ - K/Mn
where Tg,∞ is the maximum glass transition temperature that can be achieved at a theoretical infinite molecular weight and K is an empirical parameter that is related to the free volume present in the polymer sample. It is this concept of "free volume" that is captured by the Flory–Fox equation.
Free volume can be most easily understood as a polymer chain's “elbow room” in relation to the other polymer chains surrounding it. The more elbow room a chain has, the easier it is for the chain to move and achieve different physical conformations. Free volume decreases upon cooling from the rubbery state until the glass transition temperature at which point it reaches some critical minimum value and molecular rearrangement is effectively “frozen” out, so the polymer chains lack sufficient free volume to achieve different physical conformations. This ability to achieve different physical conformations is called segmental mobility.
Free volume not only depends on temperature, but also on the number of polymer chain ends present in the system. End chain units exhibit greater free volume than units within the chain because the covalent bonds that make up the polymer are shorter than the intermolecular nearest neighbor distances found at the end of the chain. In other words, chain end units are less dense than the covalently bonded interchain units. This means that a polymer sample with low polydispersity and long chain lengths (high molecular weights) will have fewer chain ends per total units and less free volume than an equivalent polymer sample consisting of short chains. In short, chain ends can be viewed as an “impurity” when considering the packing of chains, and more impurity results in a lower Tg. Recent computer simulation study showed that the classical picture of mobility around polymer chain can differ in the presence of plasticizer, especially if molecules of plasticizer can create hydrogen bonds with specific sites of the polymer chain, such as hydrophilic or hydrophobic groups. In such a case, polymer chain ends exhibit only a mere increase of the associated free volume as compared to the average associated free volume around main chain monomers. In special cases, the free volume around hydrophilic main chain sites can exceed the free volume associated to the hydrophilic polymer ends.
Thus, glass transition temperature is dependent on free volume, which in turn is dependent on the average molecular weight of the polymer sample. This relationship is described by the Flory–Fox equation. Low molecular weight values result in lower glass transition temperatures whereas increasing values of molecular weight result in an asymptotic approach of the glass transition temperature to Tg,∞ .
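As a numerical illustration of the equation above (the parameter values are only roughly of the magnitude reported for polystyrene and should not be taken as authoritative fit parameters):

```python
def flory_fox_tg(m_n: float, tg_inf: float = 373.0, k: float = 1.0e5) -> float:
    """Glass transition temperature (K) from Tg = Tg_inf - K / Mn.

    The defaults are illustrative values of roughly the magnitude reported for
    polystyrene (Tg_inf near 373 K, K of order 1e5 K*g/mol).
    """
    return tg_inf - k / m_n

for m_n in (5_000, 20_000, 100_000, 1_000_000):
    print(f"Mn = {m_n:>9,} g/mol  ->  Tg = {flory_fox_tg(m_n):6.1f} K")
```

The output shows the asymptotic approach of Tg to Tg,∞ as the chains get longer: low molecular weights depress Tg noticeably, while above roughly 10^5 g/mol the change becomes small.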
Molecular-level derivation
The main shortcoming related to the free volume concept is that it is not so well defined at the molecular level. A more precise, molecular-level derivation of the Flory–Fox equation has been developed by Alessio Zaccone and Eugene Terentjev. The derivation is based on a molecular-level model of the temperature-dependent shear modulus G of glassy polymers. The shear modulus of glasses has two main contributions, one which is related to affine displacements of the monomers in response to the macroscopic strain, which is proportional to the local bonding environment and also to the non-covalent van der Waals-type interactions, and a negative contribution that corresponds to random (nonaffine) monomer-level displacements due to the local disorder. Due to thermal expansion, the first (affine) term decreases abruptly near the glass transition temperature Tg because of the weakening of the non-covalent interactions, while the negative nonaffine term is less affected by temperature. Experimentally, it is indeed observed that G drops sharply by many orders of magnitude at or near Tg (it does not really drop to zero but to the much lower value of the rubber elasticity plateau). By setting G = 0 at the point where G drops abruptly and solving for Tg, one obtains the following relation:
In this equation, is the maximum volume fraction, or packing fraction, occupied by the monomers at the glass transition if there were no covalent bonds, i.e. in the limit of average number of covalent bonds per monomer .
If the monomers can be approximated as soft spheres, then as in the jamming of soft frictionless spheres.
In the presence of covalent bonds between monomers, as is the case in the polymer, the packing fraction is lowered, hence , where is a parameter that expresses the effect of topological constraints due to covalent bonds on the total packing fraction occupied by the monomers in a given polymer.
Finally, the packing fraction occupied by the monomers in the absence of covalent bonds is related to via thermal expansion, according to , which comes from integrating the thermodynamic relation between thermal expansion coefficient and volume V, , where is the coefficient of thermal expansion of the polymer in the glassy state. Note the relation between packing fraction and total volume given by , where is the total number of monomers, with molecular volume , contained in the total volume of the material, which has been used above. Hence is the integration constant in , and it was found that for the case of polystyrene. Also, is the molecular weight of one monomer in the polymer chain.
Hence the above equation clearly recovers the Flory–Fox equation with its dependence on the number average molecular weight , and provides a molecular-level meaning to the empirical parameters present in the Fox-Flory equation. Furthermore, it predicts that , i.e. that the glass transition temperature is inversely proportional to the thermal expansion coefficient in the glass state.
Alternative equations
While the Flory–Fox equation describes many polymers very well, it is more reliable for large values of Mn and samples of narrow weight distribution. As a result, other equations have been proposed to provide better accuracy for certain polymers. For example:
This minor modification of the Flory–Fox equation, proposed by Ogawa, replaces the inverse dependence on Mn with the square of the product of the number-average molecular weight, Mn , and weight-average molecular weight, Mw . Additionally, the equation:
was proposed by Fox and Loshaek, and has been applied to polystyrene, polymethylmethacrylate, and polyisobutylene, among others.
However, despite the dependence of Tg on molecular weight that the Flory–Fox and related equations describe, molecular weight is not necessarily a practical design parameter for controlling Tg, because the range over which it can be changed without also altering other physical properties of the polymer is small.
The Fox equation
The Flory–Fox equation serves the purpose of providing a model for how glass transition temperature changes over a given molecular weight range. Another method to modify the glass transition temperature is to add a small amount of low molecular weight diluent, commonly known as a plasticizer, to the polymer. The presence of a low molecular weight additive increases the free volume of the system and subsequently lowers Tg, thus allowing for rubbery properties at lower temperatures. This effect is described by the Fox equation:
1/Tg = w1/Tg,1 + w2/Tg,2
where w1 and w2 are weight fractions of components 1 and 2, respectively. In general, the accuracy of the Fox equation is very good and it is commonly also applied to predict the glass transition temperature in (miscible) polymer blends and statistical copolymers.
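A short numerical sketch of the Fox equation (the component Tg values are illustrative, not taken from any specific polymer–plasticizer pair):

```python
def fox_blend_tg(w1: float, tg1: float, tg2: float) -> float:
    """Blend Tg (K) from the Fox equation 1/Tg = w1/Tg1 + w2/Tg2, with w2 = 1 - w1."""
    w2 = 1.0 - w1
    return 1.0 / (w1 / tg1 + w2 / tg2)

# Illustrative only: a polymer with Tg = 373 K mixed with a diluent whose Tg = 200 K.
for w_diluent in (0.0, 0.05, 0.10, 0.20):
    tg = fox_blend_tg(1.0 - w_diluent, 373.0, 200.0)
    print(f"diluent weight fraction {w_diluent:.2f} -> blend Tg = {tg:5.1f} K")
```

Even a few percent of a low-Tg diluent lowers the blend Tg by tens of kelvin, which is the plasticizing effect the text describes.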
References
Polymer chemistry
Polymer physics | Flory–Fox equation | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,759 | [
"Polymer physics",
"Materials science",
"Polymer chemistry"
] |
30,162,162 | https://en.wikipedia.org/wiki/British%20Institute%20of%20Non-Destructive%20Testing | The British Institute of Non-Destructive Testing or BINDT is a professional body for engineers and other technical professionals involved in non-destructive testing and condition monitoring in the United Kingdom. The institute was founded in 1976, by amalgamation of the Society of Non-Destructive Examination (SONDE) and the NDT Society of Great Britain (NDTS), which were both founded in 1954.
BINDT is a licensed member institution of the Engineering Council and a full member of the European Federation of NDT (EFNDT) and the International Committee for NDT (ICNDT). Their headquarters is located in Northampton, UK.
Certification
BINDT maintains PCN (Personnel Certification in Non-Destructive Testing). PCN is a personnel certification scheme in NDT methods, welding inspection and condition monitoring, which is accredited by UKAS according to ISO 17024. PCN certification schemes for the major NDT methods conform to BS EN ISO 9712 (2012). PCN condition monitoring certification schemes in Vibration Analysis, Acoustic Emission, Infrared Thermography and Lubrication Management and Analysis conform to the appropriate parts of ISO 18436.
Journal
BINDT publishes "Insight - Non-Destructive Testing and Condition Monitoring", which includes original research and development papers, technical and scientific reviews and case studies.
BINDT also publishes "NDT News". This is a trade magazine for NDT, inspection, condition monitoring and quality practitioners. It is sent free of charge to its members and to PCN certified personnel.
References
External links
BINDT website
Aerospace Workshop
NDT Conference
ECUK Licensed Members
Nondestructive testing
Northampton
Organisations based in Northamptonshire
Professional associations based in the United Kingdom
1976 establishments in the United Kingdom
Scientific organizations established in 1976 | British Institute of Non-Destructive Testing | [
"Materials_science"
] | 346 | [
"Nondestructive testing",
"Materials testing"
] |
27,432,870 | https://en.wikipedia.org/wiki/Isobaric%20labeling | Isobaric labeling is a mass spectrometry strategy used in quantitative proteomics. Peptides or proteins are labeled with chemical groups that have nominally identical mass (isobaric), but vary in terms of distribution of heavy isotopes in their structure. These tags, commonly referred to as tandem mass tags, are designed so that the mass tag is cleaved at a specific linker region upon high-energy collision-induced dissociation (HCD) during tandem mass spectrometry yielding reporter ions of different masses.
The most common isobaric tags are amine-reactive tags. However, tags that react with cysteine residues and carbonyl groups have also been described. These amine-reactive groups go through N-hydroxysuccinimide (NHS) reactions, which are based around three types of functional groups. Isobaric labeling methods include tandem mass tags (TMT), isobaric tags for relative and absolute quantification (iTRAQ), mass differential tags for absolute and relative quantification, and dimethyl labeling. TMT and iTRAQ are the most common and most developed of these methods. Tandem mass tags have a mass reporter region, a cleavable linker region, a mass normalization region and a protein-reactive group, and all variants have the same total mass.
Workflow
A typical bottom-up proteomics workflow is described by Yates (2014). Protein samples are enzymatically digested by a protease to produce peptides. Each digested experimental sample is then labeled with a different isotopic variant of the tag from the set. The samples are mixed in typically equal ratios and analyzed simultaneously in one MS run. Since the tags are isobaric and have identical chemical properties, the isotopic variants of the tags appear as a single composite peak at the same m/z value in an MS1 scan, with identical liquid chromatography (LC) retention times. During the MS2 analysis and upon fragmentation, each isotopic variant of the tag produces sequence-specific product ions as well as reporter ions. The product ions are used to determine the peptide sequence, and the abundances of the reporter ions reflect the relative ratio of the peptide in the combined samples. Because MS/MS is required to detect the tags, unlabeled peptides are not quantified.
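The relative quantification step in such a workflow amounts to comparing reporter-ion intensities within each MS2 spectrum. A minimal sketch (channel names and intensities are made up for illustration):

```python
# Reporter-ion intensities extracted from one MS2 spectrum of a labelled peptide.
reporter_intensities = {"126": 4.2e5, "127": 8.1e5, "128": 4.0e5}

total = sum(reporter_intensities.values())
relative = {channel: intensity / total
            for channel, intensity in reporter_intensities.items()}

# These fractions reflect the relative abundance of the peptide in each multiplexed sample.
for channel, fraction in relative.items():
    print(f"channel {channel}: {fraction:.2%} of the summed reporter signal")
```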
Advantages
As explained by Lee, Choe and Aggarwal (2017), a key benefit of isobaric labeling over other quantification techniques (e.g. label-free) is its multiplexing capability and thus increased throughput potential. The ability to combine and analyze several samples simultaneously in one LC-MS run eliminates the need to analyze multiple data sets and eliminates run-to-run variation. Multiplexing reduces sample processing variability, improves specificity by quantifying the peptides from each condition simultaneously, and reduces turnaround time for multiple samples. Without multiplexing, information can be missed from run to run, affecting identification and quantification, as peptides selected for fragmentation on one LC-MS/MS run may not be present or of suitable quantity in subsequent sample runs. The currently available isobaric chemical tags facilitate the simultaneous analysis of 2 to 11 experimental samples.
Applications
Isobaric labeling has been successfully used for many biological applications including protein identification and quantification, protein expression profiling of normal vs abnormal states, quantitative analysis of proteins for which no antibodies are available and identification and quantification of post-translationally modified proteins.
Availability
There are two types of isobaric tags commercially available: tandem mass tags (TMT) and isobaric tags for relative and absolute quantitation (iTRAQ). Amine-reactive TMT are available in duplex, 6-plex, 10-plex, and now 11-plex sets. Amine-reactive iTRAQ are available in 4-plex and 8-plex forms.
References
Mass spectrometry | Isobaric labeling | [
"Physics",
"Chemistry"
] | 816 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
878,311 | https://en.wikipedia.org/wiki/Lenticular%20printing | Lenticular printing is a technology in which lenticular lenses (a technology also used for 3D displays) are used to produce printed images with an illusion of depth, or the ability to change or move as they are viewed from different angles.
Examples include flip and animation effects such as winking eyes, and modern advertising graphics whose messages change depending on the viewing angle. It can be used to create frames of animation, for a motion effect; offsetting the various layers at different increments, for a 3D effect; or simply to show sets of alternative images that appear to transform into each other.
Colloquial terms for lenticular prints include "flickers", "winkies", "wiggle pictures", and "tilt cards". The trademarks Vari-Vue and Magic Motion are often used for lenticular pictures, without regard to the actual manufacturer.
Process
Lenticular printing is a multi-step process that consists of creating a lenticular image from at least two images, and placing it behind a lenticular lens.
Creation and interlacing of images
Once the images are collected, they are arranged in individual frame files, then digitally combined into a single file in a process called interlacing. Interlacing can be done manually using a raster graphics editor or using dedicated interlacing software.
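A minimal sketch of column interlacing for two frames, using NumPy arrays in place of real image files (a production workflow must also match the interlace pitch to the physical lens pitch and handle more than two frames):

```python
import numpy as np

def interlace_two_frames(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Interleave two equally sized images column by column.

    Even-numbered pixel columns are taken from frame_a and odd-numbered columns
    from frame_b, so that each lenticule ends up covering one strip of each frame.
    """
    if frame_a.shape != frame_b.shape:
        raise ValueError("frames must have the same shape")
    out = frame_a.copy()
    out[:, 1::2] = frame_b[:, 1::2]
    return out

# Illustrative 4x4 single-channel "images": an all-black and an all-white frame.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 255, dtype=np.uint8)
print(interlace_two_frames(a, b))  # alternating black/white columns
```

With N source frames the same idea applies, except that each lenticule covers N adjacent strips, one from each frame.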
Printing and assembly
The interlaced image may be printed directly on the back (smooth side) of the lens, or on a substrate (ideally a synthetic paper) that is laminated to the lens. When printing on the backside of the lens, the critical registration of the fine "slices" of interlaced images must be absolutely correct during the lithographic or screen printing process to avoid "ghosting" and poor image definition.
Variations and effects
The combined lenticular print shows two or more images by changing the angle from which the print is viewed. If a sequence of images is used, it can even show a short animation.
Though normally produced in sheet form by interlacing simple images or colors throughout the artwork, lenticular images can also be created in roll form with 3D effects or multi-color changes. Alternatively, several images of the same object, taken from slightly different angles, can be used to create a lenticular print with a stereoscopic 3D effect. 3D effects can be achieved only in a lateral (side-by-side) orientation, as each of the viewer's eyes must see them from a slightly different angle to achieve the stereoscopic effect. Other effects, like morphs, motion, and zooms work better (with less ghosting or latent effects) in top-to-bottom orientation, but can be achieved in both orientations.
Materials and manufacturing processes
There are many commercial processes in the manufacture of lenticular images, which can be made from PVC, APET, acrylic, and PETG, as well as other materials. While PETG and APET are the most common, other materials are becoming popular to accommodate outdoor use and special forming due to the increasing use of lenticular images on items such as gift cards. Lithographic lenticular printing allows for the flat side of the lenticular sheet to have ink placed directly onto the lens, while high-resolution photographic lenticulars typically have the image laminated to the lens.
Lenticular images saw a surge in popularity in the first decade of the 21st century, appearing on the cover of the May 2006 issue of Rolling Stone, trading cards, sports posters, and signs in stores that help to attract buyers.
Construction
Each image is arranged (slicing) into strips, which are then interlaced with one or more similarly arranged images (splicing). These are printed on the back of a piece of plastic, with a series of thin lenses molded into the opposite side. Alternatively, the images can be printed on paper, which is then bonded to the plastic. With the new technology, lenses are printed in the same printing operation as the interlaced image, either on both sides of a flat sheet of transparent material, or on the same side of a sheet of paper, the image being covered with a transparent sheet of plastic or with a transparent coating, which in turn is printed with several layers of varnish to create the lenses.
The lenses are accurately aligned with the interlaces of the image, so that light reflected off each strip is refracted in a slightly different direction, but the light from all pixels originating from the same original image is sent in the same direction. The result is that a single eye looking at the print sees a single whole image, but two eyes will see different images, which leads to stereoscopic 3D perception.
Types of lenticular prints
There are three distinct types of lenticular prints, distinguished by how great a change in angle of view is required to change the image:
Transforming prints: Here two or more different pictures are used, and the lenses are designed to require a relatively large change in angle of view to switch from one image to another. This allows viewers to easily see the original images, since small movements cause no change. Larger movement of the viewer or the print causes the image to flip from one image to another (the "flip effect"). An example of this is the lenticular print of hockey player Mario Tremblay at Centre Mario-Tremblay in Alma, Quebec, where he is transformed from a minor hockey playing boy as an Alma Eagle into the professional hockey playing man, four years later, as a Montreal Canadien.
Animated prints: Here the distance between different angles of view is "medium", so that while both eyes usually see the same picture, moving a little bit switches to the next picture in the series. Two or more sequential images are used, with only small differences between each image and the next. This can be used to create an image that moves ("motion effect"), or can create a "zoom" or "morph" effect, in which part of the image expands in size or changes shape as the angle of view changes. The movie poster of the film Species II is an example of this technique.
Stereoscopic effects: Here the change in viewing angle needed to change images is small, so that each eye sees a slightly different view. This creates a 3D effect without requiring special glasses, using two or more images. For example, the Dolby-Philips Lenticular 3D display produces 28 different images.
Motorized lenticular
With static (non-motorized) lenticular, the viewer either moves the piece or moves past the piece in order to see the graphic effects. With motorized lenticular, a motor moves the graphics behind the lens, enabling the graphic effects while both the viewer and the display remain stationary.
History
Predecessors
Tabula scalata
Corrugated images that change when viewed from different angles predate the development of lenticular printing. A few examples from the paleolithic era exist in French caves. Tabula scalata or "turning pictures" were popular in England since the 16th century. Extant double paintings, with two distinct images on a corrugated panel, are known from the 17th century.
H.C.J. Deeks used a similar technique with minute vertical corrugations pressed into photographic paper and then exposed to two different images from two different angles. Under a 1906 patent H.C.J. Deeks & Co marketed a Puzzle Post Card or Photochange Post Card. In 1907 a Colorchange Post Card followed, featuring identical pictures on each side of the corrugations that were sprayed with different "liquid pigment or coloring matter" on (parts of) each side.
Barrier grid autostereograms and animation
The oldest known publication about using a line sheet as a parallax barrier to produce an autostereogram is found in an article by Auguste Berthier in the French scientific magazine "Le Cosmos" of May 1896. Berthier's idea was hardly noticed, but American inventor Frederic Eugene Ives had more success with his similar parallax stereogram since 1901. He also patented the technique for a "Changeable sign, picture, &c." in 1903, which showed different pictures from different angles (instead of one stereoscopic image from the right angle and distance). Léon Gaumont introduced Ives' pictures in France and encouraged Eugène Estanave to work on the technique. Estanave patented a barrier grid technique for animated autostereograms. Animated portrait photographs with line sheets were marketed for a while, mostly in the 1910s and 1920s. In the US "Magic Moving Picture" postcards with simple 3 phase animation or changing pictures were marketed after 1906. Maurice Bonnett improved barrier grid autostereography in the 1930s with his relièphographie technique and scanning cameras.
On 11 April 1898 John Jacobson filed an application for US patent No. 624,043 (granted 2 May 1899) for a Stereograph of an interlaced stereoscopic picture and "a transparent mount for said picture having a corrugated or channeled surface". The corrugated lines or channels were not yet really lenticular, but this is the first known autostereogram that used a corrugated transparent surface rather than the opaque lines of most barrier grid stereograms.
Gabriel Lippmann's integral photography
French Nobel Prize winning physicist Gabriel Lippmann represented Eugène Estanave at several presentations of Estanave's works at the French Academy of Sciences. On 2 March 1908 Lippmann presented his own ideas for "photographie intégrale", based on insect eyes. He suggested to use a screen of tiny lenses. Spherical segments should be pressed into a sort of film with photographic emulsion on the other side. The screen would be placed inside a lightproof holder and on a tripod for stability. When exposed each tiny lens would function as a camera and record the surroundings from a slightly different angle than neighboring lenses. When developed and lit from behind the lenses should project the life-size image of the recorded subject in space. He could not yet present concrete results in March 1908, but by the end of 1908 he claimed to have exposed some Integral photography plates and to have seen the "resulting single, full-sized image". However, the technique remained experimental since no material or technique seemed to deliver the optical quality desired. At the time of his death in 1921 Lippmann reportedly had a system with only twelve lenses.
Early lenticular methods
As noted above, John Jacobson's application of 11 April 1898 (US patent No. 624,043, granted 2 May 1899) for a stereograph already combined an interlaced stereoscopic picture with "a transparent mount for said picture having a corrugated or channeled surface".
In 1912, Louis Chéron described in his French patent 443,216 a screen with long vertical lenses that would be sufficient for recording "stereoscopic depth and the shifting of the relations of objects to each other as the viewer moved", while he suggested pinholes for integral photography.
In June 1912, Swiss Nobel Prize winning physiologist Walter Rudolf Hess applied for a US patent for a Stereoscopic picture with a "celluloid covering having a surface composed of cylindrical lens elements". US patent 1,128,979 (published 16 February 1915) was one of several patents in different countries he would register for this technique. The company Stereo-Photographie A.G., registered in Zürich in 1914 and 1915, would produce pictures on transparencies through Hess' process. Few examples of these pictures are still known to have survived. They are circa 3 1/6 × 4 inches black and white pictures (with discolored or intentional hues) and labeled on their passe-partouts "Stereo-Photo nach W.R. Hess - Stereo-Photographie A.G. Zürich. Patente: "Schweiz / Deutschland / Frankreich / Italien / England / Oesterreich / Vereinigte Staaten angemeldet". The Société française de photographie has three lenticular "Stereo-photo" plates in their collection, three more were on auction in 2017.
Herbert E. Ives, son of Frederic Eugene Ives, was one of several researchers who worked on lenticular sheets in the 1920s. These were basically simpler versions of Lippmann's integral photography and had a linear array of small plano-convex cylindrical lenses (lenticules).
The first successful commercial application of the lenticular technique was not used for 3D or motion display but for color movies. Eastman Kodak's 1928 Kodacolor film was based on Keller-Dorian cinematography. It used 16 mm black and white sensitive film embossed with 600 lenses per square inch for use with a filter with RGB stripes. In the 1930s several US patents relating to lenticular techniques were granted, mostly for color film.
On 15 December 1936, Douglas F. Winnek Coffey was granted US patent 2,063,985 (application 24 May 1935) for an "Apparatus for making a composite stereograph". The description does not include changing pictures or animation concepts.
Further history
During World War II, research for military purposes was done into 3D imaging, including lenticular technologies. Mass production of plastics and the technique of injection moulding came about around the same period and enabled commercially viable production of lenticular sheets for novelty toys and advertisements.
Victor Anderson and Vari-Vue
Victor G. Anderson worked for the Sperry Corporation during World War II where 3D imaging was used for military instructional products, for instance on how to use a bomb sight. After the war Anderson started his company Pictorial Productions Inc. A patent application for a Process in the assembling of changeable picture display devices was filed on 1 March 1952 and granted on 3 December 1957 (US patent 2,815,310). Anderson stated in 1996 that the company's first product was the I Like Ike button. The presidential campaign button's image changed from the slogan "I Like Ike" (in black letters on white) into a black and white picture of Ike Eisenhower when viewed from different angles. It was copyrighted on 14 May 1952. In December 1953 the company registered their trademark Vari-Vue. Vari-Vue further popularized lenticular images during the 1950s and 1960s. By the late sixties, the company marketed about two thousand stock products including moving pattern and color sheets, large images (many religious), billboards, and novelty toys. The company went bankrupt in 1986.
Xograph
Look magazine of 25 February 1964 introduced the publisher's "parallax panoramagram" technology with 8 million copies of a 10x12 cm black and white card with a photographic 3D image of an Edison bust surrounded by some inventions. A 10 x 12 cm full color picture of a model promoting Kodel followed on 7 April. The technique was soon trademarked as "xograph" by Cowles' daughter company Visual Panographics Inc. Magazines like Look and Venture published xographs until the mid-1970s. Some baseball cards were produced as xographs. Images produced by the company ranged from just a few millimeters (0.1 inch) to .
Other early companies
In the 1960s, more companies manufactured lenticular products, including Hallmark Cards (registering the Magic Motion trademark in 1964), Reflexa (Nürnberg, Germany), Toppan (Tokyo, Japan) and Dai-Nippon (Japan).
OptiGraphics Corporation of Grand Prairie, Texas, was formed in 1970 and operated under the guidance of Victor Anderson, who continued working well into his 80s. The company trademarked Magic Motion in 1976. OptiGraphics produced the lenticular prizes for Cracker Jack in the 1980s, 7-Eleven Slurpee lenticular sports coins from 1983 to 1987, and in 1986 it produced the first set of 3D traditional baseball cards marketed as Sportflics, which ultimately led to the creation of Pinnacle Brands. In 1999 Performance Companies bought OptiGraphics after Pinnacle Trading Card Company went bankrupt in 1998.
While lenticular images were popular in the 1960s and 1970s, by the 1980s OptiGraphics was the only significant manufacturer remaining in the US.
21st century
The techniques for lenticular printing were further improved in the 21st century. Lenticular full motion video effects or "motion print" enabled viewing of up to 60 video frames within a print.
Common and notable products
Political campaign and pop star "flasher" badges
After their first presidential campaign badge I like Ike in 1952, Pictorial Productions Inc. made many more similar political campaign buttons, including presidential campaign badges like Don't blame me! I voted democratic (1956), John F. Kennedy: The Man for the 60s (1960), I Like Ben (1963) and I'm for Nixon (1968?).
Official "flasher" badges for pop stars like Elvis Presley were manufactured by Vari-Vue at least since 1956, including badges for Beatles, Rolling Stones' and other bands in the 1960s.
Cheerios and Cracker Jack prizes
Pictorial Productions/Vari-Vue produced small animated picture cards for Cheerios in the 1950s, of which founder Victor Anderson claimed to have produced 40 million. He also stated that the cards were originally stuck to the outside of the packaging and were put inside the boxes only after too many cards were stolen before the boxes reached the store shelves.
Many different lenticular "tilt cards" were produced as prizes in Cracker Jack boxes. These were first produced by Vari-Vue (1950s-1970s), later by Toppan Printing, Ltd. (1980s), and Optigraphics Corporation (1980s-1990s).
Novelty toys
In 1958 Victor Anderson patented an Ocular Toy: an eye glass mount with lenticular winking eyes.
Lenticular images were used in many small and cheap plastic toys, often as gumball machine prizes. These include: miniature toy televisions with an animated lenticular screen, charms in the shape of animals with lenticular faces, "flicker rings", etc.
In 1960 Takara's Dakkochana, a little plastic golliwog toy with lenticular eyes originally intended for toddlers, became popular with Japanese teenagers as a fashion accessory worn around the arm.
Postcards
Around 1966 several companies started producing lenticular postcards. Common themes are winking girls, religious scenes, animals, dioramas with dolls, touristic sites and pin-up models wearing clothes when viewed from one angle and nude when viewed from another angle.
Covers for books, music albums and movies
The lenticular picture on the album cover for the Rolling Stones' 1967 LP Their Satanic Majesties Request was manufactured by Vari-Vue, as well as the postcards and other promotional items that accompanied the release. Other lenticular LP covers include Johnny Cash's The Holy Land (1969) and The Stranglers' The Raven (1979).
In the 2010s lenticular covers for LPs became a bit more common, especially for deluxe re-releases.
In 1973, the band Saturnalia had lenticular labels on their Magical Love picture disc LP.
From around the mid-1990s some lenticular CD covers were produced (mostly for limited editions), including Pet Shop Boys' Alternative (1995) with an image of Chris changing into Neil, the Supersuckers' The Sacrilicious Sounds of the Supersuckers (1995), Download's Furnace album (1995) and Microscopic EP (1996), Tool's Ænima (1996), The Wildhearts' Fishing for Luckies (1996), Kylie Minogue's Impossible Princess (1997), the Velvet Underground's Loaded 2CD version (1997), Kraftwerk's "Expo 2000" (1999) and David Bowie's Hours (1999). Ministry's The Last Sucker album (2007) had an image of George W. Bush changing into a monstrous, alien-like face. In 1996, alternative rock band Garbage produced a lenticular covered 7" vinyl for their "Milk" single release.
In the 2010s, lenticular covers for movies on DVD and Blu-ray became quite common.
Lenticular covers have also been used as a collectible cover variant for comic books since the 1990s; Marvel, DC, and other publishers have created such covers with animated or 3D effects.
Lentograph
In August 1967 the trademark Lentograph was filed by Victor Anderson 3D Studios, Inc. (registered in October 1968). Lentographs were marketed as relatively large lenticular plates (16 x 12 inches / 12 × 8 inches), often found in an illuminated brass frame. Commonly found are 3D pictures of Paul Cunningham's biblical displays with sculpted figurines in dramatic poses based on paintings (Plate 501–508), a family of teddy bears in a domestic scene, Plate No. 106 Evening Flowers, Plate No. 115 Goldilocks and 3 bears, Plate No. 124 Bijou (a white poodle), Plate No. 121 Midday Respite (a taxidermied young deer in a forest setting), Plate No. 213 Red Riding Hood. Also known are a harbor scene (Plate No. 114), Plate No. 118 Japanese Floral, Plate No. 123 Faustus (a yorky dog) and Plate No. 212 of a covered bridge.
Lenticular postage stamps
In 1967 Bhutan introduced lenticular 3D postage stamps as one of the many unusual stamp designs of the Bhutan Stamp Agency initiated by American businessman Burt Kerr Todd. Countries like Ajman, Yemen, Manama, Umm Al Qiwain and North Korea released lenticular stamps in the 1970s. Animated lenticular stamps have been issued since the early 1980s by countries like North Korea.
In 2004 full motion lenticular postage stamps were issued in New Zealand. Over the years many other countries have produced stamps with similar lenticular full motion effects, mostly depicting sport events. In 2010 Communications agency KesselsKramer produced the "Smallest Shortest Film" on a Dutch stamp, directed by Anton Corbijn and featuring actress Carice van Houten.
In 2012, Design Consultancy GBH.London created the UK's first 'Motion Stamps' for Royal Mail's Special Stamp Issue, The Genius of Gerry Anderson. The minisheet featured four fully lenticular stamps based on Gerry and Sylvia Anderson's Thunderbirds TV series. The Stamps and their background border used 48 frame 'MotionPrint’ technology and were produced by Outer Aspect from New Zealand.
In August 2018 the United States Postal Service introduced "The Art of Magic" lenticular stamp, sold in a souvenir sheet of three. The stamp was designed to celebrate the art of magic and "by rotating each stamp, you can see a white rabbit popping out of a black top hat."
In August 2019 the United States Postal Service introduced a second stamp with lenticular technology, this time featuring the dinosaur Tyrannosaurus Rex. The USPS explained that "two of the four designs show movement when rotated. See the skeletal remains with and without flesh and watch as an approaching T. rex suddenly lunges forward."
Books
In 2012, Dan Kainen's first "photicular" book Safari was published, with processed video images animated by having a lens sheet slide by turning the page, much like Rufus Butler Seder's "scanimation" process. It was followed by Ocean (2014), Polar (2015), Jungle (2016), Wild (2017), Dinosaur (2018) and Outback (2019).
Posters
In 2013, the Spanish ANAR Foundation (Aid to Children and Adolescents at Risk) released a lenticular poster with the image of a battered child and in Spanish, "If somebody hurts you, phone us and we'll help you," and a helpline number, visible only from the viewpoint of an average 10-year-old. People over 4'5" tall see an uninjured face and in Spanish, "Sometimes, child abuse is visible only to the child suffering it." The organisation claimed that an accompanying adult may dissuade an abused child from seeking help if the hidden message could be seen by the adult.
Related techniques
A related product, produced by a small company in New Jersey, was Rowlux. Unlike the Vari-Vue product, Rowlux used a microprismatic lens structure made by a process they patented in 1972, and no paper print. Instead, the plastic (polycarbonate, flexible PVC and later PETG) was dyed with translucent colors, and the film was usually thin and flexible (from 0.002 in, about 0.05 mm, in thickness).
While not a true lenticular process, the Dufex Process (manufactured by F.J. Warren Ltd.) does use a form of lens structure to animate the image. The process consists of imprinting a metallic foil with an image. The foil is then laminated onto a thin sheet of card stock that has been coated with a thick layer of wax. The heated lamination press has the Dufex embossing plate on its upper platen, which has been engraved with 'lenses' at different angles, designed to match the artwork and reflect light at different intensities depending on angle of view.
Lenticular cinema and television
Since at least the early 1930s many researchers have tried to develop lenticular cinema. Herbert E. Ives presented an apparatus on 31 October 1930 with small autostereoscopic motion pictures viewable only by small groups at a time. Ives would continue to improve his system over the years. However, producing autostereoscopic movies was deemed too costly for commercial purposes. A November 1931 New York Times article entitled New screens gives depth to movies describes a lenticular system by Douglas F. Winnek and also mentions an optical appliance fitted near the screen by South African astronomer R.T.A. Innes.
Lenticular arrays have also been used for 3D autostereoscopic television, which produces the illusion of 3D vision without the use of special glasses. Patents for lenticular television were filed at least as early as 1954, but it was not until around 2010 that a range of 3D televisions became available. Some of these systems used cylindrical lenses slanted from the vertical, or spherical lenses arranged in a honeycomb pattern, to provide better resolution. While over 40 million 3D televisions were sold in 2012 (including systems that required glasses), by 2016 3D content had become rare and manufacturers had stopped producing 3D TV sets. The need to wear glasses with the more affordable systems appears to have been a letdown for customers, while affordable autostereoscopic televisions were seen as a possible future solution.
Manufacturing process
Printing
Lenticular front sheeting and image-processing software are both sold for home computer printing, where the interlaced image backing is inkjet printed in photo resolution and affixed behind the lenticular sheet.
Creation of lenticular images on a commercial scale requires printing presses that are adapted to print on sensitive thermoplastic materials. Lithographic offset printing is typically used, to ensure the images are good quality. Printing presses for lenticulars must be capable of adjusting image placement in steps, to allow good alignment of the image to the lens array.
Typically, ultraviolet-cured inks are used. These dry quickly by direct conversion of the liquid ink to a solid form, rather than by evaporation of liquid solvents from a mixture. Powerful ultraviolet (UV) lamps have been used to rapidly cure the ink, allowing lenticular images to be printed at high speed.
In some cases, electron beam lithography has been used instead. The curing of the ink was then initiated directly by an electron beam scanned across the surface.
Defects
Design defects
Double images on the relief and in depth
Double images are usually caused by an exaggeration of the 3D effect from some angles of view, or an insufficient number of frames. Poor design can lead to doubling, small jumps, or a fuzzy image, especially on objects in relief or in depth. For some visuals, where the foreground and background are fuzzy or shaded, this exaggeration can prove to be an advantage. In most cases, the detail and precision required do not allow this.
Image ghosting
Ghosting occurs due to poor treatment of the source images, and also due to transitions where demand for an effect goes beyond the limits and technical possibilities of the system. This causes some of the images to remain visible when they should disappear. These effects can depend on the lighting of the lenticular print.
Prepress defects
Synchronization of the print (master) with the pitch
This effect is also known as "banding". Poor calibration of the material can cause the passage from one image to another to not be simultaneous over the entire print. The image transition progresses from one side of the print to the other, giving the impression of a veil or curtain crossing the visual. This phenomenon is felt less for the 3D effects, but is manifested by a jump of the transverse image. In some cases, the transition starts in several places and progresses from each starting point toward the next, giving the impression of several curtains crossing the visual, as described above.
Discordant harmonics
This phenomenon is unfortunately common, and is explained either by incorrect calibration of the support or by incorrect parametrization of the prepress operations.
It is manifested in particular by streaks that appear parallel to the lenticules during transitions from one visual to the other.
Printing defects
Color synchronization
One of the main difficulties in lenticular printing is color synchronization. The causes are varied: they may come from a malleable material, from incorrect printing conditions and adjustments, or from a dimensional difference in the engraving of the offset plates for each color.
This misregistration shows up as a doubling of the visual, a lack of clarity, or streaks of color or wavy colors (especially in four-color tints) as the print is tilted from one phase to another.
Synchronization of parallelism of the printing to the lenticules
The origin of this problem is a fault in the printing, and it inevitably produces a phase defect.
The passage from one visual to another must be simultaneous over the entire format. But when this problem occurs, there is a lag in the effects on the diagonals. At the end of one diagonal of the visual, there is one effect, and at the other end, there is another.
Phasing
In most cases, the phasing problem comes from imprecise cutting of the material, as explained below. Nevertheless, poor printing and rectification conditions may also be behind it.
In theory, for a given angle of observation, one and the same visual must appear, for the entire batch. As a general rule, the angle of vision is around 45°, and this angle must be in agreement with the sequence provided by the master. If the images have a tendency to double perpendicularly (for 3D) or if the images provided for observation to the left appear to the right (top/bottom), then there is a phasing problem.
Cutting defects
Defects, in the way the lenticular lens has been cut, can lead to phase errors between the lens and the image.
Two examples, taken from the same production batch:
The first image shows a cut that removed about 150 μm of the first lens, and that shows irregular cutting of the lenticular lenses. The second image shows a cut that removed about 30 μm of the first lens. Defects in cutting such as these lead to a serious phase problem. In the printing press the image being printed is aligned relative to the edges of the sheet of material. If the sheet is not always cut in the same place relative to the first lenticule, a phase error is introduced between the lenses and the image slices.
See also
Autostereoscopy, any method of displaying stereoscopic images without the use of glasses
Integral imaging, a broader concept that includes lenticular printing
Lenticular lens, the technology used in lenticular printing and for 3D displays
Parallax barrier, another technology for displaying stereoscopic images without the use of glasses
References
Sources
Bordas Encyclopedia: Organic Chemistry (French).
Okoshi, Takanori Three-Dimensional Imaging Techniques Atara Press (2011),
External links
Vari-Vue Official Vari-Vue website
Patent 2063985: Apparatus for Making a Composite Stereograph filed 24 May 1935, issued 15 December 1936, by Douglas Fredwill Winnek Coffey.
Lecture slides covering lenticular lenses (PowerPoint) by John Canny
Optics
Printing
3D imaging | Lenticular printing | [
"Physics",
"Chemistry"
] | 6,608 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
878,340 | https://en.wikipedia.org/wiki/Ritonavir | Ritonavir, sold under the brand name Norvir, is an antiretroviral medication used along with other medications to treat HIV/AIDS. This combination treatment is known as highly active antiretroviral therapy (HAART). Ritonavir is a protease inhibitor, though it now mainly serves to boost the potency of other protease inhibitors. It may also be used in combination with other medications to treat hepatitis C and COVID-19. It is taken by mouth.
Common side effects of ritonavir include nausea, vomiting, loss of appetite, diarrhea, and numbness of the hands and feet. Serious side effects include liver complications, pancreatitis, allergic reactions, and arrhythmias. Serious interactions may occur with a number of other medications including amiodarone and simvastatin. At low doses, it is considered to be acceptable for use during pregnancy. Ritonavir is of the protease inhibitor class. However, it is also commonly used to inhibit the enzyme that metabolizes other protease inhibitors. This inhibition allows lower doses of these latter medications to be used.
Ritonavir was patented in 1989 and came into medical use in 1996. It is on the World Health Organization's List of Essential Medicines. Ritonavir capsules were approved as a generic medication in the United States in 2020.
Medical uses
HIV
Ritonavir is indicated in combination with other antiretroviral agents for the treatment of HIV-1-infected patients. Though initially developed as an independent antiviral treatment, it is most commonly used as a pharmacokinetic enhancer, in order to increase the plasma concentrations of other antiretrovirals. Ritonavir is effective in preventing the replication of HIV-1. Protease inhibitors, including ritonavir, effectively block HIV-1 protease, a crucial enzyme in the reproductive cycle of HIV-1.
COVID-19
Two SARS-CoV-2 3CLpro inhibitors are prepackaged with ritonavir to enhance their blood concentration.
In December 2021, the combination of nirmatrelvir and ritonavir was granted emergency use authorization by the US Food and Drug Administration (FDA) for the treatment of coronavirus disease COVID-19. The co-packaged medications are sold under the brand name Paxlovid. Paxlovid is not authorized for the pre-exposure or post-exposure prevention of COVID-19 or for initiation of treatment in those requiring hospitalization due to severe or critical COVID-19. On December 31, 2021, the UK Medicines and Healthcare products Regulatory Agency (MHRA) approved the same combination "for people with mild to moderate COVID-19 who are at high risk of developing severe COVID-19".
In January 2023, simnotrelvir/ritonavir was conditionally approved by China's National Medical Products Administration (NMPA) for COVID-19.
Other uses
The use of ritonavir as a CYP3A inhibitor is also seen in the Hepatitis C medication ombitasvir/paritaprevir/ritonavir.
Side effects
When administered at the initially tested higher doses effective for anti-HIV therapy, the side effects of ritonavir are those shown below.
Asthenia, malaise
Diarrhea
Nausea and vomiting
Abdominal pain
Dizziness
Insomnia
Kidney failure
Sweating
Taste abnormality
Metabolic effects, including
Hypercholesterolemia
Hypertriglyceridemia
Elevated transaminases
Elevated creatine kinase
Adverse drug reactions
Ritonavir strongly affects hepatic drug-metabolizing enzymes: it induces CYP1A2 and inhibits CYP3A4 and CYP2D6. Concomitant therapy of ritonavir with a variety of medications may result in serious and sometimes fatal drug interactions.
Due to it being a strong inhibitor (that causes at least a five-fold increase in the plasma AUC values, or more than 80% decrease in clearance) of both cytochrome P450 enzymes CYP2D6 and CYP3A4, ritonavir can severely potentiate and prolong the half-life and/or increase the blood concentration of phenobarbital, primidone, carbamazepine, phenytoin, PDE5 inhibitors like sildenafil, opioids such as hydrocodone, oxycodone, pethidine and fentanyl, antiarrhythmic agents such as amiodarone, propafenone and disopyramide, immunosuppressants such as tacrolimus, voclosporin and sirolimus, neuroleptics like lurasidone and pimozide, as well as some chemotherapeutic agents, benzodiazepines and some ergot derivatives. The FDA has issued a boxed warning for this type of drug interactions.
CYP3A4 inducers can counteract the inhibiting effects of ritonavir and lead to drastically reduced levels of "boosted" drugs, increasing the risk of developing drug resistance. Other CYP3A4 inhibitors may have an additive effect with ritonavir, causing increased drug levels.
Pharmacology
Pharmacodynamics
Ritonavir was originally developed as an inhibitor of HIV protease, one of a family of pseudo-C-symmetric small molecule inhibitors.
Ritonavir is rarely used for its own antiviral activity but remains widely used as a booster of other protease inhibitors. More specifically, ritonavir is used to inhibit a particular enzyme, in intestines, liver, and elsewhere, that normally metabolizes protease inhibitors, cytochrome P450-3A4 (CYP3A4). The drug binds to and inhibits CYP3A4, so a low dose can be used to enhance other protease inhibitors. This discovery drastically reduced the adverse effects and improved the efficacy of protease inhibitors and HAART. However, because of the general role of CYP3A4 in xenobiotic metabolism, dosing with ritonavir also affects the efficacy of numerous other medications, adding to the challenge of prescribing drugs concurrently. See above.
Pharmacokinetics
Ritonavir at a 200 mg dose reaches maximum plasma concentration in about 3 hours and has a half-life of 3–4 hours. Importantly, the pharmacokinetics of ritonavir are not linear: the half-life increases at higher doses or under repeated dosing. For instance, the half-life of a 500 mg tablet is 4–6 hours rather than the 3–4 hours of a 200 mg tablet. The drug has high bioavailability, but about 20% is lost due to first-pass metabolism. Ritonavir capsules are not absorbed as quickly as ritonavir tablets and may exhibit different bioavailability.
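As a rough illustration of what those half-life figures imply, the sketch below uses a simple first-order elimination model (plain Python; an idealisation, since as noted above ritonavir's kinetics are not strictly linear) to compare how much of a dose remains after 12 hours under the two half-life ranges quoted.

```python
def fraction_remaining(hours_elapsed, half_life_hours):
    """Fraction of drug remaining under simple first-order (exponential) elimination."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Illustrative midpoints of the 3-4 h and 4-6 h half-life ranges quoted above.
for t_half in (3.5, 5.0):
    left = fraction_remaining(12, t_half)
    print(f"half-life {t_half} h -> about {left:.0%} of the dose left after 12 h")
```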
Ritonavir was demonstrated to have an in vitro potency of EC50=0.02 μM on HIV-1 protease and highly sustained concentration in plasma after oral administration in several species.
Chemistry
Ritonavir was initially derived from a moderately potent and orally bioavailable small molecule, A-80987. The P3 and P2′ heterocyclic groups of A-80987 were redesigned to create an analogue, now known as ritonavir, with improved pharmacokinetic properties to the original.
Full details of the synthesis of ritonavir were first published by scientists from Abbott Laboratories.
In the first step shown, an aldehyde derived from phenylalanine is treated with zinc dust in the presence of vanadium(III) chloride. This results in a pinacol coupling reaction which dimerizes the material to provide an intermediate which is converted to its epoxide and then reduced to (2S,3S,5S)-2,5-diamino-1,6-diphenylhexan-3-ol. Importantly, this retains the absolute stereochemistry of the amino acid precursor. The diamine is then treated sequentially with two thiazole derivatives, each linked by an amide bond, to provide ritonavir.
History
Ritonavir is sold as Norvir by AbbVie, Inc. The US Food and Drug Administration (FDA) approved ritonavir on March 1, 1996. As a result of the introduction of "highly active antiretroviral therap[ies]", the annual U.S. HIV-associated death rate fell from over 50,000 to about 18,000 over a period of two years.
In 2014, the FDA approved a combination of ombitasvir/paritaprevir/ritonavir for the treatment of hepatitis C virus (HCV) genotype 4.
After the start of the COVID pandemic in 2020, many antivirals, including protease inhibitors in general and ritonavir in particular, were repurposed in an effort to treat the new infection. Lopinavir/ritonavir was found not to work in severe COVID-19. Virtual screening followed by molecular dynamics analysis predicted ritonavir blocks the binding of the SARS-CoV-2 spike (S) protein to the human angiotensin-converting enzyme 2 (hACE2) receptor, which is critical for the virus entry into human cells.
Finally in 2021, a combination of ritonavir with nirmatrelvir, a newly developed orally active 3C-like protease inhibitor, was developed for the treatment of COVID-19. Ritonavir serves to slow down metabolism of nirmatrelvir by cytochrome enzymes to maintain higher circulating concentrations of the main drug. In November that year, Pfizer announced positive phase 2/3 results, including 89% reduction in hospitalizations when given within three days after symptom onset.
Polymorphism and temporary market withdrawal
Ritonavir was originally dispensed as a capsule that did not require refrigeration. This contained a crystal form of ritonavir that is now called form I. However, like many drugs, crystalline ritonavir can exhibit polymorphism, i.e., the same molecule can crystallize into more than one crystal type, or polymorph, each of which contains the same repeating molecule but in different crystal packings/arrangements. The solubility and hence the bioavailability can vary in the different arrangements, and this was observed for forms I and II of ritonavir.
During development (ritonavir was introduced in 1996), only the crystal form now called form I was found; however, in 1998 a lower-free-energy, more stable polymorph, form II, was discovered. This more stable crystal form was less soluble, which resulted in significantly lower bioavailability. The compromised oral bioavailability of the drug led to temporary removal of the oral capsule formulation from the market. Because even a trace amount of form II can convert the more bioavailable form I into form II, its presence threatened the existing supplies of the oral capsule formulation of ritonavir; indeed, form II was found in production lines, effectively halting ritonavir production. Abbott (now AbbVie) withdrew the capsules from the market, and prescribing physicians were encouraged to switch to a Norvir suspension. It has been estimated that Abbott lost more than US$250 million as a result, and the incident is often cited as a high-profile example of disappearing polymorphs.
The company's research and development teams ultimately solved the problem by replacing the capsule formulation with a refrigerated gelcap. In 2000, Abbott (now AbbVie) received FDA-approval for a tablet formulation of lopinavir/ritonavir (Kaletra) which contained a preparation of ritonavir that did not require refrigeration. Ritonavir tablets produced in a solid dispersion by melt-extrusion was found to remain in form I, and was re-introduced commercially in 2010.
Society and culture
Economics
In 2003, Abbott (AbbVie, Inc.) raised the price of a Norvir course from per day to per day, leading to claims of price gouging by patients' groups and some members of Congress. Consumer group Essential Inventions petitioned the NIH to override the Norvir patent, but the NIH announced on August 4, 2004, that it lacked the legal right to allow generic production of Norvir.
References
Further reading
Drugs developed by AbbVie
Carbamates
CYP2D6 inhibitors
CYP3A4 inhibitors
Hepatotoxins
HIV protease inhibitors
Isopropyl compounds
Pregnane X receptor agonists
Thiazoles
Ureas
Wikipedia medicine articles ready to translate
World Health Organization essential medicines | Ritonavir | [
"Chemistry"
] | 2,730 | [
"Organic compounds",
"Ureas"
] |
878,978 | https://en.wikipedia.org/wiki/Conversion%20of%20scales%20of%20temperature | This is a collection of temperature conversion formulas and comparisons among eight different temperature scales, several of which have long been obsolete.
Temperatures on scales that either do not share a numeric zero or are nonlinearly related cannot correctly be mathematically equated (related using the symbol =), and thus temperatures on different scales are more correctly described as corresponding (related using the symbol ≘).
Celsius scale
Kelvin scale
Fahrenheit scale
Rankine scale
Delisle scale
Newton scale
Réaumur scale
Rømer scale
Comparison values chart
Comparison of temperature scales
* Normal human body temperature is 36.8 °C ±0.7 °C, or 98.2 °F ±1.3 °F. The commonly given value 98.6 °F is simply the exact conversion of the nineteenth-century German standard of 37 °C. Since it does not list an acceptable range, it could therefore be said to have excess (invalid) precision.
Some numbers in this table have been rounded.
Graphical representation
Conversion table between the different temperature units
Converting units of temperature differences
Converting units of temperature differences (also referred to as temperature deltas) is not the same as converting absolute temperature values, and different formulae must be used.
To convert a delta temperature from degrees Fahrenheit to degrees Celsius, the formula is ΔT[°C] = ΔT[°F] × 5/9; the 32-degree offset used for absolute temperatures does not apply to differences.
To convert a delta temperature from degrees Celsius to kelvin, the ratio is 1:1; a difference of 1 °C is the same as a difference of 1 K.
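As a quick check of the difference between absolute and delta conversions, here is a minimal sketch in plain Python (the function names are illustrative, not from any standard library); note that the delta formula has no 32-degree offset.

```python
def f_to_c(temp_f):
    """Absolute temperature: degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def delta_f_to_c(delta_f):
    """Temperature difference: degrees Fahrenheit to degrees Celsius (no offset term)."""
    return delta_f * 5.0 / 9.0

def delta_c_to_k(delta_c):
    """Temperature difference: degrees Celsius to kelvin (1:1)."""
    return delta_c

print(f_to_c(98.6))       # 37.0, the exact conversion of 37 degrees C noted earlier
print(delta_f_to_c(1.3))  # ~0.72, the +/-1.3 F tolerance expressed in degrees C
print(delta_c_to_k(0.7))  # 0.7
```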
See also
Outline of metrology and measurement
Degree of frost
Conversion of units
Gas mark
Notes and references
Scales of temperature
Conversion of units of measurement | Conversion of scales of temperature | [
"Physics",
"Mathematics"
] | 307 | [
"Scales of temperature",
"Physical quantities",
"Quantity",
"Conversion of units of measurement",
"Units of measurement"
] |
879,282 | https://en.wikipedia.org/wiki/Neutron%20transport | Neutron transport (also known as neutronics) is the study of the motions and interactions of neutrons with materials. Nuclear scientists and engineers often need to know where neutrons are in an apparatus, in what direction they are going, and how quickly they are moving. It is commonly used to determine the behavior of nuclear reactor cores and experimental or industrial neutron beams. Neutron transport is a type of radiative transport.
Background
Neutron transport has roots in the Boltzmann equation, which was used in the 1800s to study the kinetic theory of gases. It did not receive large-scale development until the invention of chain-reacting nuclear reactors in the 1940s. As neutron distributions came under detailed scrutiny, elegant approximations and analytic solutions were found in simple geometries. However, as computational power has increased, numerical approaches to neutron transport have become prevalent. Today, with massively parallel computers, neutron transport is still under very active development in academia and research institutions throughout the world. It remains a computationally challenging problem since it depends on time and the 3 dimensions of space, and the variables of energy span several orders of magnitude (from fractions of meV to several MeV). Modern solutions use either discrete ordinates or Monte Carlo methods, or even a hybrid of both.
Neutron transport equation
The neutron transport equation is a balance statement that conserves neutrons. Each term represents a gain or a loss of a neutron, and the balance, in essence, claims that neutrons gained equals neutrons lost. It is formulated as follows:
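The equation is commonly written in the following time-dependent form; the notation here (ψ for the angular flux, φ for the scalar flux, Σ for macroscopic cross sections, χ for fission spectra, λᵢ and Cᵢ for delayed-neutron precursor decay constants and concentrations, and s for an external source) is the standard textbook convention and is an assumption of this sketch rather than a quotation of any particular source:

```latex
\frac{1}{v(E)}\frac{\partial \psi(\mathbf{r},E,\hat{\Omega},t)}{\partial t}
+ \hat{\Omega}\cdot\nabla\psi(\mathbf{r},E,\hat{\Omega},t)
+ \Sigma_t(\mathbf{r},E,t)\,\psi(\mathbf{r},E,\hat{\Omega},t)
=
\frac{\chi_p(E)}{4\pi}\int_0^{\infty} \nu_p(E')\,\Sigma_f(\mathbf{r},E',t)\,\phi(\mathbf{r},E',t)\,dE'
+ \sum_i \frac{\chi_{d,i}(E)}{4\pi}\,\lambda_i\,C_i(\mathbf{r},t)
+ \int_{4\pi}\!\int_0^{\infty} \Sigma_s(\mathbf{r},E'\!\to\!E,\hat{\Omega}'\!\to\!\hat{\Omega},t)\,
    \psi(\mathbf{r},E',\hat{\Omega}',t)\,dE'\,d\Omega'
+ s(\mathbf{r},E,\hat{\Omega},t)
```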
The transport equation applies to a given region of phase space (time t, energy E, position r, and direction of travel Ω̂). The first term represents the time rate of change of neutrons in the system. The second term describes the movement of neutrons into or out of the volume of space of interest. The third term accounts for all neutrons that have a collision in that phase space. The first term on the right-hand side is the production of neutrons in this phase space due to fission, while the second term on the right-hand side is the production of neutrons in this phase space due to delayed neutron precursors (i.e., unstable nuclei which undergo neutron decay). The third term on the right-hand side is in-scattering: neutrons that enter this region of phase space as a result of scattering interactions in another. The fourth term on the right is a generic source. The equation is usually solved for the angular neutron flux, since that allows the calculation of reaction rates, which are of primary interest in shielding and dosimetry studies.
Types of neutron transport calculations
Several basic types of neutron transport problems exist, depending on the type of problem being solved.
Fixed source
A fixed source calculation involves imposing a known neutron source on a medium and determining the resulting neutron distribution throughout the problem. This type of problem is particularly useful for shielding calculations, where a designer would like to minimize the neutron dose outside of a shield while using the least amount of shielding material. For instance, a spent nuclear fuel cask requires shielding calculations to determine how much concrete and steel is needed to safely protect the truck driver who is shipping it.
Criticality
Fission is the process through which a nucleus splits into (typically two) smaller atoms. If fission is occurring, it is often of interest to know the asymptotic behavior of the system. A reactor is called “critical” if the chain reaction is self-sustaining and time-independent. If the system is not in equilibrium the asymptotic neutron distribution, or the fundamental mode, will grow or decay exponentially over time.
Criticality calculations are used to analyze steady-state multiplying media (multiplying media can undergo fission), such as a critical nuclear reactor. The loss terms (absorption, out-scattering, and leakage) and the source terms (in-scatter and fission) are proportional to the neutron flux, contrasting with fixed-source problems where the source is independent of the flux. In these calculations, the presumption of time invariance requires that neutron production exactly equals neutron loss.
Since this criticality can only be achieved by very fine manipulations of the geometry (typically via control rods in a reactor), it is unlikely that the modeled geometry will be truly critical. To allow some flexibility in the way models are set up, these problems are formulated as eigenvalue problems, where one parameter is artificially modified until criticality is reached. The most common formulations are the time-absorption and the multiplication eigenvalues, also known as the alpha and k eigenvalues. The alpha and k are the tunable quantities.
K-eigenvalue problems are the most common in nuclear reactor analysis. The number of neutrons produced per fission is multiplicatively modified by the dominant eigenvalue. The resulting value of this eigenvalue reflects the time dependence of the neutron density in a multiplying medium.
keff < 1, subcritical: the neutron density is decreasing as time passes;
keff = 1, critical: the neutron density remains unchanged; and
keff > 1, supercritical: the neutron density is increasing with time.
In the case of a nuclear reactor, neutron flux and power density are proportional, hence during reactor start-up keff > 1, during reactor operation keff = 1 and keff < 1 at reactor shutdown.
Computational methods
Both fixed-source and criticality calculations can be solved using deterministic methods or stochastic methods. In deterministic methods the transport equation (or an approximation of it, such as diffusion theory) is solved as a differential equation. In stochastic methods such as Monte Carlo, discrete particle histories are tracked and averaged in a random walk directed by measured interaction probabilities. Deterministic methods usually involve multi-group approaches while Monte Carlo can work with multi-group and continuous energy cross-section libraries. Multi-group calculations are usually iterative, because the group constants are calculated using flux-energy profiles, which are determined as the result of the neutron transport calculation.
Discretization in deterministic methods
To numerically solve the transport equation using algebraic equations on a computer, the spatial, angular, energy, and time variables must be discretized.
Spatial variables are typically discretized by simply breaking the geometry into many small regions on a mesh. The balance can then be solved at each mesh point using finite difference or by nodal methods.
Angular variables can be discretized by discrete ordinates and weighting quadrature sets (giving rise to the SN methods), or by functional expansion methods with the spherical harmonics (leading to the PN methods).
Energy variables are typically discretized by the multi-group method, where each energy group represents one constant energy. As few as 2 groups can be sufficient for some thermal reactor problems, but fast reactor calculations may require many more.
The time variable is broken into discrete time steps, with time derivatives replaced with difference formulas.
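To make the discretization steps above concrete, here is a minimal sketch of a deterministic criticality calculation: a one-group, one-dimensional diffusion approximation (not the full transport equation) on a uniform finite-difference mesh, solved for k-effective by power iteration. It is plain Python with NumPy, and the material constants are made-up illustrative values, not data for any real system.

```python
import numpy as np

# One-group slab-geometry diffusion eigenvalue problem:
#   -D * d^2(phi)/dx^2 + Sigma_a * phi = (1/k) * nu_Sigma_f * phi
# discretized by finite differences with zero-flux boundaries.

D, sigma_a, nu_sigma_f = 1.0, 0.070, 0.075   # illustrative one-group constants (cm, 1/cm)
width_cm, n = 100.0, 200                     # slab width and number of mesh cells
h = width_cm / n

# Loss operator: second-difference diffusion term plus absorption.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2.0 * D / h**2 + sigma_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < n - 1:
        A[i, i + 1] = -D / h**2

phi = np.ones(n)   # initial flux guess
k = 1.0
for _ in range(200):                     # power iteration on the fission source
    phi_new = np.linalg.solve(A, nu_sigma_f * phi / k)
    k_new = k * np.sum(nu_sigma_f * phi_new) / np.sum(nu_sigma_f * phi)
    converged = abs(k_new - k) < 1e-8
    k, phi = k_new, phi_new
    if converged:
        break

print(f"k-effective ~ {k:.4f}")   # > 1 supercritical, = 1 critical, < 1 subcritical
```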
Computer codes used in neutron transport
Probabilistic codes
COG - A LLNL developed Monte Carlo code for criticality safety analysis and general radiation transport (http://cog.llnl.gov)
MCBEND – A Monte Carlo code for general radiation transport developed and supported by the ANSWERS Software Service.
MCNP – A LANL developed Monte Carlo code for general radiation transport
MC21 – A general-purpose, 3D Monte Carlo code developed at NNL.
MCS – The Monte Carlo code MCS has been developed since 2013 at Ulsan National Institute of Science and Technology (UNIST), Republic of Korea.
Mercury – A LLNL developed Monte Carlo particle transport code.
MONK – A Monte Carlo Code for criticality safety and reactor physics analyses developed and supported by the ANSWERS Software Service.
MORET – Monte-Carlo code for the evaluation of criticality risk in nuclear installations developed at IRSN, France
OpenMC – An open source, community-developed open source Monte Carlo code
RMC – A Tsinghua University Department of Engineering Physics developed Monte Carlo code for general radiation transport
SCONE – The Stochastic Calculator Of the Neutron Transport Equation, an open-source Monte Carlo code developed at the University of Cambridge.
Serpent – A VTT Technical Research Centre of Finland developed Monte Carlo particle transport code
Shift/KENO – ORNL developed Monte Carlo codes for general radiation transport and criticality analysis
TRIPOLI – 3D general purpose continuous energy Monte Carlo Transport code developed at CEA, France
Deterministic codes
Ardra – A LLNL neutral particle transport code
Attila – A commercial transport code
DRAGON – An open-source lattice physics code
PHOENIX/ANC – A proprietary lattice-physics and global diffusion code suite from Westinghouse Electric
PARTISN – A LANL developed transport code based on the discrete ordinates method
NEWT – An ORNL developed 2-D SN code
DIF3D/VARIANT – An Argonne National Laboratory developed 3-D code originally developed for fast reactors
DENOVO – A massively parallel transport code under development by ORNL
Jaguar – A parallel 3-D Slice Balance Approach transport code for arbitrary polytope grids developed at NNL
DANTSYS
RAMA – A proprietary 3D method of characteristics code with arbitrary geometry modeling, developed for EPRI by TransWare Enterprises Inc.
RAPTOR-M3G – A proprietary parallel radiation transport code developed by Westinghouse Electric Company
OpenMOC – An MIT developed open source parallel method of characteristics code
MPACT – A parallel 3D method of characteristics code under development by Oak Ridge National Laboratory and the University of Michigan
DORT – Discrete Ordinates Transport
APOLLO – A lattice physics code used by CEA, EDF and Areva
CASMO/SIMULATE – A proprietary lattice-physics and diffusion code suite developed by Studsvik for LWR analysis including square and hex lattices
HELIOS – A proprietary lattice-physics code with generalized geometry developed by Studsvik for LWR analysis
milonga – A free nuclear reactor core analysis code
STREAM – A neutron transport analysis code, STREAM (Steady state and Transient REactor Analysis code with Method of Characteristics), has been developed since 2013 at Ulsan National Institute of Science and Technology (UNIST), Republic of Korea
See also
Nuclear reactor
Boltzmann equation
TINTE
Neutron scattering
Monte Carlo N-Particle Transport Code
References
Lewis, E., & Miller, W. (1993). Computational Methods of Neutron Transport. American Nuclear Society. .
Duderstadt, J., & Hamilton, L. (1976). Nuclear Reactor Analysis. New York: Wiley. .
Marchuk, G. I., & V. I. Lebedev (1986). Numerical Methods in the Theory of Neutron Transport. Taylor & Francis. p. 123. .
External links
ANSWERS Software Service website
LANL MCNP6 website
LANL MCNPX website
VTT Serpent website
OpenMC website
MIT CRPG OpenMOC website
TRIPOLI-4 website
Transport
Nuclear physics | Neutron transport | [
"Physics"
] | 2,232 | [
"Nuclear physics"
] |
880,393 | https://en.wikipedia.org/wiki/Antenna%20analyzer | An antenna analyzer or in British aerial analyser (also known as a noise bridge, RX bridge, SWR analyzer, or RF analyzer) is a device used for measuring the input impedance of antenna systems in radio electronics applications.
In radio communications systems, including amateur radio, an antenna analyzer is a common tool used for fine tuning antenna and feedline performance, as well as troubleshooting them.
Antenna bridges have long been used in the broadcast industry to tune antennas. A bridge is available which measures complex impedance while the transmitter is operating, practically a necessity when tuning multi-tower antenna systems. In more recent times the direct-reading network analyzers have become more common.
Types of analysers
There are several different instruments of varying complexity and accuracy for testing antennas and their feed lines. All can also be used to measure other electrical circuits and components (at least, in principle).
The simplest is an SWR meter, which only indicates the degree of mismatch; the actual mismatched impedance must be inferred by measuring several nearby frequencies and performing a few simple calculations. The SWR meter requires a transmitter or signal generator to provide a few watts power test signal.
An antenna bridge is able to measure at low power, but also requires a supplied test signal; depending on the bridge circuit, it can be used to measure both reactance and resistance by reading values marked on knobs that have been adjusted for a match.
The noise bridge and network analyzers both supply their own very low-power test signals; both are able to measure both resistance and reactance, either by calculation or by reading knobs adjusted for a match. Modern analyzers directly display resistance and reactance, with the calculations done internally by a microprocessor.
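The quantities such instruments report are related by simple arithmetic; the sketch below (plain Python; the 50-ohm reference impedance is an assumption typical of amateur equipment, not something stated above) derives the reflection coefficient, VSWR and return loss from a measured complex load impedance.

```python
import math

def match_figures(z_load, z0=50.0):
    """Reflection coefficient, VSWR and return loss for a load on a line of impedance z0 (ohms)."""
    gamma = (z_load - z0) / (z_load + z0)          # complex reflection coefficient
    mag = abs(gamma)
    vswr = (1 + mag) / (1 - mag) if mag < 1 else float("inf")
    return_loss_db = -20 * math.log10(mag) if mag > 0 else float("inf")
    return gamma, vswr, return_loss_db

# Example: an antenna measured as 42 ohms resistive with +15 ohms of inductive reactance.
gamma, vswr, rl = match_figures(complex(42, 15))
print(f"|Gamma| = {abs(gamma):.3f}, VSWR = {vswr:.2f}, return loss = {rl:.1f} dB")
```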
Antenna bridge
A bridge circuit has two legs which are frequency-dependent complex-valued impedances. One leg is a circuit in the analyzer with calibrated components whose combined impedance can be read on a scale. The other leg is the unknown – either an antenna or a reactive component.
To measure impedance, the bridge is adjusted, so that the two legs have the same impedance. When the two impedances are the same, the bridge is balanced. Using this circuit it is possible to either measure the impedance of the antenna connected between ANT and GND, or it is possible to adjust an antenna, until it has the same impedance as the network on the left side of the diagram below. The bridge can be driven either with white noise or a simple carrier (connected to drive). In the case of white noise the amplitude of the exciting signal can be very low and a radio receiver used as the detector. In the case where a simple carrier is used then depending on the level either a diode detector or a receiver can be used. In both cases a null will indicate when the bridge is balanced.
Complex voltage and current meters
A second type of antenna analyzer measures the complex voltage across and current into the antenna. The operator then uses mathematical methods to calculate complex impedance, or reads it off a calibrated meter or a digital display. Professional instruments of this type are usually called network analyzers.
Modern analyzers do not require the operator to adjust any knobs for a match, as is needed with bridge-type analyzers. Many of these instruments have the ability to automatically sweep the frequency over a wide range and then plot the antenna characteristics on a graphical display. Doing this with a manually operated bridge would be time-consuming, requiring one to change the frequency and adjust the knobs at each frequency for a match.
High and low power methods
Many transmitters include an SWR meter in the output circuits which works by measuring the reflected wave from the antenna back to the transmitter, which is minimal when the antenna is matched. Reflected power from a badly tuned antenna can present an improper load at the transmitter which can damage it. The SWR meter requires about 5–10 watts of outgoing signal from the radio to register the reflected power (if any), and then only indicates the relative degree of mismatch, not the reactive and resistive impedance seen at the end of the antenna's feedline.
A complex-impedance antenna analyzer typically only requires a few milliwatts of power be applied to the antenna, and typically provides its own signal, not requiring any test signal from a transmitter. Using a low-power test signal avoids damaging the analyzer when testing a badly-matched antenna. In addition, because its signal power is very low, the analyzer can be used for frequencies outside of the transmit bands licensed to its operator, and thus measure antenna performance over an unrestricted range of frequencies.
See also
Impedance matching
Transmitter
References
Electronic test equipment
Radio electronics
Impedance measurements | Antenna analyzer | [
"Physics",
"Technology",
"Engineering"
] | 971 | [
"Radio electronics",
"Physical quantities",
"Electronic test equipment",
"Measuring instruments",
"Impedance measurements",
"Electrical resistance and conductance"
] |
880,591 | https://en.wikipedia.org/wiki/Technetium%20%2899mTc%29%20sestamibi | {{DISPLAYTITLE:Technetium (99mTc) sestamibi}}
Technetium (99mTc) sestamibi (INN) (commonly sestamibi; USP: technetium Tc 99m sestamibi; trade name Cardiolite) is a pharmaceutical agent used in nuclear medicine imaging. The drug is a coordination complex consisting of the radioisotope technetium-99m bound to six (sesta=6) methoxyisobutylisonitrile (MIBI) ligands. The anion is not defined. The generic drug became available late September 2008. A scan of a patient using MIBI is commonly known as a "MIBI scan".
Sestamibi is taken up by tissues with large numbers of mitochondria and negative plasma membrane potentials. Sestamibi is mainly used to image the myocardium (heart muscle). It is also used in the work-up of primary hyperparathyroidism to identify parathyroid adenomas, for radioguided surgery of the parathyroid and in the work-up of possible breast cancer.
Cardiac imaging (MIBI scan)
A MIBI scan or sestamibi scan is now a common method of cardiac imaging. Technetium (99mTc) sestamibi is a lipophilic cation which, when injected intravenously into a patient, distributes in the myocardium proportionally to the myocardial blood flow. Single photon emission computed tomography (SPECT) imaging of the heart is performed using a gamma camera to detect the gamma rays emitted by the technetium-99m as it decays.
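Because the tracer decays noticeably over the course of a study, administered activities are routinely decay-corrected. The sketch below (plain Python) assumes the commonly quoted physical half-life of technetium-99m of about 6.0 hours, a figure not stated in this article, and the activity numbers are purely illustrative.

```python
import math

TC99M_HALF_LIFE_H = 6.0   # assumed physical half-life of Tc-99m, in hours

def activity_remaining(initial_activity_mbq, hours_elapsed, half_life_h=TC99M_HALF_LIFE_H):
    """Activity left after a delay, assuming simple exponential radioactive decay."""
    decay_constant = math.log(2) / half_life_h
    return initial_activity_mbq * math.exp(-decay_constant * hours_elapsed)

# Illustrative numbers: 740 MBq drawn up, injected after a 1 h delay, imaged 2 h after that.
print(f"{activity_remaining(740, 1.0):.0f} MBq at injection")   # ~659 MBq
print(f"{activity_remaining(740, 3.0):.0f} MBq at imaging")     # ~523 MBq
```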
Two sets of images are acquired. For one set, 99mTc MIBI is injected while the patient is at rest and then the myocardium is imaged. In the second set, the patient is stressed either by exercising on a treadmill or pharmacologically. The drug is injected at peak stress and then imaging is performed. The resulting two sets of images are compared with each other to distinguish ischemic from infarcted areas of the myocardium. This imaging technique has a sensitivity of around 90%. Resting images are useful only for detecting tissue damage, while stress images will also provide evidence of coronary artery (ischemia) disease.
With dipyridamole (Persantine MIBI scan)
When combined with the drug dipyridamole, a brand name of which is Persantine, a MIBI scan is often referred to as a Persantine MIBI scan.
Parathyroid imaging
In primary hyperparathyroidism, one or more of the four parathyroid glands either develops a benign tumor called an adenoma or undergoes hyperplasia as a result of homeostatic dysregulation. The parathyroid gland takes up 99mTc MIBI following an intravenous injection, and the patient's neck is imaged with a gamma camera to show the location of all glands. A second image is obtained after a washout time (approximately 2 hours), and mitochondria in the oxyphil cells of the abnormal glands retaining the 99mTc are seen with the gamma camera. This imaging method will detect 75 to 90 percent of abnormal parathyroid glands in primary hyperparathyroidism. An endocrine surgeon can then perform a focused parathyroidectomy (less invasive than traditional surgery) to remove the abnormal gland.
Radioguided surgery of the parathyroids
Following administration, 99mTc MIBI collects in overactive parathyroid glands. During surgery, the surgeon can use a probe sensitive to gamma rays to locate the overactive parathyroid before removing it.
Thyroid imaging
Several case reports have demonstrated that 99mTc MIBI scan may be useful to differentiate the sub-type of amiodarone-induced thyrotoxicosis. Lack of MIBI uptake in the thyroid is compatible with a form of thyroiditis (type-2 AIT) which may respond to treatment with steroids.
Breast imaging
The drug is also used in the evaluation of breast nodules. Malignant breast tissues concentrate 99mTc MIBI to a much greater extent and more frequently than benign disease. As such, limited characterization of breast anomalies is possible. Scintimammography has a high sensitivity and specificity for breast cancer, both more than 85%.
More recently, breast radiologists have administered lower doses of 99mTc sestamibi for Molecular Breast Imaging (MBI) scans, which results in a high sensitivity (91%) and high specificity (93%) for breast cancer detection. It does, however, carry a greater risk of causing cancer, making it inappropriate for general breast cancer screening.
The last reference listed refers to a dose given with the Dilon single-head system, which requires a higher dose since only one camera is utilized (meaning the camera needs to be able to see through more tissue). The dose used in the other two commercially available MBI systems is essentially equivalent to that of a mammogram or a tomosynthesis exam.
In order to keep the radiation doses to patients as low as reasonably achievable, MBI is usually limited to women with dense breast tissue, where the medical benefit of the scan outweighs the potential risk of radiation exposure. For the same reason, the administered activity is kept low. This can potentially result in noisy images, which in turn causes inconclusive mammograms. Researchers continue to devote their time to improving the technology, changing scan parameters, and reducing dose to patients.
References
External links
Cardiolite official page
Molecular Breast Imaging
The Sestamibi Scan for Hyperparathyroidism on: Parathyroid.com
Drugs developed by Bristol Myers Squibb
Isocyanides
Diagnostic endocrinology
Diagnostic medical imaging
Methoxy compounds
Organometallic compounds
Radiopharmaceuticals
Technetium-99m
Technetium compounds | Technetium (99mTc) sestamibi | [
"Chemistry"
] | 1,255 | [
"Medicinal radiochemistry",
"Inorganic compounds",
"Organometallic compounds",
"Functional groups",
"Organic compounds",
"Radiopharmaceuticals",
"Isocyanides",
"Chemicals in medicine",
"Organometallic chemistry"
] |
881,148 | https://en.wikipedia.org/wiki/Monel | Monel is a group of alloys of nickel (from 52 to 68%) and copper, with small amounts of iron, manganese, carbon, and silicon. Monel is not a cupronickel alloy because it has less than 60% copper.
Stronger than pure nickel, Monel alloys are resistant to corrosion by many aggressive agents, including rapidly flowing seawater. They can be fabricated readily by hot- and cold-working, machining, and welding.
Monel was created in 1905 by Robert Crooks Stanley, who at the time worked at the International Nickel Company (Inco). Monel was named after company president Ambrose Monell, and patented in 1906. One L was dropped, because family names were not allowed as trademarks at that time. The trademark was registered in May 1921, and it is now a property of the Special Metals Corporation.
As an expensive alloy, it tends to be used in applications where it cannot be replaced with cheaper alternatives. For example, in 2015 Monel piping was more than three times as expensive as the equivalent piping made from carbon steel.
Properties
Monel is a solid-solution binary alloy. As nickel and copper are mutually soluble in all proportions, it is a single-phase alloy. Compared to steel, Monel is very difficult to machine as it work-hardens very quickly. It needs to be turned and worked at slow speeds and low feed rates. It is resistant to corrosion and acids, and some alloys can withstand a fire in pure oxygen. It is commonly used in applications with highly corrosive conditions. Small additions of aluminium and titanium form an alloy (K-500) with the same corrosion resistance but with much greater strength due to gamma prime formation on aging. Monel is typically much more expensive than stainless steel.
Monel alloy 400 has a specific gravity of 8.80, a melting range of 1300–1350 °C, an electrical conductivity of approximately 34% IACS, and (in the annealed state) a hardness of 65 Rockwell B. Monel alloy 400 is notable for its toughness, which is maintained over a considerable range of temperatures.
Monel alloy 400 has excellent mechanical properties at subzero temperatures. Strength and hardness increase with only slight impairment of ductility or impact resistance. The alloy does not undergo a ductile-to-brittle transition even when cooled to the temperature of liquid hydrogen. This is in marked contrast to many ferrous materials, which are brittle at low temperatures despite their increased strength.
Uses
Aerospace applications
In the 1960s, Monel metal found bulk uses in aircraft construction, especially in making the frames and skins of experimental rocket planes, such as the North American X-15, to resist the great heat generated by aerodynamic friction during extremely high speed flight. Monel metal retains its strength at very high temperatures, allowing it to maintain its shape at high atmospheric flight speeds, a trade-off against the increased weight of the parts due to Monel's high density.
Monel is used for safety wiring in aircraft maintenance to ensure that fasteners cannot come undone, usually in high-temperature areas; stainless wire is used in other areas for economy. In addition some fasteners used are made from the alloy.
Oil production and refining
Monel is used in the section of alkylation units in direct contact with concentrated hydrofluoric acid. Monel offers exceptional resistance to hydrofluoric acid in all concentrations up to the boiling point. It is perhaps the most resistant of all commonly used engineering alloys. The alloy is also resistant to many forms of sulfuric and hydrochloric acids under reducing conditions.
Marine applications
Monel's corrosion resistance makes it ideal in applications such as piping systems, pump shafts, seawater valves, trolling wire, and strainer baskets. Some alloys are completely non-magnetic and are used for anchor cable aboard minesweepers or in housings for magnetic-field measurement equipment. In recreational boating, Monel is used for wire to seize shackles for anchor ropes, for water and fuel tanks, and for underwater applications. It is also used for propeller shafts and for keel bolts. On the popular Hobiecat sailboats, Monel rivets are used where strength is needed but stainless steel cannot be used due to corrosion that would result from stainless steel being in contact with the aluminum mast, boom, and frame of the boat in a saltwater environment.
Because of the problem of electrolytic action in salt water (also known as Galvanic corrosion), in shipbuilding Monel must be carefully insulated from other metals such as steel. The New York Times on August 12, 1915 published an article about a 215-foot yacht, "the first ship that has ever been built with an entirely Monel hull," that "went to pieces" in just six weeks and had to be scrapped, "on account of the disintegration of her bottom by electrical action." The yacht's steel skeleton deteriorated due to electrolytic interaction with the Monel.
In seabird research, and bird banding or ringing in particular, Monel has been used to make bird bands or rings for many species, such as albatrosses, that live in a corrosive sea water environment.
Musical instruments
Monel is used as the material for valve pistons or rotors in some higher-quality musical instruments such as trumpets, tubas and French horns. RotoSound introduced the use of Monel for electric bass strings in 1962, and these strings have been used by numerous artists, including Steve Harris of Iron Maiden, The Who, Sting, John Deacon, John Paul Jones and the late Chris Squire. Monel was in use in the early 1930s by other musical string manufacturers, such as Gibson Guitar Corporation, who continue to offer them for mandolin as the Sam Bush signature set. Also, C.F. Martin & Co. uses Monel for their Martin Retro acoustic guitar strings. The Pyramid string factory (Germany) produces 'Monel classics' electric guitar strings, wound on a round core. In 2017, D'Addario string company released a line of violin strings using a Monel winding on the D and G string.
Other
Good resistance against corrosion by acids and oxygen makes Monel a good material for the chemical industry. Even corrosive fluorides can be handled within Monel apparatus; this was done in an extensive way in the enrichment of uranium in the Oak Ridge Gaseous Diffusion Plant. Here most of the larger-diameter tubing for the uranium hexafluoride was made from Monel. Regulators for reactive cylinder gases like hydrogen chloride form another example, where PTFE is not a suitable option when high delivery pressures are required. These will sometimes include a Monel manifold and taps prior to the regulator that allow the regulator to be flushed with a dry, inert gas after use to further protect the equipment.
In the early 20th century, when steam power was widely used, Monel was advertised as being desirable for use in superheated steam systems. During the world wars, Monel was used for US military dog tags.
Monel is often used for kitchen sinks and in the frames of eyeglasses. It has also been used for firebox stays in fire-tube boilers.
Parts of the Clock of the Long Now, which is intended to run for 10,000 years, are made from Monel because of the corrosion resistance without the use of precious metals.
Monel was used for much of the exposed metal used in the interior of the Bryn Athyn Cathedral in Pennsylvania, religious seat of the General Church of the New Jerusalem. This included large decorative screens, doorknobs, etc. Monel also has been used as roofing material in buildings such as the original Pennsylvania Station in New York City.
The 1991–1996 Acura (Honda) NSX came with a key made of Monel.
Oilfield applications include using Monel drill collars. Instruments which measure the Earth's magnetic field to obtain a direction are placed in a non-magnetic collar which isolates them from the magnetic pull of drilling tools located above and below the non-magnetic collars. Monel is now rarely used, usually replaced by non-magnetic stainless steels.
Monel is also used as a protective binding material on the outside of western style stirrups.
Monel is used by Arrow Fastener Co., Inc. for rustproof T50 staples.
Monel has also been used in Kelvinator refrigerators.
Monel was used in the Baby Alice Thumb Guard, a 1930s-era anti-thumb-sucking device.
Monel is used in motion picture film processing. Monel staple splices are ideal for resisting corrosion from use in continuous-run photochemical tanks.
Monel was latterly widely used to manufacture firebox stays in steam locomotive boilers.
Alloys
Monel is often traded under the ISO standards 6208 (plate, sheet and strip), 9723 (bars), 9724 (wire) and 9725 (forgings), and under DIN 17751 (pipes and tubes).
Monel 400
Monel 400 shows high strength and excellent corrosion resistance in a range of acidic and alkaline environments and is especially suitable for reducing conditions. It also has good ductility and thermal conductivity. Monel 400 typically finds application in marine engineering, chemical and hydrocarbon processing, heat exchangers, valves, and pumps. It is covered by the following standards: BS 3075, 3076 NA 13, DTD 204B and ASTM B164.
Large use of Monel 400 is made in alkylation units, namely in the reacting section in contact with concentrated hydrofluoric acid.
Monel 401
This alloy is designed for use in specialized electric and electronic applications. Alloy 401 is readily autogenously welded by the gas-tungsten-arc process. Resistance welding is a very satisfactory method for joining the material. It also exhibits good brazing characteristics. It is covered by standard UNS N04401.
Monel 404
Monel 404 alloy is used primarily in specialized electrical and electronic applications. The composition of Monel 404 is carefully adjusted to provide a very low Curie temperature, low permeability, and good brazing characteristics.
Monel 404 can be welded using common welding techniques and forged but cannot be hot worked. Cold working may be done using standard tooling and soft die materials for better finish. It is covered by standards UNS N04404 and ASTM F96. Monel 404 is used in capsules for transistors and ceramic to metal seals and other things.
Monel 405
Monel alloy 405, also known as Monel R405, is the free-machining grade of alloy 400. The nickel, carbon, manganese, iron, silicon & copper percent remains the same as alloy 400, but the sulfur is increased from 0.024 max to 0.025-0.060%. Alloy 405 is used chiefly for automatic screw machine stock and is not generally recommended for other applications. The nickel–copper sulfides resulting from the sulfur in its composition act as chip breakers, but because of these inclusions the surface finish of the alloy is not as smooth as that of alloy 400. Monel 405 is designated UNS N04405 and is covered by ASME SB-164, ASTM B-164, Federal QQ-N-281, SAE AMS 4674 & 7234, Military MIL-N-894, and NACE MR-01-75.
Monel 450
This alloy exhibits good fatigue strength and has relatively high thermal conductivity. It is used for seawater condensers, condenser plates, distiller tubes, evaporator and heat exchanger tubes, and saltwater piping.
Monel K-500
Monel K-500 combines the excellent corrosion resistance characteristic of Monel alloy 400 with the added advantages of greater strength and hardness. The increased properties are obtained by adding aluminum and titanium to the nickel–copper base, and by heating under controlled conditions so that submicroscopic particles of Ni3 (Ti, Al) are precipitated throughout the matrix.
The corrosion resistance of Monel alloy K-500 is substantially equivalent to that of alloy 400 except that, when in the age-hardened condition, alloy K-500 has a greater tendency toward stress-corrosion cracking in some environments. Monel alloy K-500 has been found to be resistant to a sour-gas environment. The combination of very low corrosion rates in high-velocity sea water and high strength make alloy K-500 particularly suitable for shafts of centrifugal pumps in marine service. In stagnant or slow-moving sea water, fouling may occur followed by pitting, but this pitting slows down after a fairly rapid initial attack.
Typical applications for alloy K-500 are pump shafts and impellers, doctor blades and scrapers, and oil-well drill collars, instruments, and electronic components. It is also used in components for power plants, such as steam-turbine blades, heat exchangers, and condenser tubes. In the marine industry, it is utilized in components for marine hardware, propeller shafts, pump shafts and seawater valves exposed to harsh marine environments.
Monel 502
Monel 502 is a nickel–copper alloy and its UNS no is N05502. This grade also has good creep and oxidation resistance. Monel 502 can be formed in different shapes, and can be machined similar to austenitic stainless steels.
See also
Hastelloy
Inconel
References
Citations
General and cited references
External links
Monel Corrosion
monel - titaniumnazari
Monel 400 vs. Monel K-500 Strip: Which One Is Best for You?
What Is the Difference Between Monel Alloy 400 and Alloy K-500?
Building materials
Copper alloys
Nickel alloys | Monel | [
"Physics",
"Chemistry",
"Engineering"
] | 2,828 | [
"Nickel alloys",
"Copper alloys",
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Alloys",
"Matter",
"Building materials"
] |
881,545 | https://en.wikipedia.org/wiki/Proteoglycan | Proteoglycans are proteins that are heavily glycosylated. The basic proteoglycan unit consists of a "core protein" with one or more covalently attached glycosaminoglycan (GAG) chain(s). The point of attachment is a serine (Ser) residue to which the glycosaminoglycan is joined through a tetrasaccharide bridge (e.g. chondroitin sulfate-GlcA-Gal-Gal-Xyl-PROTEIN). The Ser residue is generally in the sequence -Ser-Gly-X-Gly- (where X can be any amino acid residue but proline), although not every protein with this sequence has an attached glycosaminoglycan. The chains are long, linear carbohydrate polymers that are negatively charged under physiological conditions due to the occurrence of sulfate and uronic acid groups. Proteoglycans occur in connective tissue.
Types
Proteoglycans are categorized by their relative size (large and small) and the nature of their glycosaminoglycan chains. Types include:
Certain members are considered members of the "small leucine-rich proteoglycan family" (SLRP). These include decorin, biglycan, fibromodulin and lumican.
Function
Proteoglycans are a major component of the animal extracellular matrix, the "filler" substance existing between cells in an organism. Here they form large complexes, binding to other proteoglycans, to hyaluronan, and to fibrous matrix proteins such as collagen. The combination of proteoglycans and collagen forms cartilage, a sturdy tissue that is usually heavily hydrated (mostly due to the negatively charged sulfates in the glycosaminoglycan chains of the proteoglycans). They are also involved in binding cations (such as sodium, potassium and calcium) and water, and in regulating the movement of molecules through the matrix. Evidence also shows they can affect the activity and stability of proteins and signalling molecules within the matrix. Individual functions of proteoglycans can be attributed to either the protein core or the attached GAG chain. They can also serve as lubricants, by creating a hydrating gel that helps withstand high pressure.
Synthesis
The protein component of proteoglycans is synthesized by ribosomes and translocated into the lumen of the rough endoplasmic reticulum. Glycosylation of the proteoglycan occurs in the Golgi apparatus in multiple enzymatic steps. First, a special link tetrasaccharide is attached to a serine side chain on the core protein to serve as a primer for polysaccharide growth. Then sugars are added one at a time by glycosyl transferase. The completed proteoglycan is then exported in secretory vesicles to the extracellular matrix of the tissue.
Clinical significance
An inability to break down the proteoglycans is characteristic of a group of genetic disorders, called mucopolysaccharidoses. The inactivity of specific lysosomal enzymes that normally degrade glycosaminoglycans leads to the accumulation of proteoglycans within cells. This leads to a variety of disease symptoms, depending upon the type of proteoglycan that is not degraded. Mutations in the gene encoding the galactosyltransferase B4GALT7 result in a reduced substitution of the proteoglycans decorin and biglycan with glycosaminoglycan chains, and cause a spondylodysplastic form of Ehlers–Danlos syndrome.
Distinction between proteoglycans and glycoproteins
Quoting from recommendations for IUPAC:
References
External links
Diagram at nd.edu
Glycoproteins | Proteoglycan | [
"Chemistry"
] | 833 | [
"Glycoproteins",
"Glycobiology"
] |
881,602 | https://en.wikipedia.org/wiki/Hydrograph | A hydrograph is a graph showing the rate of flow (discharge) versus time past a specific point in a river, channel, or conduit carrying flow. The rate of flow is typically expressed in units of cubic meters per second (m³/s) or cubic feet per second (cfs).
Hydrographs often relate changes of precipitation to changes in discharge over time. The term can also refer to a graph showing the volume of water reaching a particular outfall, or location in a sewerage network. Graphs are commonly used in the design of sewerage, more specifically, the design of surface water sewerage systems and combined sewers.
Terminology
Discharge: the rate of flow (volume per unit time) passing a specific location in a river or other channel. The discharge is measured at a specific point in a river and is typically time variant.
Approach segment: the river flow before the storm (antecedent flow).
Rising limb: the rising limb of the hydrograph, also known as the concentration curve, reflects a prolonged increase in discharge from a catchment area, typically in response to a rainfall event.
Peak discharge: the highest point on the hydrograph, when the rate of discharge is greatest.
Recession (or falling) limb: the recession limb extends from the peak flow rate onward. The end of stormflow (quickflow or direct runoff) and the return to groundwater-derived flow (base flow) is often taken as the point of inflection of the recession limb. The recession limb represents the withdrawal of water from the storage built up in the basin during the earlier phases of the hydrograph.
Lag-1 autocorrelation: a method to compare streamflow data to itself by shifting or "lagging" the initial discharge dataset by 1 time unit. A lag-10 would mean the initial data are shifted 10 days and then compared to an unshifted version of the data. Not to be confused with lag time.
Lag time: the time interval from the maximum rainfall to the peak discharge.
Time to peak: the time interval from the start of rainfall to the peak discharge.
Time of concentration: the time from the end of the precipitation period to the end of the quick-response runoff in the hydrograph.
Types of hydrographs include:
Stream discharge hydrographs
Stream stage hydrographs
Precipitation hydrographs
Storm hydrographs
Flood hydrographs
Annual hydrographs a.k.a. regimes
Direct Runoff Hydrograph
Effective Runoff Hydrograph
Raster Hydrograph
Lag-1 Hydrograph
Storage opportunities in the drainage network (e.g., lakes, reservoirs, wetlands, channel and bank storage capacity)
Baseflow separation
A stream hydrograph is commonly used to determine the influence of different hydrologic processes on discharge from the subject catchment. Because the timing, magnitude, and duration of groundwater return flow differs so greatly from that of direct runoff, separating and understanding the influence of these distinct processes is key to analyzing and simulating the likely hydrologic effects of various land use, water use, weather, and climate conditions and changes.
However, the process of separating “baseflow” from “direct runoff” is an inexact science. In part this is because these two concepts are not, themselves, entirely distinct and unrelated. Return flow from groundwater increases along with overland flow from saturated or impermeable areas during and after a storm event; moreover, a particular water molecule can easily move through both pathways en route to the watershed outlet. Therefore, separation of a purely “baseflow component” in a hydrograph is a somewhat arbitrary exercise. Nevertheless, various graphical and empirical techniques have been developed to perform these hydrograph separations. The separation of base flow from direct runoff can be an important first step in developing rainfall-runoff models for a watershed of interest—for example, in developing and applying unit hydrographs as described below.
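Many of these empirical techniques are simple recursive filters. The following is a minimal Python sketch of a one-parameter digital filter of the Lyne–Hollick type; it is an illustration only, not the method of any particular study cited here, and the sample values, the function name, and the default filter parameter (alpha ≈ 0.925, a common choice for daily data) are assumptions for demonstration.

```python
def baseflow_separation(discharge, alpha=0.925):
    """Split a discharge series into quickflow and baseflow using a
    one-parameter recursive digital filter (Lyne-Hollick type).

    discharge : list of streamflow values at a fixed time step
    alpha     : filter parameter, typically ~0.9-0.95 for daily data
    """
    quickflow = [0.0] * len(discharge)
    baseflow = [0.0] * len(discharge)
    baseflow[0] = discharge[0]  # assume the series starts at baseflow
    for k in range(1, len(discharge)):
        # filtered high-frequency component = quickflow
        f = alpha * quickflow[k - 1] + 0.5 * (1 + alpha) * (discharge[k] - discharge[k - 1])
        quickflow[k] = max(f, 0.0)  # quickflow cannot be negative
        # baseflow is the remainder, constrained between 0 and total flow
        baseflow[k] = min(max(discharge[k] - quickflow[k], 0.0), discharge[k])
    return quickflow, baseflow

# Example: a small storm pulse superimposed on a slowly receding baseflow
q = [10, 10, 9.8, 25, 60, 45, 30, 20, 15, 12, 11, 10.5]
quick, base = baseflow_separation(q)
```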
Unit hydrograph
A unit hydrograph (UH) is the hypothetical unit response of a watershed (in terms of runoff volume and timing) to a unit input of rainfall. It can be defined as the direct runoff hydrograph (DRH) resulting from one unit (e.g., one cm or one inch) of effective rainfall occurring uniformly over that watershed at a uniform rate over a unit period of time. As a UH is applicable only to the direct runoff component of a hydrograph (i.e., surface runoff), a separate determination of the baseflow component is required.
A UH is specific to a particular watershed, and specific to a particular length of time corresponding to the duration of the effective rainfall. That is, the UH is specified as being the 1-hour, 6-hour, or 24-hour UH, or any other length of time up to the time of concentration of direct runoff at the watershed outlet. Thus, for a given watershed, there can be many unit hydrographs, each one corresponding to a different duration of effective rainfall.
The UH technique provides a practical and relatively easy-to-apply tool for quantifying the effect of a unit of rainfall on the corresponding runoff from a particular drainage basin. UH theory assumes that a watershed's runoff response is linear and time-invariant, and that the effective rainfall occurs uniformly over the watershed. In the real world, none of these assumptions are strictly true. Nevertheless, application of UH methods typically yields a reasonable approximation of the flood response of natural watersheds. The linear assumptions underlying UH theory allow for the variation in storm intensity over time (i.e., the storm hyetograph) to be simulated by applying the principles of superposition and proportionality to separate storm components to determine the resulting cumulative hydrograph. This allows for a relatively straightforward calculation of the hydrograph response to any arbitrary rain event.
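As an illustration of the superposition and proportionality principles, the Python sketch below convolves an effective-rainfall hyetograph with unit hydrograph ordinates to build a composite direct runoff hydrograph; the ordinates and rainfall pulses are invented demonstration values, not data from any study discussed here.

```python
def direct_runoff(effective_rainfall, unit_hydrograph):
    """Discrete convolution of effective rainfall (depth per period) with
    unit hydrograph ordinates (flow per unit depth of effective rainfall)."""
    n = len(effective_rainfall) + len(unit_hydrograph) - 1
    runoff = [0.0] * n
    for m, p in enumerate(effective_rainfall):   # proportionality: scale the UH by each pulse
        for k, u in enumerate(unit_hydrograph):  # superposition: lag and add the responses
            runoff[m + k] += p * u
    return runoff

# Hypothetical 1-hour UH ordinates (m^3/s per cm of effective rain) and a 3-hour storm
uh = [0.0, 5.0, 12.0, 8.0, 4.0, 1.0, 0.0]
rain = [0.5, 2.0, 1.0]                  # cm of effective rain in each hour
drh = direct_runoff(rain, uh)           # direct runoff hydrograph; baseflow is added separately
```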
An instantaneous unit hydrograph (IUH) is a further refinement of the concept; for an IUH, the input rainfall is assumed to all take place at a discrete point in time (obviously, this is not the case for actual rainstorms). Making this assumption can greatly simplify the analysis involved in constructing a unit hydrograph, and it is necessary for the creation of a geomorphologic instantaneous unit hydrograph (GIUH).
The creation of a GIUH is possible given nothing more than topologic data for a particular drainage basin. In fact, only the number of streams of a given order, the mean length of streams of a given order, and the mean land area draining directly to streams of a given order are absolutely required (and can be estimated rather than explicitly calculated if necessary). It is therefore possible to calculate a GIUH for a basin without any data about stream height or flow, which may not always be available.
Subsurface hydrology hydrograph
In subsurface hydrology (hydrogeology), a hydrograph is a record of the water level (the observed hydraulic head in wells screened across an aquifer).
Typically, a hydrograph is recorded for monitoring of heads in aquifers during non-test conditions (e.g., to observe the seasonal fluctuations in an aquifer). When an aquifer test is being performed, the resulting observations are typically called drawdown, since they are subtracted from pre-test levels and often only the change in water level is dealt with.
Raster hydrograph
Raster hydrographs are pixel-based plots for visualizing and identifying variations and changes in large multidimensional data sets. Originally developed by Keim (2000) they were first applied in hydrology by Koehler (2004) as a means of highlighting inter-annual and intra-annual changes in streamflow. The raster hydrographs in WaterWatch, like those developed by Koehler, depict years on the y-axis and days along the x-axis. Users can choose to plot streamflow (actual values or log values), streamflow percentile, or streamflow class (from 1, for low flow, to 7 for high flow), for Daily, 7-Day, 14-Day, and 28-Day streamflow. For a more comprehensive description of raster hydrographs, see Strandhagen et al. (2006).
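A minimal plotting sketch is shown below, assuming NumPy and Matplotlib are available; it uses 365-day years (ignoring leap days) and log-scaled discharge purely for illustration and does not reproduce the exact WaterWatch implementation.

```python
import numpy as np
import matplotlib.pyplot as plt

def raster_hydrograph(daily_flow, start_year):
    """Plot a raster hydrograph: one row per year, one column per day of year.
    Assumes 365-day years (leap days ignored) purely for illustration."""
    flow = np.asarray(daily_flow, dtype=float)
    n_years = len(flow) // 365
    grid = flow[: n_years * 365].reshape(n_years, 365)
    plt.imshow(np.log10(grid + 1e-6), aspect="auto",
               extent=[1, 365, start_year + n_years, start_year])  # first year at the top
    plt.xlabel("Day of year")
    plt.ylabel("Year")
    plt.colorbar(label="log10(discharge)")
    plt.title("Raster hydrograph")
    plt.show()
```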
Lag-1 hydrograph
A Lag-1 hydrograph is a graph of discharge that requires no time axis (Koehler 2022). This technique allows data properties such as Q, dQ/dt, and d²Q/dt², and trends of increasing, decreasing, or unchanged flow, to be readily seen and understood on a single graph. Flow pulse reference lines can easily be added and interpreted. The methodology is based on the time-series serial-correlation lag-1 graph and uses the normally unwanted (but still valuable) autocorrelation present within the streamflow data.
The x-axis represents the discharge for a date, Q(t), while the y-axis represents the discharge for the next day, Q(t+1).
Data preparation and plotting methods are identical to an autocorrelation lag-1 plot, where 1 indicates a 1-day or daily time step. The table below shows how the time-series discharge values are shifted. It is critical that the temporal sequence of the data is maintained. Thinking of the x values as "flow for today" and the y values as "flow for tomorrow" helps visualize the order of the data.
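The short Python sketch below, using invented sample values, shows how the one-day shift produces the (x, y) pairs of a lag-1 hydrograph; the function and variable names are illustrative.

```python
def lag1_pairs(discharge):
    """Pair each day's flow with the next day's flow: (Q(t), Q(t+1)).
    The temporal order of the input series must be preserved."""
    return list(zip(discharge[:-1], discharge[1:]))

daily_q = [12.0, 11.5, 30.0, 55.0, 40.0, 25.0, 18.0, 14.0]
pairs = lag1_pairs(daily_q)
# pairs[0] == (12.0, 11.5): x = "flow for today", y = "flow for tomorrow".
# Plotting x vs. y (e.g. with matplotlib's scatter) gives the lag-1 hydrograph;
# points above the 1:1 line indicate rising flow, points below indicate recession.
```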
See also
Aquifer test
Hydrogeology
Baseflow
Routing (hydrology)
Runoff model (reservoir)
Stream gauge
Surface water
References
Keim, D.A. 2000. Designing pixel-oriented visualization techniques: theory and applications. IEEE Transactions on Visualization and Computer Graphics, 6(1), 59-78.
Koehler, R. 2004. Raster Based Analysis and Visualization of Hydrologic Time Series. Ph.D. dissertation, University of Arizona. Tucson, AZ, 189 p.
Koehler, R. 2022. In preparation, The Lag-1 Hydrograph – An Alternate Way to Plot Streamflow Time-Series Data. American Institute of Hydrology Bulletin, Winter 2022.
Strandhagen, E., Marcus, W.A., and Meacham, J.E. 2006. Views of the rivers: representing streamflow of the greater Yellowstone ecosystem (hotlink to http://geography.uoregon.edu/amarcus/Publications/Strandhagen-et-al_2006_Cart_Pers.pdf). Cartographic Perspectives, no. 55, Fall.
L. Sherman, "Stream Flow from Rainfall by the Unit Graph Method," Engineering News Record, No. 108, 1932, pp. 501-505.
External links
The U.S. Geological Survey (USGS) offers real-time streamflow data for thousands of streams in the United States.
The U.S. Geological Survey (USGS) also offers an online toolkit to create a raster hydrograph for any of its streamflow gaging stations in the United States.
SCS Dimensionless Unit Hydrograph.
SERC activity and Matlab code for calculating and using Unit Hydrograph.
Hydrology
Hydraulic engineering | Hydrograph | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,249 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering"
] |
5,033,416 | https://en.wikipedia.org/wiki/Beehive%20burner | A wood waste burner, known as a teepee burner or wigwam burner in the United States and a beehive burner in Canada, is a free-standing conical steel structure usually ranging from 30 to 60 feet in height. They are named for their resemblance to beehives, teepees or wigwams. A sawdust burner is cylindrical. They have an opening at the top that is covered with a steel grill or mesh to keep sparks and glowing embers from escaping. Sawdust and wood scraps are delivered to an opening near the top of the cone by means of a conveyor belt or Archimedes' screw, where they fall onto the fire near the center of the structure.
Teepee or beehive burners are used to dispose of waste wood in logging yards and sawdust from sawmills by incineration. As a result, they produce a large quantity of smoke and ash, which is vented directly into the atmosphere without filtering, contributing to poor air quality. The burners are considered to be a major source of air pollution and have been phased out in most areas.
Teepee burners went out of general use in the Northwestern United States by the mid 1970s, and are prohibited from operation in Oregon, as well as southwestern Washington State.
There are a few derelict beehive burners remaining in California, Oregon, Washington State and Western Canada. The majority of wood waste is now recycled and used as a component in various forest products, such as pellet fuel, particle board and mulch.
Gallery
See also
Air pollution in British Columbia
Clean Air Act of 1970
References
External links
Historic images of teepee burners in Oregon from the Salem, Oregon, Public Library
Rusty Relics: Teepee Burners
Environmental issues in Canada
History of the Pacific Northwest
Incineration
Logging | Beehive burner | [
"Chemistry",
"Engineering"
] | 371 | [
"Combustion engineering",
"Incineration"
] |
5,035,864 | https://en.wikipedia.org/wiki/Chipkill |
Chipkill is IBM's trademark for a form of advanced error checking and correcting (ECC) computer memory technology that protects memory systems from single memory chip failures and multi-bit errors from any portion of a single memory chip. One simple scheme to perform this function scatters the bits of a Hamming code ECC word across multiple memory chips, such that the failure of any single memory chip will affect only one ECC bit per word. This allows memory contents to be reconstructed despite the complete failure of one chip. Typical implementations use more advanced codes, such as a BCH code, that can correct multiple bits with less overhead.
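The toy Python sketch below illustrates only the bit-scattering idea, not IBM's actual implementation; the chip count, word width, and layout functions are assumptions for demonstration. It shows that when each chip holds a single bit of every ECC word, the loss of one whole chip corrupts at most one bit per word, within reach of a single-bit-correcting code, whereas an unscattered x4 layout loses four bits per word.

```python
import random

WORD_BITS = 72   # e.g. 64 data + 8 check bits per ECC word (illustrative sizes)
NUM_WORDS = 50

def layout(num_chips):
    """Map (word, bit) -> chip index, spreading each word's bits evenly over
    num_chips chips; each chip holds WORD_BITS // num_chips bits per word."""
    bits_per_chip = WORD_BITS // num_chips
    return {(w, b): b // bits_per_chip
            for w in range(NUM_WORDS) for b in range(WORD_BITS)}

def worst_loss(mapping, num_chips):
    """Worst-case number of corrupted bits in any ECC word if one whole chip fails."""
    dead = random.randrange(num_chips)
    loss = [0] * NUM_WORDS
    for (w, b), chip in mapping.items():
        if chip == dead:
            loss[w] += 1
    return max(loss)

# 18 "x4" chips, 4 bits of every word per chip: a chip failure is a 4-bit error per word.
print("x4 chips, no scattering :", worst_loss(layout(18), 18))   # -> 4

# 72 "x1"-style chips, bits scattered so each chip holds 1 bit per word:
# a chip failure becomes a single-bit error per word, correctable by ordinary SECDED ECC.
print("bits scattered per chip :", worst_loss(layout(72), 72))   # -> 1
```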
Chipkill is frequently combined with dynamic bit-steering, so that if a chip fails (or has exceeded a threshold of bit errors), another, spare, memory chip is used to replace the failed chip. The concept is similar to that of RAID, which protects against disk failure, except that now the concept is applied to individual memory chips. The technology was developed by the IBM Corporation in the early and mid-1990s. An important RAS feature, Chipkill technology is deployed primarily on SSDs, mainframes, and midrange servers.
An equivalent system from Sun Microsystems is called Extended ECC, while equivalent systems from HP are called Advanced ECC and Chipspare. A similar system from Intel, called Lockstep memory, provides double-device data correction (DDDC) functionality. Similar systems from Micron, called redundant array of independent NAND (RAIN), and from SandForce, called RAISE level 2, protect data stored on SSDs from any single NAND flash chip failure.
A 2009 paper using data from Google's data centers provided evidence demonstrating that in observed Google systems, DRAM errors were recurrent at the same location, and that 8% of DIMMs were affected each year. Specifically, "In more than 85% of the cases a correctable error is followed by at least one more correctable error in the same month." DIMMs with Chipkill error correction showed a lower fraction of DIMMs reporting uncorrectable errors compared to DIMMs with error-correcting codes that can only correct single-bit errors. A 2010 paper from the University of Rochester also showed that Chipkill memory resulted in substantially fewer memory errors, using both real-world memory traces and simulations.
See also
ECC memory
Lockstep memory
Memory ProteXion
Redundant array of independent memory
Single-error correction and double-error detection (SECDED)
References
External links
Intel E7500 Chipset MCH Intelx4 Single Device Data Correction (x4 SDDC) Implementation and Validation, Intel Application note AP-726, August 2002.
DRAM study turns assumptions about errors upside down, Ars Technica, October 7, 2009
Enabling Memory Reliability, Availability, and Serviceability Features on Dell PowerEdge Servers, 2005
Chipkill correct memory architecture, August 2000, by David Locklear
The Mathematics of Chipkill ECC, October 2015, by Bob Day
Computer memory
Error detection and correction
IBM computer hardware | Chipkill | [
"Engineering"
] | 622 | [
"Error detection and correction",
"Reliability engineering"
] |
5,037,034 | https://en.wikipedia.org/wiki/P300-CBP%20coactivator%20family | The p300-CBP coactivator family in humans is composed of two closely related transcriptional co-activating proteins (or coactivators):
p300 (also called EP300 or E1A binding protein p300)
CBP (also known as CREB-binding protein or CREBBP)
Both p300 and CBP interact with numerous transcription factors and act to increase the expression of their target genes.
Protein structure
p300 and CBP have similar structures. Both contain five protein interaction domains: the nuclear receptor interaction domain (RID), the KIX domain (CREB and MYB interaction domain), the cysteine/histidine regions (TAZ1/CH1 and TAZ2/CH3) and the interferon response binding domain (IBiD). The last four domains, KIX, TAZ1, TAZ2 and IBiD of p300, each bind tightly to a sequence spanning both transactivation domains (9aaTADs) of transcription factor p53.
In addition p300 and CBP each contain a protein or histone acetyltransferase (PAT/HAT) domain and a bromodomain that binds acetylated lysines and a PHD finger motif with unknown function. The conserved domains are connected by long stretches of unstructured linkers.
Regulation of gene expression
p300 and CBP are thought to increase gene expression in three ways:
by relaxing the chromatin structure at the gene promoter through their intrinsic histone acetyltransferase (HAT) activity.
recruiting the basal transcriptional machinery including RNA polymerase II to the promoter.
acting as adaptor molecules.
p300 regulates transcription by directly binding to transcription factors (see external reference for explanatory image). This interaction is managed by one or more of the p300 domains: the nuclear receptor interaction domain (RID), the CREB and MYB interaction domain (KIX), the cysteine/histidine regions (TAZ1/CH1 and TAZ2/CH3) and the interferon response binding domain (IBiD). The last four domains, KIX, TAZ1, TAZ2 and IBiD of p300, each bind tightly to a sequence spanning both transactivation domains 9aaTADs of transcription factor p53.
Enhancer regions, which regulate gene transcription, are known to be bound by p300 and CBP, and ChIP-seq for these proteins has been used to predict enhancers.
Work done by Heintzman and colleagues showed that 70% of the p300 binding occurs in open chromatin regions as seen by the association with DNase I hypersensitive sites. Furthermore, they have described that most p300 binding (75%) occurs far away from transcription start sites (TSSs) and these binding sites are also associated with enhancer regions as seen by H3K4me1 enrichment. They have also found some correlation between p300 and RNAPII binding at enhancers, which can be explained by the physical interaction with promoters or by enhancer RNAs.
Function in G protein signaling
An example of a process involving p300 and CBP is G protein signaling. Some G proteins stimulate adenylate cyclase that results in elevation of cAMP. cAMP stimulates PKA, which consists of four subunits, two regulatory and two catalytic. Binding of cAMP to the regulatory subunits causes release of the catalytic subunits. These subunits can then enter the nucleus to interact with transcriptional factors, thus affecting gene transcription. The transcription factor CREB, which interacts with a DNA sequence called a cAMP response element (or CRE), is phosphorylated on a serine (Ser 133) in the KID domain. This modification is PKA mediated, and promotes the interaction of the KID domain of CREB with the KIX domain of CBP or p300 and enhances transcription of CREB target genes, including genes that aid gluconeogenesis. This pathway can be initiated by adrenaline activating β-adrenergic receptors on the cell surface.
Clinical significance
Mutations in CBP, and to a lesser extent p300, are the cause of Rubinstein-Taybi Syndrome, which is characterized by severe mental retardation. These mutations result in the loss of one copy of the gene in each cell, which reduces the amount of CBP or p300 protein by half. Some mutations lead to the production of a very short, nonfunctional version of the CBP or p300 protein, while others prevent one copy of the gene from making any protein at all. Although researchers do not know how a reduction in the amount of CBP or p300 protein leads to the specific features of Rubinstein-Taybi syndrome, it is clear that the loss of one copy of the CBP or p300 gene disrupts normal development.
Defects in CBP HAT activity appears to cause problems in long-term memory formation.
CBP and p300 have also been found to be involved in multiple rare chromosomal translocations that are associated with acute myeloid leukemia. For example, researchers have found a translocation between chromosomes 8 and 22 (in the region containing the p300 gene) in several people with a cancer of blood cells called acute myeloid leukemia (AML). Another translocation, involving chromosomes 11 and 22, has been found in a small number of people who have undergone cancer treatment. This chromosomal change is associated with the development of AML following chemotherapy for other forms of cancer.
Mutations in the p300 gene have been identified in several other types of cancer. These mutations are somatic, which means they are acquired during a person's lifetime and are present only in certain cells. Somatic mutations in the p300 gene have been found in a small number of solid tumors, including cancers of the colon and rectum, stomach, breast and pancreas. Studies suggest that p300 mutations may also play a role in the development of some prostate cancers, and could help predict whether these tumors will increase in size or spread to other parts of the body. In cancer cells, p300 mutations prevent the gene from producing any functional protein. Without p300, cells cannot effectively restrain growth and division, which can allow cancerous tumors to form.
Mouse models
CBP and p300 are critical for normal embryonic development, as mice completely lacking either CBP or p300 protein, die at an early embryonic stage. In addition, mice which lack one functional copy (allele) of both the CBP and p300 genes (i.e. are heterozygous for both CBP and p300) and thus have half of the normal amount of both CBP and p300, also die early in embryogenesis. This indicates that the total amount of CBP and p300 protein is critical for embryo development.
Data suggest that some cell types can tolerate loss of CBP or p300 better than the whole organism can. Mouse B cells or T cells lacking either CBP and p300 protein develop fairly normally, but B or T cells that lack both CBP and p300 fail to develop in vivo. Together, the data indicate that, while individual cell types require different amounts of CBP and p300 to develop or survive and some cell types are more tolerant of loss of CBP or p300 than the whole organism, it appears that many, if not all cell types may require at least some p300 or CBP to develop.
References
External links
p300-CBP regulatory mechanism in the IFN-β enhanceosome complex
Gene families
Membrane biology
EC 2.3.1
Transcription coregulators | P300-CBP coactivator family | [
"Chemistry"
] | 1,623 | [
"G proteins",
"Molecular biology",
"Membrane biology",
"Signal transduction"
] |
28,722,065 | https://en.wikipedia.org/wiki/Proteogenomics | Proteogenomics is a field of biological research that utilizes a combination of proteomics, genomics, and transcriptomics to aid in the discovery and identification of peptides. Proteogenomics is used to identify new peptides by comparing MS/MS spectra against a protein database that has been derived from genomic and transcriptomic information. Proteogenomics often refers to studies that use proteomic information, often derived from mass spectrometry, to improve gene annotations. The utilization of both proteomics and genomics data alongside advances in the availability and power of spectrographic and chromatographic technology led to the emergence of proteogenomics as its own field in 2004.
Proteomics deals with proteins in the same way that genomics studies the genetic code of entire organisms, while transcriptomics deals with the study of RNA sequencing and transcripts. While all three fields might use forms of mass spectrometry and chromatography to identify and study the functions of DNA, RNA, and proteins, proteomics relies on the assumption that current gene models are correct and that all relevant protein sequences can be found in a reference database such as the Proteomics Identifications Database. Proteogenomics helps eliminate this reliance on existing, limited genetic models by combining datasets from multiple fields in order to produce a database of proteins or genetic markers. In addition, the emergence of novel protein sequences due to mutations often cannot be accounted for in traditional proteomic databases, but can be predicted and studied using a synthesis of genomic and transcriptomic data.
The resulting research has applications in improving gene annotations, studying mutations, and understanding the effects of genetic manipulation.
More recently, the joint profiling of surface proteins and mRNA transcripts from single cells by methods such as CITE-Seq and ESCAPE has been referred to as single-cell proteogenomics, although the goals of these studies are not related to peptide identification. Since 2019 these methods are more commonly referred to as multimodal omics or multi-omics.
History
Proteogenomics emerged as an independent field in 2004, based on the integration of technological advancements in next-generation sequencing genomics, and mass spectrometry proteomics. The term itself came into use that year, with the publication of a paper by George Church’s research group describing their discovery of a proteogenomic mapping technique that utilized proteomics data to better annotate the genome of the bacteria M. pneumoniae. By using a modern protein database, the lab mapped peptides detected in a whole cell onto a genetic scaffold using tandem mass spectrometry, then used the generated "hits" in order to create a "proteogenomic map" based on traditional genetic signals. The resulting map proved extremely accurate, with over 81% of predicted genomic reading frames being detected in the bacterial cells studied. In addition, the lab discovered several new frames not predicted via purely genetic methods, as well as some evidence supporting the idea that several predictions based genetic models could be false, proving the accuracy and cost-effectiveness of the hybrid technique.
The field expanded over the next two decades, initially using proteomics data to aid in refining genetic models via protein databases. In the 2020s, one of the most common techniques for identifying peptides involves tandem mass spectrometry. This technique originated with Eng and Yates in 1994 and involves comparing an experimentally derived peptide spectrum against theoretical peptide fragment spectra and outputting the most likely matches found. However, in the absence of an established peptide database, proteogenomics instead compares the experimental spectrum to a genomic database, which can then be used for genome annotation - as described in George Church's work.[3] The latter technique has become more widely used over the last decade, in large part due to the increasing affordability and speed of genomic sequencing techniques coupled with the increasing sensitivity of mass spectrometry-based proteomics.
Methodology
The main idea behind the proteogenomic approach is to identify peptides by comparing MS/MS data to protein databases that contain predicted protein sequences. The protein database is generated in a variety of ways through the utilization of genomic and transcriptomic data. Below are some of the ways in which protein databases are generated:
Six-frame translation
Six-frame translations can be utilized to generate a database that predicts protein sequences. The limitation of this method is that databases will be very large due to the number of sequences that are generated, some of which do not exist in nature.
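As a rough illustration of how six-frame entries can be generated from a nucleotide sequence, the sketch below uses Biopython (assuming it is installed); in a real pipeline the translated frames would typically also be split at stop codons and filtered by length before being indexed, and the example sequence here is invented.

```python
from Bio.Seq import Seq

def six_frame_translation(dna):
    """Return the six conceptual translations (3 forward frames, 3 reverse-complement frames)."""
    seq = Seq(dna.upper())
    frames = []
    for strand in (seq, seq.reverse_complement()):
        for offset in range(3):
            sub = strand[offset:]
            sub = sub[: len(sub) - (len(sub) % 3)]  # trim to whole codons
            frames.append(str(sub.translate()))     # '*' marks stop codons
    return frames

# Tiny made-up example sequence
for i, prot in enumerate(six_frame_translation("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"), 1):
    print(f"frame {i}: {prot}")
```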
Ab initio gene prediction
In this method, a protein base is generated by gene predicting algorithms that enable the identification of protein coding regions. The database is similar to one generated through six-frame translation in regards to the fact that the databases can be very large.
Expressed sequence tag data
Six-frame translations can utilize an expressed sequence tag (EST) to generate protein databases. EST data provide transcription information that can aid in the creation of the database. The database can be very large and has the disadvantage of having multiple copies of a given sequence present; however, this problem can be circumvented by compressing the protein sequence generated through computational strategies.
Other methods
Protein databases can also be created by using RNA sequencing data, annotated RNA transcripts, and variant protein sequences. Also, there are other more specialized protein databases that can be made to appropriately identify the peptide of interest.
Another method in the identification of proteins through proteogenomics is comparative proteogenomics. Comparative proteogenomics compares proteomic data from multiple related species concurrently and exploits the homology between their proteins to improve annotations with higher statistical confidence.
Applications
Proteogenomics can be applied in different ways. One application is the improvement of gene annotations in various organisms. Gene annotation involves discovering genes and their functions.
Proteogenomics has become especially useful in the discovery and improvement of gene annotations in prokaryotic organisms. For example, various microorganisms have had their genomic annotation studied through the proteogenomic approach including, Escherichia coli, Mycobacterium, and multiple species of Shewanella bacteria.
Besides improving gene annotations, proteogenomic studies can also provide valuable information about the presence of programmed frameshifts, N-terminal methionine excision, signal peptides, proteolysis and other post-translational modifications. Proteogenomics has potential applications in medicine, especially in oncology research. Cancer occurs through genetic mutations such as methylation, translocation, and somatic mutations. Research has shown that both genomic and proteomic information are needed to understand the molecular variations that lead to cancer. Proteogenomics has aided in this through the identification of protein sequences that may have functional roles in cancer. A specific example of this occurred in a study involving colon cancer that resulted in the discovery of potential targets for cancer treatment. Proteogenomics has also led to personalized cancer-targeting immunotherapies, where antibody epitopes for cancer antigens are predicted using proteogenomics to create medicines that act on the patient's specific tumor. In addition to treatment, proteogenomics may provide insight into cancer diagnosis. In studies involving colon and rectal cancer, proteogenomics was utilized to identify somatic mutations. The identification of somatic mutations could be used to diagnose cancer in patients. In addition to direct applications in cancer treatment and diagnosis, a proteogenomic approach can be used to study proteins that result in resistance to chemotherapy.
Challenges
Proteogenomics may offer methods of peptide identification without the disadvantage of incomplete or inaccurate protein databases faced by proteomics; however, the proteogenomic approach faces challenges of its own. One of the biggest challenges of proteogenomics is the sheer size of the protein databases generated. Statistically, a large protein database is more likely to result in incorrect matching of the protein database entries to the MS/MS data, and this issue can hinder the identification of new peptides. False positives are also an issue in proteogenomic approaches; they can occur as a result of extremely large protein databases in which mismatched data lead to incorrect identification. Another issue is the incorrect matching of MS/MS spectra to protein sequence data that corresponds to a similar peptide instead of the actual peptide. There are also cases of a peptide mapping to multiple gene sites, which can lead to data that can be interpreted in different ways. Despite these challenges, there are ways to reduce many of the errors that occur. For example, when dealing with a very large protein database, one could compare the identified novel peptide sequences to all of the sequences within the database and then compare the post-translational modifications. Next, it can be determined whether the two sequences represent the same peptide or whether they are two different peptides.
References
Proteomics
Genomics
Mass spectrometry | Proteogenomics | [
"Physics",
"Chemistry"
] | 1,850 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
28,729,190 | https://en.wikipedia.org/wiki/Cardiovascular%20Cell%20Therapy%20Research%20Network | Cardiovascular Cell Therapy Research Network (CCTRN) is a network of physicians, scientists, and support staff dedicated to studying stem cell therapy for treating heart disease. The CCTRN is funded by the National Institutes of Health (NIH) and includes expert researchers with experience in cardiovascular care at seven stem cell centers in the United States. The goals of the Network are to complete research studies that will potentially lead to more effective treatments for patients with cardiovascular disease, and to share knowledge quickly with the healthcare community.
Mission statement
The mission of the CCTRN is to achieve public health advances for the treatment of cardiovascular diseases, through the conduct and dissemination of collaborative research leading to evidence-based treatment options and improved outcome for patients with heart disease.
Components of the Network
The sponsor
The National Heart, Lung, and Blood Institute (NHLBI) is one of 27 institutes/centers of the National Institutes of Health (NIH) and supports research related to the causes, prevention, diagnosis, and treatment of heart, blood vessel, lung, and blood diseases; and sleep disorders. The NHLBI plans and directs research in the development and evaluation of interventions and devices related to prevention, treatment, and rehabilitation of patients with such diseases and disorders.
The Coordinating Center for Clinical Trials
Since 1971, the Coordinating Center for Clinical Trials (CCCT) at The University of Texas School of Public Health has played a leading role in cardiovascular disease and vision research by serving as a coordinating center for 25 nationwide multicenter clinical trials. The CCCT's primary function is to provide and coordinate all operations, procedures, and activities of a large-scale randomized controlled clinical trial. The CCCT serves as the Data Coordinating Center for the CCTRN. The DCC was led by Lemuel Moye (2006-2019) and Barry R. Davis (2019-2021).
The clinical sites
The CCTRN includes seven stem cell centers in the United States with experience and expertise in clinical trials studying treatments for heart disease and peripheral artery disease. These sites include:
Minneapolis Heart Institute Foundation
Texas Heart Institute Stem Cell Center
University of Florida College of Medicine
University of Louisville
Vascular and Cardiac Center for Adult Stem Cell Therapy (VC-CAST)
University of Miami Miller School of Medicine
Stanford University
Body of work
In July 2008, the CCTRN opened enrollment in two studies in patients who had recently had heart attacks: TIME (NCT00684021) and LateTIME (NCT00684060). The purpose of these studies was to determine if stem cells safely taken from an individual's bone marrow could be transplanted back into the injured heart muscle of the individual and improve the heart's ability to pump following a heart attack, as well as to determine the best time for transplanting the cells following a heart attack. The results of both studies were presented at the American Heart Association (AHA) Scientific meetings in 2011 (LateTIME) and 2012 (TIME), and simultaneously published in JAMA.
In March 2009, the CCTRN opened enrollment in a heart failure study: FOCUS (NCT00824005). The purpose of this study was to determine the safety and effectiveness of injecting bone marrow stem cells into heart muscle in an attempt to promote blood vessel growth that could potentially improve the blood supply in hearts that are failing. This study recruited patients who had heart failure, but would no longer benefit from other forms of standard treatment such as surgery or coronary artery repair procedures such as balloon angioplasty or stent placement. The results of this study were presented at the American College of Cardiology (ACC) Annual Meeting in 2012 and simultaneously published in JAMA.
In June 2013, CCTRN opened enrollment in a study in peripheral artery disease: PACE (NCT01774097). The purpose of this study was to determine the safety and effectiveness of bone marrow-derived stem cell therapy on improving blood flow and walking ability in patients with peripheral artery disease. The results of this study were published in Circulation in 2017.
In October 2015, CCTRN opened enrollment in a study in heart failure: CONCERT-HF (NCT02501811). The purpose of the study was to determine whether giving autologous Mesenchymal Stem Cells (MSCs) and/or C-kit+ cells to patients with heart muscle damage is safe and to help us learn whether these treatments improve heart function for people who are not ideal candidates for other forms of standard therapy such as surgery. The results of this study were published in the European Journal of Heart Failure in 2021.
In September 2016, CCTRN opened enrollment in a study in anthracycline-induced cardiomyopathy (AIC): SENECA (NCT02509156). The purpose of the study was to determine whether giving allogeneic mesenchymal stem cells (MSCs) to patients with AIC is safe and whether these treatments improve heart function. The results of this study were published in JACC CardioOncology in 2020.
References
External links
CCTRN at UTHealth — School of Public Health
ClinicalTrials.gov
International Society for Stem Cell Research
Becoming a Research Volunteer (OHRP)
Heart disease organizations
Stem cell research | Cardiovascular Cell Therapy Research Network | [
"Chemistry",
"Biology"
] | 1,087 | [
"Translational medicine",
"Tissue engineering",
"Stem cell research"
] |
24,173,395 | https://en.wikipedia.org/wiki/Superconducting%20wire | Superconducting wires are electrical wires made of superconductive material. When cooled below their transition temperatures, they have zero electrical resistance. Most commonly, conventional superconductors such as niobium–titanium are used, but high-temperature superconductors such as YBCO are entering the market.
Superconducting wire's advantages over copper or aluminum include higher maximum current densities and zero power dissipation. Its disadvantages include the cost of refrigeration of the wires to superconducting temperatures (often requiring cryogens such as liquid nitrogen or liquid helium), the danger of the wire quenching (a sudden loss of superconductivity), the inferior mechanical properties of some superconductors, and the cost of wire materials and construction.
Its main application is in superconducting magnets, which are used in scientific and medical equipment where high magnetic fields are necessary.
Important parameters
The construction and operating temperature will typically be chosen to maximise:
Critical temperature Tc, the temperature below which the wire becomes a superconductor
Critical current density Jc, the maximum current a superconducting wire can carry per unit cross-sectional area (for example, on the order of 20 kA/cm2).
Superconducting wires/tapes/cables usually consist of two key features:
The superconducting compound (usually in the form of filaments/coating)
A conduction stabilizer, which carries the current in case of the loss of superconductivity (known as quenching) in the superconducting material.
The current sharing temperature Tcs is the temperature at which the current transported through the superconductor also starts to flow through the stabilizer. However, Tcs is not the same as the quench temperature (or critical temperature) Tc; in the former case, there is partial loss of superconductivity, while in the latter case, the superconductivity is entirely lost.
LTS wire
Low-temperature superconductor (LTS) wires are made from superconductors with low critical temperature, such as Nb3Sn (niobium–tin) and NbTi (niobium–titanium). Often the superconductor is in filament form in a copper or aluminium matrix which carries the current should the superconductor quench for any reason. The superconductor filaments can form a third of the total volume of the wire.
Preparation
Wire drawing
The normal wire-drawing process can be used for malleable alloys such as niobium–titanium.
Surface diffusion
Vanadium–gallium (V3Ga) can be prepared by surface diffusion where the high temperature component as a solid is bathed in the other element as liquid or gas. When all components remain in the solid state during high temperature diffusion this is known as the bronze process.
HTS wire
High-temperature superconductor (HTS) wires are made from superconductors with high critical temperature (high-temperature superconductivity), such as YBCO and BSCCO.
Powder-in-tube
The powder-in-tube (PIT, or oxide powder in tube, OPIT) process is an extrusion process often used for making electrical conductors from brittle superconducting materials such as niobium–tin or magnesium diboride, and ceramic cuprate superconductors such as BSCCO. It has been used to form wires of the iron pnictides. (PIT is not used for yttrium barium copper oxide as it does not have the weak layers required to generate adequate 'texture' (alignment) in the PIT process.)
This process is used because the high-temperature superconductors are too brittle for normal wire forming processes. The tubes are metal, often silver. Often the tubes are heated to react the mix of powders. Once reacted the tubes are sometimes flattened to form a tape-like conductor. The resulting wire is not as flexible as conventional metal wire, but is sufficient for many applications.
There are in situ and ex situ variants of the process, as well a 'double core' method that combines both.
Coated superconductor tape or wire
These wires take the form of a metal tape of about 10 mm width and about 100 micrometer thickness, coated with superconductor materials such as YBCO. A few years after the discovery of high-temperature superconducting materials such as YBCO, it was demonstrated that epitaxial YBCO thin films grown on lattice-matched single crystals such as magnesium oxide (MgO), strontium titanate (SrTiO3) and sapphire had high critical current densities of 10–40 kA/mm2. However, a lattice-matched flexible material was needed for producing a long tape. YBCO films deposited directly on metal substrate materials exhibit poor superconducting properties. It was demonstrated that a c-axis oriented yttria-stabilized zirconia (YSZ) intermediate layer on a metal substrate can yield YBCO films of higher quality, which still had one to two orders of magnitude lower critical current density than that produced on the single crystal substrates.
The breakthrough came with the invention of the ion beam-assisted deposition (IBAD) technique to produce biaxially aligned yttria-stabilized zirconia (YSZ) thin films on metal tapes and the Rolling-Assisted-Biaxially-Textured-Substrates (RABiTS) process to produce biaxially textured metallic substrates via thermomechanical processing.
In the IBAD process, the biaxially-textured YSZ film provided a single-crystal-like template for the epitaxial growth of the YBCO films. These YBCO films achieved critical current densities of more than 1 MA/cm2. Other buffer layers such as cerium oxide (CeO2) and magnesium oxide (MgO) were produced using the IBAD technique for the superconductor films. Details of the IBAD substrates and technology were reviewed by Arendt. The LMO-enabled IBAD-MgO process was invented and developed at the Oak Ridge National Laboratory and won an R&D 100 Award in 2007. This LMO-enabled substrate process is now being used by essentially all manufacturers of HTS wire based on the IBAD substrate.
In the RABiTS substrates, the metallic template itself was biaxially-textured and heteroepitaxial buffer layers of Y2O3, YSZ and CeO2 were then deposited on the metallic template, followed by heterepitaxial deposition of the superconductor layer. Details of the RABiTS substrates and technology were reviewed by Goyal.
YBCO coated superconductor tapes capable of carrying more than 500 A/cm-width at 77 K and 1000 A/cm-width at 30 K under high magnetic field have been demonstrated. In 2021 YBCO coated superconductor tapes capable of carrying more than 250 A/cm-width at 77 K and 2500 A/cm-width at 20 K were reported for commercially produced wires. In 2021 an experimental demonstration of an over-doped YBCO film reported 90 MA/cm2 at 5 K and 6 MA/cm2 at 77 K in a 7 T magnetic field.
Metal organic chemical vapor deposition
Metal organic chemical vapor deposition (MOCVD) is one of the deposition processes used for fabrication of YBCO coated conductor tapes. Ignatiev provides an overview of MOCVD processes used to deposit YBCO films via MOCVD deposition.
Reactive co-evaporation
Superconducting layer in the 2nd generation superconducting wires can also be grown by thermal evaporation of constituent metals, rare-earth element, barium, and copper. Prusseit provides an overview of the thermal evaporation process used to deposit high-quality YBCO films.
Pulsed laser deposition
Superconducting layer in the 2nd generation superconducting wires can also be grown by pulsed laser deposition (PLD). Christen provides an overview of the PLD process used to deposit high-quality YBCO films.
Standards
There are several IEC (International Electrotechnical Commission) standards related to superconducting wires under TC90.
See also
Copper-clad aluminium wire
Graphene-clad wire
Skin effect
Residual-resistivity ratio
References
Superconductors
Wire | Superconducting wire | [
"Chemistry",
"Materials_science"
] | 1,758 | [
"Superconductivity",
"Superconductors"
] |
24,174,843 | https://en.wikipedia.org/wiki/Shock%20polar | The term shock polar is generally used with the graphical representation of the Rankine–Hugoniot equations in either the hodograph plane or the pressure ratio-flow deflection angle plane. The polar itself is the locus of all possible states after an oblique shock. The shock polar was first introduced by Adolf Busemann in 1929.
Shock polar in the (φ, p) plane
The minimum angle, $\theta_{\min}$, which an oblique shock can have is the Mach angle $\mu = \sin^{-1}(1/M_1)$, where $M_1$ is the initial Mach number before the shock, and the greatest angle corresponds to a normal shock. The range of shock angles is therefore $\mu \le \theta \le 90^\circ$. To calculate the pressures for this range of angles, the Rankine–Hugoniot equations are solved for pressure:

$$\frac{p_2}{p_1} = 1 + \frac{2\gamma}{\gamma+1}\left(M_1^2\sin^2\theta - 1\right)$$

To calculate the possible flow deflection angles, the relationship between shock angle $\theta$ and $\varphi$ is used:

$$\tan\varphi = 2\cot\theta\,\frac{M_1^2\sin^2\theta - 1}{M_1^2(\gamma + \cos 2\theta) + 2}$$

where $\gamma$ is the ratio of specific heats and $\varphi$ is the flow deflection angle.
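A minimal numerical sketch of tracing the polar in the (φ, p) plane is given below, assuming NumPy is available; the upstream Mach number M1 = 2 and γ = 1.4 are illustrative values, and the function name is arbitrary.

```python
import numpy as np

def shock_polar(M1, gamma=1.4, n=200):
    """Return flow deflection angles (deg) and pressure ratios p2/p1 along the shock polar."""
    mu = np.arcsin(1.0 / M1)               # Mach angle: weakest possible oblique shock
    theta = np.linspace(mu, np.pi / 2, n)  # shock angle from Mach angle to normal shock
    m_n2 = (M1 * np.sin(theta)) ** 2       # squared normal component of the Mach number
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (m_n2 - 1.0)
    tan_phi = 2.0 / np.tan(theta) * (m_n2 - 1.0) / (M1**2 * (gamma + np.cos(2 * theta)) + 2.0)
    phi = np.degrees(np.arctan(tan_phi))   # flow deflection angle
    return phi, p_ratio

phi, p = shock_polar(M1=2.0)
print("maximum deflection angle (deg): %.2f" % phi.max())  # approximately 23 deg for M1 = 2
print("normal-shock pressure ratio   : %.2f" % p[-1])      # 4.5 for M1 = 2, gamma = 1.4
```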
Uses of shock polars
One of the primary uses of shock polars is in the field of shock wave reflection. A shock polar is plotted for the conditions before the incident shock, and a second shock polar is plotted for the conditions behind the shock, with its origin located on the first polar, at the angle through which the incident shock wave deflects the flow. Based on the intersections between the incident shock polar and the reflected shock polar, conclusions as to which reflection patterns are possible may be drawn. Often, it is used to graphically determine whether regular shock reflection is possible, or whether Mach reflection occurs.
References
Fluid dynamics | Shock polar | [
"Chemistry",
"Engineering"
] | 306 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
24,175,195 | https://en.wikipedia.org/wiki/C9H6O4 | {{DISPLAYTITLE:C9H6O4}}
The molecular formula C9H6O4 (molar mass: 178.14 g/mol, exact mass 178.026609 u) may refer to:
Aesculetin, a coumarin
Daphnetin, a coumarin
Ninhydrin (2,2-dihydroxyindane-1,3-dione)
Molecular formulas | C9H6O4 | [
"Physics",
"Chemistry"
] | 93 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,177,288 | https://en.wikipedia.org/wiki/Franklin%20bells | Franklin bells (also known as lightning bells) are an early demonstration of electric charge designed to work with a Leyden jar or a lightning rod. Franklin bells are only a qualitative indicator of electric charge and were used for simple demonstrations rather than research. The bells are an adaptation to the first device that converted electrical energy into mechanical energy in the form of continuous mechanical motion: in this case, the moving of a bell clapper back and forth between two oppositely charged bells.
History
Scientific investigation of the phenomena of lightning originates with Benjamin Franklin. He accumulated analogical evidence favoring the supposition that lightning must be an electrical discharge on a large scale. In the mid-18th century, lightning strikes were a serious problem for buildings and structures, causing damage and sometimes even fires. Franklin set out to understand the nature of lightning and to find ways to protect buildings from its destructive effects. He began his investigations by observing how lightning strikes affected various types of buildings. He noticed that some buildings were more vulnerable to lightning strikes than others and that buildings with sharp pointed roofs were more likely to be struck than those with flat roofs. He also observed that lightning seemed to follow conductive paths, such as metal rods or wires, and that these paths could be used to divert lightning strikes away from buildings.
Based on these observations, Franklin developed the idea of the lightning rod. The lightning rod consists of a metal rod or conductor, typically made of copper or aluminum, that is mounted on the roof of a building and connected to the ground by means of a conductive wire. When lightning strikes, the rod provides a path of least resistance for the electrical charge, allowing it to be safely conducted to the ground rather than passing through the building and causing damage. The invention of the lightning rod was a significant breakthrough in the field of electrical engineering, and has saved countless buildings and lives from the destructive effects of lightning strikes.
The Franklin bells were named for Benjamin Franklin, an early adopter who used it during his experimentation with electricity. Its predecessor was invented by the Scottish inventor Andrew Gordon, Professor of Natural Philosophy at the University of Erfurt, Germany. In 1742 he invented a device known as the "electric chimes", which was widely described in textbooks of electricity. Franklin made use of Gordon's idea by connecting one bell to his pointed lightning rod, attached to a chimney, and a second bell to the ground. One of his papers contains the following description:
In September 1752, I erected an iron rod to draw the lightning down into my house, in order to make some experiments on it, with two bells to give notice when the rod should be electrified.
I found the bells rang sometimes when there was no lightning or thunder, but only a dark cloud over the rod; that sometimes after a flash of lightning they would suddenly stop; and at other times, when they had not rang before, they would, after a flash, suddenly begin to ring; that the electricity was sometimes very faint, so that when a small spark was obtained, another could not be got for sometime after; at other times the sparks would follow extremely quick, and once I had a continual stream from bell to bell, the size of a crow-quill. Even during the same gust there were considerable variations.
Through this experiment, Franklin was able to demonstrate that electricity behaves like a fluid, flowing through conductive materials and causing effects along the way. Franklin's experiment with the bells and the lightning rod was groundbreaking in its time, as it provided a clear demonstration of the nature of electricity and its properties and provided a foundation for further experiments and discoveries in the field.
Franklin's experimentation with the bell setup was pivotal to discovering that electricity exists outside of lightning and thunderstorms. The bells' odd properties intrigued Franklin and fueled further hypotheses.
Design and operation
The bells consist of a metal stand with a crossbar, from which hang three bells. The outer two bells hang from conductive metal chains, while the central bell hangs from a nonconductive thread. In the spaces between these bells hang two metal clappers, small pendulums, on nonconductive threads. A short metal chain hangs from the central bell.
Franklin bells operate on the electrostatic force generated by an electric field, which moves the pendulum clappers that strike the metal bells. The apparatus uses a metal rod as a lightning rod to pick up charge. One bell is connected to the lightning rod and the other bell is connected to the ground. A metal clapper is suspended between the two bells by a nonconductive thread. Negatively charged clouds before a thunderstorm charge the lightning rod, and with it the bell connected to it, negatively. The metal clapper is attracted to and strikes the charged bell. When the clapper hits the first bell, it acquires the same charge and is therefore repelled; because the opposite bell carries the opposite charge, it then attracts the clapper. When the clapper hits the second bell, the charge is transferred and the process repeats until the charges are balanced again. Before a storm, the ringing of the device would alert Franklin, who was absorbed in the study of lightning, and prompt him to pursue his experiments.
Modern impact
Benjamin Franklin's experiment with bells and a lightning rod has remained a popular example of electric phenomena in modern times. The experiment has been adapted and updated, and is now commonly used in classrooms and demonstrations to illustrate a variety of concepts related to electricity.
For instance, the experiment can be used to demonstrate the concept of electric current and how it flows through a conductor. By connecting the bells with metal wires and charging the lightning rod, students can see the flow of electric charges through the wires and observe the resulting electromagnetic effects that cause the bells to ring.
The experiment can also be used to illustrate the properties of static electricity, and how it can be conducted through metal wires to create an electric current. By rubbing a balloon or other object to create a static charge, and then using the charge to activate the bells, students can see the effects of static electricity and learn how it can be harnessed and utilized. The Franklin Bell is now a common electrical experiment demonstration in high school and introductory college physics courses.
See also
Oxford Electric Bell, a set of electrostatic bells in the University of Oxford, has been ringing continuously since 1840.
Lightning-prediction system
References
External links
Ben Franklin's Lightning Bells (Franklin Institute)
Franklin’s Bells (Gordon’s Bells) (PV Scientific Instruments)
"Franklin’s Bells" and charge transport as an undergraduate lab (American Journal of Physics)
Franklin's Bells (Research Media & Cybernetics)
Benjamin Franklin
Electricity
Capacitors
Energy storage
Historical scientific instruments
Science demonstrations
Lightning | Franklin bells | [
"Physics"
] | 1,366 | [
"Physical phenomena",
"Physical quantities",
"Capacitors",
"Electrical phenomena",
"Lightning",
"Capacitance"
] |
24,178,404 | https://en.wikipedia.org/wiki/C26H30O11 |
The molecular formula C26H30O11 (molar mass: 518.50 g/mol, exact mass: 518.1788 u) may refer to:
Phellamurin
Rubratoxin B
Molecular formulas | C26H30O11 | [
"Physics",
"Chemistry"
] | 63 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,179,739 | https://en.wikipedia.org/wiki/Cellulose%20insulating%20material%20plant | Cellulose insulating material plants are used for the production of a natural building insulation material known as cellulose insulation.
Assembly
Cellulose insulating material plants essentially consist of a feeding unit with primary reduction of the raw material, a dosing system for adding flame-retardant agent, a defibration unit, a dedusting unit and a packaging unit.
Function
The raw material is fed loose or in bales into the primary reduction unit (for example, a shredder). After this reduction, the flame retardant is added.
The core of the cellulose insulating material plant is a Whirlwind Mill. This mill frays out the material, which leads to fluffy, optimally defibrated flocks that contain only very small amounts of dust. Furthermore, the flame retardant is bonded to the flocks during the defibration process.
After the mill, the material is conveyed pneumatically to the dedusting unit, where the dusty air and the cellulose fibers are separated. The finished fibers then pass to the packaging station, where they are weighed and packaged.
Applications
Cellulose insulating material plants are used to produce natural insulating material from raw materials such as newspaper, hemp, field grass and so forth.
The insulating materials produced with these systems are characterized by low energy input during production compared with conventional insulating materials such as mineral wool.
References
External links
Pictures and explanations
Industrial equipment | Cellulose insulating material plant | [
"Engineering"
] | 306 | [
"nan"
] |
24,179,967 | https://en.wikipedia.org/wiki/Metal%20dusting | Metal dusting is "a catastrophic form of corrosion that occurs when susceptible materials are exposed to environments with high carbon activities." The corrosion manifests itself as a break-up of bulk metal to metal powder. The suspected mechanism is firstly the deposition of a graphite layer on the surface of the metal, usually from carbon monoxide (CO) in the vapour phase. This graphite layer is then thought to form metastable M3C species (where M is the metal), which migrate away from the metal surface. However, in some regimes no M3C species are observed indicating a direct transfer of metal atoms into the graphite layer.
The temperatures normally associated with metal dusting are high (300–850 °C). From a general understanding of chemistry, it can be deduced that at lower temperatures, the rate of reaction to form the metastable M3C species is too low to be significant, and at much higher temperatures the graphite layer is unstable and so CO deposition does not occur (at least to any appreciable degree).
Very briefly, there are several proposed methods for prevention or reduction of metal dusting; the most common seem to be aluminide coatings, alloying with copper and addition of steam.
There is a significant amount of literature in existence that describes proposed mechanisms, prevention methods, etc. There is also a good summary of metal dusting and some prevention methods in 'Corrosion by Carbon and Nitrogen - Metal Dusting, Carburisation and Nitridation'.
References
Corrosion | Metal dusting | [
"Chemistry",
"Materials_science"
] | 312 | [
"Metallurgy",
"Corrosion",
"Electrochemistry",
"Electrochemistry stubs",
"Materials degradation",
"Physical chemistry stubs",
"Chemical process stubs"
] |
24,183,731 | https://en.wikipedia.org/wiki/World%20Nuclear%20Industry%20Status%20Report | The World Nuclear Industry Status Report is a yearly report on the nuclear power industry. It is produced by Mycle Schneider, an anti-nuclear activist and a founding member of WISE-Paris, which he directed from 1983 to 2003.
2019 Report
The 2019 report reached the conclusion that 'Stabilizing the climate is urgent, nuclear power is slow. [Nuclear power] meets no technical or operational need that these low-carbon competitors [wind, solar and other renewable energy] cannot meet better, cheaper, and faster. Even sustaining economically distressed reactors saves less carbon per dollar and per year than reinvesting its avoidable operating cost (let alone its avoidable new subsidies) into cheaper efficiency and renewables.' The report also reached the conclusion that Small Modular Reactors are unlikely to play any significant role in the future energy landscape.
2017 Data-visualization tool
In January 2017 an interactive visualization of nuclear power construction was launched. It contains information on the 754 reactors that are or have been under construction since 1951. The Global Nuclear Power Database is hosted by the Bulletin of the Atomic Scientists.
2016 report
As of the middle of 2016, 31 countries were operating nuclear reactors for energy purposes. Nuclear power plants generated 2,441 net terawatt-hours (TWh or billion kilowatt-hours) of electricity in 2015, a 1.3 percent increase.
2015 report
Globally, the nuclear industry's situation continued to deteriorate in 2015, except in China. Eight out of the ten nuclear power reactor startups in 2015 were in China.
2013 report
Written by Mycle Schneider and Antony Froggatt with contributions from four other experts from Japan, the UK and France, the report says that the nuclear industry was struggling with grave problems prior to the Fukushima accident, but that the impact of the accident has become increasingly visible. Global electricity generation from nuclear plants dropped by a historic 7 percent in 2012, adding to the record drop of 4 percent in 2011.
The 427 reactors operating worldwide as of 1 July 2013 were 17 fewer than the 2002 peak. The nuclear share in the world's power generation declined steadily from a historic peak of 17 percent in 1993 to about 10 percent in 2012. The report details a range of restart scenarios for Japan's nuclear reactor fleet which, as of September 2013, was entirely shut down. Nuclear power's share of global commercial primary energy production plunged to 4.5 percent, a level last seen in 1984.
Besides an extensive update on nuclear economics, the report also includes an assessment of the major challenges at the Fukushima nuclear site, in particular the highly contaminated water on site. This water contained in the basement of reactors and in storage tanks contains 2.5 times the total amount of caesium-137 released at the Chernobyl accident.
The report says that China, Germany and Japan, three of the world's four largest economies, as well as India, now generate more power from renewables than from nuclear power. For the first time in 2012 China and India generated more power from wind alone than from nuclear plants, while in China solar electricity generation grew by 400 percent in one year.
2012 report
According to the World Nuclear Industry Status Report 2012, written by Mycle Schneider and Antony Froggatt, nuclear power accounted for 11 percent of worldwide electricity generation. World atomic power production dropped by a record 4.3% in 2011 as the Great Recession and the Fukushima nuclear accident in Japan prompted plant shutdowns and slowed construction of new sites. Seven reactors began operating in 2011 and 19 were shuttered.
The report shows that following the Fukushima crisis in March 2011, Germany, Switzerland and Taiwan announced their withdrawal from nuclear power. Output was further restricted as nations suspended construction plans amid safety concerns and economic stagnation, forcing utilities to study extending lifetimes, which raises considerable safety issues.
At least five countries, including Egypt and Kuwait, have suspended plans to build their first nuclear reactors. In the U.K., major companies like RWE, EON, and SSE have all abandoned new-build proposals in 2011/12, while companies in Japan and Bulgaria have suspended construction. The Fukushima disaster also created certification and licensing delays.
2010–11 report
The World Nuclear Industry Status Report 2010-2011 is authored by Mycle Schneider, Antony Froggatt, and Steve Thomas and published by the Washington-based Worldwatch Institute. The foreword is written by Amory Lovins.
According to the report, the international nuclear industry has been unable to stop the slow decline of nuclear energy. The world's reactor fleet is aging quickly and not enough new units are coming online. As of April 1, 2011, there were 437 nuclear reactors operating in the world, which was seven fewer than in 2002. The Olkiluoto plant has had particular problems:
The flagship EPR project at Olkiluoto in Finland, managed by the largest nuclear builder in the world, AREVA NP, has turned into a financial fiasco. The project is four years behind schedule and at least 90 percent over budget, reaching a total cost estimate of €5.7 billion ($8.3 billion) or close to €3,500 ($5,000) per kilowatt.
The report says that the Fukushima Daiichi nuclear disaster is exacerbating many of the problems that nuclear energy is facing. There is "no obvious sign that the international nuclear industry could eventually turn empirically evident downward trend into a promising future", and the Fukushima nuclear disaster is likely to accelerate the decline. With long lead times of 10 years and more, it will be difficult to maintain, let alone increase, the number of operating nuclear power plants over the next 20 years. Moreover, says the report, it is clear that nuclear power development cannot keep up with the pace of renewable energy commercialization. For the first time, in 2010 total installed nuclear power capacity in the world (375 gigawatts) fell behind aggregate installed capacity (381 GW) of three specific renewables — wind turbines (193 GW), biomass and waste-to-energy plants (65 GW), and solar power (43 GW).
2009 report
The World Nuclear Industry Status Report 2009 presents quantitative and qualitative information on the nuclear power plants in operation, under construction and in planning phases throughout the world. A detailed analyses of the economic performance of past and current nuclear projects is also given. The report was commissioned by the German Federal Ministry of Environment, Nature Conservation and Reactor Safety.
2008 report
The World Nuclear Industry Status Report 2008 focused on the difficulties facing nuclear power throughout the world, with particular reference to Western Europe and Asia.
2007 report
The World Nuclear Industry Status Report 2007 was commissioned by the Greens-EFA Group in the European Parliament.
Earlier reports
The first World Nuclear Industry Status Report was issued in 1992 in a joint publication with WISE-Paris, Greenpeace International and the World Watch Institute, Washington. The second report in 2004 was commissioned by the Greens-EFA Group in the European Parliament.
See also
International Atomic Energy Agency
Nuclear energy policy
Nuclear renaissance
Nuclear power in France
Nuclear power in China
References
Further reading
Mycle Schneider, Steve Thomas, Antony Froggatt, and Doug Koplow. (November 2009, Vol. 65 No. 6). 2009 World Nuclear Industry Status Report Bulletin of the Atomic Scientists, pp. 1–19.
International Atomic Energy Agency (2012). "IAEA Updates Its Projections for Nuclear Power in 2030" IAEA Updates Its Projections for Nuclear Power in 2030
Worldwatch Institute (2011). The End of Nuclear .
External links
The World Nuclear Industry Status Reports website
Nuclear energy
Nuclear history
Books about nuclear issues
Anti–nuclear power activists | World Nuclear Industry Status Report | [
"Physics",
"Chemistry"
] | 1,563 | [
"Nuclear energy",
"Radioactivity",
"Nuclear physics"
] |
21,208,499 | https://en.wikipedia.org/wiki/MATHLAB | MATHLAB is a computer algebra system created in 1964 by Carl Engelman at MITRE and written in Lisp.
"MATHLAB 68" was introduced in 1967 and became rather popular in university environments running on DECs PDP-6 and PDP-10 under TOPS-10 or TENEX. In 1969 this version was included in the DECUS user group's library (as 10-142) as royalty-free software.
Carl Engelman left MITRE for Symbolics where he contributed his expert knowledge in the development of Macsyma.
Features
Abstract from DECUS Library Catalog:
MATHLAB is an on-line system providing machine aid for the mechanical symbolic processes encountered in analysis. It is capable of performing, automatically and symbolically, such common procedures as simplification, substitution, differentiation, polynomial factorization, indefinite integration, direct and inverse Laplace transforms, the solution of linear differential equations with constant coefficients, the solution of simultaneous linear equations, and the inversion of matrices. It also supplies fairly elaborate bookkeeping facilities appropriate to its on-line operation.
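For readers who want a feel for what this kind of interactive symbolic session looks like today, the sketch below reproduces several of the listed operations (simplification, differentiation, Laplace transforms, simultaneous linear equations and matrix inversion) in SymPy. This is a modern stand-in for illustration only, not MATHLAB itself or its syntax.

import sympy as sp

t, s, x = sp.symbols('t s x')

# Simplification and symbolic differentiation.
print(sp.simplify(sp.sin(x)**2 + sp.cos(x)**2))   # 1
print(sp.diff(x**3 * sp.exp(x), x))               # x**3*exp(x) + 3*x**2*exp(x)

# Direct and inverse Laplace transforms.
F = sp.laplace_transform(sp.exp(-2*t), t, s, noconds=True)
print(F)                                          # 1/(s + 2)
print(sp.inverse_laplace_transform(F, s, t))      # exp(-2*t)*Heaviside(t)

# Simultaneous linear equations and matrix inversion.
a, b = sp.symbols('a b')
print(sp.solve([sp.Eq(2*a + b, 5), sp.Eq(a - b, 1)], [a, b]))   # {a: 2, b: 1}
print(sp.Matrix([[1, 2], [3, 4]]).inv())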
Applications
MATHLAB 68 has been used to solve electrical linear circuits using an acausal modeling approach for symbolic circuit analysis. This application was developed as a plug-in for MATHLAB 68 (open-source), building on MATHLAB's linear algebra facilities (Laplace transforms, inverse Laplace transforms and linear algebra manipulation).
Print publications
References
Computer algebra systems
Notebook interface | MATHLAB | [
"Mathematics"
] | 291 | [
"Computer algebra systems",
"Mathematical software"
] |
21,209,174 | https://en.wikipedia.org/wiki/Computer-aided%20process%20planning | Computer-aided process planning (CAPP) is the use of computer technology to aid in the process planning of a part or product, in manufacturing.
CAPP is the link between CAD and CAM in that it provides for the planning of the process to be used in producing a designed part.
Computer-aided process planning
CAPP is a link between the CAD and CAM modules.
Process planning is concerned with determining the sequence of individual manufacturing operations needed to produce a given part or product.
The resulting operation sequence is documented on a form typically referred to as a "route sheet" (also called a process sheet or method sheet), containing a listing of the production operations and associated machine tools for a work part or assembly.
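As a concrete illustration of the route sheet described above, the sketch below shows one possible in-memory representation; the field names, standard times and part numbers are hypothetical and are not taken from any particular CAPP product.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:
    seq_no: int                 # position of the step on the route sheet
    description: str            # e.g. "rough and finish turn"
    work_center: str            # machine tool or station assigned to the step
    setup_min: float            # standard setup time, minutes
    run_min_per_piece: float    # standard run time per piece, minutes
    tooling: List[str] = field(default_factory=list)

@dataclass
class RouteSheet:
    part_number: str
    operations: List[Operation]

    def lead_time_min(self, lot_size: int) -> float:
        # Simple standard-time estimate: one setup per lot plus run time per piece.
        return sum(op.setup_min + op.run_min_per_piece * lot_size
                   for op in self.operations)

sheet = RouteSheet("PN-1042", [
    Operation(10, "saw blank to length", "SAW-01", 5.0, 0.5),
    Operation(20, "rough and finish turn", "LATHE-03", 30.0, 4.2, ["CNMG insert"]),
    Operation(30, "drill and tap flange holes", "MILL-07", 20.0, 2.8, ["M8 tap"]),
])
print(sheet.lead_time_min(lot_size=50))   # 430.0 minutes of standard time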
Process planning in manufacturing also refers to the planning of use of blanks, spare parts, packaging material, user instructions (manuals), etc.
As the term "computer-aided production planning" is used in different contexts on different parts of the production process; to some extent, CAPP overlaps with the term "PIC" (production and inventory control).
As the design process is supported by many computer-aided tools, computer-aided process planning (CAPP) has evolved to simplify and improve process planning and achieve more effective use of manufacturing resources.
Process Planning is of two types:
Generative type computer-aided process planning.
Variant type process planning.
Routings specify operations, operation sequences, work centers, standards, tooling, and fixtures. The routing becomes a major input to the manufacturing resource planning system to define operations for production activity control purposes and to define required resources for capacity requirements planning purposes.
Computer-aided process planning initially evolved as a means to electronically store a process plan once it was created, retrieve it, modify it for a new part and print the plan.
Other capabilities were table-driven cost and standard estimating systems, for sales representatives to create customer quotations and estimate delivery time.
Future development
The main focus of development is generative or dynamic CAPP: the ability to automatically generate production plans for new products, or to dynamically update production plans based on resource availability. Generative CAPP will probably use iterative methods, where simple production plans are applied to automatic CAD/CAM development to refine the initial production plan.
A Generative CAPP system was developed at Beijing No. 1 Machine Tool Plant (BYJC) in Beijing, China as part of a UNDP project (DG/CRP/87/027) from 1989 to 1995. The project was reported in "Machine Design Magazine; New Trends" May 9, 1994, P.22-23. The system was demonstrated to the CASA/SME Leadership in Excellence for Applications Development (LEAD) Award committee in July 1995. The committee awarded BYJC the LEAD Award in 1995 for this achievement. In order to accomplish Generative CAPP, modifications were made to the CAD, PDM, ERP, and CAM systems. In addition, a Manufacturing Execution System (MES) was built to handle the scheduling of tools, personnel, supply, and logistics, as well as maintain shop floor production capabilities.
Generative CAPP systems are built on a factory's production capabilities and capacities. In Discrete Manufacturing, Art-to-Part validations have been performed often, but when considering highly volatile engineering designs, and multiple manufacturing operations with multiple tooling options, the decision tables become longer and the vector matrices more complex. BYJC builds CNC machine tools and Flexible Manufacturing Systems (FMS) to customer specifications. Few are duplicates. The Generative CAPP System is based on the unique capabilities and capacities needed to produce those specific products at BYJC. Unlike a Variant Process Planning system that modifies existing plans, each process plan could be defined automatically, independent of past routings. As improvements are made to production efficiencies, the improvements are automatically incorporated into the current production mix. This generative system is a key component of the CAPP system for the Agile Manufacturing environment.
In order to achieve the Generative CAPP system, components were built to meet needed capabilities:
Shop floor manufacturing abilities of BYJC were defined. It was determined that there are 46 major operations and 84 dependent operations the shop floor could execute to produce the product mix. These operations are manufacturing primitive operations. As new manufacturing capabilities are incorporated into the factory's repertoire, they need to be accommodated in the spectrum of operations.
These factory operations are then used to define the features for the Feature Based Design extensions that are incorporated into the CAD system.
The combination of these feature extensions and the parametric data associated with them became part of the data that is passed from the CAD system to the modified PDM system as the data set content for the specific product, assembly, or part.
The ERP system was modified to handle the manufacturing abilities for each tool on the shop floor. This is an extension to the normal feeds and speeds that the ERP system has the capability of maintaining about each tool. In addition, personnel records are also enhanced to note special characteristics, talents, and education of each employee should it become relevant in the manufacturing process.
A Manufacturing Execution System (MES) was created. The MES's major component is an expert/artificially intelligent system that matches the engineering feature objects from the PDM system against the tooling, personnel, material, transportation needs, etc. needed to manufacture them in the ERP system. Once physical components are identified, the items are scheduled. The scheduling is continuously updated based on the real-time conditions of the enterprise. Ultimately, the parameters for this system were based on:
a. Expenditures
b. Time
c. Physical dimensions
d. Availability
The parameters are used to produce multidimensional differential equations. Solving the partial differential equations will produce the optimum process and production planning at the time when the solution was generated. Solutions had the flexibility to change over time based on the ability to satisfy agile manufacturing criteria. Execution planning can be dynamic and accommodate changing conditions.
The system allows new products to be brought on line quickly based on their manufacturability. The more sophisticated CAD/CAM, PDM and ERP systems have the base work already incorporated into them for Generative Computer Aided Process Planning. The task of building and implementing the MES system still requires identifying the capabilities that exist within a given establishment, and exploiting them to the fullest potential. The system created is highly specific, the concepts can be extrapolated to other enterprises.
Traditional CAPP methods that optimize plans in a linear manner have not been able to satisfy the need for flexible planning, so new dynamic systems will explore all possible combinations of production processes, and then generate plans according to available machining resources. For example, K.S. Lee et al. state that "By considering the multi-selection tasks simultaneously, a specially designed genetic algorithm searches through the entire solution space to identify the optimal plan".
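The published algorithms are considerably more elaborate than can be shown here, but the toy sketch below conveys the idea of searching the space of operation orderings with an evolutionary method. The operation names, the changeover-cost matrix and the mutation-only search are all invented for illustration and are a much-simplified stand-in for the specially designed genetic algorithms cited above.

import random

# Toy cost model: changeover cost between consecutive operations on a part.
OPS = ["face", "drill", "bore", "tap", "deburr"]
CHANGE_COST = {(a, b): abs(i - j) + 1
               for i, a in enumerate(OPS) for j, b in enumerate(OPS) if a != b}

def plan_cost(plan):
    return sum(CHANGE_COST[(plan[i], plan[i + 1])] for i in range(len(plan) - 1))

def evolve(generations=200, pop_size=30, seed=0):
    # Mutation-only evolutionary search over operation orderings.
    rng = random.Random(seed)
    population = [rng.sample(OPS, len(OPS)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=plan_cost)
        survivors = population[:pop_size // 2]          # keep the cheapest plans
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(len(child)), 2)     # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(population, key=plan_cost)

best = evolve()
print(best, plan_cost(best))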
See also
CAD, (computer-aided design)
CAM, (computer-aided manufacturing)
CIM, (computer-integrated manufacturing)
References
Product lifecycle management
Workflow technology
Information technology management
Industrial computing | Computer-aided process planning | [
"Technology",
"Engineering"
] | 1,446 | [
"Information technology management",
"Industrial engineering",
"Automation",
"Information technology",
"Industrial computing"
] |
21,209,462 | https://en.wikipedia.org/wiki/Curium%28III%29%20hydroxide | Curium hydroxide is a radioactive compound first discovered in measurable quantities in 1947. It is composed of a single curium atom and three hydroxy groups. It was the first curium compound ever isolated.
Curium hydroxide is an anhydrous colorless or light-yellow amorphous gelatinous solid that is insoluble in water.
Due to self-irradiation, the crystal structure of 244Cm(OH)3 decomposes within one day (244Cm has a half-life of 18.11 years); for the analogous americium compound 241Am(OH)3 the same process takes 4 to 6 months (241Am has a half-life of 432.2 years).
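The speed of this self-damage follows directly from the decay-rate arithmetic. The sketch below assumes the isotopes implied by the quoted half-lives (curium-244 and americium-241) and ignores the hydroxide's oxygen and hydrogen mass, so the numbers are order-of-magnitude only.

import math

N_A = 6.02214076e23   # Avogadro constant, 1/mol

def decays_per_gram_per_day(half_life_years, molar_mass_g):
    lam = math.log(2) / (half_life_years * 365.25 * 24 * 3600)   # decay constant, 1/s
    atoms_per_gram = N_A / molar_mass_g
    return lam * atoms_per_gram * 86400

print(f"Cm-244: {decays_per_gram_per_day(18.11, 244):.2e} decays per gram per day")
print(f"Am-241: {decays_per_gram_per_day(432.2, 241):.2e} decays per gram per day")

On these assumptions a gram of the curium compound suffers a few times 10^17 alpha decays per day, roughly 24 times more than the americium analogue, which is consistent with its crystal lattice breaking down so much faster.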
See also
Curium(III) oxide
References
Curium compounds
Hydroxides
Substances discovered in the 1940s | Curium(III) hydroxide | [
"Chemistry"
] | 150 | [
"Inorganic compounds",
"Bases (chemistry)",
"Hydroxides",
"Inorganic compound stubs"
] |
25,579,424 | https://en.wikipedia.org/wiki/Gravitomagnetic%20clock%20effect | In physics, the gravitomagnetic clock effect is a deviation from Kepler's third law that, according to the weak-field and low-velocity approximation of general relativity, will be suffered by a particle in orbit around a (slowly) spinning body, such as a typical planet or star.
Explanation
According to general relativity, in its weak-field and low-velocity linearized approximation, a slowly spinning body induces an additional component of the gravitational field that acts on a freely-falling test particle with a non-central, gravitomagnetic Lorentz-like force.
Among its consequences on the particle's orbital motion there is a small correction to Kepler's third law, namely
T = TKep ± TGvm, where TKep = 2π √(a^3 / (GM)) is the particle's Keplerian period, M is the mass of the central body, a is the semimajor axis of the particle's ellipse, and G is the gravitational constant. If the orbit of the particle is circular and lies in the equatorial plane of the central body, the correction is
TGvm = ± 2π S / (M c^2),
where S is the central body's angular momentum and c is the speed of light in vacuum.
Particles orbiting in opposite directions experience gravitomagnetic corrections TGvm with opposite signs, so that the difference of their orbital periods would cancel the standard Keplerian terms and would add the gravitomagnetic ones.
Note that the + sign occurs for particle's corotation with respect to the rotation of the central body, whereas the − sign is for counter-rotation. That is, if the satellite orbits in the same direction as the planet spins, it takes more time to make a full orbit, whereas if it moves oppositely with respect to the planet's rotation its orbital period gets shorter.
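To get a feel for the size of the effect, the sketch below evaluates the correction for a low circular orbit around the Earth, using rounded values for the Earth's mass and spin angular momentum; the inputs are approximate, so the result is only indicative.

import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
M = 5.972e24     # Earth's mass, kg
S = 5.86e33      # Earth's spin angular momentum, kg m^2/s (approximate)
a = 7.0e6        # semimajor axis of a low circular orbit, m

t_kep = 2 * math.pi * math.sqrt(a**3 / (G * M))   # Keplerian period
t_gvm = 2 * math.pi * S / (M * c**2)              # magnitude of the gravitomagnetic correction

print(f"Keplerian period        : {t_kep:.0f} s")
print(f"gravitomagnetic term    : {t_gvm:.2e} s")
print(f"co/counter period split : {2 * t_gvm:.2e} s")

With these inputs the correction is of order 10^-7 s on a period of roughly 5,800 s, which is why measuring the effect directly is challenging.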
See also
Introduction to general relativity
Gravitomagnetic time delay
References
Clocks | Gravitomagnetic clock effect | [
"Physics",
"Technology",
"Engineering"
] | 364 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
25,584,664 | https://en.wikipedia.org/wiki/Discharge%20coefficient | In a nozzle or other constriction, the discharge coefficient (also known as coefficient of discharge or efflux coefficient) is the ratio of the actual discharge to the ideal discharge, i.e., the ratio of the mass flow rate at the discharge end of the nozzle to that of an ideal nozzle which expands an identical working fluid from the same initial conditions to the same exit pressures.
Mathematically the discharge coefficient may be related to the mass flow rate of a fluid through a straight tube of constant cross-sectional area through the following:
Cd = ṁ / (ρ A √(2 ΔP / ρ)), with ṁ = ρ Q = ρ A V.
Where:
Cd, discharge coefficient through the constriction (dimensionless).
ṁ, mass flow rate of fluid through constriction (mass per time).
ρ, density of fluid (mass per volume).
Q, volumetric flow rate of fluid through constriction (volume per time).
A, cross-sectional area of flow constriction (area).
V, velocity of fluid through constriction (length per time).
ΔP, pressure drop across constriction (force per area).
This parameter is useful for determining the irrecoverable losses associated with a certain piece of equipment (constriction) in a fluid system, or the "resistance" that piece of equipment imposes upon the flow.
This flow resistance, often expressed as a dimensionless parameter K, is related to the discharge coefficient through the equation:
Cd = 1 / √K,
which may be obtained by substituting ΔP in the aforementioned equation with the resistance K multiplied by the dynamic pressure of the fluid, ½ ρ V^2.
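As a small worked example of the relations above, the sketch below backs out a discharge coefficient from a measured mass flow across a constriction and converts it to the loss coefficient K; the orifice size, pressure drop and flow rate are invented illustrative numbers.

import math

def discharge_coefficient(m_dot, area, rho, dp):
    # Cd = actual mass flow / ideal mass flow, with ideal velocity sqrt(2*dp/rho).
    ideal_m_dot = rho * area * math.sqrt(2.0 * dp / rho)
    return m_dot / ideal_m_dot

def resistance_from_cd(cd):
    # K = 1 / Cd**2, the dimensionless resistance of the constriction.
    return 1.0 / cd**2

area = math.pi * (0.020 / 2) ** 2    # 20 mm orifice, m^2
cd = discharge_coefficient(m_dot=0.85, area=area, rho=998.0, dp=10_000.0)   # water, 10 kPa drop
print(f"Cd ~ {cd:.2f}, K ~ {resistance_from_cd(cd):.2f}")

With these numbers the coefficient comes out near 0.6, a typical order of magnitude for a sharp-edged orifice, and the corresponding resistance is a little under 3.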
An example in open channel flow
Due to the complex behavior of fluids around some structures such as orifices, gates, and weirs, some assumptions are made for the theoretical analysis of the stage-discharge relationship. For example, in the case of gates, the pressure at the gate opening is non-hydrostatic, which is difficult to model; however, it is known that the pressure at the gate is very small. Therefore, engineers assume that the pressure is zero at the gate opening, and the following equation is obtained for discharge:
Q = A √(2 g H1)
where:
Q, discharge
, area of flow
g, acceleration due to gravity
, head just upstream of the gate
However, the pressure is not actually zero at the gate; therefore, a discharge coefficient C is used as follows:
Q = C A √(2 g H1)
See also
Flow coefficient
Orifice plate
References
External links
Mass Flow Choking, Nancy Hall, 6 April 2018
Fluid dynamics | Discharge coefficient | [
"Chemistry",
"Engineering"
] | 489 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
25,587,550 | https://en.wikipedia.org/wiki/Suresh%20Venapally | Suresh Venepally (; born 1966) is an Indian mathematician known for his research work in algebra. He is a professor at Emory University.
Background
Suresh was born in Vangoor, Telangana, India and studied at ZPHS Vangoor up to the 9th standard. He did his M.Sc. at the University of Hyderabad.
He joined the Tata Institute of Fundamental Research (TIFR) in 1989 and received his PhD in 1994 under the guidance of Raman Parimala. He later joined the faculty at the University of Hyderabad.
Honors
Shanti Swarup Bhatnagar Award for Mathematical Sciences in 2009
Invited speaker at the International Congress of Mathematicians held at Hyderabad, India in 2010
Fellow of the Indian Academy of Sciences
Andhra Pradesh Scientist Award, 2008
B. M. Birla Science prize, 2004
INSA Medal for Young Scientists, 1997
Selected publications
1995: "Zero-cycles on quadric fibrations: finiteness theorems and the cycle map", Invent. Math. 122, 83–117 (with Raman Parimala)
1998: "Isotropy of quadratic forms over function fields in one variable over p-adic fields", Publ. de I.H.E.S. 88, 129–150 (with Raman Parimala)
2001: "Hermitian analogue of a theorem of Springer", J. Alg. 243(2), 780–789 (with Raman Parimala and Ramaiyengar Sridharan)
2010: "Bounding the symbol length in the Galois cohomology of function field of p-adic curves", Comment. Math. Helv. 85(2), 337–346 , "The u-invariant of the function fields of p-adic curves" Ann. Math. 172(2), 1391-1405 (with Raman Parimala)
References
External links
Emory University faculty web page
University of Hyderabad faculty web page
20th-century Indian mathematicians
Algebraists
Living people
Emory University faculty
1966 births
Tata Institute of Fundamental Research alumni
Scientists from Telangana
Recipients of the Shanti Swarup Bhatnagar Award in Mathematical Science | Suresh Venapally | [
"Mathematics"
] | 442 | [
"Algebra",
"Algebraists"
] |
25,587,651 | https://en.wikipedia.org/wiki/BioNumbers | BioNumbers is a free-access database of quantitative data in biology designed to provide the scientific community with access to the large amount of data now generated in the biological literature. The database aims to make quantitative values more easily available, to aid fields such as systems biology.
The BioNumbers project performs literature-based curation of various sources. It is a regularly updated online resource that contains >13,000 entries from ~1,000 distinct references. Examples of data include transcription and translation rates, organism and organelle sizes, metabolites concentrations and growth rates. Entries are provided with full reference and details such as measurement method and comments.
BioNumbers also publishes a monthly review of a problem in quantitative biology.
History
BioNumbers was created as a Wikipedia-format community collaborative initiative in 2007 by Ron Milo, Paul Jorgensen and Mike Springer at the Systems Biology Department at Harvard Medical School. It is currently managed and curated at the Milo Lab from the Weizmann Institute of Science.
The database is funded by the Systems Biology Department at Harvard Medical School, and Weizmann Institute of Science.
References
External links
BioNumbers
BioNumbers on OpenWetWare
Barry Schwartz BioNumbers – Specialty Biology Answer Search Engine March 24, 2009
Milo Lab
2007 establishments in the United States
Biological databases | BioNumbers | [
"Biology"
] | 262 | [
"Bioinformatics",
"Biological databases"
] |
147,909 | https://en.wikipedia.org/wiki/Linearity%20of%20differentiation | In calculus, the derivative of any linear combination of functions equals the same linear combination of the derivatives of the functions; this property is known as linearity of differentiation, the rule of linearity, or the superposition rule for differentiation. It is a fundamental property of the derivative that encapsulates in a single rule two simpler rules of differentiation, the sum rule (the derivative of the sum of two functions is the sum of the derivatives) and the constant factor rule (the derivative of a constant multiple of a function is the same constant multiple of the derivative). Thus it can be said that differentiation is linear, or the differential operator is a linear operator.
Statement and derivation
Let f and g be functions, with a and b constants. Now consider
(a·f(x) + b·g(x))′.
By the sum rule in differentiation, this is
(a·f(x))′ + (b·g(x))′,
and by the constant factor rule in differentiation, this reduces to
a·f′(x) + b·g′(x).
Therefore,
(a·f(x) + b·g(x))′ = a·f′(x) + b·g′(x).
Omitting the brackets, this is often written as:
(a·f + b·g)′ = a·f′ + b·g′.
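The identity can also be checked symbolically; the short SymPy sketch below verifies the rule for arbitrary functions f and g and constants a and b (a verification, not a proof).

import sympy as sp

x, a, b = sp.symbols('x a b')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

lhs = sp.diff(a * f + b * g, x)
rhs = a * sp.diff(f, x) + b * sp.diff(g, x)
print(sp.simplify(lhs - rhs))    # 0, so the two sides agree

# The same rule on concrete functions:
print(sp.diff(3 * sp.sin(x) + 5 * x**2, x))    # 3*cos(x) + 10*x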
Detailed proofs/derivations from definition
We can prove the entire linearity principle at once, or, we can prove the individual steps (of constant factor and adding) individually. Here, both will be shown.
Proving linearity directly also proves the constant factor rule, the sum rule, and the difference rule as special cases. The sum rule is obtained by setting both constant coefficients to 1. The difference rule is obtained by setting the first constant coefficient to 1 and the second constant coefficient to −1. The constant factor rule is obtained by setting either the second constant coefficient or the second function to 0. (From a technical standpoint, the domain of the second function must also be considered - one way to avoid issues is setting the second function equal to the first function and the second constant coefficient equal to 0. One could also define both the second constant coefficient and the second function to be 0, where the domain of the second function is a superset of the first function, among other possibilities.)
On the contrary, if we first prove the constant factor rule and the sum rule, we can prove linearity and the difference rule. Proving linearity is done by defining the first and second functions as being two other functions being multiplied by constant coefficients. Then, as shown in the derivation from the previous section, we can first use the sum law while differentiating, and then use the constant factor rule, which will reach our conclusion for linearity. In order to prove the difference rule, the second function can be redefined as another function multiplied by the constant coefficient of −1. This would, when simplified, give us the difference rule for differentiation.
In the proofs/derivations below, the coefficients are used; they correspond to the coefficients above.
Linearity (directly)
Let . Let be functions. Let be a function, where is defined only where and are both defined. (In other words, the domain of is the intersection of the domains of and .) Let be in the domain of . Let .
We want to prove that .
By definition, we can see that
In order to use the limits law for the sum of limits, we need to know that and both individually exist. For these smaller limits, we need to know that and both individually exist to use the coefficient law for limits. By definition, and . So, if we know that and both exist, we will know that and both individually exist. This allows us to use the coefficient law for limits to write
and
With this, we can go back to apply the limit law for the sum of limits, since we know that and both individually exist. From here, we can directly go back to the derivative we were working on.Finally, we have shown what we claimed in the beginning: .
Sum
Let be functions. Let be a function, where is defined only where and are both defined.
(In other words, the domain of is the intersection of the domains of and .) Let be in the domain of . Let .
We want to prove that .
By definition, we can see that
In order to use the law for the sum of limits here, we need to show that the individual limits, and both exist. By definition, and , so the limits exist whenever the derivatives and exist. So, assuming that the derivatives exist, we can continue the above derivation
Thus, we have shown what we wanted to show, that: .
Difference
Let be functions. Let be a function, where is defined only where and are both defined. (In other words, the domain of is the intersection of the domains of and .) Let be in the domain of . Let .
We want to prove that .
By definition, we can see that:
In order to use the law for the difference of limits here, we need to show that the individual limits, and both exist. By definition, and that , so these limits exist whenever the derivatives and exist. So, assuming that the derivatives exist, we can continue the above derivation
Thus, we have shown what we wanted to show, that: .
Constant coefficient
Let be a function. Let ; will be the constant coefficient. Let be a function, where j is defined only where is defined. (In other words, the domain of is equal to the domain of .) Let be in the domain of . Let .
We want to prove that .
By definition, we can see that:
Now, in order to use a limit law for constant coefficients to show that
we need to show that exists.
However, , by the definition of the derivative. So, if exists, then exists.
Thus, if we assume that exists, we can use the limit law and continue our proof.
Thus, we have proven that when , we have .
See also
References
Articles containing proofs
Differential calculus
Differentiation rules
Theorems in analysis
Theorems in calculus | Linearity of differentiation | [
"Mathematics"
] | 1,150 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Theorems in calculus",
"Calculus",
"Differential calculus",
"Articles containing proofs",
"Mathematical problems"
] |
147,912 | https://en.wikipedia.org/wiki/Power%20rule | In calculus, the power rule is used to differentiate functions of the form , whenever is a real number. Since differentiation is a linear operation on the space of differentiable functions, polynomials can also be differentiated using this rule. The power rule underlies the Taylor series as it relates a power series with a function's derivatives.
Statement of the power rule
Let f be a function satisfying f(x) = x^r for all x, where r is a real number. Then,
f′(x) = r·x^(r−1).
The power rule for integration states that
∫ x^r dx = x^(r+1)/(r+1) + C
for any real number r ≠ −1. It can be derived by inverting the power rule for differentiation. In this equation C is any constant.
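Both forms of the rule can be checked symbolically for a generic exponent; the SymPy sketch below assumes a positive exponent r, which in particular keeps the integration form away from the excluded case r = −1.

import sympy as sp

x = sp.symbols('x', positive=True)
r = sp.symbols('r', positive=True)

# Differentiation form: d/dx x**r = r*x**(r-1)
print(sp.simplify(sp.diff(x**r, x) - r * x**(r - 1)))   # 0

# Integration form: an antiderivative of x**r is x**(r+1)/(r+1)
print(sp.integrate(x**r, x))                            # x**(r + 1)/(r + 1)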
Proofs
Proof for real exponents
To start, we should choose a working definition of the value of x^r, where r is any real number. Although it is feasible to define the value as a limit of rational powers, it is most convenient to work with the exponential form:
if x > 0, then x^r = e^(r·ln x), where ln is the natural logarithm function.
Therefore, applying the chain rule to f(x) = e^(r·ln x), we see that
f′(x) = (r/x)·e^(r·ln x) = (r/x)·x^r,
which simplifies to
f′(x) = r·x^(r−1).
When x < 0, we may use the same definition with x^r = ((−1)·(−x))^r, where we now have −x > 0. This necessarily leads to the same result. Note that because (−1)^r does not have a conventional definition when r is not a rational number, irrational power functions are not well defined for negative bases. In addition, as rational powers of −1 with even denominators (in lowest terms) are not real numbers, these expressions are only real valued for rational powers with odd denominators (in lowest terms).
Finally, whenever the function x^r is differentiable at x = 0, the defining limit for the derivative is:
lim (h→0) (h^r − 0^r)/h,
which yields 0 only when r is a rational number with odd denominator (in lowest terms) and r > 1, and 1 when r = 1. For all other values of r, the expression h^r is not well-defined for h < 0, as was covered above, or is not a real number, so the limit does not exist as a real-valued derivative. For the two cases that do exist, the values agree with the value of the existing power rule at 0, so no exception need be made.
The exclusion of the expression 0^0 (the case x = 0) from our scheme of exponentiation is due to the fact that the function f(x, y) = x^y has no limit at (0,0), since x^0 approaches 1 as x approaches 0, while 0^y approaches 0 as y approaches 0. Thus, it would be problematic to ascribe any particular value to it, as the value would contradict one of the two cases, dependent on the application. It is traditionally left undefined.
Proofs for integer exponents
Proof by induction (natural numbers)
Let n be a natural number. It is required to prove that d/dx x^n = n·x^(n−1). The base case may be when n = 0 or n = 1, depending on how the set of natural numbers is defined.
When n = 0, the derivative of x^0 = 1 is 0, which equals 0·x^(0−1).
When n = 1, d/dx x = 1 = 1·x^(1−1).
Therefore, the base case holds either way.
Suppose the statement holds for some natural number k, i.e. d/dx x^k = k·x^(k−1).
When n = k + 1, the product rule gives d/dx x^(k+1) = d/dx (x·x^k) = x^k·(d/dx x) + x·(d/dx x^k) = x^k + k·x^k = (k+1)·x^k. By the principle of mathematical induction, the statement is true for all natural numbers n.
Proof by binomial theorem (natural number)
Let f(x) = x^n, where n is a natural number.
Then, by the definition of the derivative and the binomial theorem,
f′(x) = lim (h→0) ((x+h)^n − x^n)/h = lim (h→0) ((n choose 1)·x^(n−1)·h + (n choose 2)·x^(n−2)·h^2 + … + h^n)/h.
Since n choose 1 is equal to n, and the rest of the terms all contain a factor of h, which tends to 0, the rest of the terms vanish, leaving f′(x) = n·x^(n−1). This proof only works for natural numbers as the binomial theorem only works for natural numbers.
Generalization to negative integer exponents
For a negative integer n, let n = −m so that m is a positive integer.
Using the reciprocal rule,
d/dx x^n = d/dx (1/x^m) = −(m·x^(m−1))/(x^m)^2 = −m·x^(−m−1) = n·x^(n−1).
In conclusion, for any integer n, d/dx x^n = n·x^(n−1).
Generalization to rational exponents
Upon proving that the power rule holds for integer exponents, the rule can be extended to rational exponents.
Proof by chain rule
This proof is composed of two steps that involve the use of the chain rule for differentiation.
Let y = x^(1/n), where n is a nonzero natural number. Then y^n = x. By the chain rule, n·y^(n−1)·(dy/dx) = 1. Solving for dy/dx,
dy/dx = 1/(n·y^(n−1)) = 1/(n·x^((n−1)/n)) = (1/n)·x^(1/n − 1).
Thus, the power rule applies for rational exponents of the form 1/n, where n is a nonzero natural number. This can be generalized to rational exponents of the form p/q by applying the power rule for integer exponents using the chain rule, as shown in the next step.
Let y = x^(p/q) = (x^(1/q))^p, where p is an integer and q is a nonzero natural number, so that y = u^p with u = x^(1/q). By the chain rule,
dy/dx = p·u^(p−1)·(du/dx) = p·x^((p−1)/q)·(1/q)·x^(1/q − 1) = (p/q)·x^(p/q − 1).
From the above results, we can conclude that when r is a rational number,
d/dx x^r = r·x^(r−1).
Proof by implicit differentiation
A more straightforward generalization of the power rule to rational exponents makes use of implicit differentiation.
Let y = x^(p/q), where p and q are integers with q ≠ 0, so that y^q = x^p.
Then, differentiating both sides of the equation with respect to x,
q·y^(q−1)·(dy/dx) = p·x^(p−1).
Solving for dy/dx,
dy/dx = (p·x^(p−1))/(q·y^(q−1)).
Since y = x^(p/q), applying laws of exponents gives
dy/dx = (p/q)·x^(p−1)·x^(p/q − p) = (p/q)·x^(p/q − 1).
Thus, letting r = p/q, we can conclude that d/dx x^r = r·x^(r−1) when r is a rational number.
History
The power rule for integrals was first demonstrated in a geometric form by Italian mathematician Bonaventura Cavalieri in the early 17th century for all positive integer values of , and during the mid 17th century for all rational powers by the mathematicians Pierre de Fermat, Evangelista Torricelli, Gilles de Roberval, John Wallis, and Blaise Pascal, each working independently. At the time, they were treatises on determining the area between the graph of a rational power function and the horizontal axis. With hindsight, however, it is considered the first general theorem of calculus to be discovered. The power rule for differentiation was derived by Isaac Newton and Gottfried Wilhelm Leibniz, each independently, for rational power functions in the mid 17th century, who both then used it to derive the power rule for integrals as the inverse operation. This mirrors the conventional way the related theorems are presented in modern basic calculus textbooks, where differentiation rules usually precede integration rules.
Although both men stated that their rules, demonstrated only for rational quantities, worked for all real powers, neither sought a proof of such, as at the time the applications of the theory were not concerned with such exotic power functions, and questions of convergence of infinite series were still ambiguous.
The unique case of r = −1 was resolved by Flemish Jesuit and mathematician Grégoire de Saint-Vincent and his student Alphonse Antonio de Sarasa in the mid 17th century, who demonstrated that the associated definite integral,
∫ from 1 to x of dt/t,
representing the area between the rectangular hyperbola xy = 1 and the x-axis, was a logarithmic function, whose base was eventually discovered to be the transcendental number e. The modern notation for the value of this definite integral is ln(x), the natural logarithm.
Generalizations
Complex power functions
If we consider functions of the form f(z) = z^c, where c is any complex number and z is a complex number in a slit complex plane that excludes the branch point of 0 and any branch cut connected to it, and we use the conventional multivalued definition z^c = exp(c·log z), then it is straightforward to show that, on each branch of the complex logarithm, the same argument used above yields a similar result: f′(z) = c·z^(c−1).
In addition, if c is a positive integer, then there is no need for a branch cut: one may define f(0) = 0, or define positive integral complex powers through complex multiplication, and show that f′(z) = c·z^(c−1) for all complex z, from the definition of the derivative and the binomial theorem.
However, due to the multivalued nature of complex power functions for non-integer exponents, one must be careful to specify the branch of the complex logarithm being used. In addition, no matter which branch is used, if c is not a positive integer, then the function is not differentiable at 0.
See also
Differentiation rules
General Leibniz rule
Inverse functions and differentiation
Linearity of differentiation
Product rule
Quotient rule
Table of derivatives
Vector calculus identities
References
Notes
Citations
Further reading
Larson, Ron; Hostetler, Robert P.; and Edwards, Bruce H. (2003). Calculus of a Single Variable: Early Transcendental Functions (3rd edition). Houghton Mifflin Company.
Articles containing proofs
Differentiation rules
Mathematical identities
Theorems in analysis
Theorems in calculus | Power rule | [
"Mathematics"
] | 1,528 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theorems in calculus",
"Calculus",
"Mathematical problems",
"Articles containing proofs",
"Mathematical identities",
"Mathematical theorems",
"Algebra"
] |
147,918 | https://en.wikipedia.org/wiki/Industrial%20robot | An industrial robot is a robot system used for manufacturing. Industrial robots are automated, programmable and capable of movement on three or more axes.
Typical applications of robots include welding, painting, assembly, disassembly, pick and place for printed circuit boards, packaging and labeling, palletizing, product inspection, and testing; all accomplished with high endurance, speed, and precision. They can assist in material handling.
In 2023, an estimated 4,281,585 industrial robots were in operation worldwide, according to the International Federation of Robotics (IFR).
Types and features
There are six types of industrial robots.
Articulated robots
Articulated robots are the most common industrial robots. They look like a human arm, which is why they are also called robotic arm or manipulator arm. Their articulations with several degrees of freedom allow the articulated arms a wide range of movements.
Autonomous robot
An autonomous robot is a robot that acts without recourse to human control. The first autonomous robots environment were known as Elmer and Elsie, which were constructed in the late 1940s by W. Grey Walter. They were the first robots in history that were programmed to "think" the way biological brains do and meant to have free will. Elmer and Elsie were often labeled as tortoises because of how they were shaped and the manner in which they moved. They were capable of phototaxis which is the movement that occurs in response to light stimulus.
Cartesian coordinate robots
Cartesian robots, also called rectilinear, gantry, or x-y-z robots, have three prismatic joints for the movement of the tool and three rotary joints for its orientation in space.
To be able to move and orient the end effector in all directions, such a robot needs six axes (or degrees of freedom). In a two-dimensional environment, three axes are sufficient: two for displacement and one for orientation.
Cylindrical coordinate robots
The cylindrical coordinate robots are characterized by their rotary joint at the base and at least one prismatic joint connecting its links. They can move vertically and horizontally by sliding. The compact effector design allows the robot to reach tight work-spaces without any loss of speed.
Spherical coordinate robots
Spherical coordinate robots only have rotary joints. They are one of the first robots to have been used in industrial applications. They are commonly used for machine tending in die-casting, plastic injection and extrusion, and for welding.
SCARA robots
SCARA is an acronym for Selective Compliance Assembly Robot Arm. SCARA robots are recognized by their two parallel joints which provide movement in the X-Y plane. Rotating shafts are positioned vertically at the effector. SCARA robots are used for jobs that require precise lateral movements. They are ideal for assembly applications.
Delta robots
Delta robots are also referred to as parallel link robots. They consist of parallel links connected to a common base. Delta robots are particularly useful for direct control tasks and high maneuvering operations (such as quick pick-and-place tasks). Delta robots take advantage of four bar or parallelogram linkage systems.
Furthermore, industrial robots can have a serial or parallel architecture.
Serial manipulators
Serial architectures a.k.a. serial manipulators are very common industrial robots; they are designed as a series of links connected by motor-actuated joints that extend from a base to an end-effector. SCARA, Stanford manipulators are typical examples of this category.
Parallel architecture
A parallel manipulator is designed so that each chain is usually short, simple and can thus be rigid against unwanted movement, compared to a serial manipulator. Errors in one chain's positioning are averaged in conjunction with the others, rather than being cumulative. Each actuator must still move within its own degree of freedom, as for a serial robot; however in the parallel robot the off-axis flexibility of a joint is also constrained by the effect of the other chains. It is this closed-loop stiffness that makes the overall parallel manipulator stiff relative to its components, unlike the serial chain that becomes progressively less rigid with more components.
Lower mobility parallel manipulators and concomitant motion
A full parallel manipulator can move an object with up to 6 degrees of freedom (DoF), determined by 3 translation 3T and 3 rotation 3R coordinates for full 3T3R mobility. However, when a manipulation task requires less than 6 DoF, the use of lower mobility manipulators, with fewer than 6 DoF, may bring advantages in terms of simpler architecture, easier control, faster motion and lower cost. For example, the 3 DoF Delta robot has lower 3T mobility and has proven to be very successful for rapid pick-and-place translational positioning applications. The workspace of lower mobility manipulators may be decomposed into 'motion' and 'constraint' subspaces. For example, 3 position coordinates constitute the motion subspace of the 3 DoF Delta robot and the 3 orientation coordinates are in the constraint subspace. The motion subspace of lower mobility manipulators may be further decomposed into independent (desired) and dependent (concomitant) subspaces: consisting of 'concomitant' or 'parasitic' motion which is undesired motion of the manipulator. The debilitating effects of concomitant motion should be mitigated or eliminated in the successful design of lower mobility manipulators. For example, the Delta robot does not have parasitic motion since its end effector does not rotate.
Autonomy
Robots exhibit varying degrees of autonomy.
Some robots are programmed to faithfully carry out specific actions over and over again (repetitive actions) without variation and with a high degree of accuracy. These actions are determined by programmed routines that specify the direction, acceleration, velocity, deceleration, and distance of a series of coordinated motions.
Other robots are much more flexible as to the orientation of the object on which they are operating or even the task that has to be performed on the object itself, which the robot may even need to identify. For example, for more precise guidance, robots often contain machine vision sub-systems acting as their visual sensors, linked to powerful computers or controllers. Artificial intelligence is becoming an increasingly important factor in the modern industrial robot.
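The pre-programmed point-to-point moves described above are typically executed as simple velocity profiles per axis. The sketch below computes a trapezoidal (or, for short moves, triangular) profile from a commanded distance, a maximum speed and an acceleration limit; the axis limits used are invented, not those of any particular robot.

def trapezoidal_profile(distance_m, v_max, accel):
    # Return (accel time, cruise time, total time) for a single-axis point-to-point move.
    t_acc = v_max / accel
    d_acc = 0.5 * accel * t_acc ** 2
    if 2 * d_acc >= distance_m:                  # move too short to reach full speed: triangular profile
        t_acc = (distance_m / accel) ** 0.5
        return t_acc, 0.0, 2 * t_acc
    t_cruise = (distance_m - 2 * d_acc) / v_max
    return t_acc, t_cruise, 2 * t_acc + t_cruise

# Illustrative axis limits: 2 m/s maximum speed, 8 m/s^2 acceleration, 1.2 m move.
print(trapezoidal_profile(1.2, v_max=2.0, accel=8.0))   # roughly (0.25, 0.35, 0.85) s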
History
The earliest known industrial robot, conforming to the ISO definition was completed by
"Bill" Griffith P. Taylor in 1937 and published in Meccano Magazine, March 1938. The crane-like device was built almost entirely using Meccano parts, and powered by a single electric motor. Five axes of movement were possible, including grab and grab rotation. Automation was achieved using punched paper tape to energise solenoids, which would facilitate the movement of the crane's control levers. The robot could stack wooden blocks in pre-programmed patterns. The number of motor revolutions required for each desired movement was first plotted on graph paper. This information was then transferred to the paper tape, which was also driven by the robot's single motor. Chris Shute built a complete replica of the robot in 1997.
George Devol applied for the first robotics patents in 1954 (granted in 1961). The first company to produce a robot was Unimation, founded by Devol and Joseph F. Engelberger in 1956. Unimation robots were also called programmable transfer machines since their main use at first was to transfer objects from one point to another, less than a dozen feet or so apart. They used hydraulic actuators and were programmed in joint coordinates, i.e. the angles of the various joints were stored during a teaching phase and replayed in operation. They were accurate to within 1/10,000 of an inch (note: although accuracy is not an appropriate measure for robots, usually evaluated in terms of repeatability - see later). Unimation later licensed their technology to Kawasaki Heavy Industries and GKN, manufacturing Unimates in Japan and England respectively. For some time, Unimation's only competitor was Cincinnati Milacron Inc. of Ohio. This changed radically in the late 1970s when several big Japanese conglomerates began producing similar industrial robots.
In 1969 Victor Scheinman at Stanford University invented the Stanford arm, an all-electric, 6-axis articulated robot designed to permit an arm solution. This allowed it accurately to follow arbitrary paths in space and widened the potential use of the robot to more sophisticated applications such as assembly and welding. Scheinman then designed a second arm for the MIT AI Lab, called the "MIT arm." Scheinman, after receiving a fellowship from Unimation to develop his designs, sold those designs to Unimation who further developed them with support from General Motors and later marketed it as the Programmable Universal Machine for Assembly (PUMA).
Industrial robotics took off quite quickly in Europe, with both ABB Robotics and KUKA Robotics bringing robots to the market in 1973. ABB Robotics (formerly ASEA) introduced IRB 6, among the world's first commercially available all electric micro-processor controlled robot. The first two IRB 6 robots were sold to Magnusson in Sweden for grinding and polishing pipe bends and were installed in production in January 1974. Also in 1973 KUKA Robotics built its first robot, known as FAMULUS, also one of the first articulated robots to have six electromechanically driven axes.
Interest in robotics increased in the late 1970s and many US companies entered the field, including large firms like General Electric, and General Motors (which formed joint venture FANUC Robotics with FANUC LTD of Japan). U.S. startup companies included Automatix and Adept Technology, Inc. At the height of the robot boom in 1984, Unimation was acquired by Westinghouse Electric Corporation for 107 million U.S. dollars. Westinghouse sold Unimation to Stäubli Faverges SCA of France in 1988, which is still making articulated robots for general industrial and cleanroom applications and even bought the robotic division of Bosch in late 2004.
Only a few non-Japanese companies ultimately managed to survive in this market, the major ones being: Adept Technology, Stäubli, the Swedish-Swiss company ABB Asea Brown Boveri, the German company KUKA Robotics and the Italian company Comau.
Technical description
Defining parameters
Number of axes – two axes are required to reach any point in a plane; three axes are required to reach any point in space. To fully control the orientation of the end of the arm(i.e. the wrist) three more axes (yaw, pitch, and roll) are required. Some designs (e.g. the SCARA robot) trade limitations in motion possibilities for cost, speed, and accuracy.
Degrees of freedom – this is usually the same as the number of axes.
Working envelope – the region of space a robot can reach.
Kinematics – the actual arrangement of rigid members and joints in the robot, which determines the robot's possible motions. Classes of robot kinematics include articulated, cartesian, parallel and SCARA.
Carrying capacity or payload – how much weight a robot can lift.
Speed – how fast the robot can position the end of its arm. This may be defined in terms of the angular or linear speed of each axis or as a compound speed i.e. the speed of the end of the arm when all axes are moving.
Acceleration – how quickly an axis can accelerate. Since this is a limiting factor a robot may not be able to reach its specified maximum speed for movements over a short distance or a complex path requiring frequent changes of direction.
Accuracy – how closely a robot can reach a commanded position. When the absolute position of the robot is measured and compared to the commanded position, the error is a measure of accuracy. Accuracy can be improved with external sensing, for example a vision system or infra-red sensors. See robot calibration. Accuracy can vary with speed and position within the working envelope and with payload (see compliance).
Repeatability – how well the robot will return to a programmed position. This is not the same as accuracy. It may be that, when told to go to a certain X-Y-Z position, it gets only to within 1 mm of that position. This would be its accuracy, which may be improved by calibration. But if that position is taught into controller memory and each time it is sent there it returns to within 0.1 mm of the taught position, then the repeatability will be within 0.1 mm.
Accuracy and repeatability are different measures. Repeatability is usually the most important criterion for a robot and is similar to the concept of 'precision' in measurement—see accuracy and precision. ISO 9283 sets out a method whereby both accuracy and repeatability can be measured. Typically a robot is sent to a taught position a number of times and the error is measured at each return to the position after visiting 4 other positions. Repeatability is then quantified using the standard deviation of those samples in all three dimensions (a minimal sketch of this computation is given after this list of parameters). A typical robot can, of course, make a positional error exceeding that, and that could be a problem for the process. Moreover, the repeatability is different in different parts of the working envelope and also changes with speed and payload. ISO 9283 specifies that accuracy and repeatability should be measured at maximum speed and at maximum payload. But this results in pessimistic values, whereas the robot could be much more accurate and repeatable at light loads and speeds.
Repeatability in an industrial process is also subject to the accuracy of the end effector, for example a gripper, and even to the design of the 'fingers' that match the gripper to the object being grasped. For example, if a robot picks a screw by its head, the screw could be at a random angle. A subsequent attempt to insert the screw into a hole could easily fail. These and similar scenarios can be improved with 'lead-ins' e.g. by making the entrance to the hole tapered.
Motion control – for some applications, such as simple pick-and-place assembly, the robot need merely return repeatably to a limited number of pre-taught positions. For more sophisticated applications, such as welding and finishing (spray painting), motion must be continuously controlled to follow a path in space, with controlled orientation and velocity.
Power source – some robots use electric motors, others use hydraulic actuators. The former are faster, the latter are stronger and advantageous in applications such as spray painting, where a spark could set off an explosion; however, low internal air-pressurisation of the arm can prevent ingress of flammable vapours as well as other contaminants. Nowadays hydraulic robots are rarely seen on the market: additional seals, brushless electric motors and spark-proof protection have eased the construction of electric units able to work in environments with an explosive atmosphere.
Drive – some robots connect electric motors to the joints via gears; others connect the motor to the joint directly (direct drive). Using gears results in measurable 'backlash' which is free movement in an axis. Smaller robot arms frequently employ high speed, low torque DC motors, which generally require high gearing ratios; this has the disadvantage of backlash. In such cases the harmonic drive is often used.
Compliance - this is a measure of the amount in angle or distance that a robot axis will move when a force is applied to it. Because of compliance when a robot goes to a position carrying its maximum payload it will be at a position slightly lower than when it is carrying no payload. Compliance can also be responsible for overshoot when carrying high payloads in which case acceleration would need to be reduced.
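As mentioned under repeatability above, ISO 9283 quantifies repeatability statistically from repeated visits to a taught point. A minimal sketch of that computation, in Python with hypothetical sample data, is shown below; the statistic used here (mean distance from the barycentre of the attained points plus three standard deviations) is one common formulation and should be checked against the standard itself.

```python
import numpy as np

def repeatability(positions):
    """Estimate position repeatability from repeated visits to one taught point.

    positions: array of shape (n, 3), the attained X-Y-Z positions in mm.
    Returns a single figure in mm, taken here as the mean distance from the
    barycentre of the attained points plus three standard deviations (one
    common formulation of the ISO 9283 repeatability statistic).
    """
    positions = np.asarray(positions, dtype=float)
    barycentre = positions.mean(axis=0)                     # average attained position
    distances = np.linalg.norm(positions - barycentre, axis=1)
    return distances.mean() + 3.0 * distances.std(ddof=1)

# Hypothetical data: 30 visits scattered about 0.02 mm around a taught point.
rng = np.random.default_rng(0)
samples = np.array([100.0, 250.0, 300.0]) + rng.normal(0.0, 0.02, size=(30, 3))
print(f"repeatability = {repeatability(samples):.3f} mm")
```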
Robot programming and interfaces
The setup or programming of motions and sequences for an industrial robot is typically taught by linking the robot controller to a laptop, desktop computer or (internal or Internet) network.
A robot and a collection of machines or peripherals is referred to as a workcell, or cell. A typical cell might contain a parts feeder, a molding machine and a robot. The various machines are 'integrated' and controlled by a single computer or PLC. How the robot interacts with other machines in the cell must be programmed, both with regard to their positions in the cell and synchronizing with them.
Software: The computer is installed with corresponding interface software. The use of a computer greatly simplifies the programming process. Specialized robot software is run either in the robot controller or in the computer or both depending on the system design.
There are two basic entities that need to be taught (or programmed): positional data and procedure. For example, in a task to move a screw from a feeder to a hole the positions of the feeder and the hole must first be taught or programmed. Secondly the procedure to get the screw from the feeder to the hole must be programmed along with any I/O involved, for example a signal to indicate when the screw is in the feeder ready to be picked up. The purpose of the robot software is to facilitate both these programming tasks.
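As a purely illustrative sketch of this split between positional data and procedure, the following Python fragment uses a hypothetical controller object; its methods (move_to, wait_input, close_gripper, open_gripper) and the taught coordinates are invented for illustration and do not correspond to any particular vendor's API.

```python
class DemoController:
    """Stand-in for a real robot controller; prints the actions it would perform."""
    def wait_input(self, name): print(f"wait for input '{name}'")
    def move_to(self, xyz, speed=1.0): print(f"move to {xyz} at speed {speed}")
    def close_gripper(self): print("close gripper")
    def open_gripper(self): print("open gripper")

# Positional data: taught positions in mm (hypothetical values).
FEEDER_PICK = (320.0, 150.0, 40.0)    # screw in the feeder
HOLE_APPROACH = (510.0, 220.0, 60.0)  # above the hole
HOLE_INSERT = (510.0, 220.0, 25.0)    # insertion depth

# Procedure: the sequence of moves and I/O that uses those positions.
def place_screw(robot):
    robot.wait_input("screw_present")      # I/O: feeder signals a screw is ready
    robot.move_to(FEEDER_PICK)
    robot.close_gripper()
    robot.move_to(HOLE_APPROACH)
    robot.move_to(HOLE_INSERT, speed=0.2)  # slow, careful insertion
    robot.open_gripper()
    robot.move_to(HOLE_APPROACH)

place_screw(DemoController())
```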
Teaching the robot positions may be achieved a number of ways:
Positional commands: the robot can be directed to the required position using a GUI or text-based commands in which the required X-Y-Z position may be specified and edited.
Teach pendant: Robot positions can be taught via a teach pendant. This is a handheld control and programming unit. The common features of such units are the ability to manually send the robot to a desired position, or "inch" or "jog" to adjust a position. They also have a means to change the speed since a low speed is usually required for careful positioning, or while test-running through a new or modified routine. A large emergency stop button is usually included as well. Typically once the robot has been programmed there is no more use for the teach pendant. All teach pendants are equipped with a 3-position deadman switch. In the manual mode, it allows the robot to move only when it is in the middle position (partially pressed). If it is fully pressed in or completely released, the robot stops. This principle of operation allows natural reflexes to be used to increase safety.
Lead-by-the-nose: this is a technique offered by many robot manufacturers. In this method, one user holds the robot's manipulator, while another person enters a command which de-energizes the robot, causing it to go limp. The user then moves the robot by hand to the required positions and/or along a required path while the software logs these positions into memory. The program can later run the robot to these positions or along the taught path. This technique is popular for tasks such as paint spraying.
Offline programming is where the entire cell, the robot and all the machines or instruments in the workspace are mapped graphically. The robot can then be moved on screen and the process simulated. A robotics simulator is used to create embedded applications for a robot, without depending on the physical operation of the robot arm and end effector. The advantage of robotics simulation is that it saves time in the design of robotics applications. It can also increase the level of safety associated with robotic equipment since various "what if" scenarios can be tried and tested before the system is activated.[8] Robot simulation software provides a platform to teach, test, run, and debug programs that have been written in a variety of programming languages.
Robot simulation tools allow for robotics programs to be conveniently written and debugged off-line with the final version of the program tested on an actual robot. The ability to preview the behavior of a robotic system in a virtual world allows for a variety of mechanisms, devices, configurations and controllers to be tried and tested before being applied to a "real world" system. Robotics simulators have the ability to provide real-time computing of the simulated motion of an industrial robot using both geometric modeling and kinematics modeling.
Manufacturer-independent robot programming tools are a relatively new but flexible way to program robot applications. Using a visual programming language, the programming is done via drag and drop of predefined templates/building blocks. They often combine simulation, to evaluate feasibility, with offline programming. If the system is able to compile and upload native robot code to the robot controller, the user no longer has to learn each manufacturer's proprietary language. Therefore, this approach can be an important step towards standardizing programming methods.
Others: in addition, machine operators often use user interface devices, typically touchscreen units, which serve as the operator control panel. The operator can switch from program to program, make adjustments within a program and also operate a host of peripheral devices that may be integrated within the same robotic system. These include end effectors, feeders that supply components to the robot, conveyor belts, emergency stop controls, machine vision systems, safety interlock systems, barcode printers and an almost infinite array of other industrial devices which are accessed and controlled via the operator control panel.
The teach pendant or PC is usually disconnected after programming and the robot then runs on the program that has been installed in its controller. However a computer is often used to 'supervise' the robot and any peripherals, or to provide additional storage for access to numerous complex paths and routines.
End-of-arm tooling
The most essential robot peripheral is the end effector, or end-of-arm-tooling (EOAT). Common examples of end effectors include welding devices (such as MIG-welding guns, spot-welders, etc.), spray guns and also grinding and deburring devices (such as pneumatic disk or belt grinders, burrs, etc.), and grippers (devices that can grasp an object, usually electromechanical or pneumatic). Other common means of picking up objects are vacuum and magnets. End effectors are frequently highly complex, made to match the handled product and often capable of picking up an array of products at one time. They may utilize various sensors to aid the robot system in locating, handling, and positioning products.
Controlling movement
For a given robot the only parameters necessary to completely locate the end effector (gripper, welding torch, etc.) of the robot are the angles of each of the joints or displacements of the linear axes (or combinations of the two for robot formats such as SCARA). However, there are many different ways to define the points. The most common and most convenient way of defining a point is to specify a Cartesian coordinate for it, i.e. the position of the 'end effector' in mm in the X, Y and Z directions relative to the robot's origin. In addition, depending on the types of joints a particular robot may have, the orientation of the end effector in yaw, pitch, and roll and the location of the tool point relative to the robot's faceplate must also be specified. For a jointed arm these coordinates must be converted to joint angles by the robot controller and such conversions are known as Cartesian Transformations which may need to be performed iteratively or recursively for a multiple axis robot. The mathematics of the relationship between joint angles and actual spatial coordinates is called kinematics. See robot control
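The conversion from Cartesian coordinates to joint angles depends entirely on the robot's geometry. As a minimal sketch only, using the much simpler planar two-link arm rather than a six-axis robot, the following Python function computes a closed-form solution for a Cartesian target; the link lengths l1 and l2 are parameters, and the elbow-up/elbow-down choice reflects the fact that such solutions are generally not unique.

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Joint angles (radians) that place the tip of a planar 2-link arm at (x, y).

    Raises ValueError if the target lies outside the reachable workspace.
    """
    d2 = x * x + y * y
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    if not -1.0 <= cos_q2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(cos_q2)
    if elbow_up:
        q2 = -q2
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

# Example: reach the point (1.2, 0.5) with unit-length links.
print([round(math.degrees(q), 2) for q in two_link_ik(1.2, 0.5, 1.0, 1.0)])
```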
Positioning by Cartesian coordinates may be done by entering the coordinates into the system or by using a teach pendant which moves the robot in X-Y-Z directions. It is much easier for a human operator to visualize motions up/down, left/right, etc. than to move each joint one at a time. When the desired position is reached it is then defined in some way particular to the robot software in use, e.g. P1 - P5 below.
Typical programming
Most articulated robots perform by storing a series of positions in memory, and moving to them at various times in their programming sequence. For example, a robot which is moving items from one place (bin A) to another (bin B) might have a simple 'pick and place' program similar to the following:
Define points P1–P5:
Safely above workpiece (defined as P1)
10 cm above bin A (defined as P2)
At position to take part from bin A (defined as P3)
10 cm above bin B (defined as P4)
At position to place part in bin B (defined as P5)
Define program:
Move to P1
Move to P2
Move to P3
Close gripper
Move to P2
Move to P4
Move to P5
Open gripper
Move to P4
Move to P1 and finish
For examples of how this would look in popular robot languages see industrial robot programming.
Singularities
The American National Standard for Industrial Robots and Robot Systems — Safety Requirements (ANSI/RIA R15.06-1999) defines a singularity as "a condition caused by the collinear alignment of two or more robot axes resulting in unpredictable robot motion and velocities." It is most common in robot arms that utilize a "triple-roll wrist". This is a wrist about which the three axes of the wrist, controlling yaw, pitch, and roll, all pass through a common point. An example of a wrist singularity is when the path through which the robot is traveling causes the first and third axes of the robot's wrist (i.e. robot's axes 4 and 6) to line up. The second wrist axis then attempts to spin 180° in zero time to maintain the orientation of the end effector. Another common term for this singularity is a "wrist flip". The result of a singularity can be quite dramatic and can have adverse effects on the robot arm, the end effector, and the process. Some industrial robot manufacturers have attempted to side-step the situation by slightly altering the robot's path to prevent this condition. Another method is to slow the robot's travel speed, thus reducing the speed required for the wrist to make the transition. The ANSI/RIA has mandated that robot manufacturers shall make the user aware of singularities if they occur while the system is being manually manipulated.
A second type of singularity in wrist-partitioned vertically articulated six-axis robots occurs when the wrist center lies on a cylinder that is centered about axis 1 and with radius equal to the distance between axes 1 and 4. This is called a shoulder singularity. Some robot manufacturers also mention alignment singularities, where axes 1 and 6 become coincident. This is simply a sub-case of shoulder singularities. When the robot passes close to a shoulder singularity, joint 1 spins very fast.
The third and last type of singularity in wrist-partitioned vertically articulated six-axis robots occurs when the wrist's center lies in the same plane as axes 2 and 3.
Singularities are closely related to the phenomenon of gimbal lock, which has a similar root cause of axes becoming lined up.
Market structure
According to the International Federation of Robotics (IFR) study World Robotics 2024, there were about 4,281,585 operational industrial robots by the end of 2023. For the year 2018 the IFR estimated the worldwide sales of industrial robots at US$16.5 billion. Including the cost of software, peripherals and systems engineering, the annual turnover for robot systems was estimated at US$48.0 billion in 2018.
China is the largest industrial robot market, with 154,032 units sold in 2018. China also had the largest operational stock of industrial robots, with 649,447 at the end of 2018. United States industrial robot-makers shipped 35,880 robots to factories in the US in 2018, 7% more than in 2017.
The biggest customer of industrial robots is the automotive industry with a 30% market share, followed by the electrical/electronics industry with 25%, the metal and machinery industry with 10%, the rubber and plastics industry with 5%, and the food industry with 5%. In the textiles, apparel and leather industry, 1,580 units are operational.
Estimated worldwide annual supply of industrial robots (in units):
Health and safety
The International Federation of Robotics has predicted a worldwide increase in adoption of industrial robots and estimated 1.7 million new robot installations in factories worldwide by 2020 [IFR 2017]. Rapid advances in automation technologies (e.g. fixed robots, collaborative and mobile robots, and exoskeletons) have the potential to improve work conditions but also to introduce workplace hazards in manufacturing workplaces. Despite the lack of occupational surveillance data on injuries associated specifically with robots, researchers from the US National Institute for Occupational Safety and Health (NIOSH) identified 61 robot-related deaths between 1992 and 2015 using keyword searches of the Bureau of Labor Statistics (BLS) Census of Fatal Occupational Injuries research database (see info from the Center for Occupational Robotics Research). Using data from the Bureau of Labor Statistics, NIOSH and its state partners have investigated 4 robot-related fatalities under the Fatality Assessment and Control Evaluation Program. In addition, the Occupational Safety and Health Administration (OSHA) has investigated dozens of robot-related deaths and injuries, which can be reviewed on the OSHA Accident Search page. Injuries and fatalities could increase over time as increasing numbers of collaborative and co-existing robots, powered exoskeletons, and autonomous vehicles are introduced into the work environment.
Safety standards are being developed by the Robotic Industries Association (RIA) in conjunction with the American National Standards Institute (ANSI). On October 5, 2017, OSHA, NIOSH and RIA signed an alliance to work together to enhance technical expertise, identify and help address potential workplace hazards associated with traditional industrial robots and the emerging technology of human-robot collaboration installations and systems, and help identify needed research to reduce workplace hazards. On October 16 NIOSH launched the Center for Occupational Robotics Research to "provide scientific leadership to guide the development and use of occupational robots that enhance worker safety, health, and wellbeing." So far, the research needs identified by NIOSH and its partners include: tracking and preventing injuries and fatalities, intervention and dissemination strategies to promote safe machine control and maintenance procedures, and translating effective evidence-based interventions into workplace practice.
See also
Automation
Domestic robot
Drum handler
Intelligent industrial work assistant (iiwa)
Lights out (manufacturing)
Mobile industrial robots
Cartesian coordinate robot
Gantry robot
Workplace Robotics Safety
References
Further reading
Nof, Shimon Y. (editor) (1999). Handbook of Industrial Robotics, 2nd ed. John Wiley & Sons. 1378 pp.
Lars Westerlund (author) (2000). The extended arm of man.
Michal Gurgul (author) (2018). Industrial robots and cobots: Everything you need to know about your future co-worker.
External links
Industrial robots and robot system safety (by OSHA, so in the public domain).
International Federation of Robotics IFR (worldwide)
Robotic Industries Association RIA (North America)
BARA, British Automation and Robotics Association (UK)
Center for Occupational Robotics Research by NIOSH
Safety standards applied to Robotics
Strategies for addressing new technologies from the INRS
Machine Guarding - Why It's a Legal Requirement
American inventions
Packaging machinery
Occupational safety and health | Industrial robot | [
"Engineering"
] | 6,394 | [
"Packaging machinery",
"Industrial robots",
"Industrial machinery"
] |
147,923 | https://en.wikipedia.org/wiki/Syringe | A syringe is a simple reciprocating pump consisting of a plunger (though in modern syringes, it is actually a piston) that fits tightly within a cylindrical tube called a barrel. The plunger can be linearly pulled and pushed along the inside of the tube, allowing the syringe to take in and expel liquid or gas through a discharge orifice at the front (open) end of the tube. The open end of the syringe may be fitted with a hypodermic needle, a nozzle or tubing to direct the flow into and out of the barrel. Syringes are frequently used in clinical medicine to administer injections, infuse intravenous therapy into the bloodstream, apply compounds such as glue or lubricant, and draw/measure liquids. There are also prefilled syringes (disposable syringes marketed with liquid inside).
The word "syringe" is derived from the Greek σῦριγξ (syrinx, meaning "Pan flute", "tube").
Medical syringes
Medical syringes include disposable and safety syringes, injection pens, needleless injectors, insulin pumps, and specialty needles. Hypodermic syringes are used with hypodermic needles to inject liquid or gases into body tissues, or to remove fluids from the body. Injecting of air into a blood vessel is hazardous, as it may cause an air embolism; preventing embolisms by removing air from the syringe is one of the reasons for the familiar image of holding a hypodermic syringe pointing upward, tapping it, and expelling a small amount of liquid before an injection into the bloodstream.
The barrel of a syringe is made of plastic or glass, usually has graduated marks indicating the volume of fluid in the syringe, and is nearly always transparent. Glass syringes may be sterilized in an autoclave. Plastic syringes can be constructed as either two-part or three-part designs. A three-part syringe contains a plastic plunger/piston with a rubber tip to create a seal between the piston and the barrel, whereas a two-part syringe is manufactured to create a perfect fit between the plastic plunger and the barrel to create the seal without the need for a separate synthetic rubber piston. Two-part syringes have been traditionally used in European countries to prevent introduction of additional materials such as silicone oil needed for lubricating three-part plungers. Most modern medical syringes are plastic because they are cheap enough to dispose of after being used only once, reducing the risk of spreading blood-borne diseases. Reuse of needles and syringes has caused the spread of diseases, especially HIV and hepatitis, among intravenous drug users. Syringes are also commonly reused by diabetics, as they can go through several in a day with multiple daily insulin injections, which becomes an affordability issue for many. Even though the syringe and needle are only used by a single person, this practice is still unsafe as it can introduce bacteria from the skin into the bloodstream and cause serious and sometimes lethal infections. In medical settings, single-use needles and syringes effectively reduce the risk of cross-contamination.
Medical syringes are sometimes used without a needle for orally administering liquid medicines to young children or animals, or milk to small young animals, because the dose can be measured accurately and it is easier to squirt the medicine into the subject's mouth instead of coaxing the subject to drink out of a measuring spoon.
Tip designs
Syringes come with a number of designs for the area in which the needle locks to the syringe body. Perhaps the most well known of these is the Luer lock, in which the two parts are simply twisted together to lock.
Bodies featuring a small, plain connection are known as slip tips and are useful for when the syringe is being connected to something not featuring a screw lock mechanism.
Similar to this is the catheter tip, which is essentially a slip tip but longer and tapered, making it good for pushing into things where the plastic taper can form a tight seal. These can also be used for rinsing out wounds or large abscesses in veterinary use.
There is also an eccentric tip, where the nozzle at the end of the syringe is not in the centre of the syringe but at the side. This causes the needle attached to the syringe to lie almost in line with the walls of the syringe itself, and such tips are used when the needle needs to get very close to parallel with the skin (when injecting into a surface vein or artery, for example).
Standard U-100 insulin syringes
Syringes for insulin users are designed for standard U-100 insulin. The dilution of insulin is such that 1 mL of insulin fluid has 100 standard "units" of insulin. A typical insulin vial may contain 10 mL, for 1000 units.
Insulin syringes are made specifically for a patient to inject themselves, and have features to assist this purpose when compared to a syringe for use by a healthcare professional:
shorter needles, as insulin injections are subcutaneous (under the skin) rather than intramuscular,
finer gauge needles, for less pain,
markings in insulin units to simplify drawing a measured dose of insulin, and
low dead space to reduce complications caused by improper drawing order of different insulin strengths.
Multishot needle syringes
There are needle syringes designed to reload from a built-in tank (container) after each injection, so they can make several or many injections on a filling. These are not used much in human medicine because of the risk of cross-infection via the needle. An exception is the personal insulin autoinjector used by diabetic patients and in dual-chambered syringe designs intended to deliver a prefilled saline flush solution after the medication.
Venom extraction syringes
Venom extraction syringes are different from standard syringes, because they usually do not puncture the wound. The most common types have a plastic nozzle which is placed over the affected area, and then the syringe piston is pulled back, creating a vacuum that allegedly sucks out the venom. Attempts to treat snakebites in this way are specifically advised against, as they are ineffective and can cause additional injury.
Syringes of this type are sometimes used for extracting human botfly larvae from the skin.
Oral
An oral syringe is a measuring instrument used to accurately measure doses of liquid medication, expressed in millilitres (mL). They do not have threaded tips, because no needle or other device needs to be screwed onto them. The contents are simply squirted or sucked from the syringe directly into the mouth of the person or animal.
Oral syringes are available in various sizes, from 1–10 mL and larger. An oral syringe is typically purple in colour to distinguish it from a standard injection syringe with a luer tip. The sizes most commonly used are 1 mL, 2.5 mL, 3 mL, 5 mL and 10 mL.
Dental syringes
A dental syringe is used by dentists for the injection of an anesthetic. It consists of a breech-loading syringe fitted with a sealed cartridge containing an anesthetic solution.
In 1928, Bayer Dental developed, coined and produced a sealed cartridge system under the registered trademark Carpule®. The current trademark owner is Kulzer Dental GmbH.
Carpules have long been reserved for anesthetic products for dental use. A carpule is essentially a flask without a bottom: the bottom is replaced by an elastomer plug that can slide in the body of the cartridge and is pushed by the plunger of the syringe. The neck is closed with a rubber cap. The dentist places the cartridge directly into a stainless steel syringe, fitted with a double-pointed (single-use) needle. The point on the cartridge side punctures the cap and the plunger pushes the product out. There is therefore no contact between the product and the ambient air during use.
The ancillary tool (generally part of a dental engine) used to supply water, compressed air or mist (formed by combination of water and compressed air) to the oral cavity for the purpose of irrigation (cleaning debris away from the area the dentist is working on), is also referred to as a dental syringe or a dental irrigation nozzle.
A 3-way syringe/nozzle has separate internal channels supplying air, water or a mist created by combining the pressurized air with the waterflow. The syringe tip can be separated from the main body and replaced when necessary.
In the UK and Ireland, manually operated hand syringes are used to inject lidocaine into patients' gums.
Dose-sparing syringes
A dose-sparing syringe is one which minimises the amount of liquid remaining in the barrel after the plunger has been depressed. These syringes feature a combined needle and syringe, and a protrusion on the face of the plunger to expel liquid from the needle hub. Such syringes were particularly popular during the COVID-19 pandemic as vaccines were in short supply.
Regulation
In some jurisdictions, the sale or possession of hypodermic syringes may be controlled or prohibited without a prescription, due to its potential use with illegal intravenous drugs.
Non-medical uses
The syringe has many non-medical applications.
Laboratory applications
Medical-grade disposable hypodermic syringes are often used in research laboratories for convenience and low cost. Another application is to use the needle tip to add liquids to very confined spaces, such as washing out some scientific apparatus. They are often used for measuring and transferring solvents and reagents where a high precision is not required. Alternatively, microliter syringes can be used to measure and dose chemicals very precisely by using a small diameter capillary as the syringe barrel.
The polyethylene construction of these disposable syringes usually makes them rather chemically resistant. There is, however, a risk of the contents of the syringes leaching plasticizers from the syringe material. Non-disposable glass syringes may be preferred where this is a problem. Glass syringes may also be preferred where a very high degree of precision is important (i.e. quantitative chemical analysis), because their engineering tolerances are lower and the plungers move more smoothly. In these applications, the transfer of pathogens is usually not an issue.
Used with a long needle or cannula, syringes are also useful for transferring fluids through rubber septa when atmospheric oxygen or moisture are being excluded. Examples include the transfer of air-sensitive or pyrophoric reagents such as phenylmagnesium bromide and n-butyllithium respectively. Glass syringes are also used to inject small samples for gas chromatography (1 μl) and mass spectrometry (10 μl). Syringe drivers may be used with the syringe as well.
Cooking
Some culinary uses of syringes are injecting liquids (such as gravy) into other foods, or for the manufacture of some candies.
Syringes may also be used when cooking meat to enhance flavor and texture by injecting juices inside the meat, and in baking to inject filling inside a pastry. It is common for these syringes to be made of stainless steel components, including the barrel; this facilitates easy disassembly and cleaning.
Others
Syringes are used to refill ink cartridges with ink in fountain pens.
Common workshop applications include injecting glue into tight spots to repair joints where disassembly is impractical or impossible; and injecting lubricants onto working surfaces without spilling.
Sometimes a large hypodermic syringe is used without a needle for very small baby mammals to suckle from in artificial rearing.
Historically, large pumps that use reciprocating motion to pump water were referred to as syringes. Pumps of this type were used as early firefighting equipment.
There are fountain syringes where the liquid is in a bag or can and goes to the nozzle via a pipe. In earlier times, clyster syringes were used for that purpose.
Loose snus is often applied using modified syringes. The nozzle is removed so the opening is the width of the chamber. The snus can be packed tightly into the chamber and plunged into the upper lip. Syringes, called portioners, are also manufactured for this particular purpose.
Historical timeline
Piston syringes were used in ancient times. In the 1st century AD, Aulus Cornelius Celsus mentioned their use to treat medical complications in his De Medicina.
9th century: The Iraqi/Egyptian surgeon Ammar ibn 'Ali al-Mawsili described a syringe that used a hollow glass tube and suction to remove cataracts from patients' eyes, a practice that remained in use until at least the 13th century.
Pre-Columbian Native Americans created early hypodermic needles and syringes using "hollow bird bones and small animal bladders".
1650: Blaise Pascal invented a syringe (not necessarily hypodermic) as an application of what is now called Pascal's law.
1844: Irish physician Francis Rynd invented the hollow needle and used it to make the first recorded subcutaneous injections, specifically a sedative to treat neuralgia.
1853: Charles Pravaz and Alexander Wood independently developed medical syringes with a needle fine enough to pierce the skin. Pravaz's syringe was made of silver and used a screw mechanism to dispense fluids. Wood's syringe was made of glass, enabling its contents to be seen and measured, and used a plunger to inject them. It is effectively the syringe that is used today.
1865: Charles Hunter coined the term "hypodermic", and developed an improvement to the syringe that locked the needle into place so that it would not be ejected from the end of the syringe when the plunger was depressed, and published research indicating that injections of pain relief could be given anywhere in the body, not just in the area of pain, and still be effective.
1867: The Medical and Chirurgical Society of London investigated whether injected narcotics had a general effect (as argued by Hunter) or whether they only worked locally (as argued by Wood). After conducting animal tests and soliciting opinions from the wider medical community, they firmly sided with Hunter.
1899: Letitia Mumford Geer patented a syringe which could be operated with one hand and which could be used for self-administered rectal injections.
1946: Chance Brothers in Smethwick, West Midlands, England, produced the first all-glass syringe with interchangeable barrel and plunger, thereby allowing mass-sterilisation of components without the need for matching them.
1949: Australian inventor Charles Rothauser created the world's first plastic, disposable hypodermic syringe at his Adelaide factory.
1951: Rothauser produced the first injection-moulded syringes made of polypropylene, a plastic that can be heat-sterilised. Millions were made for Australian and export markets.
1956: New Zealand pharmacist and inventor Colin Murdoch was granted New Zealand and Australian patents for a disposable plastic syringe.
See also
Fire syringe has two meanings:
A fire piston, a fire starting device
A squirt, in the form of a large syringe, one of the first firefighting devices in history used to squirt water onto the burning fuel.
Autoinjector, a device to ease injection, e.g. by the patient or other untrained personnel.
Hippy Sippy
Jet injector, injects without a needle, by squirting the injection fluid so fast that it makes a hole in the skin.
Luer taper, a standardized fitting system used for making leak-free connections between syringe tips and needles.
Needle exchange programme, is a social policy based on the philosophy of harm reduction where injecting drug users (IDUs) can obtain hypodermic needles and associated injection equipment at little or no cost.
Trypanophobia, a fairly common extreme fear of hypodermic syringes
Syrette, similar to a syringe except that it has a closed flexible tube (like that used for toothpaste) instead of a rigid tube and piston.
Syringing the ear to remove excess ear wax.
Syrinx, the nymph from classical mythology after which syringes were supposedly named.
Safety syringe, with features to prevent accidental needlesticks and reuse
Vaginal syringe
References
Further reading
Hans-Jürgen Bässler and Frank Lehmann: Containment Technology: Progress in the Pharmaceutical and Food Processing Industry. Springer, Berlin 2013.
External links
Inventors of the hypodermic syringe
Hypodermic syringe patents
Medical syringe patents
YouTube video of a juvenile red squirrel suckling milk from a hypodermic syringe without a needle
Medical equipment
Drug delivery devices
Drug paraphernalia
New Zealand inventions | Syringe | [
"Chemistry",
"Biology"
] | 3,662 | [
"Pharmacology",
"Drug delivery devices",
"Medical equipment",
"Medical technology"
] |
148,281 | https://en.wikipedia.org/wiki/VMEbus | VMEbus (Versa Module Eurocard bus) is a computer bus standard physically based on Eurocard sizes.
History
In 1979, during development of the Motorola 68000 CPU, one of their engineers, Jack Kister, decided to set about creating a standardized bus system for 68000-based systems. The Motorola team brainstormed for days to select the name VERSAbus. VERSAbus cards were large and used edge connectors. Only a few products adopted it, including the IBM System 9000 instrument controller and the Automatix robot and machine vision systems.
Kister was later joined by John Black, who refined the specifications and created the VERSAmodule product concept. A young engineer working for Black, Julie Keahey designed the first VERSAmodule card, the VERSAbus Adaptor Module, used to run existing cards on the new VERSAbus. Sven Rau and Max Loesel of Motorola-Europe added a mechanical specification to the system, basing it on the Eurocard standard that was then late in the standardization process. The result was first known as VERSAbus-E but was later renamed to VMEbus, for VERSAmodule Eurocard bus (although some refer to it as Versa Module Europa).
At this point, a number of other companies involved in the 68000's ecosystem agreed to use the standard, including Signetics, Philips, Thomson, and Mostek. Soon it was officially standardized by the IEC as the IEC 821 VMEbus and by ANSI and IEEE as ANSI/IEEE 1014-1987.
The original standard was a 16-bit bus, designed to fit within the existing Eurocard DIN connectors. However, there have been several updates to the system to allow wider bus widths. The current VME64 includes a full 64-bit bus in 6U-sized cards and 32-bit in 3U cards. The VME64 protocol has a typical performance of 40 MB/s. Other associated standards have added hot-swapping (plug-and-play) in VME64x, smaller 'IP' cards that plug into a single VMEbus card, and various interconnect standards for linking VME systems together.
In the late 1990s, synchronous protocols proved to be favourable. The research project was called VME320. The VITA Standards Organization called for a new standard for unmodified VME32/64 backplanes. The new 2eSST protocol was approved in ANSI/VITA 1.5 in 1999.
Over the years, many extensions have been added to the VME interface, providing 'sideband' channels of communication in parallel to VME itself. Some examples are IP Module, RACEway Interlink, SCSA, Gigabit Ethernet on VME64x Backplanes, PCI Express, RapidIO, StarFabric and InfiniBand.
VMEbus was also used to develop closely related standards, VXIbus and VPX.
The VMEbus had a strong influence on many later computer buses such as STEbus.
VME early years
The architectural concepts of the VMEbus are based on VERSAbus, developed in the late 1970s by Motorola. This was later renamed "VME", short for Versa Module European, by Lyman (Lym) Hevle, then a VP with the Motorola Microsystems Operation. (He was later the founder of the VME Marketing Group, itself subsequently renamed to VME International Trade Association, or VITA).
John Black of Motorola, Craig MacKenna of Mostek and Cecil Kaplinsky of Signetics developed the first draft of the VMEbus specification. In October 1981, at the System '81 trade show in Munich, West Germany, Motorola, Mostek, Signetics/Philips, and Thomson CSF announced their joint support of the VMEbus. They also placed Revision A of the specification in the public domain.
In 1985, Aitech developed, under contract for US Army TACOM, the first conduction-cooled 6U VMEbus board. Although it provided an electrically compliant VMEbus protocol interface, this board was not mechanically interchangeable for use in air-cooled lab VMEbus development chassis.
In late 1987, a technical committee was formed under VITA under the direction of IEEE to create the first military, conduction-cooled 6U × 160 mm, fully electrically and mechanically compatible, VMEbus board, co-chaired by Dale Young (DY4 Systems) and Doug Patterson (Plessey Microsystems, then Radstone Technology). ANSI/IEEE-1101.2-1992 was later ratified and released in 1992 and remains in place as the conduction-cooled, international standard for all 6U VMEbus products.
In 1989, John Peters of Performance Technologies Inc. developed the initial concept of VME64: multiplexing address and data lines (A64/D64) on the VMEbus. The concept was demonstrated the same year and placed in the VITA Technical Committee in 1990 as a performance enhancement to the VMEbus specification.
In 1993, new activities began on the base-VME architecture, involving the implementation of high-speed serial and parallel sub-buses for use as I/O interconnections and data mover subsystems. These architectures can be used as message switches, routers and small multiprocessor parallel architectures.
VITA's application for recognition as an accredited standards developer organization of ANSI was granted in June 1993. Numerous other documents ( including mezzanine, P2 and serial bus standards) have been placed with VITA as the Public Domain Administrator of these technologies.
Description
In many ways the VMEbus is equivalent or analogous to the pins of the 68000 run out onto a backplane.
However, one of the key features of the 68000 is a flat 32-bit memory model, free of memory segmentation and other "anti-features". The result is that, while VME is very 68000-like, the 68000 is generic enough to make this not an issue in most cases.
Like the 68000, VME uses separate 32-bit data and address buses. The 68000 address bus is actually 24-bit and the data bus 16-bit (although it is 32/32 internally) but the designers were already looking towards a full 32-bit implementation.
In order to allow both bus widths, VME uses two different Eurocard connectors, P1 and P2. P1 contains three rows of 32 pins each, implementing the first 24 address bits, 16 data bits and all of the control signals. P2 contains one more row, which includes the remaining 8 address bits and 16 data bits.
A block transfer protocol allows several bus transfers to occur with a single address cycle. In block transfer mode, the first transfer includes an address cycle and subsequent transfers require only data cycles. The slave is responsible for ensuring that these transfers use successive addresses.
Bus masters can release the bus in two ways. With Release When Done (RWD), the master releases the bus when it completes a transfer and must re-arbitrate for the bus before every subsequent transfer. With Release On Request (ROR), the master retains the bus by continuing to assert BBSY* between transfers. ROR allows the master to retain control over the bus until a Bus Clear (BCLR*) is asserted by another master that wishes to arbitrate for the bus. Thus a master that generates bursts of traffic can optimize its performance by arbitrating for the bus on only the first transfer of each burst. This decrease in transfer latency comes at the cost of somewhat higher transfer latency for other masters.
Address modifiers are used to divide the VME bus address space into several distinct sub-spaces. The address modifier is a 6 bit wide set of signals on the backplane. Address modifiers specify the number of significant address bits, the privilege mode (to allow processors to distinguish between bus accesses by user-level or system-level software), and whether or not the transfer is a block transfer.
Below is an incomplete table of address modifiers:
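As a rough illustration of how these codes partition the bus cycles, the Python fragment below decodes a handful of commonly cited modifier values into address width, privilege and cycle type. The hexadecimal values are quoted from memory of the VME specification and should be verified against the standard before being relied on.

```python
# A few commonly cited VMEbus address-modifier codes (6-bit values).
# NOTE: values quoted from memory of the VME specification -- verify before use.
ADDRESS_MODIFIERS = {
    0x29: ("A16", "non-privileged", "data access"),
    0x2D: ("A16", "supervisory",    "data access"),
    0x39: ("A24", "non-privileged", "data access"),
    0x3B: ("A24", "non-privileged", "block transfer"),
    0x3D: ("A24", "supervisory",    "data access"),
    0x09: ("A32", "non-privileged", "data access"),
    0x0B: ("A32", "non-privileged", "block transfer"),
    0x0D: ("A32", "supervisory",    "data access"),
}

def describe_am(am):
    width, privilege, cycle = ADDRESS_MODIFIERS.get(am, ("?", "?", "unknown"))
    return f"AM 0x{am:02X}: {width} {privilege} {cycle}"

print(describe_am(0x0D))   # AM 0x0D: A32 supervisory data access
```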
On the VME bus, all transfers are DMA and every card is a master or slave. In most bus standards, there is a considerable amount of complexity added in order to support various transfer types and master/slave selection. For instance, with the ISA bus, both of these features had to be added alongside the existing "channels" model, whereby all communications was handled by the host CPU. This makes VME considerably simpler at a conceptual level while being more powerful, though it requires more complex controllers on each card.
Development tools
When developing and/or troubleshooting the VME bus, examination of hardware signals can be very important. Logic analyzers and bus analyzers are tools that collect, analyze, decode, store signals so people can view the high-speed waveforms at their leisure.
VITA offers a comprehensive FAQ to assist with the front end design and development of VME systems.
Computers using a VMEbus
Computers using VMEbus include:
HP 743/744 PA-RISC Single-board computer
Sun-2 through Sun-4
HP 9000 Industrial Workstations
Atari TT030 and Atari MEGA STE
Motorola MVME
Symbolics
Advanced Numerical Research and Analysis Group's PACE.
ETAS ES1000 Rapid Prototyping System
Several Motorola 88000-based Data General AViiON computers
Early Silicon Graphics MIPS-based systems including Professional IRIS, Personal IRIS, Power Series, and Onyx systems
Convergent Technologies MightyFrame
Pinout
Seen looking into backplane socket.
P1
P2
P2 rows a and c can be used by a secondary bus, for example the STEbus.
See also
Data acquisition
VPX
VXS
Futurebus
CompactPCI
CAMAC
FPDP
List of device bandwidths
References
External links
VITA
VMEBUS TECHNOLOGY FAQ
Next Generation VME
VME bus pinout and signals
Computer buses
Experimental particle physics
IEEE standards
68k architecture | VMEbus | [
"Physics",
"Technology"
] | 2,051 | [
"Computer standards",
"Experimental physics",
"Particle physics",
"Experimental particle physics",
"IEEE standards"
] |
148,420 | https://en.wikipedia.org/wiki/Euler%20characteristic | In mathematics, and more specifically in algebraic topology and polyhedral combinatorics, the Euler characteristic (or Euler number, or Euler–Poincaré characteristic) is a topological invariant, a number that describes a topological space's shape or structure regardless of the way it is bent. It is commonly denoted by (Greek lower-case letter chi).
The Euler characteristic was originally defined for polyhedra and used to prove various theorems about them, including the classification of the Platonic solids. It was stated for Platonic solids in 1537 in an unpublished manuscript by Francesco Maurolico. Leonhard Euler, for whom the concept is named, introduced it for convex polyhedra more generally but failed to rigorously prove that it is an invariant. In modern mathematics, the Euler characteristic arises from homology and, more abstractly, homological algebra.
Polyhedra
The Euler characteristic was classically defined for the surfaces of polyhedra, according to the formula
χ = V − E + F,
where V, E, and F are respectively the numbers of vertices (corners), edges and faces in the given polyhedron. Any convex polyhedron's surface has Euler characteristic
χ = V − E + F = 2.
This equation, stated by Euler in 1758,
is known as Euler's polyhedron formula. It corresponds to the Euler characteristic of the sphere (i.e. χ = 2), and applies identically to spherical polyhedra. An illustration of the formula on all Platonic polyhedra is given below.
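For the five Platonic solids the standard vertex, edge and face counts all satisfy the formula:

```latex
\begin{array}{lcccc}
 & V & E & F & V - E + F \\
\text{Tetrahedron} & 4 & 6 & 4 & 2 \\
\text{Cube} & 8 & 12 & 6 & 2 \\
\text{Octahedron} & 6 & 12 & 8 & 2 \\
\text{Dodecahedron} & 20 & 30 & 12 & 2 \\
\text{Icosahedron} & 12 & 30 & 20 & 2
\end{array}
```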
The surfaces of nonconvex polyhedra can have various Euler characteristics:
For regular polyhedra, Arthur Cayley derived a modified form of Euler's formula using the density D, vertex figure density d_v and face density d_f:
d_v V − E + d_f F = 2D.
This version holds both for convex polyhedra (where the densities are all 1) and the non-convex Kepler–Poinsot polyhedra.
Projective polyhedra all have Euler characteristic 1, like the real projective plane, while the surfaces of toroidal polyhedra all have Euler characteristic 0, like the torus.
Plane graphs
The Euler characteristic can be defined for connected plane graphs by the same formula as for polyhedral surfaces, where F is the number of faces in the graph, including the exterior face.
The Euler characteristic of any plane connected graph G is 2. This is easily proved by induction on the number of faces determined by G, starting with a tree as the base case. For trees, E = V − 1 and F = 1, so that V − E + F = 2. If G has C connected components (disconnected graphs), the same argument by induction on F shows that V − E + F = C + 1. One of the few graph theory papers of Cauchy also proves this result.
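For example, the complete graph K4 drawn in the plane without crossings has 4 vertices, 6 edges and 4 faces (three bounded faces plus the exterior face), so

```latex
V - E + F = 4 - 6 + 4 = 2.
```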
Via stereographic projection the plane maps to the 2-sphere, such that a connected graph maps to a polygonal decomposition of the sphere, which has Euler characteristic 2. This viewpoint is implicit in Cauchy's proof of Euler's formula given below.
Proof of Euler's formula
There are many proofs of Euler's formula. One was given by Cauchy in 1811, as follows. It applies to any convex polyhedron, and more generally to any polyhedron whose boundary is topologically equivalent to a sphere and whose faces are topologically equivalent to disks.
Remove one face of the polyhedral surface. By pulling the edges of the missing face away from each other, deform all the rest into a planar graph of points and curves, in such a way that the perimeter of the missing face is placed externally, surrounding the graph obtained, as illustrated by the first of the three graphs for the special case of the cube. (The assumption that the polyhedral surface is homeomorphic to the sphere at the beginning is what makes this possible.) After this deformation, the regular faces are generally not regular anymore. The number of vertices and edges has remained the same, but the number of faces has been reduced by 1. Therefore, proving Euler's formula for the polyhedron reduces to proving for this deformed, planar object.
If there is a face with more than three sides, draw a diagonal—that is, a curve through the face connecting two vertices that are not yet connected. Each new diagonal adds one edge and one face and does not change the number of vertices, so it does not change the quantity V − E + F. (The assumption that all faces are disks is needed here, to show via the Jordan curve theorem that this operation increases the number of faces by one.) Continue adding edges in this manner until all of the faces are triangular.
Apply repeatedly either of the following two transformations, maintaining the invariant that the exterior boundary is always a simple cycle:
Remove a triangle with only one edge adjacent to the exterior, as illustrated by the second graph. This decreases the number of edges and faces by one each and does not change the number of vertices, so it preserves V − E + F.
Remove a triangle with two edges shared by the exterior of the network, as illustrated by the third graph. Each triangle removal removes a vertex, two edges and one face, so it preserves V − E + F.
These transformations eventually reduce the planar graph to a single triangle. (Without the simple-cycle invariant, removing a triangle might disconnect the remaining triangles, invalidating the rest of the argument. A valid removal order is an elementary example of a shelling.)
At this point the lone triangle has V = 3, E = 3, and F = 1, so that V − E + F = 1. Since each of the two above transformation steps preserved this quantity, we have shown V − E + F = 1 for the deformed, planar object, thus demonstrating V − E + F = 2 for the polyhedron. This proves the theorem.
For additional proofs, see Eppstein (2013). Multiple proofs, including their flaws and limitations, are used as examples in Proofs and Refutations by Lakatos (1976).
Topological definition
The polyhedral surfaces discussed above are, in modern language, two-dimensional finite CW-complexes. (When only triangular faces are used, they are two-dimensional finite simplicial complexes.) In general, for any finite CW-complex, the Euler characteristic can be defined as the alternating sum
χ = k0 − k1 + k2 − k3 + ⋯,
where kn denotes the number of cells of dimension n in the complex.
Similarly, for a simplicial complex, the Euler characteristic equals the alternating sum
χ = k0 − k1 + k2 − k3 + ⋯,
where kn denotes the number of n-simplexes in the complex.
Betti number alternative
More generally still, for any topological space, we can define the nth Betti number bn as the rank of the n-th singular homology group. The Euler characteristic can then be defined as the alternating sum
χ = b0 − b1 + b2 − b3 + ⋯
This quantity is well-defined if the Betti numbers are all finite and if they are zero beyond a certain index n0. For simplicial complexes, this is not the same definition as in the previous paragraph but a homology computation shows that the two definitions will give the same value for .
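For example, the two-dimensional torus has Betti numbers b0 = 1, b1 = 2, b2 = 1, and 0 beyond, so this definition gives

```latex
\chi(T^2) = b_0 - b_1 + b_2 = 1 - 2 + 1 = 0,
```

consistent with the value obtained from the product property below.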
Properties
The Euler characteristic behaves well with respect to many basic operations on topological spaces, as follows.
Homotopy invariance
Homology is a topological invariant, and moreover a homotopy invariant: Two topological spaces that are homotopy equivalent have isomorphic homology groups. It follows that the Euler characteristic is also a homotopy invariant.
For example, any contractible space (that is, one homotopy equivalent to a point) has trivial homology, meaning that the 0th Betti number is 1 and the others 0. Therefore, its Euler characteristic is 1. This case includes Euclidean space of any dimension, as well as the solid unit ball in any Euclidean space — the one-dimensional interval, the two-dimensional disk, the three-dimensional ball, etc.
For another example, any convex polyhedron is homeomorphic to the three-dimensional ball, so its surface is homeomorphic (hence homotopy equivalent) to the two-dimensional sphere, which has Euler characteristic 2. This explains why the surface of a convex polyhedron has Euler characteristic 2.
Inclusion–exclusion principle
If M and N are any two topological spaces, then the Euler characteristic of their disjoint union is the sum of their Euler characteristics, since homology is additive under disjoint union:
χ(M ⊔ N) = χ(M) + χ(N).
More generally, if M and N are subspaces of a larger space X, then so are their union and intersection. In some cases, the Euler characteristic obeys a version of the inclusion–exclusion principle:
χ(M ∪ N) = χ(M) + χ(N) − χ(M ∩ N).
This is true in the following cases:
if M and N are an excisive couple. In particular, if the interiors of M and N inside the union still cover the union.
if X is a locally compact space, and one uses Euler characteristics with compact supports, no assumptions on M or N are needed.
if X is a stratified space all of whose strata are even-dimensional, the inclusion–exclusion principle holds if M and N are unions of strata. This applies in particular if M and N are subvarieties of a complex algebraic variety.
In general, the inclusion–exclusion principle is false. A counterexample is given by taking X to be the real line, M a subset consisting of one point and N the complement of M.
Connected sum
For two connected closed n-manifolds M and N one can obtain a new connected manifold M # N via the connected sum operation. The Euler characteristic is related by the formula
χ(M # N) = χ(M) + χ(N) − χ(S^n).
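For example, the connected sum of two tori (each with Euler characteristic 0) is the genus-2 surface, and the formula gives

```latex
\chi(T^2 \# T^2) = \chi(T^2) + \chi(T^2) - \chi(S^2) = 0 + 0 - 2 = -2,
```

in agreement with χ = 2 − 2g for g = 2 (see the relation to genus below).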
Product property
Also, the Euler characteristic of any product space M × N is
χ(M × N) = χ(M) · χ(N).
These addition and multiplication properties are also enjoyed by cardinality of sets. In this way, the Euler characteristic can be viewed as a generalisation of cardinality.
Covering spaces
Similarly, for a k-sheeted covering space E → B one has
χ(E) = k · χ(B).
More generally, for a ramified covering space, the Euler characteristic of the cover can be computed from the above, with a correction factor for the ramification points, which yields the Riemann–Hurwitz formula.
Fibration property
The product property holds much more generally, for fibrations with certain conditions.
If p : E → B is a fibration with fiber F, with the base B path-connected, and the fibration is orientable over a field K, then the Euler characteristic with coefficients in the field K satisfies the product property:
χ(E) = χ(F) · χ(B).
This includes product spaces and covering spaces as special cases,
and can be proven by the Serre spectral sequence on homology of a fibration.
For fiber bundles, this can also be understood in terms of a transfer map τ : H∗(B) → H∗(E) – note that this is a lifting and goes "the wrong way" – whose composition with the projection map p∗ : H∗(E) → H∗(B) is multiplication by the Euler characteristic of the fiber: p∗ ∘ τ = χ(F) · id.
Examples
Surfaces
The Euler characteristic can be calculated easily for general surfaces by finding a polygonization of the surface (that is, a description as a CW-complex) and using the above definitions.
Soccer ball
It is common to construct soccer balls by stitching together pentagonal and hexagonal pieces, with three pieces meeting at each vertex (see for example the Adidas Telstar). If P pentagons and H hexagons are used, then there are F = P + H faces, V = (5P + 6H)/3 vertices, and E = (5P + 6H)/2 edges. The Euler characteristic is thus V − E + F = (5P + 6H)/3 − (5P + 6H)/2 + (P + H) = P/6.
Because the sphere has Euler characteristic 2, it follows that P = 12. That is, a soccer ball constructed in this way always has 12 pentagons. The number of hexagons can be any nonnegative integer except 1.
This result is applicable to fullerenes and Goldberg polyhedra.
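The counting argument above is easy to verify numerically. The short Python sketch below is purely illustrative (the function name and the specific pentagon/hexagon counts are chosen only for this example): it computes V, E, F and the Euler characteristic for a polyhedron built from P pentagons and H hexagons with three faces meeting at every vertex, and requiring χ = 2 indeed forces P = 12.

def euler_characteristic(pentagons, hexagons):
    # Each face contributes its sides once; every edge is shared by 2 faces
    # and every vertex by 3 faces (three pieces meet at each vertex).
    sides = 5 * pentagons + 6 * hexagons
    faces = pentagons + hexagons
    edges = sides / 2
    vertices = sides / 3
    return vertices - edges + faces

# Standard soccer ball (truncated icosahedron): 12 pentagons and 20 hexagons.
print(euler_characteristic(12, 20))   # 2.0
# The dodecahedron (12 pentagons, no hexagons) also gives 2.0, while any
# pentagon count other than 12 fails to give 2.
print(euler_characteristic(12, 0))    # 2.0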
Arbitrary dimensions
The n-dimensional sphere has singular homology groups equal to Z in dimensions 0 and n and zero in all other dimensions,
hence it has Betti number 1 in dimensions 0 and n, and all other Betti numbers are 0. Its Euler characteristic is then χ(S^n) = 1 + (−1)^n, that is, either 0 if n is odd, or 2 if n is even.
The n-dimensional real projective space is the quotient of the n-sphere by the antipodal map. It follows that its Euler characteristic is exactly half that of the corresponding sphere – either 0 or 1.
The n-dimensional torus is the product space of n circles. Its Euler characteristic is 0, by the product property. More generally, any compact parallelizable manifold, including any compact Lie group, has Euler characteristic 0.
The Euler characteristic of any closed odd-dimensional manifold is also 0. The case for orientable examples is a corollary of Poincaré duality. This property applies more generally to any compact stratified space all of whose strata have odd dimension. It also applies to closed odd-dimensional non-orientable manifolds, via the two-to-one orientable double cover.
Relations to other invariants
The Euler characteristic of a closed orientable surface can be calculated from its genus g (the number of tori in a connected sum decomposition of the surface; intuitively, the number of "handles") as χ = 2 − 2g.
The Euler characteristic of a closed non-orientable surface can be calculated from its non-orientable genus k (the number of real projective planes in a connected sum decomposition of the surface) as χ = 2 − k.
For closed smooth manifolds, the Euler characteristic coincides with the Euler number, i.e., the Euler class of its tangent bundle evaluated on the fundamental class of a manifold. The Euler class, in turn, relates to all other characteristic classes of vector bundles.
For closed Riemannian manifolds, the Euler characteristic can also be found by integrating the curvature; see the Gauss–Bonnet theorem for the two-dimensional case and the generalized Gauss–Bonnet theorem for the general case.
A discrete analog of the Gauss–Bonnet theorem is Descartes' theorem that the "total defect" of a polyhedron, measured in full circles, is the Euler characteristic of the polyhedron.
Hadwiger's theorem characterizes the Euler characteristic as the unique (up to scalar multiplication) translation-invariant, finitely additive, not-necessarily-nonnegative set function defined on finite unions of compact convex sets in R^n that is "homogeneous of degree 0".
Generalizations
For every combinatorial cell complex, one defines the Euler characteristic as the number of 0-cells, minus the number of 1-cells, plus the number of 2-cells, etc., if this alternating sum is finite. In particular, the Euler characteristic of a finite set is simply its cardinality, and the Euler characteristic of a graph is the number of vertices minus the number of edges. (Olaf Post calls this a "well-known formula".)
More generally, one can define the Euler characteristic of any chain complex to be the alternating sum of the ranks of the homology groups of the chain complex, assuming that all these ranks are finite.
A version of Euler characteristic used in algebraic geometry is as follows. For any coherent sheaf F on a proper scheme X, one defines its Euler characteristic to be χ(F) = Σ_i (−1)^i h^i(X, F),
where h^i(X, F) is the dimension of the i-th sheaf cohomology group of F. In this case, the dimensions are all finite by Grothendieck's finiteness theorem. This is an instance of the Euler characteristic of a chain complex, where the chain complex is a finite resolution of F by acyclic sheaves.
Another generalization of the concept of Euler characteristic on manifolds comes from orbifolds (see Euler characteristic of an orbifold). While every manifold has an integer Euler characteristic, an orbifold can have a fractional Euler characteristic. For example, the teardrop orbifold has Euler characteristic 1 + 1/p, where p is a prime number corresponding to the cone angle 2π/p.
The concept of Euler characteristic of the reduced homology of a bounded finite poset is another generalization, important in combinatorics. A poset is "bounded" if it has smallest and largest elements; call them 0 and 1. The Euler characteristic of such a poset is defined as the integer μ(0,1), where μ is the Möbius function in that poset's incidence algebra.
This can be further generalized by defining a rational valued Euler characteristic for certain finite categories, a notion compatible with the Euler characteristics of graphs, orbifolds and posets mentioned above. In this setting, the Euler characteristic of a finite group or monoid G is 1/|G|, and the Euler characteristic of a finite groupoid is the sum of 1/|G_i|, where we picked one representative group G_i for each connected component of the groupoid.
See also
Euler calculus
Euler class
List of topics named after Leonhard Euler
List of uniform polyhedra
References
Notes
Bibliography
Further reading
Flegg, H. Graham; From Geometry to Topology, Dover 2001, p. 40.
External links
An animated version of a proof of Euler's formula using spherical geometry.
Algebraic topology
Topological graph theory
Polyhedral combinatorics
Articles containing proofs
Leonhard Euler | Euler characteristic | [
"Mathematics"
] | 3,371 | [
"Graph theory",
"Algebraic topology",
"Combinatorics",
"Polyhedral combinatorics",
"Fields of abstract algebra",
"Topology",
"Mathematical relations",
"Articles containing proofs",
"Topological graph theory"
] |
148,550 | https://en.wikipedia.org/wiki/Antiferromagnetism | In materials that exhibit antiferromagnetism, the magnetic moments of atoms or molecules, usually related to the spins of electrons, align in a regular pattern with neighboring spins (on different sublattices) pointing in opposite directions. This is, like ferromagnetism and ferrimagnetism, a manifestation of ordered magnetism. The phenomenon of antiferromagnetism was first introduced by Lev Landau in 1933.
Generally, antiferromagnetic order may exist at sufficiently low temperatures, but vanishes at and above the Néel temperature – named after Louis Néel, who had first in the West identified this type of magnetic ordering. Above the Néel temperature, the material is typically paramagnetic.
Measurement
When no external field is applied, the antiferromagnetic structure corresponds to a vanishing total magnetization. In an external magnetic field, a kind of ferrimagnetic behavior may be displayed in the antiferromagnetic phase, with the absolute value of one of the sublattice magnetizations differing from that of the other sublattice, resulting in a nonzero net magnetization. Although the net magnetization should be zero at a temperature of absolute zero, the effect of spin canting often causes a small net magnetization to develop, as seen for example in hematite.
The magnetic susceptibility of an antiferromagnetic material typically shows a maximum at the Néel temperature. In contrast, at the transition between the ferromagnetic and paramagnetic phases the susceptibility diverges. In the antiferromagnetic case, a divergence is observed in the staggered susceptibility.
Various microscopic (exchange) interactions between the magnetic moments or spins may lead to antiferromagnetic structures. In the simplest case, one may consider an Ising model on a bipartite lattice, e.g. the simple cubic lattice, with couplings between spins at nearest neighbor sites. Depending on the sign of that interaction, ferromagnetic or antiferromagnetic order will result. Geometrical frustration or competing ferro- and antiferromagnetic interactions may lead to different and, perhaps, more complicated magnetic structures.
The relationship between magnetization and the magnetizing field is non-linear, as in ferromagnetic materials. This is due to the contribution of the hysteresis loop, which for ferromagnetic materials involves a residual magnetization.
Antiferromagnetic materials
Antiferromagnetic structures were first shown through neutron diffraction of transition metal oxides such as nickel, iron, and manganese oxides. The experiments, performed by Clifford Shull, gave the first results showing that magnetic dipoles could be oriented in an antiferromagnetic structure.
Antiferromagnetic materials occur commonly among transition metal compounds, especially oxides. Examples include hematite, metals such as chromium, alloys such as iron manganese (FeMn), and oxides such as nickel oxide (NiO). There are also numerous examples among high nuclearity metal clusters. Organic molecules can also exhibit antiferromagnetic coupling under rare circumstances, as seen in radicals such as 5-dehydro-m-xylylene.
Antiferromagnets can couple to ferromagnets, for instance, through a mechanism known as exchange bias, in which the ferromagnetic film is either grown upon the antiferromagnet or annealed in an aligning magnetic field, causing the surface atoms of the ferromagnet to align with the surface atoms of the antiferromagnet. This provides the ability to "pin" the orientation of a ferromagnetic film, which provides one of the main uses in so-called spin valves, which are the basis of magnetic sensors including modern hard disk drive read heads. The temperature at or above which an antiferromagnetic layer loses its ability to "pin" the magnetization direction of an adjacent ferromagnetic layer is called the blocking temperature of that layer and is usually lower than the Néel temperature.
Geometric frustration
Unlike ferromagnetism, anti-ferromagnetic interactions can lead to multiple optimal states (ground states—states of minimal energy). In one dimension, the anti-ferromagnetic ground state is an alternating series of spins: up, down, up, down, etc. Yet in two dimensions, multiple ground states can occur.
Consider an equilateral triangle with three spins, one on each vertex. If each spin can take on only two values (up or down), there are 23 = 8 possible states of the system, six of which are ground states. The two situations which are not ground states are when all three spins are up or are all down. In any of the other six states, there will be two favorable interactions and one unfavorable one. This illustrates frustration: the inability of the system to find a single ground state. This type of magnetic behavior has been found in minerals that have a crystal stacking structure such as a Kagome lattice or hexagonal lattice.
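The count of ground states can be checked by brute force. The following Python sketch is only illustrative (the ±1 spin encoding and the energy convention, aligned neighbors costing +1, are assumptions made for the example): it enumerates all 2³ = 8 configurations of the triangle and counts how many reach the minimal energy.

from itertools import product

def energy(spins):
    # Antiferromagnetic couplings on a triangle: each aligned pair costs +1,
    # each anti-aligned pair contributes -1.
    s1, s2, s3 = spins
    return s1 * s2 + s2 * s3 + s3 * s1

states = list(product([+1, -1], repeat=3))
energies = [energy(s) for s in states]
ground = min(energies)
print(len(states))                              # 8 configurations in total
print(sum(1 for e in energies if e == ground))  # 6 of them are ground states

Since every ground state leaves exactly one of the three bonds unsatisfied, no configuration can satisfy all interactions at once, which is the frustration described above.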
Other properties
Synthetic antiferromagnets (often abbreviated by SAF) are artificial antiferromagnets consisting of two or more thin ferromagnetic layers separated by a nonmagnetic layer. Dipole coupling of the ferromagnetic layers results in antiparallel alignment of the magnetization of the ferromagnets.
Antiferromagnetism plays a crucial role in giant magnetoresistance, as was discovered in 1988 by Albert Fert and Peter Grünberg (who were awarded the Nobel Prize in 2007) using synthetic antiferromagnets.
There are also examples of disordered materials (such as iron phosphate glasses) that become antiferromagnetic below their Néel temperature. These disordered networks 'frustrate' the antiparallelism of adjacent spins; i.e. it is not possible to construct a network where each spin is surrounded by opposite neighbour spins. It can only be determined that the average correlation of neighbour spins is antiferromagnetic. This type of magnetism is sometimes called speromagnetism.
See also
References
External links
Magnetism: Models and Mechanisms in E. Pavarini, E. Koch, and U. Schollwöck: Emergent Phenomena in Correlated Matter, Jülich 2013,
Quantum phases
Magnetic ordering
Quantum lattice models
Physical phenomena | Antiferromagnetism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,343 | [
"Quantum phases",
"Physical phenomena",
"Phases of matter",
"Quantum mechanics",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Condensed matter physics",
"Matter",
"Quantum lattice models"
] |
148,555 | https://en.wikipedia.org/wiki/Spin%20glass | In condensed matter physics, a spin glass is a magnetic state characterized by randomness, besides cooperative behavior in freezing of spins at a temperature called the "freezing temperature," Tf. In ferromagnetic solids, component atoms' magnetic spins all align in the same direction. Spin glass when contrasted with a ferromagnet is defined as "disordered" magnetic state in which spins are aligned randomly or without a regular pattern and the couplings too are random. A spin glass should not be confused with a "spin-on glass". The latter is a thin film, usually based on SiO2, which is applied via spin coating.
The term "glass" comes from an analogy between the magnetic disorder in a spin glass and the positional disorder of a conventional, chemical glass, e.g., a window glass. In window glass or any amorphous solid the atomic bond structure is highly irregular; in contrast, a crystal has a uniform pattern of atomic bonds. In ferromagnetic solids, magnetic spins all align in the same direction; this is analogous to a crystal's lattice-based structure.
The individual atomic bonds in a spin glass are a mixture of roughly equal numbers of ferromagnetic bonds (where neighbors have the same orientation) and antiferromagnetic bonds (where neighbors have exactly the opposite orientation: north and south poles are flipped 180 degrees). These patterns of aligned and misaligned atomic magnets create what are known as frustrated interactions: distortions in the geometry of atomic bonds compared to what would be seen in a regular, fully aligned solid. They may also create situations where more than one geometric arrangement of atoms is stable.
There are two main aspects of spin glass. On the physical side, spin glasses are real materials with distinctive properties, a review of which was published in 1982. On the mathematical side, simple statistical mechanics models, inspired by real spin glasses, are widely studied and applied.
Spin glasses and the complex internal structures that arise within them are termed "metastable" because they are "stuck" in stable configurations other than the lowest-energy configuration (which would be aligned and ferromagnetic). The mathematical complexity of these structures is difficult but fruitful to study experimentally or in simulations; with applications to physics, chemistry, materials science and artificial neural networks in computer science.
Magnetic behavior
It is the time dependence which distinguishes spin glasses from other magnetic systems.
Above the spin glass transition temperature, Tc, the spin glass exhibits typical magnetic behaviour (such as paramagnetism).
If a magnetic field is applied as the sample is cooled to the transition temperature, magnetization of the sample increases as described by the Curie law. Upon reaching Tc, the sample becomes a spin glass, and further cooling results in little change in magnetization. This is referred to as the field-cooled magnetization.
When the external magnetic field is removed, the magnetization of the spin glass falls rapidly to a lower value known as the remanent magnetization.
Magnetization then decays slowly as it approaches zero (or some small fraction of the original value; this remains unknown). This decay is non-exponential, and no simple function can fit the curve of magnetization versus time adequately. This slow decay is particular to spin glasses. Experimental measurements on the order of days have shown continual changes above the noise level of instrumentation.
Spin glasses differ from ferromagnetic materials by the fact that after the external magnetic field is removed from a ferromagnetic substance, the magnetization remains indefinitely at the remanent value. Paramagnetic materials differ from spin glasses by the fact that, after the external magnetic field is removed, the magnetization rapidly falls to zero, with no remanent magnetization. The decay is rapid and exponential.
If the sample is cooled below Tc in the absence of an external magnetic field, and a magnetic field is applied after the transition to the spin glass phase, there is a rapid initial increase to a value called the zero-field-cooled magnetization. A slow upward drift then occurs toward the field-cooled magnetization.
Surprisingly, the sum of the two complicated functions of time (the zero-field-cooled and remanent magnetizations) is a constant, namely the field-cooled value, and thus both share identical functional forms with time, at least in the limit of very small external fields.
Edwards–Anderson model
This is similar to the Ising model. In this model, we have spins arranged on a d-dimensional lattice with only nearest neighbor interactions. This model can be solved exactly for the critical temperatures, and a glassy phase is observed to exist at low temperatures. The Hamiltonian for this spin system is given by: H = −Σ⟨ij⟩ J_ij S_i S_j,
where S_i refers to the Pauli spin matrix for the spin-half particle at lattice point i, and the sum over ⟨ij⟩ refers to summing over neighboring lattice points i and j. A negative value of J_ij denotes an antiferromagnetic type interaction between spins at points i and j. The sum runs over all nearest neighbor positions on a lattice, of any dimension. The variables J_ij representing the magnetic nature of the spin-spin interactions are called bond or link variables.
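As a concrete illustration of this Hamiltonian (a minimal sketch, not taken from any particular reference; the lattice size, the unit-variance Gaussian couplings and the function names are assumptions made for the example), the energy of an Ising configuration on a small two-dimensional lattice with nearest-neighbor random bonds can be evaluated directly:

import numpy as np

rng = np.random.default_rng(0)
L = 4                                     # a 4 x 4 square lattice
spins = rng.choice([-1, 1], size=(L, L))  # an arbitrary Ising configuration

# Independent Gaussian couplings for the horizontal and vertical bonds (mean 0, variance 1).
J_right = rng.normal(0.0, 1.0, size=(L, L))
J_down = rng.normal(0.0, 1.0, size=(L, L))

def ea_energy(spins, J_right, J_down):
    # H = - sum over nearest-neighbor bonds of J_ij * S_i * S_j, with periodic boundaries.
    e = 0.0
    e -= np.sum(J_right * spins * np.roll(spins, -1, axis=1))
    e -= np.sum(J_down * spins * np.roll(spins, -1, axis=0))
    return e

print(ea_energy(spins, J_right, J_down))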
In order to determine the partition function for this system, one needs to average the free energy f[J_ij] = −(1/β) ln Z[J_ij], where Z[J_ij] is the partition function for a fixed realization of the couplings and β = 1/(k_B T), over all possible values of J_ij. The distribution of values of J_ij is taken to be a Gaussian with mean J_0 and variance J²: P(J_ij) = (1/(J√(2π))) exp(−(J_ij − J_0)²/(2J²)).
Solving for the free energy using the replica method, below a certain temperature, a new magnetic phase called the spin glass phase (or glassy phase) of the system is found to exist, which is characterized by a vanishing magnetization m = 0 along with a non-vanishing value of the two point correlation function between spins at the same lattice point but at two different replicas: q = ⟨S_i^α S_i^β⟩ ≠ 0,
where α and β are replica indices. The order parameter for the ferromagnetic to spin glass phase transition is therefore q, and that for paramagnetic to spin glass is again q. Hence the new set of order parameters describing the three magnetic phases consists of both m and q.
Under the assumption of replica symmetry, the mean-field free energy can be written in closed form in terms of m and q.
Sherrington–Kirkpatrick model
In addition to unusual experimental properties, spin glasses are the subject of extensive theoretical and computational investigations. A substantial part of early theoretical work on spin glasses dealt with a form of mean-field theory based on a set of replicas of the partition function of the system.
An important, exactly solvable model of a spin glass was introduced by David Sherrington and Scott Kirkpatrick in 1975. It is an Ising model with long range frustrated ferro- as well as antiferromagnetic couplings. It corresponds to a mean-field approximation of spin glasses describing the slow dynamics of the magnetization and the complex non-ergodic equilibrium state.
Unlike the Edwards–Anderson (EA) model, although only two-spin interactions are considered, the range of each interaction can be potentially infinite (of the order of the size of the lattice). Therefore, any two spins can be linked with a ferromagnetic or an antiferromagnetic bond, and the distribution of these is given exactly as in the case of the Edwards–Anderson model. The Hamiltonian for the SK model is very similar to the EA model: H = −Σ_{i<j} J_ij S_i S_j,
where J_ij and S_i have the same meanings as in the EA model. The equilibrium solution of the model, after some initial attempts by Sherrington, Kirkpatrick and others, was found by Giorgio Parisi in 1979 with the replica method. The subsequent work of interpretation of the Parisi solution (by M. Mezard, G. Parisi, M.A. Virasoro and many others) revealed the complex nature of a glassy low temperature phase characterized by ergodicity breaking, ultrametricity and non-selfaverageness. Further developments led to the creation of the cavity method, which allowed study of the low temperature phase without replicas. A rigorous proof of the Parisi solution has been provided in the work of Francesco Guerra and Michel Talagrand.
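For comparison with the Edwards–Anderson sketch above, the fully connected SK energy can be written down just as directly. This is again only an illustrative sketch: the 1/sqrt(N) scale of the couplings is the usual choice that keeps the energy extensive, and the variable names are invented for the example.

import numpy as np

rng = np.random.default_rng(1)
N = 100
spins = rng.choice([-1, 1], size=N)

# Gaussian couplings between every pair of spins, with standard deviation 1/sqrt(N);
# only the upper triangle is kept so that each pair i < j is counted once.
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
J = np.triu(J, k=1)

def sk_energy(spins, J):
    # H = - sum_{i<j} J_ij * S_i * S_j over all pairs of spins.
    return -spins @ J @ spins

print(sk_energy(spins, J) / N)   # energy per spin of this random configuration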
Phase diagram
When there is a uniform external magnetic field of magnitude h, the energy function becomes H = −Σ_{i<j} J_ij S_i S_j − h Σ_i S_i. Let all couplings J_ij be IID samples from the Gaussian distribution of mean 0 and variance J²/N. In 1979, J.R.L. de Almeida and David Thouless found that, as in the case of the Ising model, the mean-field solution to the SK model becomes unstable in the low-temperature, low-magnetic-field region.
The stability region on the phase diagram of the SK model is determined by two dimensionless parameters, T/J and h/J. Its phase diagram has two parts, divided by the de Almeida–Thouless curve, which is the solution set of a pair of coupled equations for the replica-symmetric order parameter. The zero-field phase transition occurs at T = J. Just below it, the curve behaves as (h/J)² ≈ (4/3)(1 − T/J)³, and in the low-temperature, high-magnetic-field limit the line falls off exponentially in (h/J)².
Infinite-range model
This is also called the "p-spin model". The infinite-range model is a generalization of the Sherrington–Kirkpatrick model where we consider not only two-spin interactions but also p-spin interactions, where p ≤ N and N is the total number of spins. Unlike the Edwards–Anderson model, but similar to the SK model, the interaction range is infinite. The Hamiltonian for this model is described by: H = −Σ_{i1<...<ip} J_{i1...ip} S_{i1} ··· S_{ip},
where J_{i1...ip} and S_i have similar meanings as in the EA model. The p → ∞ limit of this model is known as the random energy model. In this limit, the probability of the spin glass existing in a particular state depends only on the energy of that state and not on the individual spin configurations in it.
A Gaussian distribution of magnetic bonds across the lattice is usually assumed to solve this model. Any other distribution is expected to give the same result, as a consequence of the central limit theorem. The Gaussian distribution function is taken to have mean J_0 and variance J².
The order parameters for this system are given by the magnetization m and the two point spin correlation q between spins at the same site i, in two different replicas, which are the same as for the SK model. This infinite range model can be solved explicitly for the free energy in terms of m and q, under the assumption of replica symmetry as well as 1-Replica Symmetry Breaking.
Non-ergodic behavior and applications
A thermodynamic system is ergodic when, given any (equilibrium) instance of the system, it eventually visits every other possible (equilibrium) state (of the same energy). One characteristic of spin glass systems is that, below the freezing temperature , instances are trapped in a "non-ergodic" set of states: the system may fluctuate between several states, but cannot transition to other states of equivalent energy. Intuitively, one can say that the system cannot escape from deep minima of the hierarchically disordered energy landscape; the distances between minima are given by an ultrametric, with tall energy barriers between minima. The participation ratio counts the number of states that are accessible from a given instance, that is, the number of states that participate in the ground state. The ergodic aspect of spin glass was instrumental in the awarding of half the 2021 Nobel Prize in Physics to Giorgio Parisi.
For physical systems, such as dilute manganese in copper, the freezing temperature is typically as low as 30 kelvins (−240 °C), and so the spin-glass magnetism appears to be practically without applications in daily life. The non-ergodic states and rugged energy landscapes are, however, quite useful in understanding the behavior of certain neural networks, including Hopfield networks, as well as many problems in computer science optimization and genetics.
Spin-glass without structural disorder
Elemental crystalline neodymium is paramagnetic at room temperature and becomes an antiferromagnet with incommensurate order upon cooling below 19.9 K. Below this transition temperature it exhibits a complex set of magnetic phases that have long spin relaxation times and spin-glass behavior that does not rely on structural disorder.
History
A detailed account of the history of spin glasses from the early 1960s to the late 1980s can be found in a series of popular articles by Philip W. Anderson in Physics Today.
Discovery
In the 1930s, materials scientists discovered the Kondo effect, where the resistivity of nominally pure gold reaches a minimum at 10 K, and similarly for nominally pure Cu at 2 K. It was later understood that the Kondo effect occurs when a nonmagnetic metal contains a very small fraction of magnetic atoms (i.e., at high dilution).
Unusual behavior was observed in iron-in-gold alloy (AuFe) and manganese-in-copper alloy (CuMn) at around 1 to 10 atom percent. Cannella and Mydosh observed in 1972 that AuFe had an unexpected cusplike peak in the a.c. susceptibility at a well defined temperature, which would later be termed spin glass freezing temperature.
Such a material was also called a "mictomagnet" (micto- is Greek for "mixed"). The term arose from the observation that these materials often contain a mix of ferromagnetic (J > 0) and antiferromagnetic (J < 0) interactions, leading to their disordered magnetic structure. This term fell out of favor as the theoretical understanding of spin glasses evolved, recognizing that the magnetic frustration arises not just from a simple mixture of ferro- and antiferromagnetic interactions, but from their randomness and frustration in the system.
Sherrington–Kirkpatrick model
Sherrington and Kirkpatrick proposed the SK model in 1975, and solved it by the replica method. They discovered that at low temperatures, its entropy becomes negative, which they thought was because the replica method is a heuristic method that does not apply at low temperatures.
It was then discovered that the replica method was correct, but the problem was that the low-temperature broken symmetry in the SK model cannot be characterized purely by the Edwards–Anderson order parameter. Instead, further order parameters are necessary, which led to the replica symmetry breaking ansatz of Giorgio Parisi. In the full replica symmetry breaking ansatz, infinitely many order parameters are required to characterize a stable solution.
Applications
The formalism of replica mean-field theory has also been applied in the study of neural networks, where it has enabled calculations of properties such as the storage capacity of simple neural network architectures without requiring a training algorithm (such as backpropagation) to be designed or implemented.
More realistic spin glass models with short range frustrated interactions and disorder, like the Gaussian model where the couplings between neighboring spins follow a Gaussian distribution, have been studied extensively as well, especially using Monte Carlo simulations. These models display spin glass phases bordered by sharp phase transitions.
Besides its relevance in condensed matter physics, spin glass theory has acquired a strongly interdisciplinary character, with applications to neural network theory, computer science, theoretical biology, econophysics etc.
Spin glass models were adapted to the folding funnel model of protein folding.
See also
Amorphous magnet
Antiferromagnetic interaction
Cavity method
Crystal structure
Geometrical frustration
Orientational glass
Phase transition
Quenched disorder
Random energy model
Replica trick
Solid-state physics
Spin ice
Notes
References
Literature
Expositions
Popular exposition, with a minimal amount of mathematics.
A practical tutorial introduction.
Textbook that focuses on the cavity method and the applications to computer science, especially constraint satisfaction problems; the first 15 chapters of the 2008 draft version are available at www.stat.ucla.edu.
Introduction focused on computer science applications, including neural networks.
Focuses on the experimentally measurable properties of spin glasses (such as copper-manganese alloy).
Covers mean field theory, experimental data, and numerical simulations.
Early exposition containing the pre-1990 breakthroughs, such as the replica trick.
Approach via statistical field theory.
Compendium of rigorously provable results.
External links
Statistics of frequency of the term "Spin glass" in arxiv.org
Magnetic ordering
Mathematical physics | Spin glass | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 3,289 | [
"Applied mathematics",
"Theoretical physics",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Condensed matter physics",
"Mathematical physics"
] |
148,819 | https://en.wikipedia.org/wiki/Foucault%27s%20measurements%20of%20the%20speed%20of%20light | In 1850, Léon Foucault used a rotating mirror to perform a differential measurement of the speed of light in water versus its speed in air. In 1862, he used a similar apparatus to measure the speed of light in the air.
Background
In 1834, Charles Wheatstone developed a method of using a rapidly rotating mirror to study transient phenomena, and applied this method to measure the velocity of electricity in a wire and the duration of an electric spark. He communicated to François Arago the idea that his method could be adapted to a study of the speed of light.
The early-to-mid 1800s were a period of intense debate on the particle-versus-wave nature of light. Although the observation of the Arago spot in 1819 may have seemed to settle the matter definitively in favor of Fresnel's wave theory of light, various concerns continued to appear to be addressed more satisfactorily by Newton's corpuscular theory. Arago expanded upon Wheatstone's concept in an 1838 publication, suggesting that a differential comparison of the speed of light in the air versus water would serve to distinguish between the particle and wave theories of light.
Foucault had worked with Hippolyte Fizeau on projects such as using the Daguerreotype process to take images of the Sun between 1843 and 1845 and characterizing absorption bands in the infrared spectrum of sunlight in 1847. In 1845, Arago suggested to Fizeau and Foucault that they attempt to measure the speed of light. Sometime in 1849, however, it appears that the two had a falling out, and they parted ways. In 1848−49, Fizeau used, not a rotating mirror, but a toothed wheel apparatus to perform an absolute measurement of the speed of light in air.
In 1850, Fizeau and Foucault both used rotating mirror devices to perform relative measures of the speed of light in the air versus water.
Foucault employed Paul-Gustave Froment to build a rotary-mirror apparatus in which he split a beam of light into two beams, passing one through the water while the other traveled through air. On 27 April 1850, he confirmed that the speed of light was greater as it traveled through the air, seemingly validating the wave theory of light.
With Arago's blessing, Fizeau employed L.F.C. Breguet to construct his apparatus. They achieved their result on 17 June 1850, seven weeks after Foucault.
To achieve the high rotational speeds necessary, Foucault abandoned clockwork and used a carefully balanced steam-powered apparatus designed by Charles Cagniard de la Tour. Foucault originally used tin-mercury mirrors; however, at speeds exceeding 200 rps the reflecting layer would break off, so he switched to new silver mirrors.
Foucault's determination of the speed of light
1850 experiment
In 1850, Léon Foucault measured the relative speeds of light in air and water. The experiment was proposed by Arago, who wrote,
The apparatus (Figure 1) involves light passing through slit S, reflecting off a mirror R, and forming an image of the slit on the distant stationary mirror M. The light then passes back to mirror R and is reflected back to the original slit. If mirror R is stationary, then the slit image will reform at S.
However, if the mirror R is rotating, it will have moved slightly in the time it takes for the light to bounce from R to M and back, and the light will be deflected away from the original source by a small angle, forming an image to the side of the slit.
Foucault measured the differential speed of light through air versus water by using two distant mirrors (Figure 2). He placed a 3-meter tube of water before one of them. The light passing through the slower medium has its image more displaced. By partially masking the air-path mirror, Foucault was able to distinguish the two images super-imposed on top of one another. He found the speed of light was slower in water than in air.
This experiment did not determine the absolute speeds of light in water or air, only their relative speeds. The rotational speed of the mirror could not be sufficiently accurately measured to determine the absolute speeds of light in water or air. With a rotational speed of 600-800 revolutions per second, the displacement was 0.2 to 0.3 mm.
Guided by similar motivations as his former partner, Foucault in 1850 was more interested in settling the particle-versus-wave debate than in determining an accurate absolute value for the speed of light. His experimental results, announced shortly before Fizeau announced his results on the same topic, were viewed as "driving the last nail in the coffin" of Newton's corpuscle theory of light when it showed that light travels more slowly through water than through air. Newton had explained refraction as a pull of the medium upon the light, implying an increased speed of light in the medium. The corpuscular theory of light went into abeyance, completely overshadowed by the wave theory. This state of affairs lasted until 1905, when Einstein presented heuristic arguments that under various circumstances, such as when considering the photoelectric effect, light exhibits behaviors indicative of a particle nature.
For his efforts, Foucault was made chevalier of the Légion d'honneur, and in 1853 was awarded a doctorate from the Sorbonne.
1862 experiment
In Foucault's 1862 experiment, he desired to obtain an accurate absolute value for the speed of light, since his concern was to deduce an improved value for the astronomical unit. At the time, Foucault was working at the Paris Observatory under Urbain le Verrier. It was le Verrier's belief, based on extensive celestial mechanics calculations, that the consensus value for the speed of light was perhaps 4% too high. Technical limitations prevented Foucault from separating mirrors R and M by more than about 20 meters. Despite this limited path length, Foucault was able to measure the displacement of the slit image (less than 1 mm) with considerable accuracy. In addition, unlike the case with Fizeau's experiment (which required gauging the rotation rate of an adjustable-speed toothed wheel), he could spin the mirror at a constant, chronometrically determined speed. Foucault's measurement confirmed le Verrier's estimate. His 1862 figure for the speed of light (298000 km/s) was within 0.6% of the modern value.
As seen in Figure 3, the displaced image of the source (slit) is at an angle 2θ from the source direction.
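The size of the deflection can be estimated with a few lines of arithmetic. The numbers below are illustrative rather than Foucault's actual raw data (the rotation rate and the lever arm from the rotating mirror to the measuring plane are assumptions chosen to match the description above): the mirror turns through θ = ω · (2·RM/c) during the light's round trip, and the returning beam is deviated by 2θ.

import math

c = 2.998e8            # speed of light, m/s
RM = 20.0              # rotating-mirror-to-fixed-mirror distance, m (1862 setup)
rev_per_s = 500.0      # assumed rotation rate, revolutions per second
lever_arm = 1.0        # assumed distance from rotating mirror to the measuring plane, m

omega = 2.0 * math.pi * rev_per_s        # angular speed, rad/s
round_trip = 2.0 * RM / c                # travel time R -> M -> R, s
deflection = 2.0 * omega * round_trip    # angular deviation of the returned beam, rad

print(deflection)                        # about 8e-4 rad
print(deflection * lever_arm * 1000.0)   # displacement in mm, of order 1 mm

A sub-millimetre displacement of this kind is consistent with the measured shift of less than 1 mm quoted above.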
Michelson's refinement of the Foucault experiment
It was seen in Figure 1 that Foucault placed the rotating mirror R as close as possible to lens L so as to maximize the distance between R and the slit S. As R rotates, an enlarged image of slit S sweeps across the face of the distant mirror M. The greater the distance RM, the more quickly that the image sweeps across mirror M and the less light is reflected back. Foucault could not increase the RM distance in his folded optical arrangement beyond about 20 meters without the image of the slit becoming too dim to accurately measure.
Between 1877 and 1931, Albert A. Michelson made multiple measurements of the speed of light. His 1877–79 measurements were performed under the auspices of Simon Newcomb, who was also working on measuring the speed of light. Michelson's setup incorporated several refinements on Foucault's original arrangement. As seen in Figure 4, Michelson placed the rotating mirror R near the principal focus of lens L (i.e. the focal point given incident parallel rays of light). If the rotating mirror R were exactly at the principal focus, the moving image of the slit would remain upon the distant plane mirror M (equal in diameter to lens L) as long as the axis of the pencil of light remained on the lens, this being true regardless of the RM distance. Michelson was thus able to increase the RM distance to nearly 2000 feet. To achieve a reasonable value for the RS distance, Michelson used an extremely long focal length lens (150 feet) and compromised on the design by placing R about 15 feet closer to L than the principal focus. This allowed an RS distance of between 28.5 and 33.3 feet. He used carefully calibrated tuning forks to monitor the rotation rate of the air-turbine-powered mirror R, and he would typically measure displacements of the slit image on the order of 115 mm. His 1879 figure for the speed of light, 299944±51 km/s, was within about 0.05% of the modern value. His 1926 repeat of the experiment incorporated still further refinements such as the use of polygonal prism-shaped rotating mirrors (enabling a brighter image) having from eight to sixteen facets and a 22 mile baseline surveyed to fractional parts-per-million accuracy. His figure of 299,796±4 km/s was only about 4 km/s higher than the current accepted value. Michelson's final 1931 attempt to measure the speed of light in vacuum was interrupted by his death. Although his experiment was completed posthumously by F. G. Pease and F. Pearson, various factors militated against a measurement of highest accuracy, including an earthquake which disturbed the baseline measurement.
See also
Speed of light § Measurement
Fizeau's measurement of the speed of light in water
Fizeau's measurement of the speed of light in air
Notes
References
External links
Relative speed of light measurements
"Sur un système d'expériences à l'aide duquel la théorie de l'émission et celle des ondes seront soumises à des épreuves décisives." by F. Arago (1838)
Sur les vitesses relatives de la lumière dans l'air et dans l'eau / par Léon Foucault (1853)
"Sur l'Experience relative a la vitesse comparative de la lumiere dans l'air et dans l'eau." by H. Fizeau and L. Breguet (1850)
Absolute speed of light measurements
Mesure de la vitesse de la lumière ; Étude optique des surfaces / mémoires de Léon Foucault (1913)
Classroom demonstrations
Speed of Light (The Foucault Method)
Measuring the Speed of Light (video, Foucault method) BYU Physics & Astronomy
Optical metrology
Physics experiments | Foucault's measurements of the speed of light | [
"Physics"
] | 2,181 | [
"Experimental physics",
"Physics experiments"
] |
149,261 | https://en.wikipedia.org/wiki/Coastal%20erosion | Coastal erosion is the loss or displacement of land, or the long-term removal of sediment and rocks along the coastline due to the action of waves, currents, tides, wind-driven water, waterborne ice, or other impacts of storms. The landward retreat of the shoreline can be measured and described over a temporal scale of tides, seasons, and other short-term cyclic processes. Coastal erosion may be caused by hydraulic action, abrasion, impact and corrosion by wind and water, and other forces, natural or unnatural.
On non-rocky coasts, coastal erosion results in rock formations in areas where the coastline contains rock layers or fracture zones with varying resistance to erosion. Softer areas become eroded much faster than harder ones, which typically results in landforms such as tunnels, bridges, columns, and pillars. Over time the coast generally evens out: the softer areas fill up with sediment eroded from hard areas, and rock formations are eroded away. Erosion also commonly happens in areas where there are strong winds, loose sand, and soft rocks. The blowing of millions of sharp sand grains creates a sandblasting effect, which helps to erode, smooth and polish rocks. The definition of erosion is grinding and wearing away of rock surfaces through the mechanical action of other rock or sand particles.
According to the IPCC, sea level rise caused by climate change will increase coastal erosion worldwide, significantly changing the coasts and low-lying coastal areas.
Coastal erosion
Hydraulic action
Hydraulic action occurs when waves striking a cliff face compress air in cracks on the cliff face. This exerts pressure on the surrounding rock, and can progressively splinter and remove pieces. Over time, the cracks can grow, sometimes forming a cave. The splinters fall to the sea bed where they are subjected to further wave action.
Attrition
Attrition occurs when waves cause loose pieces of rock debris (scree) to collide with each other, grinding and chipping each other, progressively becoming smaller, smoother and rounder. Scree also collides with the base of the cliff face, chipping small pieces of rock from the cliff or have a corrasion (abrasion) effect, similar to sandpapering.
Solution
Solution is the process in which acids contained in sea water will dissolve some types of rock such as chalk or limestone.
Abrasion
Abrasion, also known as corrasion, occurs when waves break on cliff faces and slowly erode it. As the sea pounds cliff faces it also uses the scree from other wave actions to batter and break off pieces of rock from higher up the cliff face which can be used for this same wave action and attrition.
Corrosion
Corrosion or solution/chemical weathering occurs when the sea's pH (anything below pH 7.0) corrodes rocks on a cliff face. Limestone cliff faces, which have a moderately high pH, are particularly affected in this way. Wave action also increases the rate of reaction by removing the reacted material.
Factors that influence erosion rates
Primary factors
The ability of waves to cause erosion of the cliff face depends on many factors.
The hardness (or inversely, the erodibility) of sea-facing rocks is controlled by the rock strength and the presence of fissures, fractures, and beds of non-cohesive materials such as silt and fine sand.
The rate at which cliff fall debris is removed from the foreshore depends on the power of the waves crossing the beach. This energy must reach a critical level to remove material from the debris lobe. Debris lobes can be very persistent and can take many years to completely disappear.
Beaches dissipate wave energy on the foreshore and provide a measure of protection to the adjoining land.
The stability of the foreshore, or its resistance to lowering. Once stable, the foreshore should widen and become more effective at dissipating the wave energy, so that fewer and less powerful waves reach beyond it. The provision of updrift material coming onto the foreshore beneath the cliff helps to ensure a stable beach.
The adjacent bathymetry, or configuration of the seafloor, controls the wave energy arriving at the coast, and can have an important influence on the rate of cliff erosion. Shoals and bars offer protection from wave erosion by causing storm waves to break and dissipate their energy before reaching the shore. Given the dynamic nature of the seafloor, changes in the location of shoals and bars may cause the locus of beach or cliff erosion to change position along the shore.
Coastal erosion has been greatly affected by rising sea levels globally. Significant increases in coastal erosion have been measured along the Eastern seaboard of the United States. Locations such as Florida have seen increased coastal erosion. In reaction to these increases, Florida and its individual counties have increased budgets to replenish the eroded sand that attracts visitors to Florida and helps support its multibillion-dollar tourism industry.
Secondary factors
Weathering and transport slope processes
Slope hydrology
Vegetation
Cliff foot erosion
Cliff foot sediment accumulation
Resistance of cliff foot sediment to attrition and transport
Human activities
Tertiary factors
Resource extraction
Coastal management
Control methods
There are three common forms of coastal erosion control methods. These three include: soft-erosion controls, hard-erosion controls, and relocation.
Hard-erosion controls
Hard-erosion control methods provide a more permanent solution than soft-erosion control methods. Seawalls and groynes serve as semi-permanent infrastructure. These structures are not immune from normal wear-and-tear and will have to be refurbished or rebuilt. It is estimated the average life span of a seawall is 50–100 years and the average for a groyne is 30–40 years. Because of their relative permanence, it is assumed that these structures can be a final solution to erosion. Seawalls can also deprive public access to the beach and drastically alter the natural state of the beach. Groynes also drastically alter the natural state of the beach. Some claim that groynes could reduce the interval between beach nourishment projects, though they are not seen as a substitute for beach nourishment. Other criticisms of seawalls are that they can be expensive, difficult to maintain, and can sometimes cause further damage to the beach if built improperly. As more is learned about hard-erosion controls, it has become clear that these structural solutions often cause more problems than they solve: they interfere with natural water currents and prevent sand from shifting along coasts, they are costly to install and maintain, they tend to cause erosion of adjacent beaches and dunes, and they can divert stormwater onto other properties.
Natural forms of hard-erosion control include planting or maintaining native vegetation, such as mangrove forests and coral reefs.
Soft-erosion controls
Soft erosion strategies refer to temporary options for slowing the effects of erosion. These options, including sandbags and beach nourishment, are not intended to be long-term or permanent solutions. Another method, beach scraping or beach bulldozing, allows for the creation of an artificial dune in front of a building or serves as a means of preserving a building foundation. However, there is a U.S. federal moratorium on beach bulldozing during turtle nesting season, 1 May – 15 November. One of the most common methods of soft erosion control is beach nourishment projects. These projects involve dredging sand and moving it to the beaches as a means of reestablishing the sand lost due to erosion. In some situations, beach nourishment is not a suitable measure to take for erosion control, such as in areas with sand sinks or frequent and large storms.
Dynamic revetment, which uses loose cobble to mimic the function of a natural storm beach, may be a soft-erosion control alternative in high energy environments such as open coastlines.
Over the years beach nourishment has become a very controversial shore protection measure: It has the potential to negatively impact several of the natural resources. Some large issues with these beach nourishment projects are that they must follow a wide range of complex laws and regulations, as well as the high expenses it takes to complete these projects. Just because sand is added to a beach does not mean it will stay there. Some communities will bring in large volumes of sand repeatedly only for it to be washed away with the next big storm. Despite these factors, beach nourishment is still used often in many communities. Lately, the U.S. Army Corps of Engineers emphasized the need to consider a whole new range of solutions to coastal erosion, not just structural solutions. Solutions that have potential include native vegetation, wetland protection and restoration, and relocation or removal of structures and debris.
Living Shorelines
The solutions to coastal erosion that include vegetation are called "living shorelines". Living shorelines use plants and other natural elements. Living shorelines are found to be more resilient against storms, improve water quality, increase biodiversity, and provide fishery habitats. Marshes and oyster reefs are examples of vegetation that can be used for living shorelines; they act as natural barriers to waves. Fifteen feet of marsh can absorb fifty percent of the energy of incoming waves.
Relocation
Relocation of infrastructure and housing farther away from the coast is also an option. The natural processes of both absolute and relative sea level rise and erosion are considered in rebuilding. Depending on factors such as the severity of the erosion, as well as the natural landscape of the property, relocation could simply mean moving inland by a short distance, or it can mean completely removing improvements from an area. A coproduction approach combined with managed retreat has been proposed as a solution that keeps in mind environmental justice. Typically, there has been low public support for "retreating". However, if a community does decide to relocate its buildings along the coast, it is common that it will then turn the land into public open space or transfer it to land trusts in order to protect it. These relocation practices are very cost-efficient, can buffer storm surges, safeguard coastal homes and businesses, lower carbon and other pollutants, create nursery habitats for important fish species, restore open space and wildlife, and bring back the culture of these coastal communities.
Tracking
Storms can cause erosion hundreds of times faster than normal weather. Before-and-after comparisons can be made using data gathered by manual surveying, laser altimeter, or a GPS unit mounted on an ATV. Remote sensing data such as Landsat scenes can be used for large-scale and multi-year assessments of coastal erosion. Moreover, geostatistical models can be applied to quantify erosion effects and the natural temporal and spatial evolution of tracked coastal profiles. The results can be used to determine the required temporal and spatial distances between the measured profiles for economic tracking.
Examples
A place where erosion of a cliffed coast has occurred is at Wamberal in the Central Coast region of New South Wales, where houses built on top of the cliffs began to collapse into the sea. This is due to waves causing erosion of the primarily sedimentary material on which the buildings' foundations sit.
Dunwich, the capital of the English medieval wool trade, disappeared over the period of a few centuries due to redistribution of sediment by waves. Human interference can also increase coastal erosion: Hallsands in Devon, England, was a coastal village washed away over the course of a year, 1917, directly due to earlier dredging of shingle in the bay in front of it.
The California coast, which has soft cliffs of sedimentary rock and is heavily populated, regularly has incidents of house damage as cliffs erodes. Devil's Slide, Santa Barbara, the coast just north of Ensenada, and Malibu are regularly affected.
The Holderness coastline on the east coast of England, just north of the Humber Estuary, is one of the fastest eroding coastlines in Europe due to its soft clay cliffs and powerful waves. Groynes and other artificial measures to keep it under control have only accelerated the process further down the coast, because longshore drift starves the beaches of sand, leaving them more exposed. The white cliffs of Dover have also been affected.
The coastline of North Cove, Washington has been eroding at a rate of over 100 feet per year, earning the area the nickname "Washaway Beach". Much of the original town has collapsed into the ocean. The area is said to be the fastest-eroding shore of the United States' West Coast. Measures were finally taken to slow the erosion, with substantial slowing of the process noted in 2018.
Fort Ricasoli, a historic 17th century fortress in Malta is being threatened by coastal erosion, as it was built on a fault in the headland which is prone to erosion. A small part of one of the bastion walls has already collapsed since the land under it has eroded, and there are cracks in other walls as well.
In El Campello, Spain, the erosion and failure of a Roman fish farm excavated from rock during the first century B.C. were exacerbated by the construction of a nearby sports harbour.
Hampton-on-Sea, located in Kent, England, suffered from this problem as well. It was at one time very popular for its oyster fishing and was very reliant on the sea. Hampton-on-Sea had undergone the effects of coastal erosion since before the 1800s, and its coastal erosion worsened with global warming and climate change, which cause a rise in sea level, more intense and frequent storms, and an increase in ocean temperature and precipitation levels. Another reason Hampton-on-Sea had such a severe case of coastal erosion was an increase in the frequency and intensity of the storms it experienced. These natural events destroyed the Hampton Pier, Hernecliffe Gardens, a set of villas, several roads, and many other structures that once lay on Hampton-on-Sea. After this destruction, construction of a sea wall began in 1899 to protect the remaining land and buildings. However, the sea wall did not offer much help: buildings continued to be affected by the erosion, and a storm eventually broke the sea wall and flooded the land behind it. These events caused many land investors to back out. Eventually, Hampton-on-Sea had to be abandoned because the erosion overtook so much of the land. By 1916 Hampton-on-Sea had been completely abandoned, and by the 1920s only a couple of structures still stood. It was at that point that Hampton-on-Sea was said to have finally drowned. Today only three landmarks have survived: The Hampton Inn, The Hampton Pier, and a few roads. Although The Hampton Pier is not the same size as the original, it is still available for people to fish from.
See also
Beach erosion and accretion
Beach evolution
Beach morphodynamics
Beach nourishment
Raised beach
Modern recession of beaches
Paleoshoreline
Integrated coastal zone management
Coastal management, to prevent coastal erosion and creation of beach
Coastal and oceanic landforms
Coastal development hazards
Coastal geography
Coastal engineering
Coastal and Estuarine Research Federation (CERF)
Erosion
Bioerosion
Blowhole
Natural arch
Wave-cut platform
Longshore drift
Deposition (sediment)
Coastal sediment supply
Sand dune stabilization
Submersion
References
Works cited
External links
Sustainable coastal erosion management in Europe
Environment Agency guide to coastal erosion
Wave Erosion
Examine an example of wave erosion
Erosion & Flooding in the Parish of Easington
Some interesting teaching resources
Examples of coastal landforms
US Economic Costs of Coastal Erosion & Inundation NOAA Economics
British Geological Survey coastal erosion and landslides case studies
Shoreline retreat and beach nourishment are projected to increase in Southern California
Images
Images of Coastal features
Coastal construction
Coastal engineering
Coastal geography
Geomorphology | Coastal erosion | [
"Engineering"
] | 3,208 | [
"Construction",
"Coastal engineering",
"Coastal construction",
"Civil engineering"
] |
149,289 | https://en.wikipedia.org/wiki/Sequence%20alignment | In bioinformatics, a sequence alignment is a way of arranging the sequences of DNA, RNA, or protein to identify regions of similarity that may be a consequence of functional, structural, or evolutionary relationships between the sequences. Aligned sequences of nucleotide or amino acid residues are typically represented as rows within a matrix. Gaps are inserted between the residues so that identical or similar characters are aligned in successive columns.
Sequence alignments are also used for non-biological sequences such as calculating the distance cost between strings in a natural language, or to display financial data.
Interpretation
If two sequences in an alignment share a common ancestor, mismatches can be interpreted as point mutations and gaps as indels (that is, insertion or deletion mutations) introduced in one or both lineages in the time since they diverged from one another. In sequence alignments of proteins, the degree of similarity between amino acids occupying a particular position in the sequence can be interpreted as a rough measure of how conserved a particular region or sequence motif is among lineages. The absence of substitutions, or the presence of only very conservative substitutions (that is, the substitution of amino acids whose side chains have similar biochemical properties) in a particular region of the sequence, suggest that this region has structural or functional importance. Although DNA and RNA nucleotide bases are more similar to each other than are amino acids, the conservation of base pairs can indicate a similar functional or structural role.
Alignment methods
Very short or very similar sequences can be aligned by hand. However, most interesting problems require the alignment of lengthy, highly variable or extremely numerous sequences that cannot be aligned solely by human effort. Various algorithms have been devised to produce high-quality sequence alignments, and human judgment is occasionally applied in adjusting the final results to reflect patterns that are difficult to represent algorithmically (especially in the case of nucleotide sequences). Computational approaches to sequence alignment generally fall into two categories: global alignments and local alignments. Calculating a global alignment is a form of global optimization that "forces" the alignment to span the entire length of all query sequences. By contrast, local alignments identify regions of similarity within long sequences that are often widely divergent overall. Local alignments are often preferable, but can be more difficult to calculate because of the additional challenge of identifying the regions of similarity. A variety of computational algorithms have been applied to the sequence alignment problem. These include slow but formally correct methods like dynamic programming, as well as efficient heuristic or probabilistic methods designed for large-scale database searches, which do not guarantee finding the best matches.
Representations
Alignments are commonly represented both graphically and in text format. In almost all sequence alignment representations, sequences are written in rows arranged so that aligned residues appear in successive columns. In text formats, aligned columns containing identical or similar characters are indicated with a system of conservation symbols. As in the image above, an asterisk or pipe symbol is used to show identity between two columns; other less common symbols include a colon for conservative substitutions and a period for semiconservative substitutions. Many sequence visualization programs also use color to display information about the properties of the individual sequence elements; in DNA and RNA sequences, this equates to assigning each nucleotide its own color. In protein alignments, such as the one in the image above, color is often used to indicate amino acid properties to aid in judging the conservation of a given amino acid substitution. For multiple sequences the last row in each column is often the consensus sequence determined by the alignment; the consensus sequence is also often represented in graphical format with a sequence logo in which the size of each nucleotide or amino acid letter corresponds to its degree of conservation.
Sequence alignments can be stored in a wide variety of text-based file formats, many of which were originally developed in conjunction with a specific alignment program or implementation. Most web-based tools allow a limited number of input and output formats, such as FASTA format and GenBank format, and the output is not easily editable. Several conversion programs that provide graphical and/or command line interfaces are available, such as READSEQ and EMBOSS. There are also several programming packages which provide this conversion functionality, such as BioPython, BioRuby and BioPerl. SAM/BAM files use the CIGAR (Compact Idiosyncratic Gapped Alignment Report) string format to represent an alignment of a sequence to a reference by encoding a sequence of events (e.g. match/mismatch, insertions, deletions).
CIGAR Format
Ref. : GTCGTAGAATA
Read: CACGTAG--TA
CIGAR: 2S5M2D2M
where:
2S = 2 soft clipping (could be mismatches, or a read longer than the matched sequence)
5M = 5 matches or mismatches
2D = 2 deletions
2M = 2 matches or mismatches
The original CIGAR format from the exonerate alignment program did not distinguish between mismatches or matches with the M character.
The SAMv1 spec document defines newer CIGAR codes. In most cases it is preferred to use the '=' and 'X' characters to denote matches or mismatches rather than the older 'M' character, which is ambiguous.
“Consumes query” and “consumes reference” indicate whether the CIGAR operation causes the alignment to step along the query sequence and the reference sequence respectively.
H can only be present as the first and/or last operation.
S may only have H operations between them and the ends of the CIGAR string.
For mRNA-to-genome alignment, an N operation represents an intron. For other types of alignments, the interpretation of N is not defined.
Sum of lengths of the M/I/S/=/X operations shall equal the length of SEQ
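The CIGAR string can be decoded mechanically. The following minimal Python sketch (an illustration, not taken from any particular aligner) splits a CIGAR string such as the 2S5M2D2M example above into (length, operation) pairs and totals how many query and reference bases the alignment consumes, following the "consumes query" and "consumes reference" rules described above.

import re

# CIGAR operations that advance the query and/or the reference
CONSUMES_QUERY = set("MIS=X")
CONSUMES_REF = set("MDN=X")

def parse_cigar(cigar):
    # Split a CIGAR string such as "2S5M2D2M" into (length, op) pairs.
    return [(int(n), op) for n, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar)]

def consumed(cigar):
    # Return (query_bases, reference_bases) consumed by the alignment.
    ops = parse_cigar(cigar)
    q = sum(n for n, op in ops if op in CONSUMES_QUERY)
    r = sum(n for n, op in ops if op in CONSUMES_REF)
    return q, r

print(parse_cigar("2S5M2D2M"))  # [(2, 'S'), (5, 'M'), (2, 'D'), (2, 'M')]
print(consumed("2S5M2D2M"))     # (9, 9): 9 read bases and 9 reference bases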
Global and local alignments
Global alignments, which attempt to align every residue in every sequence, are most useful when the sequences in the query set are similar and of roughly equal size. (This does not mean global alignments cannot start and/or end in gaps.) A general global alignment technique is the Needleman–Wunsch algorithm, which is based on dynamic programming. Local alignments are more useful for dissimilar sequences that are suspected to contain regions of similarity or similar sequence motifs within their larger sequence context. The Smith–Waterman algorithm is a general local alignment method based on the same dynamic programming scheme but with additional choices to start and end at any place.
Hybrid methods, known as semi-global or "glocal" (short for global-local) methods, search for the best possible partial alignment of the two sequences (in other words, a combination of one or both starts and one or both ends is stated to be aligned). This can be especially useful when the downstream part of one sequence overlaps with the upstream part of the other sequence. In this case, neither global nor local alignment is entirely appropriate: a global alignment would attempt to force the alignment to extend beyond the region of overlap, while a local alignment might not fully cover the region of overlap. Another case where semi-global alignment is useful is when one sequence is short (for example a gene sequence) and the other is very long (for example a chromosome sequence). In that case, the short sequence should be globally (fully) aligned but only a local (partial) alignment is desired for the long sequence.
The fast expansion of genetic data challenges the speed of current DNA sequence alignment algorithms. The need for an efficient and accurate method for DNA variant discovery demands innovative approaches for parallel processing in real time. Optical computing approaches have been suggested as promising alternatives to the current electrical implementations, yet their applicability remains to be tested.
Pairwise alignment
Pairwise sequence alignment methods are used to find the best-matching piecewise (local or global) alignments of two query sequences. Pairwise alignments can only be used between two sequences at a time, but they are efficient to calculate and are often used for methods that do not require extreme precision (such as searching a database for sequences with high similarity to a query). The three primary methods of producing pairwise alignments are dot-matrix methods, dynamic programming, and word methods; however, multiple sequence alignment techniques can also align pairs of sequences. Although each method has its individual strengths and weaknesses, all three pairwise methods have difficulty with highly repetitive sequences of low information content, especially where the number of repetitions differs in the two sequences to be aligned.
Maximal unique match
One way of quantifying the utility of a given pairwise alignment is the 'maximal unique match' (MUM), or the longest subsequence that occurs in both query sequences. Longer MUM sequences typically reflect closer relatedness and are used in the multiple sequence alignment of genomes in computational biology. Identification of MUMs and other potential anchors is the first step in larger alignment systems such as MUMmer. Anchors are the areas between two genomes where they are highly similar. To understand what a MUM is, we can break down each word in the acronym. Match implies that the substring occurs in both sequences to be aligned. Unique means that the substring occurs only once in each sequence. Finally, maximal states that the substring is not part of another larger string that fulfills both prior requirements. The idea behind this is that long sequences that match exactly and occur only once in each genome are almost certainly part of the global alignment. A naive illustration of this definition is sketched after the formal statement below.
More precisely:
"Given two genomes A and B, Maximal Unique Match (MUM) substring is a common substring of A and B of length longer than a specified minimum length d (by default d= 20) such that
it is maximal, that is, it cannot be extended on either end without incurring a mismatch; and
it is unique in both sequences"
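For illustration only, the definition can be turned into a naive brute-force search in Python. This is not the suffix-tree algorithm that MUMmer actually uses, and the sequences and the small min_len value are invented for the example.

def count_occurrences(text, pattern):
    # Count occurrences of pattern in text, including overlapping ones.
    count, start = 0, 0
    while True:
        start = text.find(pattern, start)
        if start == -1:
            return count
        count += 1
        start += 1

def find_mums(a, b, min_len=6):
    # Naive MUM finder, quadratic in the sequence lengths; fine for short strings.
    candidates = set()
    for i in range(len(a)):
        for j in range(len(b)):
            if a[i] != b[j]:
                continue
            # skip starting points that could still be extended to the left
            if i > 0 and j > 0 and a[i - 1] == b[j - 1]:
                continue
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            if k >= min_len:
                candidates.add(a[i:i + k])  # right-maximal by construction
    # a MUM must additionally occur exactly once in each sequence
    return sorted(m for m in candidates
                  if count_occurrences(a, m) == 1 and count_occurrences(b, m) == 1)

print(find_mums("ACGTACGGTA", "TTACGGTACG", min_len=6))  # ['TACGGTA']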
Dot-matrix methods
The dot-matrix approach, which implicitly produces a family of alignments for individual sequence regions, is qualitative and conceptually simple, though time-consuming to analyze on a large scale. In the absence of noise, it can be easy to visually identify certain sequence features—such as insertions, deletions, repeats, or inverted repeats—from a dot-matrix plot. To construct a dot-matrix plot, the two sequences are written along the top row and leftmost column of a two-dimensional matrix and a dot is placed at any point where the characters in the appropriate columns match—this is a typical recurrence plot. Some implementations vary the size or intensity of the dot depending on the degree of similarity of the two characters, to accommodate conservative substitutions. The dot plots of very closely related sequences will appear as a single line along the matrix's main diagonal.
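A dot-matrix plot can be generated in a few lines; the Python sketch below (purely illustrative, with an invented sequence) prints an ASCII version in which every matching pair of characters is marked.

def dot_plot(seq1, seq2, match_mark="*", blank="."):
    # Print a simple dot-matrix plot: seq1 along the top, seq2 down the left side.
    print("  " + " ".join(seq1))
    for a in seq2:
        row = [match_mark if a == b else blank for b in seq1]
        print(a + " " + " ".join(row))

# A sequence plotted against itself gives a solid main diagonal;
# the repeated motif "GATT" shows up as short lines off the diagonal.
dot_plot("GATTACAGATT", "GATTACAGATT")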
Problems with dot plots as an information display technique include: noise, lack of clarity, non-intuitiveness, difficulty extracting match summary statistics and match positions on the two sequences. There is also much wasted space where the match data is inherently duplicated across the diagonal and most of the actual area of the plot is taken up by either empty space or noise, and, finally, dot-plots are limited to two sequences. None of these limitations apply to Miropeats alignment diagrams but they have their own particular flaws.
Dot plots can also be used to assess repetitiveness in a single sequence. A sequence can be plotted against itself and regions that share significant similarities will appear as lines off the main diagonal. This effect occurs when a protein consists of multiple similar structural domains.
Dynamic programming
The technique of dynamic programming can be applied to produce global alignments via the Needleman-Wunsch algorithm, and local alignments via the Smith-Waterman algorithm. In typical usage, protein alignments use a substitution matrix to assign scores to amino-acid matches or mismatches, and a gap penalty for matching an amino acid in one sequence to a gap in the other. DNA and RNA alignments may use a scoring matrix, but in practice often simply assign a positive match score, a negative mismatch score, and a negative gap penalty. (In standard dynamic programming, the score of each amino acid position is independent of the identity of its neighbors, and therefore base stacking effects are not taken into account. However, it is possible to account for such effects by modifying the algorithm.)
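A minimal Needleman-Wunsch implementation can be sketched as follows (an illustration rather than any particular tool's code; the +1/-1 match/mismatch scores and the -2 linear gap penalty are arbitrary example values).

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    # Global alignment of strings a and b; returns (score, aligned_a, aligned_b).
    n, m = len(a), len(b)
    # F[i][j] = best score for aligning the prefixes a[:i] and b[:j]
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,    # gap in b
                          F[i][j - 1] + gap)    # gap in a
    # trace back from the bottom-right corner to recover one optimal alignment
    out_a, out_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and \
           F[i][j] == F[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch):
            out_a.append(a[i - 1])
            out_b.append(b[j - 1])
            i, j = i - 1, j - 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            out_a.append(a[i - 1])
            out_b.append("-")
            i -= 1
        else:
            out_a.append("-")
            out_b.append(b[j - 1])
            j -= 1
    return F[n][m], "".join(reversed(out_a)), "".join(reversed(out_b))

print(needleman_wunsch("GATTACA", "GCATGCU"))

Allowing cell scores to be reset to zero and starting the traceback from the highest-scoring cell instead of the corner turns the same scheme into a Smith-Waterman local alignment.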
A common extension to standard linear gap costs is affine gap costs. Here two different gap penalties are applied for opening a gap and for extending a gap. Typically the former is much larger than the latter, e.g. -10 for gap open and -2 for gap extension. This results in fewer gaps in an alignment, with residues and gaps kept together, traits that are more representative of biological sequences. The Gotoh algorithm implements affine gap costs by using three matrices.
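A score-only version of the Gotoh scheme might look like the sketch below (illustrative; the -10/-2 open/extend penalties are the example values above, while the +1/-1 match/mismatch scores are arbitrary). M holds scores for cells whose last column aligns two residues, while Ix and Iy hold scores for cells ending in a gap in one sequence or the other; a gap of length L costs gap_open + (L - 1) * gap_extend.

NEG = float("-inf")

def gotoh_score(a, b, match=1, mismatch=-1, gap_open=-10, gap_extend=-2):
    # Global alignment score with affine gap costs (score only, no traceback).
    n, m = len(a), len(b)
    M  = [[NEG] * (m + 1) for _ in range(n + 1)]  # last column: a[i-1] over b[j-1]
    Ix = [[NEG] * (m + 1) for _ in range(n + 1)]  # last column: a[i-1] over a gap
    Iy = [[NEG] * (m + 1) for _ in range(n + 1)]  # last column: gap over b[j-1]
    M[0][0] = 0
    for i in range(1, n + 1):
        Ix[i][0] = gap_open + (i - 1) * gap_extend
    for j in range(1, m + 1):
        Iy[0][j] = gap_open + (j - 1) * gap_extend
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            M[i][j]  = max(M[i-1][j-1], Ix[i-1][j-1], Iy[i-1][j-1]) + s
            Ix[i][j] = max(M[i-1][j] + gap_open, Ix[i-1][j] + gap_extend)
            Iy[i][j] = max(M[i][j-1] + gap_open, Iy[i][j-1] + gap_extend)
    return max(M[n][m], Ix[n][m], Iy[n][m])

print(gotoh_score("GATTACA", "GATCA"))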
Dynamic programming can be useful in aligning nucleotide to protein sequences, a task complicated by the need to take into account frameshift mutations (usually insertions or deletions). The framesearch method produces a series of global or local pairwise alignments between a query nucleotide sequence and a search set of protein sequences, or vice versa. Its ability to evaluate frameshifts offset by an arbitrary number of nucleotides makes the method useful for sequences containing large numbers of indels, which can be very difficult to align with more efficient heuristic methods. In practice, the method requires large amounts of computing power or a system whose architecture is specialized for dynamic programming. The BLAST and EMBOSS suites provide basic tools for creating translated alignments (though some of these approaches take advantage of side-effects of sequence searching capabilities of the tools). More general methods are available from open-source software such as GeneWise.
The dynamic programming method is guaranteed to find an optimal alignment given a particular scoring function; however, identifying a good scoring function is often an empirical rather than a theoretical matter. Although dynamic programming is extensible to more than two sequences, it is prohibitively slow for large numbers of sequences or extremely long sequences.
Word methods
Word methods, also known as k-tuple methods, are heuristic methods that are not guaranteed to find an optimal alignment solution, but are significantly more efficient than dynamic programming. These methods are especially useful in large-scale database searches where it is understood that a large proportion of the candidate sequences will have essentially no significant match with the query sequence. Word methods are best known for their implementation in the database search tools FASTA and the BLAST family. Word methods identify a series of short, nonoverlapping subsequences ("words") in the query sequence that are then matched to candidate database sequences. The relative positions of the word in the two sequences being compared are subtracted to obtain an offset; this will indicate a region of alignment if multiple distinct words produce the same offset. Only if this region is detected do these methods apply more sensitive alignment criteria; thus, many unnecessary comparisons with sequences of no appreciable similarity are eliminated.
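The diagonal trick at the heart of word methods can be illustrated in a few lines of Python (a toy sketch, not the actual FASTA or BLAST implementation; the sequences and the word length k are invented for the example).

from collections import defaultdict

def diagonal_hits(query, target, k=3):
    # Count word hits per diagonal, where a diagonal is the offset j - i
    # between the word's position j in the target and i in the query.
    words = defaultdict(list)
    for i in range(len(query) - k + 1):
        words[query[i:i + k]].append(i)
    diagonals = defaultdict(int)
    for j in range(len(target) - k + 1):
        for i in words.get(target[j:j + k], []):
            diagonals[j - i] += 1
    return dict(diagonals)

# Diagonals accumulating several hits mark candidate regions worth aligning
# with a more sensitive (and more expensive) method.
hits = diagonal_hits("GCATCGGC", "CCATCGCCATCG", k=3)
print(sorted(hits.items(), key=lambda kv: -kv[1]))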
In the FASTA method, the user defines a value k to use as the word length with which to search the database. The method is slower but more sensitive at lower values of k, which are also preferred for searches involving a very short query sequence. The BLAST family of search methods provides a number of algorithms optimized for particular types of queries, such as searching for distantly related sequence matches. BLAST was developed to provide a faster alternative to FASTA without sacrificing much accuracy; like FASTA, BLAST uses a word search of length k, but evaluates only the most significant word matches, rather than every word match as does FASTA. Most BLAST implementations use a fixed default word length that is optimized for the query and database type, and that is changed only under special circumstances, such as when searching with repetitive or very short query sequences. Implementations can be found via a number of web portals, such as EMBL FASTA and NCBI BLAST.
Multiple sequence alignment
Multiple sequence alignment is an extension of pairwise alignment to incorporate more than two sequences at a time. Multiple alignment methods try to align all of the sequences in a given query set. Multiple alignments are often used in identifying conserved sequence regions across a group of sequences hypothesized to be evolutionarily related. Such conserved sequence motifs can be used in conjunction with structural and mechanistic information to locate the catalytic active sites of enzymes. Alignments are also used to aid in establishing evolutionary relationships by constructing phylogenetic trees. Multiple sequence alignments are computationally difficult to produce and most formulations of the problem lead to NP-complete combinatorial optimization problems. Nevertheless, the utility of these alignments in bioinformatics has led to the development of a variety of methods suitable for aligning three or more sequences.
Dynamic programming
The technique of dynamic programming is theoretically applicable to any number of sequences; however, because it is computationally expensive in both time and memory, it is rarely used for more than three or four sequences in its most basic form. This method requires constructing the n-dimensional equivalent of the sequence matrix formed from two sequences, where n is the number of sequences in the query. Standard dynamic programming is first used on all pairs of query sequences and then the "alignment space" is filled in by considering possible matches or gaps at intermediate positions, eventually constructing an alignment essentially between each two-sequence alignment. Although this technique is computationally expensive, its guarantee of a global optimum solution is useful in cases where only a few sequences need to be aligned accurately. One method for reducing the computational demands of dynamic programming, which relies on the "sum of pairs" objective function, has been implemented in the MSA software package.
Progressive methods
Progressive, hierarchical, or tree methods generate a multiple sequence alignment by first aligning the most similar sequences and then adding successively less related sequences or groups to the alignment until the entire query set has been incorporated into the solution. The initial tree describing the sequence relatedness is based on pairwise comparisons that may include heuristic pairwise alignment methods similar to FASTA. Progressive alignment results are dependent on the choice of "most related" sequences and thus can be sensitive to inaccuracies in the initial pairwise alignments. Most progressive multiple sequence alignment methods additionally weight the sequences in the query set according to their relatedness, which reduces the likelihood of making a poor choice of initial sequences and thus improves alignment accuracy.
Many variations of the Clustal progressive implementation are used for multiple sequence alignment, phylogenetic tree construction, and as input for protein structure prediction. A slower but more accurate variant of the progressive method is known as T-Coffee.
Iterative methods
Iterative methods attempt to improve on the heavy dependence on the accuracy of the initial pairwise alignments, which is the weak point of the progressive methods. Iterative methods optimize an objective function based on a selected alignment scoring method by assigning an initial global alignment and then realigning sequence subsets. The realigned subsets are then themselves aligned to produce the next iteration's multiple sequence alignment. Various ways of selecting the sequence subgroups and objective function are reviewed in the literature.
Motif finding
Motif finding, also known as profile analysis, constructs global multiple sequence alignments that attempt to align short conserved sequence motifs among the sequences in the query set. This is usually done by first constructing a general global multiple sequence alignment, after which the highly conserved regions are isolated and used to construct a set of profile matrices. The profile matrix for each conserved region is arranged like a scoring matrix but its frequency counts for each amino acid or nucleotide at each position are derived from the conserved region's character distribution rather than from a more general empirical distribution. The profile matrices are then used to search other sequences for occurrences of the motif they characterize. In cases where the original data set contained a small number of sequences, or only highly related sequences, pseudocounts are added to normalize the character distributions represented in the motif.
Techniques inspired by computer science
A variety of general optimization algorithms commonly used in computer science have also been applied to the multiple sequence alignment problem. Hidden Markov models have been used to produce probability scores for a family of possible multiple sequence alignments for a given query set; although early HMM-based methods produced underwhelming performance, later applications have found them especially effective in detecting remotely related sequences because they are less susceptible to noise created by conservative or semiconservative substitutions. Genetic algorithms and simulated annealing have also been used in optimizing multiple sequence alignment scores as judged by a scoring function like the sum-of-pairs method. More complete details and software packages can be found in the main article multiple sequence alignment.
The Burrows–Wheeler transform has been successfully applied to fast short read alignment in popular tools such as Bowtie and BWA. See FM-index.
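For illustration, the transform itself (though not the compressed FM-index machinery that read aligners build on top of it) can be computed naively in Python; the "$" terminator is a convention assumed by this toy version.

def bwt(text, terminator="$"):
    # Naive Burrows-Wheeler transform: sort all rotations, take the last column.
    text = text + terminator
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rotation[-1] for rotation in rotations)

# The output tends to group identical characters together, which is what
# makes the transformed string easy to compress and index.
print(bwt("ACAACG"))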
Structural alignment
Structural alignments, which are usually specific to protein and sometimes RNA sequences, use information about the secondary and tertiary structure of the protein or RNA molecule to aid in aligning the sequences. These methods can be used for two or more sequences and typically produce local alignments; however, because they depend on the availability of structural information, they can only be used for sequences whose corresponding structures are known (usually through X-ray crystallography or NMR spectroscopy). Because both protein and RNA structure is more evolutionarily conserved than sequence, structural alignments can be more reliable between sequences that are very distantly related and that have diverged so extensively that sequence comparison cannot reliably detect their similarity.
Structural alignments are used as the "gold standard" in evaluating alignments for homology-based protein structure prediction because they explicitly align regions of the protein sequence that are structurally similar rather than relying exclusively on sequence information. However, clearly structural alignments cannot be used in structure prediction because at least one sequence in the query set is the target to be modeled, for which the structure is not known. It has been shown that, given the structural alignment between a target and a template sequence, highly accurate models of the target protein sequence can be produced; a major stumbling block in homology-based structure prediction is the production of structurally accurate alignments given only sequence information.
DALI
The DALI method, or distance matrix alignment, is a fragment-based method for constructing structural alignments based on contact similarity patterns between successive hexapeptides in the query sequences. It can generate pairwise or multiple alignments and identify a query sequence's structural neighbors in the Protein Data Bank (PDB). It has been used to construct the FSSP structural alignment database (Fold classification based on Structure-Structure alignment of Proteins, or Families of Structurally Similar Proteins). A DALI webserver can be accessed at DALI and the FSSP is located at The Dali Database.
SSAP
SSAP (sequential structure alignment program) is a dynamic programming-based method of structural alignment that uses atom-to-atom vectors in structure space as comparison points. It has been extended since its original description to include multiple as well as pairwise alignments, and has been used in the construction of the CATH (Class, Architecture, Topology, Homology) hierarchical database classification of protein folds. The CATH database can be accessed at CATH Protein Structure Classification.
Combinatorial extension
The combinatorial extension method of structural alignment generates a pairwise structural alignment by using local geometry to align short fragments of the two proteins being analyzed and then assembles these fragments into a larger alignment. Based on measures such as rigid-body root mean square distance, residue distances, local secondary structure, and surrounding environmental features such as residue neighbor hydrophobicity, local alignments called "aligned fragment pairs" are generated and used to build a similarity matrix representing all possible structural alignments within predefined cutoff criteria. A path from one protein structure state to the other is then traced through the matrix by extending the growing alignment one fragment at a time. The optimal such path defines the combinatorial-extension alignment. A web-based server implementing the method and providing a database of pairwise alignments of structures in the Protein Data Bank is located at the Combinatorial Extension website.
Phylogenetic analysis
Phylogenetics and sequence alignment are closely related fields due to the shared necessity of evaluating sequence relatedness. The field of phylogenetics makes extensive use of sequence alignments in the construction and interpretation of phylogenetic trees, which are used to classify the evolutionary relationships between homologous genes represented in the genomes of divergent species. The degree to which sequences in a query set differ is qualitatively related to the sequences' evolutionary distance from one another. Roughly speaking, high sequence identity suggests that the sequences in question have a comparatively young most recent common ancestor, while low identity suggests that the divergence is more ancient. This approximation, which reflects the "molecular clock" hypothesis that a roughly constant rate of evolutionary change can be used to extrapolate the elapsed time since two genes first diverged (that is, the coalescence time), assumes that the effects of mutation and selection are constant across sequence lineages. Therefore, it does not account for possible difference among organisms or species in the rates of DNA repair or the possible functional conservation of specific regions in a sequence. (In the case of nucleotide sequences, the molecular clock hypothesis in its most basic form also discounts the difference in acceptance rates between silent mutations that do not alter the meaning of a given codon and other mutations that result in a different amino acid being incorporated into the protein). More statistically accurate methods allow the evolutionary rate on each branch of the phylogenetic tree to vary, thus producing better estimates of coalescence times for genes.
Progressive multiple alignment techniques produce a phylogenetic tree by necessity because they incorporate sequences into the growing alignment in order of relatedness. Other techniques that assemble multiple sequence alignments and phylogenetic trees score and sort trees first and calculate a multiple sequence alignment from the highest-scoring tree. Commonly used methods of phylogenetic tree construction are mainly heuristic because the problem of selecting the optimal tree, like the problem of selecting the optimal multiple sequence alignment, is NP-hard.
Assessment of significance
Sequence alignments are useful in bioinformatics for identifying sequence similarity, producing phylogenetic trees, and developing homology models of protein structures. However, the biological relevance of sequence alignments is not always clear. Alignments are often assumed to reflect a degree of evolutionary change between sequences descended from a common ancestor; however, it is formally possible that convergent evolution can occur to produce apparent similarity between proteins that are evolutionarily unrelated but perform similar functions and have similar structures.
In database searches such as BLAST, statistical methods can determine the likelihood of a particular alignment between sequences or sequence regions arising by chance given the size and composition of the database being searched. These values can vary significantly depending on the search space. In particular, the likelihood of finding a given alignment by chance increases if the database consists only of sequences from the same organism as the query sequence. Repetitive sequences in the database or query can also distort both the search results and the assessment of statistical significance; BLAST automatically filters such repetitive sequences in the query to avoid apparent hits that are statistical artifacts.
Methods of statistical significance estimation for gapped sequence alignments are available in the literature.
Assessment of credibility
Statistical significance indicates the probability that an alignment of a given quality could arise by chance, but does not indicate how much superior a given alignment is to alternative alignments of the same sequences. Measures of alignment credibility indicate the extent to which the best scoring alignments for a given pair of sequences are substantially similar. Methods of alignment credibility estimation for gapped sequence alignments are available in the literature.
Scoring functions
The choice of a scoring function that reflects biological or statistical observations about known sequences is important to producing good alignments. Protein sequences are frequently aligned using substitution matrices that reflect the probabilities of given character-to-character substitutions. A series of matrices called PAM matrices (Point Accepted Mutation matrices, originally defined by Margaret Dayhoff and sometimes referred to as "Dayhoff matrices") explicitly encode evolutionary approximations regarding the rates and probabilities of particular amino acid mutations. Another common series of scoring matrices, known as BLOSUM (Blocks Substitution Matrix), encodes empirically derived substitution probabilities. Variants of both types of matrices are used to detect sequences with differing levels of divergence, thus allowing users of BLAST or FASTA to restrict searches to more closely related matches or expand to detect more divergent sequences. Gap penalties account for the introduction of a gap - on the evolutionary model, an insertion or deletion mutation - in both nucleotide and protein sequences, and therefore the penalty values should be proportional to the expected rate of such mutations. The quality of the alignments produced therefore depends on the quality of the scoring function.
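As a small illustration of how such matrices are used, the score of a gapless alignment can be summed column by column; the sketch below assumes Biopython is installed and uses the copy of BLOSUM62 bundled with it (the two peptide sequences are arbitrary examples).

from Bio.Align import substitution_matrices

blosum62 = substitution_matrices.load("BLOSUM62")

def score_ungapped(seq1, seq2, matrix=blosum62):
    # Sum the substitution-matrix score over the columns of a gapless alignment.
    assert len(seq1) == len(seq2)
    return sum(matrix[a, b] for a, b in zip(seq1, seq2))

print(score_ungapped("HEAGAWGHEE", "PAWHEAEHGE"))

Repeating such a calculation with a different matrix (for example a PAM matrix) or different gap penalties is exactly the kind of parameter exploration described below.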
It can be very useful and instructive to try the same alignment several times with different choices for scoring matrix and/or gap penalty values and compare the results. Regions where the solution is weak or non-unique can often be identified by observing which regions of the alignment are robust to variations in alignment parameters.
Other biological uses
Sequenced RNA, such as expressed sequence tags and full-length mRNAs, can be aligned to a sequenced genome to find where there are genes and get information about alternative splicing and RNA editing. Sequence alignment is also a part of genome assembly, where sequences are aligned to find overlap so that contigs (long stretches of sequence) can be formed. Another use is SNP analysis, where sequences from different individuals are aligned to find single basepairs that are often different in a population.
Non-biological uses
The methods used for biological sequence alignment have also found applications in other fields, most notably in natural language processing and in social sciences, where the Needleman-Wunsch algorithm is usually referred to as Optimal matching. Techniques that generate the set of elements from which words will be selected in natural-language generation algorithms have borrowed multiple sequence alignment techniques from bioinformatics to produce linguistic versions of computer-generated mathematical proofs. In the field of historical and comparative linguistics, sequence alignment has been used to partially automate the comparative method by which linguists traditionally reconstruct languages. Business and marketing research has also applied multiple sequence alignment techniques in analyzing series of purchases over time.
Software
A more complete list of available software categorized by algorithm and alignment type is available at sequence alignment software, but common software tools used for general sequence alignment tasks include ClustalW2 and T-coffee for alignment, and BLAST and FASTA3x for database searching. Commercial tools such as DNASTAR Lasergene, Geneious, and PatternHunter are also available. Tools annotated as performing sequence alignment are listed in the bio.tools registry.
Alignment algorithms and software can be directly compared to one another using a standardized set of benchmark reference multiple sequence alignments known as BAliBASE. The data set consists of structural alignments, which can be considered a standard against which purely sequence-based methods are compared. The relative performance of many common alignment methods on frequently encountered alignment problems has been tabulated and selected results published online at BAliBASE. A comprehensive list of BAliBASE scores for many (currently 12) different alignment tools can be computed within the protein workbench STRAP.
See also
Sequence homology
Sequence mining
BLAST
String searching algorithm
Alignment-free sequence analysis
UGENE
Needleman–Wunsch algorithm
Smith-Waterman algorithm
Sequence analysis in social sciences
References
External links
Bioinformatics algorithms
Computational phylogenetics
Evolutionary developmental biology
Algorithms on strings | Sequence alignment | [
"Biology"
] | 6,430 | [
"Genetics techniques",
"Computational phylogenetics",
"Bioinformatics algorithms",
"Bioinformatics",
"Phylogenetics"
] |
149,326 | https://en.wikipedia.org/wiki/Phylogenetic%20tree | A phylogenetic tree, phylogeny or evolutionary tree is a graphical representation which shows the evolutionary history between a set of species or taxa during a specific time. In other words, it is a branching diagram or a tree showing the evolutionary relationships among various biological species or other entities based upon similarities and differences in their physical or genetic characteristics. In evolutionary biology, all life on Earth is theoretically part of a single phylogenetic tree, indicating common ancestry. Phylogenetics is the study of phylogenetic trees. The main challenge is to find a phylogenetic tree representing optimal evolutionary ancestry between a set of species or taxa. Computational phylogenetics (also phylogeny inference) focuses on the algorithms involved in finding optimal phylogenetic tree in the phylogenetic landscape.
Phylogenetic trees may be rooted or unrooted. In a rooted phylogenetic tree, each node with descendants represents the inferred most recent common ancestor of those descendants, and the edge lengths in some trees may be interpreted as time estimates. Each node is called a taxonomic unit. Internal nodes are generally called hypothetical taxonomic units, as they cannot be directly observed. Trees are useful in fields of biology such as bioinformatics, systematics, and phylogenetics. Unrooted trees illustrate only the relatedness of the leaf nodes and do not require the ancestral root to be known or inferred.
History
The idea of a tree of life arose from ancient notions of a ladder-like progression from lower into higher forms of life (such as in the Great Chain of Being). Early representations of "branching" phylogenetic trees include a "paleontological chart" showing the geological relationships among plants and animals in the book Elementary Geology, by Edward Hitchcock (first edition: 1840).
Charles Darwin featured a diagrammatic evolutionary "tree" in his 1859 book On the Origin of Species. Over a century later, evolutionary biologists still use tree diagrams to depict evolution because such diagrams effectively convey the concept that speciation occurs through the adaptive and semirandom splitting of lineages.
The term phylogenetic, or phylogeny, derives from the two Ancient Greek words φῦλον (phûlon), meaning "race, lineage", and γένεσις (génesis), meaning "origin, source".
Properties
Rooted tree
A rooted phylogenetic tree (see two graphics at top) is a directed tree with a unique node — the root — corresponding to the (usually imputed) most recent common ancestor of all the entities at the leaves of the tree. The root node does not have a parent node, but serves as the parent of all other nodes in the tree. The root is therefore a node of degree 2, while other internal nodes have a minimum degree of 3 (where "degree" here refers to the total number of incoming and outgoing edges).
The most common method for rooting trees is the use of an uncontroversial outgroup—close enough to allow inference from trait data or molecular sequencing, but far enough to be a clear outgroup. Another method is midpoint rooting, or a tree can also be rooted by using a non-stationary substitution model.
Unrooted tree
Unrooted trees illustrate the relatedness of the leaf nodes without making assumptions about ancestry. They do not require the ancestral root to be known or inferred. Unrooted trees can always be generated from rooted ones by simply omitting the root. By contrast, inferring the root of an unrooted tree requires some means of identifying ancestry. This is normally done by including an outgroup in the input data so that the root is necessarily between the outgroup and the rest of the taxa in the tree, or by introducing additional assumptions about the relative rates of evolution on each branch, such as an application of the molecular clock hypothesis.
Bifurcating versus multifurcating
Both rooted and unrooted trees can be either bifurcating or multifurcating. A rooted bifurcating tree has exactly two descendants arising from each interior node (that is, it forms a binary tree), and an unrooted bifurcating tree takes the form of an unrooted binary tree, a free tree with exactly three neighbors at each internal node. In contrast, a rooted multifurcating tree may have more than two children at some nodes and an unrooted multifurcating tree may have more than three neighbors at some nodes.
Labeled versus unlabeled
Both rooted and unrooted trees can be either labeled or unlabeled. A labeled tree has specific values assigned to its leaves, while an unlabeled tree, sometimes called a tree shape, defines a topology only. Some sequence-based trees built from a small genomic locus, such as Phylotree, feature internal nodes labeled with inferred ancestral haplotypes.
Enumerating trees
The number of possible trees for a given number of leaf nodes depends on the specific type of tree, but there are always more labeled than unlabeled trees, more multifurcating than bifurcating trees, and more rooted than unrooted trees. The last distinction is the most biologically relevant; it arises because there are many places on an unrooted tree to put the root. For bifurcating labeled trees, the total number of rooted trees is:
(2n - 3)!! = (2n - 3)! / (2^(n-2) (n - 2)!) for n >= 2, where n represents the number of leaf nodes.
For bifurcating labeled trees, the total number of unrooted trees is:
(2n - 5)!! = (2n - 5)! / (2^(n-3) (n - 3)!) for n >= 3.
Among labeled bifurcating trees, the number of unrooted trees with n leaves is equal to the number of rooted trees with n - 1 leaves.
The number of rooted trees grows quickly as a function of the number of tips. For 10 tips, there are already more than 2 million possible unrooted bifurcating trees and more than 34 million rooted ones, and the number of multifurcating trees rises faster, with ca. 7 times as many of the latter as of the former.
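These double-factorial counts are easy to evaluate; a short Python sketch:

def double_factorial(k):
    # k!! = k * (k - 2) * (k - 4) * ... (defined as 1 for k <= 0)
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def rooted_bifurcating_trees(n):
    # number of rooted, bifurcating, labeled trees with n leaves (n >= 2)
    return double_factorial(2 * n - 3)

def unrooted_bifurcating_trees(n):
    # number of unrooted, bifurcating, labeled trees with n leaves (n >= 3)
    return double_factorial(2 * n - 5)

for n in (4, 5, 10):
    print(n, unrooted_bifurcating_trees(n), rooted_bifurcating_trees(n))
# for 10 leaves: 2,027,025 unrooted and 34,459,425 rooted trees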
Special tree types
Dendrogram
A dendrogram is a general name for a tree, whether phylogenetic or not, and hence also for the diagrammatic representation of a phylogenetic tree.
Cladogram
A cladogram only represents a branching pattern; i.e., its branch lengths do not represent time or relative amount of character change, and its internal nodes do not represent ancestors.
Phylogram
A phylogram is a phylogenetic tree that has branch lengths proportional to the amount of character change.
Chronogram
A chronogram is a phylogenetic tree that explicitly represents time through its branch lengths.
Dahlgrenogram
A Dahlgrenogram is a diagram representing a cross section of a phylogenetic tree.
Phylogenetic network
A phylogenetic network is not strictly speaking a tree, but rather a more general graph, or a directed acyclic graph in the case of rooted networks. They are used to overcome some of the limitations inherent to trees.
Spindle diagram
A spindle diagram, or bubble diagram, is often called a romerogram, after its popularisation by the American palaeontologist Alfred Romer.
It represents taxonomic diversity (horizontal width) against geological time (vertical axis) in order to reflect the variation of abundance of various taxa through time.
A spindle diagram is not an evolutionary tree: the taxonomic spindles obscure the actual relationships of the parent taxon to the daughter taxon and have the disadvantage of involving the paraphyly of the parental group.
This type of diagram is no longer used in the form originally proposed.
Coral of life
Darwin also mentioned that the coral may be a more suitable metaphor than the tree. Indeed, phylogenetic corals are useful for portraying past and present life, and they have some advantages over trees (anastomoses allowed, etc.).
Construction
Phylogenetic trees composed with a nontrivial number of input sequences are constructed using computational phylogenetics methods. Distance-matrix methods such as neighbor-joining or UPGMA, which calculate genetic distance from multiple sequence alignments, are simplest to implement, but do not invoke an evolutionary model. Many sequence alignment methods such as ClustalW also create trees by using the simpler algorithms (i.e. those based on distance) of tree construction. Maximum parsimony is another simple method of estimating phylogenetic trees, but implies an implicit model of evolution (i.e. parsimony). More advanced methods use the optimality criterion of maximum likelihood, often within a Bayesian framework, and apply an explicit model of evolution to phylogenetic tree estimation. Identifying the optimal tree using many of these techniques is NP-hard, so heuristic search and optimization methods are used in combination with tree-scoring functions to identify a reasonably good tree that fits the data.
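As an illustration of the simplest distance-based approach, the following Python sketch implements UPGMA clustering from a precomputed distance matrix and emits the resulting rooted tree as a Newick string. It is a toy version with an invented distance matrix; real implementations also compute branch lengths and handle ties and much larger inputs more carefully.

def upgma(names, dist):
    # UPGMA: repeatedly merge the pair of clusters with the smallest
    # average leaf-to-leaf distance until a single rooted tree remains.
    clusters = {name: [i] for i, name in enumerate(names)}
    while len(clusters) > 1:
        labels = list(clusters)
        best = None
        for x in range(len(labels)):
            for y in range(x + 1, len(labels)):
                a, b = labels[x], labels[y]
                d = sum(dist[i][j] for i in clusters[a] for j in clusters[b])
                d /= len(clusters[a]) * len(clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters["(" + a + "," + b + ")"] = clusters.pop(a) + clusters.pop(b)
    return next(iter(clusters)) + ";"

names = ["A", "B", "C", "D"]
dist = [[0, 2, 6, 6],
        [2, 0, 6, 6],
        [6, 6, 0, 4],
        [6, 6, 4, 0]]
print(upgma(names, dist))  # ((A,B),(C,D));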
Tree-building methods can be assessed on the basis of several criteria:
efficiency (how long does it take to compute the answer, how much memory does it need?)
power (does it make good use of the data, or is information being wasted?)
consistency (will it converge on the same answer repeatedly, if each time given different data for the same model problem?)
robustness (does it cope well with violations of the assumptions of the underlying model?)
falsifiability (does it alert us when it is not good to use, i.e. when assumptions are violated?)
Tree-building techniques have also gained the attention of mathematicians. Trees can also be built using T-theory.
File formats
Trees can be encoded in a number of different formats, all of which must represent the nested structure of a tree. They may or may not encode branch lengths and other features. Standardized formats are critical for distributing and sharing trees without relying on graphics output that is hard to import into existing software. Commonly used formats are
Nexus file format
Newick format
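For example, the Newick string (A:0.1,(B:0.2,C:0.3):0.4); encodes a small rooted tree with branch lengths. Assuming Biopython is installed, it can be read and displayed with the Bio.Phylo module:

from io import StringIO
from Bio import Phylo

newick = "(A:0.1,(B:0.2,C:0.3):0.4);"
tree = Phylo.read(StringIO(newick), "newick")
Phylo.draw_ascii(tree)  # crude text rendering of the tree
print([leaf.name for leaf in tree.get_terminals()])  # ['A', 'B', 'C']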
Limitations of phylogenetic analysis
Although phylogenetic trees produced on the basis of sequenced genes or genomic data in different species can provide evolutionary insight, these analyses have important limitations. Most importantly, the trees that they generate are not necessarily correct – they do not necessarily accurately represent the evolutionary history of the included taxa. As with any scientific result, they are subject to falsification by further study (e.g., gathering of additional data, analyzing the existing data with improved methods). The data on which they are based may be noisy; the analysis can be confounded by genetic recombination, horizontal gene transfer, hybridisation between species that were not nearest neighbors on the tree before hybridisation takes place, and conserved sequences.
Also, there are problems in basing an analysis on a single type of character, such as a single gene or protein or only on morphological analysis, because such trees constructed from another unrelated data source often differ from the first, and therefore great care is needed in inferring phylogenetic relationships among species. This is most true of genetic material that is subject to lateral gene transfer and recombination, where different haplotype blocks can have different histories. In these types of analysis, the output tree of a phylogenetic analysis of a single gene is an estimate of the gene's phylogeny (i.e. a gene tree) and not the phylogeny of the taxa (i.e. species tree) from which these characters were sampled, though ideally, both should be very close. For this reason, serious phylogenetic studies generally use a combination of genes that come from different genomic sources (e.g., from mitochondrial or plastid vs. nuclear genomes), or genes that would be expected to evolve under different selective regimes, so that homoplasy (false homology) would be unlikely to result from natural selection.
When extinct species are included as terminal nodes in an analysis (rather than, for example, to constrain internal nodes), they are considered not to represent direct ancestors of any extant species. Extinct species typically do not yield high-quality DNA.
The range of useful DNA materials has expanded with advances in extraction and sequencing technologies. Development of technologies able to infer sequences from smaller fragments, or from spatial patterns of DNA degradation products, would further expand the range of DNA considered useful.
Phylogenetic trees can also be inferred from a range of other data types, including morphology, the presence or absence of particular types of genes, insertion and deletion events – and any other observation thought to contain an evolutionary signal.
Phylogenetic networks are used when bifurcating trees are not suitable, due to these complications which suggest a more reticulate evolutionary history of the organisms sampled.
See also
Clade
Cladistics
Computational phylogenetics
Evolutionary biology
Evolutionary taxonomy
Generalized tree alignment
List of phylogenetics software
List of phylogenetic tree visualization software
PANDIT, a biological database covering protein domains
Phylogenetic comparative methods
Phylogenetic reconciliation
Taxonomic rank
Tokogeny
References
Further reading
Schuh, R. T. and A. V. Z. Brower. 2009. Biological Systematics: principles and applications (2nd edn.)
Manuel Lima, The Book of Trees: Visualizing Branches of Knowledge, 2014, Princeton Architectural Press, New York.
MEGA, a free software to draw phylogenetic trees.
Gontier, N. 2011. "Depicting the Tree of Life: the Philosophical and Historical Roots of Evolutionary Tree Diagrams." Evolution, Education, Outreach 4: 515–538.
Jan Sapp, The New Foundations of Evolution: On the Tree of Life, 2009, Oxford University Press, New York.
External links
Images
Human Y-Chromosome 2002 Phylogenetic Tree
iTOL: Interactive Tree Of Life
Phylogenetic Tree of Artificial Organisms Evolved on Computers
Miyamoto and Goodman's Phylogram of Eutherian Mammals
General
An overview of different methods of tree visualization is available at
OneZoom: Tree of Life – all living species as intuitive and zoomable fractal explorer (responsive design)
Discover Life An interactive tree based on the U.S. National Science Foundation's Assembling the Tree of Life Project
PhyloCode
A Multiple Alignment of 139 Myosin Sequences and a Phylogenetic Tree
Tree of Life Web Project
Phylogenetic inferring on the T-REX server
NCBI's Taxonomy Database
ETE: A Python Environment for Tree Exploration This is a programming library to analyze, manipulate and visualize phylogenetic trees. Ref.
A daily-updated tree of (sequenced) life
Phylogenetics
Trees (data structures) | Phylogenetic tree | [
"Biology"
] | 2,860 | [
"Bioinformatics",
"Phylogenetics",
"Tree of life (biology)",
"Taxonomy (biology)"
] |
149,353 | https://en.wikipedia.org/wiki/Computational%20biology | Computational biology refers to the use of techniques in computer science, data analysis, mathematical modeling and computational simulations to understand biological systems and relationships. An intersection of computer science, biology, and data science, the field also has foundations in applied mathematics, molecular biology, cell biology, chemistry, and genetics.
History
Bioinformatics, the analysis of informatics processes in biological systems, began in the early 1970s. At this time, research in artificial intelligence was using network models of the human brain in order to generate new algorithms. This use of biological data pushed biological researchers to use computers to evaluate and compare large data sets in their own field.
By 1982, researchers shared information via punch cards. The amount of data grew exponentially by the end of the 1980s, requiring new computational methods for quickly interpreting relevant information.
Perhaps the best-known example of computational biology, the Human Genome Project, officially began in 1990. By 2003, the project had mapped around 85% of the human genome, satisfying its initial goals. Work continued, however, and by 2021 the level of "a complete genome" was reached, with only 0.3% of bases covered by potential issues. The missing Y chromosome was added in January 2022.
Since the late 1990s, computational biology has become an important part of biology, leading to numerous subfields. Today, the International Society for Computational Biology recognizes 21 different 'Communities of Special Interest', each representing a slice of the larger field. In addition to helping sequence the human genome, computational biology has helped create accurate models of the human brain, map the 3D structure of genomes, and model biological systems.
Global contributions
Colombia
In 2000, despite a lack of initial expertise in programming and data management, Colombia began applying computational biology from an industrial perspective, focusing on plant diseases. This research has contributed to understanding how to counteract diseases in crops like potatoes and studying the genetic diversity of coffee plants. By 2007, concerns about alternative energy sources and global climate change prompted biologists to collaborate with systems and computer engineers. Together, they developed a robust computational network and database to address these challenges. In 2009, in partnership with the University of Los Angeles, Colombia also created a Virtual Learning Environment (VLE) to improve the integration of computational biology and bioinformatics.
Poland
In Poland, computational biology is closely linked to mathematics and computational science, serving as a foundation for bioinformatics and biological physics. The field is divided into two main areas: one focusing on physics and simulation and the other on biological sequences. The application of statistical models in Poland has advanced techniques for studying proteins and RNA, contributing to global scientific progress. Polish scientists have also been instrumental in evaluating protein prediction methods, significantly enhancing the field of computational biology. Over time, they have expanded their research to cover topics such as protein-coding analysis and hybrid structures, further solidifying Poland's influence on the development of bioinformatics worldwide.
Applications
Anatomy
Computational anatomy is the study of anatomical shape and form at the visible or gross anatomical scale of morphology. It involves the development of computational mathematical and data-analytical methods for modeling and simulating biological structures. It focuses on the anatomical structures being imaged, rather than the medical imaging devices. Due to the availability of dense 3D measurements via technologies such as magnetic resonance imaging, computational anatomy has emerged as a subfield of medical imaging and bioengineering for extracting anatomical coordinate systems at the morpheme scale in 3D.
The original formulation of computational anatomy is as a generative model of shape and form from exemplars acted upon via transformations. The diffeomorphism group is used to study different coordinate systems via coordinate transformations as generated via the Lagrangian and Eulerian velocities of flow from one anatomical configuration into another. It is related to shape statistics and morphometrics, with the distinction that diffeomorphisms are used to map coordinate systems, whose study is known as diffeomorphometry.
Data and modeling
Mathematical biology is the use of mathematical models of living organisms to examine the systems that govern structure, development, and behavior in biological systems. This entails a more theoretical approach to problems, rather than its more empirically-minded counterpart of experimental biology. Mathematical biology draws on discrete mathematics, topology (also useful for computational modeling), Bayesian statistics, linear algebra and Boolean algebra.
These mathematical approaches have enabled the creation of databases and other methods for storing, retrieving, and analyzing biological data, a field known as bioinformatics. Usually, this process involves genetics and analyzing genes.
Gathering and analyzing large datasets have made room for growing research fields such as data mining, and computational biomodeling, which refers to building computer models and visual simulations of biological systems. This allows researchers to predict how such systems will react to different environments, which is useful for determining if a system can "maintain their state and functions against external and internal perturbations". While current techniques focus on small biological systems, researchers are working on approaches that will allow for larger networks to be analyzed and modeled. A majority of researchers believe this will be essential in developing modern medical approaches to creating new drugs and gene therapy. A useful modeling approach is to use Petri nets via tools such as esyN.
Along similar lines, until recent decades theoretical ecology has largely dealt with analytic models that were detached from the statistical models used by empirical ecologists. However, computational methods have aided in developing ecological theory via simulation of ecological systems, in addition to increasing application of methods from computational statistics in ecological analyses.
Systems biology
Systems biology consists of computing the interactions between various biological systems ranging from the cellular level to entire populations with the goal of discovering emergent properties. This process usually involves networking cell signaling and metabolic pathways. Systems biology often uses computational techniques from biological modeling and graph theory to study these complex interactions at cellular levels.
Evolutionary biology
Computational biology has assisted evolutionary biology by:
Using DNA data to reconstruct the tree of life with computational phylogenetics
Fitting population genetics models (either forward time or backward time) to DNA data to make inferences about demographic or selective history
Building population genetics models of evolutionary systems from first principles in order to predict what is likely to evolve
Genomics
Computational genomics is the study of the genomes of cells and organisms. The Human Genome Project is one example of computational genomics. This project looks to sequence the entire human genome into a set of data. Once fully implemented, this could allow for doctors to analyze the genome of an individual patient. This opens the possibility of personalized medicine, prescribing treatments based on an individual's pre-existing genetic patterns. Researchers are looking to sequence the genomes of animals, plants, bacteria, and all other types of life.
One of the main ways that genomes are compared is by sequence homology. Homology is the study of biological structures and nucleotide sequences in different organisms that come from a common ancestor. Research suggests that between 80 and 90% of genes in newly sequenced prokaryotic genomes can be identified this way.
Sequence alignment is another process for comparing and detecting similarities between biological sequences or genes. Sequence alignment is useful in a number of bioinformatics applications, such as computing the longest common subsequence of two genes or comparing variants of certain diseases.
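As a minimal illustration of this idea, the sketch below computes the length of the longest common subsequence of two sequences with standard dynamic programming; the two gene fragments are hypothetical, and real alignment tools use more elaborate scoring schemes (gap penalties, substitution matrices).

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of two sequences."""
    # dp[i][j] holds the LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

# Hypothetical gene fragments, for illustration only.
print(lcs_length("GATTACA", "GCATGCA"))  # 5 ("GATCA" is one longest common subsequence)
```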
A largely unexplored area in computational genomics is the analysis of intergenic regions, which comprise roughly 97% of the human genome. Researchers are working to understand the functions of non-coding regions of the human genome through the development of computational and statistical methods and via large consortia projects such as ENCODE and the Roadmap Epigenomics Project.
Understanding how individual genes contribute to the biology of an organism at the molecular, cellular, and organism levels is known as gene ontology. The Gene Ontology Consortium's mission is to develop an up-to-date, comprehensive, computational model of biological systems, from the molecular level to larger pathways, cellular, and organism-level systems. The Gene Ontology resource provides a computational representation of current scientific knowledge about the functions of genes (or, more properly, the protein and non-coding RNA molecules produced by genes) from many different organisms, from humans to bacteria.
3D genomics is a subsection in computational biology that focuses on the organization and interaction of genes within a eukaryotic cell. One method used to gather 3D genomic data is Genome Architecture Mapping (GAM). GAM measures 3D distances of chromatin and DNA in the genome by combining cryosectioning, the process of cutting a strip from the nucleus to examine the DNA, with laser microdissection. A nuclear profile is simply this strip or slice that is taken from the nucleus. Each nuclear profile contains genomic windows, which are certain sequences of nucleotides, the base units of DNA. GAM captures a genome network of complex, multi-enhancer chromatin contacts throughout a cell.
Neuroscience
Computational neuroscience is the study of brain function in terms of the information processing properties of the nervous system. A subset of neuroscience, it looks to model the brain to examine specific aspects of the neurological system. Models of the brain include:
Realistic Brain Models: These models look to represent every aspect of the brain, including as much detail at the cellular level as possible. Realistic models provide the most information about the brain, but also have the largest margin for error. More variables in a brain model create the possibility for more error to occur. These models do not account for parts of the cellular structure that scientists do not know about. Realistic brain models are the most computationally heavy and the most expensive to implement.
Simplifying Brain Models: These models look to limit the scope of a model in order to assess a specific physical property of the neurological system. This allows for the intensive computational problems to be solved, and reduces the amount of potential error from a realistic brain model.
It is the work of computational neuroscientists to improve the algorithms and data structures currently used to increase the speed of such calculations.
Computational neuropsychiatry is an emerging field that uses mathematical and computer-assisted modeling of brain mechanisms involved in mental disorders. Several initiatives have demonstrated that computational modeling is an important contribution to understand neuronal circuits that could generate mental functions and dysfunctions.
Pharmacology
Computational pharmacology is "the study of the effects of genomic data to find links between specific genotypes and diseases and then screening drug data". The pharmaceutical industry requires a shift in methods to analyze drug data. Pharmacologists were able to use Microsoft Excel to compare chemical and genomic data related to the effectiveness of drugs. However, the industry has reached what is referred to as the Excel barricade. This arises from the limited number of cells accessible on a spreadsheet. This development led to the need for computational pharmacology. Scientists and researchers develop computational methods to analyze these massive data sets. This allows for an efficient comparison between the notable data points and allows for more accurate drugs to be developed.
Analysts project that if major medications fail due to expiring patents, computational biology will be necessary to replace current drugs on the market. Doctoral students in computational biology are being encouraged to pursue careers in industry rather than take post-doctoral positions. This is a direct result of major pharmaceutical companies needing more qualified analysts of the large data sets required for producing new drugs.
Oncology
Computational biology plays a crucial role in discovering signs of new, previously unknown living creatures and in cancer research. This field involves large-scale measurements of cellular processes, including RNA, DNA, and proteins, which pose significant computational challenges. To overcome these, biologists rely on computational tools to accurately measure and analyze biological data. In cancer research, computational biology aids in the complex analysis of tumor samples, helping researchers develop new ways to characterize tumors and understand various cellular properties. The use of high-throughput measurements, involving millions of data points from DNA, RNA, and other biological structures, helps in diagnosing cancer at early stages and in understanding the key factors that contribute to cancer development. Areas of focus include analyzing molecules that are deterministic in causing cancer and understanding how the human genome relates to tumor causation.
Toxicology
Computational toxicology is a multidisciplinary area of study, which is employed in the early stages of drug discovery and development to predict the safety and potential toxicity of drug candidates.
Techniques
Computational biologists use a wide range of software and algorithms to carry out their research.
Unsupervised Learning
Unsupervised learning is a type of algorithm that finds patterns in unlabeled data. One example is k-means clustering, which aims to partition n data points into k clusters, in which each data point belongs to the cluster with the nearest mean. Another version is the k-medoids algorithm, which, when selecting a cluster center, picks an actual data point from the set (the medoid) rather than the mean of the cluster.
The algorithm follows these steps (a minimal code sketch appears after the list):
Randomly select k distinct data points. These are the initial clusters.
Measure the distance between each point and each of the k clusters (that is, the distance from each data point to each current cluster center).
Assign each point to the nearest cluster.
Find the center of each cluster (medoid).
Repeat until the clusters no longer change.
Assess the quality of the clustering by adding up the variation within each cluster.
Repeat the process with different values of k.
Pick the best value of k by finding the "elbow" in the plot of total within-cluster variation against k.
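A minimal sketch of these steps for a single value of k, assuming a small list of hypothetical two-dimensional points and Euclidean distance; production analyses would normally rely on an established clustering library rather than this toy loop.

```python
import random
from math import dist  # Euclidean distance between two points (Python 3.8+)

def k_medoids(points, k, iterations=100, seed=0):
    """Tiny k-medoids sketch: cluster centers are actual data points (medoids)."""
    random.seed(seed)
    medoids = random.sample(points, k)                    # step 1: pick k points
    for _ in range(iterations):
        # steps 2-3: assign every point to its nearest medoid
        clusters = {m: [] for m in medoids}
        for p in points:
            nearest = min(medoids, key=lambda m: dist(p, m))
            clusters[nearest].append(p)
        # step 4: within each cluster, pick the member minimising total distance
        new_medoids = [
            min(members, key=lambda c: sum(dist(c, q) for q in members))
            for members in clusters.values() if members
        ]
        if set(new_medoids) == set(medoids):              # step 5: stop when stable
            break
        medoids = new_medoids
    # step 6: total within-cluster variation, used to compare different values of k
    cost = sum(min(dist(p, m) for m in medoids) for p in points)
    return medoids, cost

# Hypothetical two-dimensional data; in a genomic application the "points" might
# instead be loci compared with a precomputed distance such as the Jaccard distance.
data = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (8.0, 8.0), (8.2, 7.9), (7.8, 8.1)]
print(k_medoids(data, k=2))
```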
One example of this in biology is the 3D mapping of a genome. Information on the HIST1 region of mouse chromosome 13 is gathered from the Gene Expression Omnibus. This information contains data on which nuclear profiles show up in certain genomic regions. With this information, the Jaccard distance can be used to find a normalized distance between all the loci.
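For the distance computation itself, a small sketch assuming hypothetical sets of nuclear-profile identifiers for two loci; the Jaccard distance is one minus the size of the intersection divided by the size of the union.

```python
def jaccard_distance(a: set, b: set) -> float:
    """1 - |A ∩ B| / |A ∪ B|; 0 means the two loci appear in identical profiles."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

# Hypothetical GAM-style data: nuclear profiles in which each locus was detected.
locus_1 = {"NP01", "NP03", "NP07", "NP09"}
locus_2 = {"NP03", "NP07", "NP08"}
print(jaccard_distance(locus_1, locus_2))  # 0.6 -> the loci co-occur in 2 of 5 profiles
```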
Graph Analytics
Graph analytics, or network analysis, is the study of graphs that represent connections between different objects. Graphs can represent many kinds of biological networks, such as protein-protein interaction networks, regulatory networks, and metabolic and biochemical networks. There are many ways to analyze these networks; one of them is to look at centrality. Computing centrality ranks each node by its importance or connectedness within the graph, which can be useful for finding the most important nodes. For example, given data on the activity of genes over a time period, degree centrality can be used to see which genes are most active throughout the network, or which genes interact with others the most. This contributes to the understanding of the roles certain genes play in the network.
There are many ways to calculate centrality in graphs, each of which can give a different kind of information about the network. Centrality measures can be applied in many different biological circumstances, including gene regulatory, protein interaction, and metabolic networks.
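A small sketch of degree centrality over a plain adjacency mapping, assuming an undirected and entirely hypothetical gene-interaction network; graph libraries such as NetworkX provide this and many other centrality measures.

```python
def degree_centrality(adjacency: dict) -> dict:
    """Degree centrality: a node's degree divided by the maximum possible degree."""
    n = len(adjacency)
    if n <= 1:
        return {node: 0.0 for node in adjacency}
    return {node: len(neighbors) / (n - 1) for node, neighbors in adjacency.items()}

# Hypothetical undirected interaction network (gene -> set of interacting genes).
network = {
    "geneA": {"geneB", "geneC", "geneD"},
    "geneB": {"geneA"},
    "geneC": {"geneA", "geneD"},
    "geneD": {"geneA", "geneC"},
}
print(degree_centrality(network))  # geneA scores highest (3/3 = 1.0)
```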
Supervised Learning
Supervised learning is a type of algorithm that learns from labeled data and learns how to assign labels to future data that is unlabeled. In biology, supervised learning is helpful when we have data that we already know how to categorize and would like to assign new data to those categories.
A common supervised learning algorithm is the random forest, which uses numerous decision trees to train a model to classify a dataset. Forming the basis of the random forest, a decision tree is a structure which aims to classify, or label, some set of data using certain known features of that data. A practical biological example of this would be taking an individual's genetic data and predicting whether or not that individual is predisposed to develop a certain disease or cancer. At each internal node the algorithm checks the dataset for exactly one feature, a specific gene in the previous example, and then branches left or right based on the result. Then at each leaf node, the decision tree assigns a class label to the dataset. So in practice, the algorithm walks a specific root-to-leaf path based on the input dataset through the decision tree, which results in the classification of that dataset. Commonly, decision trees have target variables that take on discrete values, like yes/no, in which case it is referred to as a classification tree, but if the target variable is continuous then it is called a regression tree. To construct a decision tree, it must first be trained using a training set to identify which features are the best predictors of the target variable.
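A minimal sketch of the disease-prediction example using scikit-learn's random forest classifier; the genotype encoding, training labels, and the new individual below are hypothetical stand-ins rather than real variant data.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each row encodes variants at four loci (0/1/2 copies
# of a risk allele); each label marks whether the individual developed the disease.
X_train = [
    [0, 1, 2, 0],
    [2, 2, 1, 1],
    [0, 0, 1, 0],
    [1, 2, 2, 2],
    [0, 1, 0, 0],
    [2, 1, 2, 1],
]
y_train = [0, 1, 0, 1, 0, 1]  # 0 = not predisposed, 1 = predisposed

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

new_individual = [[1, 2, 2, 1]]              # hypothetical genotype to classify
print(model.predict(new_individual))          # predicted class label
print(model.predict_proba(new_individual))    # class probabilities
```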
Open source software
Open source software provides a platform for computational biology where everyone can access and benefit from software developed in research. PLOS cites four main reasons for the use of open source software:
Reproducibility: This allows researchers to use the exact methods used to calculate the relations between biological data.
Faster development: developers and researchers do not have to reinvent existing code for minor tasks. Instead they can use pre-existing programs to save time on the development and implementation of larger projects.
Increased quality: Having input from multiple researchers studying the same topic provides a layer of assurance that errors in the code are more likely to be found and corrected.
Long-term availability: Open source programs are not tied to any businesses or patents. This allows for them to be posted to multiple web pages and ensure that they are available in the future.
Research
There are several large conferences that are concerned with computational biology. Some notable examples are Intelligent Systems for Molecular Biology, European Conference on Computational Biology and Research in Computational Molecular Biology.
There are also numerous journals dedicated to computational biology. Some notable examples include Journal of Computational Biology and PLOS Computational Biology, a peer-reviewed open access journal that has many notable research projects in the field of computational biology. They provide reviews on software, tutorials for open source software, and display information on upcoming computational biology conferences. Other journals relevant to this field include Bioinformatics, Computers in Biology and Medicine, BMC Bioinformatics, Nature Methods, Nature Communications, Scientific Reports, PLOS One, etc.
Related fields
Computational biology, bioinformatics and mathematical biology are all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such as mathematics and information science. The NIH describes computational/mathematical biology as the use of computational/mathematical approaches to address theoretical and experimental questions in biology and, by contrast, bioinformatics as the application of information science to understand complex life-sciences data.
While each field is distinct, there may be significant overlap at their interface, so much so that to many, bioinformatics and computational biology are terms that are used interchangeably.
The terms computational biology and evolutionary computation have similar names, but are not to be confused. Unlike computational biology, evolutionary computation is not concerned with modeling and analyzing biological data. It instead creates algorithms based on the ideas of evolution across species. Sometimes referred to as genetic algorithms, the research of this field can be applied to computational biology. While evolutionary computation is not inherently a part of computational biology, computational evolutionary biology is a subfield of it.
See also
References
External links
bioinformatics.org
Bioinformatics
Computational fields of study | Computational biology | [
"Technology",
"Engineering",
"Biology"
] | 3,899 | [
"Biological engineering",
"Computational fields of study",
"Bioinformatics",
"Computing and society",
"Computational biology"
] |
149,544 | https://en.wikipedia.org/wiki/Molecular%20phylogenetics | Molecular phylogenetics is the branch of phylogeny that analyzes genetic, hereditary molecular differences, predominantly in DNA sequences, to gain information on an organism's evolutionary relationships. From these analyses, it is possible to determine the processes by which diversity among species has been achieved. The result of a molecular phylogenetic analysis is expressed in a phylogenetic tree. Molecular phylogenetics is one aspect of molecular systematics, a broader term that also includes the use of molecular data in taxonomy and biogeography.
Molecular phylogenetics and molecular evolution correlate. Molecular evolution is the process of selective changes (mutations) at a molecular level (genes, proteins, etc.) throughout various branches in the tree of life (evolution). Molecular phylogenetics makes inferences of the evolutionary relationships that arise due to molecular evolution and results in the construction of a phylogenetic tree.
History
The theoretical frameworks for molecular systematics were laid in the 1960s in the works of Emile Zuckerkandl, Emanuel Margoliash, Linus Pauling, and Walter M. Fitch. Applications of molecular systematics were pioneered by Charles G. Sibley (birds), Herbert C. Dessauer (herpetology), and Morris Goodman (primates), followed by Allan C. Wilson, Robert K. Selander, and John C. Avise (who studied various groups). Work with protein electrophoresis began around 1956. Although the results were not quantitative and did not initially improve on morphological classification, they provided tantalizing hints that long-held notions of the classifications of birds, for example, needed substantial revision. In the period of 1974–1986, DNA–DNA hybridization was the dominant technique used to measure genetic difference.
Theoretical background
Early attempts at molecular systematics were also termed chemotaxonomy and made use of proteins, enzymes, carbohydrates, and other molecules that were separated and characterized using techniques such as chromatography. These have been replaced in recent times largely by DNA sequencing, which produces the exact sequences of nucleotides or bases in either DNA or RNA segments extracted using different techniques. In general, these are considered superior for evolutionary studies, since the actions of evolution are ultimately reflected in the genetic sequences. At present, it is still a long and expensive process to sequence the entire DNA of an organism (its genome). However, it is quite feasible to determine the sequence of a defined area of a particular chromosome. Typical molecular systematic analyses require the sequencing of around 1000 base pairs. At any location within such a sequence, the bases found in a given position may vary between organisms. The particular sequence found in a given organism is referred to as its haplotype. In principle, since there are four base types, with 1000 base pairs, we could have 4^1000 distinct haplotypes. However, for organisms within a particular species or in a group of related species, it has been found empirically that only a minority of sites show any variation at all, and most of the variations that are found are correlated, so that the number of distinct haplotypes that are found is relatively small.
In a molecular systematic analysis, the haplotypes are determined for a defined area of genetic material; a substantial sample of individuals of the target species or other taxon is used; however, many current studies are based on single individuals. Haplotypes of individuals of closely related, yet different, taxa are also determined. Finally, haplotypes from a smaller number of individuals from a definitely different taxon are determined: these are referred to as an outgroup. The base sequences for the haplotypes are then compared. In the simplest case, the difference between two haplotypes is assessed by counting the number of locations where they have different bases: this is referred to as the number of substitutions (other kinds of differences between haplotypes can also occur, for example, the insertion of a section of nucleic acid in one haplotype that is not present in another). The difference between organisms is usually re-expressed as a percentage divergence, by dividing the number of substitutions by the number of base pairs analysed: the hope is that this measure will be independent of the location and length of the section of DNA that is sequenced.
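A small sketch of this comparison, assuming two already-aligned haplotype sequences of equal length (the sequences are hypothetical and far shorter than the roughly 1000 base pairs of a typical analysis): substitutions are counted and expressed as percentage divergence.

```python
def percent_divergence(hap1: str, hap2: str) -> float:
    """Count substitutions between two aligned haplotypes, as a percentage."""
    if len(hap1) != len(hap2):
        raise ValueError("haplotypes must be aligned to the same length")
    substitutions = sum(1 for a, b in zip(hap1, hap2) if a != b)
    return 100.0 * substitutions / len(hap1)

# Hypothetical aligned haplotypes.
h1 = "ACGTACGTACGTACGTACGT"
h2 = "ACGTACGAACGTACGCACGT"
print(percent_divergence(h1, h2))  # 10.0 -> 2 substitutions over 20 sites
```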
An older and superseded approach was to determine the divergences between the genotypes of individuals by DNA–DNA hybridization. The advantage claimed for using hybridization rather than gene sequencing was that it was based on the entire genotype, rather than on particular sections of DNA. Modern sequence comparison techniques overcome this objection by the use of multiple sequences.
Once the divergences between all pairs of samples have been determined, the resulting triangular matrix of differences is submitted to some form of statistical cluster analysis, and the resulting dendrogram is examined in order to see whether the samples cluster in the way that would be expected from current ideas about the taxonomy of the group. Any group of haplotypes that are all more similar to one another than any of them is to any other haplotype may be said to constitute a clade. Statistical techniques such as bootstrapping and jackknifing help in providing reliability estimates for the positions of haplotypes within the evolutionary trees.
Techniques and applications
Every living organism contains deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and proteins. In general, closely related organisms have a high degree of similarity in the molecular structure of these substances, while the molecules of organisms distantly related often show a pattern of dissimilarity. Conserved sequences, such as mitochondrial DNA, are expected to accumulate mutations over time, and assuming a constant rate of mutation, provide a molecular clock for dating divergence. Molecular phylogeny uses such data to build a "relationship tree" that shows the probable evolution of various organisms. With the invention of Sanger sequencing in 1977, it became possible to isolate and identify these molecular structures. High-throughput sequencing may also be used to obtain the transcriptome of an organism, allowing inference of phylogenetic relationships using transcriptomic data.
The most common approach is the comparison of homologous sequences for genes using sequence alignment techniques to identify similarity. Another application of molecular phylogeny is in DNA barcoding, wherein the species of an individual organism is identified using small sections of mitochondrial DNA or chloroplast DNA. Another application of the techniques that make this possible can be seen in the very limited field of human genetics, such as the ever-more-popular use of genetic testing to determine a child's paternity, as well as the emergence of a new branch of criminal forensics focused on evidence known as genetic fingerprinting.
Molecular phylogenetic analysis
There are several methods available for performing a molecular phylogenetic analysis. One method, including a comprehensive step-by-step protocol on constructing a phylogenetic tree, including DNA/Amino Acid contiguous sequence assembly, multiple sequence alignment, model-test (testing best-fitting substitution models), and phylogeny reconstruction using Maximum Likelihood and Bayesian Inference, is available at Nature Protocol.
Another molecular phylogenetic analysis technique has been described by Pevsner and is summarized below (Pevsner, 2015). A phylogenetic analysis typically consists of five major steps. The first stage comprises sequence acquisition. The following step consists of performing a multiple sequence alignment, which is the fundamental basis of constructing a phylogenetic tree. The third stage includes different models of DNA and amino acid substitution. Several models of substitution exist. A few examples include Hamming distance, the Jukes and Cantor one-parameter model, and the Kimura two-parameter model (see Models of DNA evolution). The fourth stage consists of various methods of tree building, including distance-based and character-based methods. The normalized Hamming distance and the Jukes-Cantor correction formulas provide the degree of divergence and the probability that a nucleotide changes to another, respectively. Common tree-building methods include unweighted pair group method with arithmetic mean (UPGMA) and Neighbor joining, which are distance-based methods, Maximum parsimony, which is a character-based method, and Maximum likelihood estimation and Bayesian inference, which are character-based/model-based methods. UPGMA is a simple method; however, it is less accurate than the neighbor-joining approach. Finally, the last step comprises evaluating the trees. This assessment of accuracy is composed of consistency, efficiency, and robustness.
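As a worked illustration of the distance-based step, the sketch below computes the normalized Hamming distance p between two hypothetical aligned sequences and applies the Jukes-Cantor correction d = -(3/4) ln(1 - 4p/3).

```python
from math import log

def p_distance(seq1: str, seq2: str) -> float:
    """Normalized Hamming distance: fraction of aligned sites that differ."""
    return sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)

def jukes_cantor(p: float) -> float:
    """Jukes-Cantor corrected distance, valid for p < 0.75."""
    return -0.75 * log(1 - (4.0 / 3.0) * p)

# Hypothetical aligned sequences from two taxa.
s1 = "ACGTACGTACGTACGTACGT"
s2 = "ACGAACGTACGTTCGTACGA"
p = p_distance(s1, s2)
print(p, jukes_cantor(p))  # 0.15 observed -> ~0.167 corrected substitutions per site
```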
MEGA (Molecular Evolutionary Genetics Analysis) is an analysis software package that is user-friendly and free to download and use. This software is capable of analyzing both distance-based and character-based tree methodologies. MEGA also contains several options one may choose to utilize, such as heuristic approaches and bootstrapping. Bootstrapping is an approach commonly used to measure the robustness of the topology of a phylogenetic tree; it reports, for each clade, the percentage of replicate trees in which that clade is recovered. In general, a value greater than 70% is considered significant.
Limitations
Molecular systematics is an essentially cladistic approach: it assumes that classification must correspond to phylogenetic descent, and that all valid taxa must be monophyletic. This is a limitation when attempting to determine the optimal tree(s), which often involves bisecting and reconnecting portions of the phylogenetic tree(s).
The recent discovery of extensive horizontal gene transfer among organisms provides a significant complication to molecular systematics, indicating that different genes within the same organism can have different phylogenies. HGTs can be detected and excluded using a number of phylogenetic methods.
In addition, molecular phylogenies are sensitive to the assumptions and models that go into making them. Firstly, sequences must be aligned; then, issues such as long-branch attraction, saturation, and taxon sampling problems must be addressed. This means that strikingly different results can be obtained by applying different models to the same dataset. The tree-building method also brings with it specific assumptions about tree topology, evolution speeds, and sampling. The simplistic UPGMA assumes a rooted tree and a uniform molecular clock, both of which can be incorrect.
See also
Computational phylogenetics
Microbial phylogenetics
Molecular clock
Molecular evolution
PhyloCode
Phylogenetic nomenclature
Notes and references
Further reading
External links
NCBI – Systematics and Molecular Phylogenetics
MEGA Software
Molecular phylogenetics from Encyclopædia Britannica.
Phylogenetics
Molecular evolution | Molecular phylogenetics | [
"Chemistry",
"Biology"
] | 2,188 | [
"Evolutionary processes",
"Molecular evolution",
"Taxonomy (biology)",
"Bioinformatics",
"Molecular biology",
"Phylogenetics"
] |
150,159 | https://en.wikipedia.org/wiki/Noether%27s%20theorem | Noether's theorem states that every continuous symmetry of the action of a physical system with conservative forces has a corresponding conservation law. This is the first of two theorems (see Noether's second theorem) published by mathematician Emmy Noether in 1918. The action of a physical system is the integral over time of a Lagrangian function, from which the system's behavior can be determined by the principle of least action. This theorem only applies to continuous and smooth symmetries of physical space.
Noether's theorem is used in theoretical physics and the calculus of variations. It reveals the fundamental relation between the symmetries of a physical system and the conservation laws. It also made modern theoretical physicists much more focused on symmetries of physical systems. A generalization of the formulations on constants of motion in Lagrangian and Hamiltonian mechanics (developed in 1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian alone (e.g., systems with a Rayleigh dissipation function). In particular, dissipative systems with continuous symmetries need not have a corresponding conservation law.
Basic illustrations and background
As an illustration, if a physical system behaves the same regardless of how it is oriented in space (that is, it's invariant), its Lagrangian is symmetric under continuous rotation: from this symmetry, Noether's theorem dictates that the angular momentum of the system be conserved, as a consequence of its laws of motion. The physical system itself need not be symmetric; a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry. It is the laws of its motion that are symmetric.
As another example, if a physical process exhibits the same outcomes regardless of place or time, then its Lagrangian is symmetric under continuous translations in space and time respectively: by Noether's theorem, these symmetries account for the conservation laws of linear momentum and energy within this system, respectively.
Noether's theorem is important, both because of the insight it gives into conservation laws, and also as a practical calculational tool. It allows investigators to determine the conserved quantities (invariants) from the observed symmetries of a physical system. Conversely, it allows researchers to consider whole classes of hypothetical Lagrangians with given invariants, to describe a physical system. As an illustration, suppose that a physical theory is proposed which conserves a quantity X. A researcher can calculate the types of Lagrangians that conserve X through a continuous symmetry. Due to Noether's theorem, the properties of these Lagrangians provide further criteria to understand the implications and judge the fitness of the new theory.
There are numerous versions of Noether's theorem, with varying degrees of generality. There are natural quantum counterparts of this theorem, expressed in the Ward–Takahashi identities. Generalizations of Noether's theorem to superspaces also exist.
Informal statement of the theorem
All fine technical points aside, Noether's theorem can be stated informally as: if a system has a continuous symmetry property, then there are corresponding quantities whose values are conserved in time.
A more sophisticated version of the theorem involving fields states that to every continuous symmetry generated by local actions there corresponds a conserved current.
The word "symmetry" in the above statement refers more precisely to the covariance of the form that a physical law takes with respect to a one-dimensional Lie group of transformations satisfying certain technical criteria. The conservation law of a physical quantity is usually expressed as a continuity equation.
The formal proof of the theorem utilizes the condition of invariance to derive an expression for a current associated with a conserved physical quantity. In modern terminology, the conserved quantity is called the Noether charge, while the flow carrying that charge is called the Noether current. The Noether current is defined up to a solenoidal (divergenceless) vector field.
In the context of gravitation, Felix Klein's statement of Noether's theorem for action I stipulates for the invariants:
Brief illustration and overview of the concept
The main idea behind Noether's theorem is most easily illustrated by a system with one coordinate and a continuous symmetry.
Consider any trajectory that satisfies the system's laws of motion. That is, the action governing this system is stationary on this trajectory, i.e. it does not change under any local variation of the trajectory. In particular it would not change under a variation that applies the symmetry flow on a time segment and is motionless outside that segment. To keep the trajectory continuous, we use "buffering" periods of small time to transition between the segments gradually.
The total change in the action now comprises changes brought by every interval in play. Parts where the variation itself vanishes, i.e. outside the chosen segment, bring no change. The middle part does not change the action either, because its transformation is a symmetry and thus preserves the Lagrangian and the action. The only remaining parts are the "buffering" pieces. In these regions both the coordinate and the velocity change, but the velocity changes by a finite amount while the change in the coordinate is negligible by comparison, since the time span of the buffering is small (taken to the limit of zero). So the buffering regions contribute mostly through their "slanting" of the velocity.
That changes the Lagrangian by a term proportional to ∂L/∂q̇ times the rate of the slanting, which integrates over each buffering interval to a boundary term of the form ∂L/∂q̇ multiplied by the symmetry displacement.
These last terms, evaluated around the two endpoints, should cancel each other in order to make the total change in the action be zero, as would be expected if the trajectory is a solution. That is, ∂L/∂q̇ multiplied by the symmetry displacement must take the same value at both endpoints,
meaning this quantity is conserved, which is the conclusion of Noether's theorem. For instance, if pure translations of the coordinate by a constant are the symmetry, then the conserved quantity becomes just ∂L/∂q̇, the canonical momentum.
More general cases follow the same idea:
Historical context
A conservation law states that some quantity X in the mathematical description of a system's evolution remains constant throughout its motion – it is an invariant. Mathematically, the rate of change of X (its derivative with respect to time) is zero: dX/dt = 0.
Such quantities are said to be conserved; they are often called constants of motion (although motion per se need not be involved, just evolution in time). For example, if the energy of a system is conserved, its energy is invariant at all times, which imposes a constraint on the system's motion and may help in solving for it. Aside from insights that such constants of motion give into the nature of a system, they are a useful calculational tool; for example, an approximate solution can be corrected by finding the nearest state that satisfies the suitable conservation laws.
The earliest constants of motion discovered were momentum and kinetic energy, which were proposed in the 17th century by René Descartes and Gottfried Leibniz on the basis of collision experiments, and refined by subsequent researchers. Isaac Newton was the first to enunciate the conservation of momentum in its modern form, and showed that it was a consequence of Newton's laws of motion. According to general relativity, the conservation laws of linear momentum, energy and angular momentum are only exactly true globally when expressed in terms of the sum of the stress–energy tensor (non-gravitational stress–energy) and the Landau–Lifshitz stress–energy–momentum pseudotensor (gravitational stress–energy). The local conservation of non-gravitational linear momentum and energy in a free-falling reference frame is expressed by the vanishing of the covariant divergence of the stress–energy tensor. Another important conserved quantity, discovered in studies of the celestial mechanics of astronomical bodies, is the Laplace–Runge–Lenz vector.
In the late 18th and early 19th centuries, physicists developed more systematic methods for discovering invariants. A major advance came in 1788 with the development of Lagrangian mechanics, which is related to the principle of least action. In this approach, the state of the system can be described by any type of generalized coordinates q; the laws of motion need not be expressed in a Cartesian coordinate system, as was customary in Newtonian mechanics. The action is defined as the time integral I of a function known as the Lagrangian L, I = ∫ L(q, q̇, t) dt,
where the dot over q signifies the rate of change of the coordinates q, q̇ = dq/dt.
Hamilton's principle states that the physical path q(t)—the one actually taken by the system—is a path for which infinitesimal variations in that path cause no change in I, at least up to first order. This principle results in the Euler–Lagrange equations, d/dt (∂L/∂q̇) = ∂L/∂q.
Thus, if one of the coordinates, say qk, does not appear in the Lagrangian, the right-hand side of the equation is zero, and the left-hand side requires that d/dt (∂L/∂q̇k) = 0,
where the momentum pk = ∂L/∂q̇k
is conserved throughout the motion (on the physical path).
Thus, the absence of the ignorable coordinate qk from the Lagrangian implies that the Lagrangian is unaffected by changes or transformations of qk; the Lagrangian is invariant, and is said to exhibit a symmetry under such transformations. This is the seed idea generalized in Noether's theorem.
Several alternative methods for finding conserved quantities were developed in the 19th century, especially by William Rowan Hamilton. For example, he developed a theory of canonical transformations which allowed changing coordinates so that some coordinates disappeared from the Lagrangian, as above, resulting in conserved canonical momenta. Another approach, and perhaps the most efficient for finding conserved quantities, is the Hamilton–Jacobi equation.
Emmy Noether's work on the invariance theorem began in 1915 when she was helping Felix Klein and David Hilbert with their work related to Albert Einstein's theory of general relativity. By March 1918 she had most of the key ideas for the paper which would be published later in the year.
Mathematical expression
Simple form using perturbations
The essence of Noether's theorem is generalizing the notion of ignorable coordinates.
One can assume that the Lagrangian L defined above is invariant under small perturbations (warpings) of the time variable t and the generalized coordinates q. One may write
where the perturbations δt and δq are both small, but variable. For generality, assume there are (say) N such symmetry transformations of the action, i.e. transformations leaving the action unchanged; labelled by an index r = 1, 2, 3, ..., N.
Then the resultant perturbation can be written as a linear sum of the individual types of perturbations,
where εr are infinitesimal parameter coefficients corresponding to each:
generator Tr of time evolution, and
generator Qr of the generalized coordinates.
For translations, Qr is a constant with units of length; for rotations, it is an expression linear in the components of q, and the parameters make up an angle.
Using these definitions, Noether showed that the N quantities
are conserved (constants of motion).
Examples
I. Time invariance
For illustration, consider a Lagrangian that does not depend on time, i.e., that is invariant (symmetric) under changes t → t + δt, without any change in the coordinates q. In this case, N = 1, T = 1 and Q = 0; the corresponding conserved quantity is the total energy H
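One common way to write this conserved energy, assuming generalized coordinates q_i and the Lagrangian L of this section (a restatement in standard notation rather than a new result):

```latex
H \;=\; \sum_i \dot{q}_i\, \frac{\partial L}{\partial \dot{q}_i} \;-\; L .
```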
II. Translational invariance
Consider a Lagrangian which does not depend on an ("ignorable", as above) coordinate qk; so it is invariant (symmetric) under changes qk → qk + δqk. In that case, N = 1, T = 0, and Qk = 1; the conserved quantity is the corresponding linear momentum pk
In special and general relativity, these two conservation laws can be expressed either globally (as it is done above), or locally as a continuity equation. The global versions can be united into a single global conservation law: the conservation of the energy-momentum 4-vector. The local versions of energy and momentum conservation (at any point in space-time) can also be united, into the conservation of a quantity defined locally at the space-time point: the stress–energy tensor (this will be derived in the next section).
III. Rotational invariance
The conservation of the angular momentum L = r × p is analogous to its linear momentum counterpart. It is assumed that the symmetry of the Lagrangian is rotational, i.e., that the Lagrangian does not depend on the absolute orientation of the physical system in space. For concreteness, assume that the Lagrangian does not change under small rotations of an angle δθ about an axis n; such a rotation transforms the Cartesian coordinates by the equation
Since time is not being transformed, T = 0, and N = 1. Taking δθ as the ε parameter and the Cartesian coordinates r as the generalized coordinates q, the corresponding Q variables are given by
Then Noether's theorem states that the following quantity is conserved,
In other words, the component of the angular momentum L along the n axis is conserved. And if n is arbitrary, i.e., if the system is insensitive to any rotation, then every component of L is conserved; in short, angular momentum is conserved.
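A restatement of this result in standard vector notation, assuming the infinitesimal rotation by δθ about the axis n described above, with r, p and L the position, momentum and angular momentum of the example:

```latex
\delta\mathbf{r} = \delta\theta\,(\mathbf{n}\times\mathbf{r}),
\qquad
\mathbf{Q} = \mathbf{n}\times\mathbf{r},
\qquad
\mathbf{p}\cdot(\mathbf{n}\times\mathbf{r})
  = \mathbf{n}\cdot(\mathbf{r}\times\mathbf{p})
  = \mathbf{n}\cdot\mathbf{L} = \text{constant}.
```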
Field theory version
Although useful in its own right, the version of Noether's theorem just given is a special case of the general version derived in 1915. To give the flavor of the general theorem, a version of Noether's theorem for continuous fields in four-dimensional space–time is now given. Since field theory problems are more common in modern physics than mechanics problems, this field theory version is the most commonly used (or most often implemented) version of Noether's theorem.
Let there be a set of differentiable fields defined over all space and time; for example, the temperature would be representative of such a field, being a number defined at every place and time. The principle of least action can be applied to such fields, but the action is now an integral over space and time
(the theorem can be further generalized to the case where the Lagrangian depends on up to the nth derivative, and can also be formulated using jet bundles).
A continuous transformation of the fields can be written infinitesimally as
where is in general a function that may depend on both and . The condition for to generate a physical symmetry is that the action is left invariant. This will certainly be true if the Lagrangian density is left invariant, but it will also be true if the Lagrangian changes by a divergence,
since the integral of a divergence becomes a boundary term according to the divergence theorem. A system described by a given action might have multiple independent symmetries of this type, indexed by so the most general symmetry transformation would be written as
with the consequence
For such systems, Noether's theorem states that there are conserved current densities
(where the dot product is understood to contract the field indices, not the index or index).
In such cases, the conservation law is expressed in a four-dimensional way
which expresses the idea that the amount of a conserved quantity within a sphere cannot change unless some of it flows out of the sphere. For example, electric charge is conserved; the amount of charge within a sphere cannot change unless some of the charge leaves the sphere.
For illustration, consider a physical system of fields that behaves the same under translations in time and space, as considered above; in other words, is constant in its third argument. In that case, N = 4, one for each dimension of space and time. An infinitesimal translation in space, (with denoting the Kronecker delta), affects the fields as : that is, relabelling the coordinates is equivalent to leaving the coordinates in place while translating the field itself, which in turn is equivalent to transforming the field by replacing its value at each point with the value at the point "behind" it which would be mapped onto by the infinitesimal displacement under consideration. Since this is infinitesimal, we may write this transformation as
The Lagrangian density transforms in the same way, , so
and thus Noether's theorem corresponds to the conservation law for the stress–energy tensor Tμν, where we have used in place of . To wit, by using the expression given earlier, and collecting the four conserved currents (one for each ) into a tensor , Noether's theorem gives
with
(we relabelled as at an intermediate step to avoid conflict). (However, the obtained in this way may differ from the symmetric tensor used as the source term in general relativity; see Canonical stress–energy tensor.)
The conservation of electric charge, by contrast, can be derived by considering Ψ linear in the fields φ rather than in the derivatives. In quantum mechanics, the probability amplitude ψ(x) of finding a particle at a point x is a complex field φ, because it ascribes a complex number to every point in space and time. The probability amplitude itself is physically unmeasurable; only the probability p = |ψ|2 can be inferred from a set of measurements. Therefore, the system is invariant under transformations of the ψ field and its complex conjugate field ψ* that leave |ψ|2 unchanged, such as
a complex rotation. In the limit when the phase θ becomes infinitesimally small, δθ, it may be taken as the parameter ε, while the Ψ are equal to iψ and −iψ*, respectively. A specific example is the Klein–Gordon equation, the relativistically correct version of the Schrödinger equation for spinless particles, which has the Lagrangian density
In this case, Noether's theorem states that the conserved (∂ ⋅ j = 0) current equals
which, when multiplied by the charge on that species of particle, equals the electric current density due to that type of particle. This "gauge invariance" was first noted by Hermann Weyl, and is one of the prototype gauge symmetries of physics.
Derivations
One independent variable
Consider the simplest case, a system with one independent variable, time. Suppose the dependent variables q are such that the action integral
is invariant under brief infinitesimal variations in the dependent variables. In other words, they satisfy the Euler–Lagrange equations
And suppose that the integral is invariant under a continuous symmetry. Mathematically such a symmetry is represented as a flow, φ, which acts on the variables as follows
where ε is a real variable indicating the amount of flow, and T is a real constant (which could be zero) indicating how much the flow shifts time.
The action integral flows to
which may be regarded as a function of ε. Calculating the derivative at ε = 0 and using Leibniz's rule, we get
Notice that the Euler–Lagrange equations imply
Substituting this into the previous equation, one gets
Again using the Euler–Lagrange equations we get
Substituting this into the previous equation, one gets
From which one can see that
is a constant of the motion, i.e., it is a conserved quantity. Since φ[q, 0] = q, we get and so the conserved quantity simplifies to
To avoid excessive complication of the formulas, this derivation assumed that the flow does not change as time passes. The same result can be obtained in the more general case.
Field-theoretic derivation
Noether's theorem may also be derived for tensor fields where the index A ranges over the various components of the various tensor fields. These field quantities are functions defined over a four-dimensional space whose points are labeled by coordinates xμ where the index μ ranges over time (μ = 0) and three spatial dimensions (μ = 1, 2, 3). These four coordinates are the independent variables; and the values of the fields at each event are the dependent variables. Under an infinitesimal transformation, the variation in the coordinates is written
whereas the transformation of the field variables is expressed as
By this definition, the field variations result from two factors: intrinsic changes in the field themselves and changes in coordinates, since the transformed field αA depends on the transformed coordinates ξμ. To isolate the intrinsic changes, the field variation at a single point xμ may be defined
If the coordinates are changed, the boundary of the region of space–time over which the Lagrangian is being integrated also changes; the original boundary and its transformed version are denoted as Ω and Ω’, respectively.
Noether's theorem begins with the assumption that a specific transformation of the coordinates and field variables does not change the action, which is defined as the integral of the Lagrangian density over the given region of spacetime. Expressed mathematically, this assumption may be written as
where the comma subscript indicates a partial derivative with respect to the coordinate(s) that follows the comma, e.g.
Since ξ is a dummy variable of integration, and since the change in the boundary Ω is infinitesimal by assumption, the two integrals may be combined using the four-dimensional version of the divergence theorem into the following form
The difference in Lagrangians can be written to first-order in the infinitesimal variations as
However, because the variations are defined at the same point as described above, the variation and the derivative can be done in reverse order; they commute
Using the Euler–Lagrange field equations
the difference in Lagrangians can be written neatly as
Thus, the change in the action can be written as
Since this holds for any region Ω, the integrand must be zero
For any combination of the various symmetry transformations, the perturbation can be written
where is the Lie derivative of
in the Xμ direction. When is a scalar or ,
These equations imply that the field variation taken at one point equals
Differentiating the above divergence with respect to ε at ε = 0 and changing the sign yields the conservation law
where the conserved current equals
Manifold/fiber bundle derivation
Suppose we have an n-dimensional oriented Riemannian manifold, M and a target manifold T. Let be the configuration space of smooth functions from M to T. (More generally, we can have smooth sections of a fiber bundle T over M.)
Examples of this M in physics include:
In classical mechanics, in the Hamiltonian formulation, M is the one-dimensional manifold , representing time and the target space is the cotangent bundle of space of generalized positions.
In field theory, M is the spacetime manifold and the target space is the set of values the fields can take at any given point. For example, if there are m real-valued scalar fields, , then the target manifold is . If the field is a real vector field, then the target manifold is isomorphic to .
Now suppose there is a functional
called the action. (It takes values into , rather than ; this is for physical reasons, and is unimportant for this proof.)
To get to the usual version of Noether's theorem, we need additional restrictions on the action. We assume is the integral over M of a function
called the Lagrangian density, depending on , its derivative and the position. In other words, for in
Suppose we are given boundary conditions, i.e., a specification of the value of at the boundary if M is compact, or some limit on as x approaches ∞. Then the subspace of consisting of functions such that all functional derivatives of at are zero, that is:
and that satisfies the given boundary conditions, is the subspace of on shell solutions. (See principle of stationary action)
Now, suppose we have an infinitesimal transformation on , generated by a functional derivation, Q such that
for all compact submanifolds N or in other words,
for all x, where we set
If this holds on shell and off shell, we say Q generates an off-shell symmetry. If this only holds on shell, we say Q generates an on-shell symmetry. Then, we say Q is a generator of a one parameter symmetry Lie group.
Now, for any N, because of the Euler–Lagrange theorem, on shell (and only on-shell), we have
Since this is true for any N, we have
But this is the continuity equation for the current defined by:
which is called the Noether current associated with the symmetry. The continuity equation tells us that if we integrate this current over a space-like slice, we get a conserved quantity called the Noether charge (provided, of course, if M is noncompact, the currents fall off sufficiently fast at infinity).
Comments
Noether's theorem is an on shell theorem: it relies on use of the equations of motion—the classical path. It reflects the relation between the boundary conditions and the variational principle. Assuming no boundary terms in the action, Noether's theorem implies that
The quantum analogs of Noether's theorem involving expectation values (e.g., ) probing off shell quantities as well are the Ward–Takahashi identities.
Generalization to Lie algebras
Suppose we have two symmetry derivations Q1 and Q2. Then, [Q1, Q2] is also a symmetry derivation. Let us see this explicitly. Let us say
and
Then,
where f12 = Q1[f2μ] − Q2[f1μ]. So,
This shows we can extend Noether's theorem to larger Lie algebras in a natural way.
Generalization of the proof
This applies to any local symmetry derivation Q satisfying QS ≈ 0, and also to more general local functional differentiable actions, including ones where the Lagrangian depends on higher derivatives of the fields. Let ε be any arbitrary smooth function of the spacetime (or time) manifold such that the closure of its support is disjoint from the boundary. ε is a test function. Then, because of the variational principle (which does not apply to the boundary, by the way), the derivation distribution q generated by q[ε][Φ(x)] = ε(x)Q[Φ(x)] satisfies q[ε][S] ≈ 0 for every ε, or more compactly, q(x)[S] ≈ 0 for all x not on the boundary (but remember that q(x) is a shorthand for a derivation distribution, not a derivation parametrized by x in general). This is the generalization of Noether's theorem.
To see how the generalization is related to the version given above, assume that the action is the spacetime integral of a Lagrangian that only depends on and its first derivatives. Also, assume
Then,
for all .
More generally, if the Lagrangian depends on higher derivatives, then
Examples
Example 1: Conservation of energy
Consider the specific case of a Newtonian particle of mass m, with coordinate x, moving under the influence of a potential V, coordinatized by time t. The action, S, is:
The first term in the brackets is the kinetic energy of the particle, while the second is its potential energy. Consider the generator of time translations Q = d/dt. In other words, . The coordinate x has an explicit dependence on time, whilst V does not; consequently:
so we can set
Then,
The right-hand side is the energy, and Noether's theorem states that dE/dt = 0 (i.e. the principle of conservation of energy is a consequence of invariance under time translations).
More generally, if the Lagrangian does not depend explicitly on time, the quantity
(called the Hamiltonian) is conserved.
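A worked version of this example in standard notation, assuming the one-dimensional Newtonian particle above; the last step uses the equation of motion m ẍ = -V′(x):

```latex
% Action and energy for the Newtonian particle of Example 1 (standard notation).
S = \int \Bigl[ \tfrac{1}{2} m \dot{x}^{2} - V(x) \Bigr]\, dt ,
\qquad
E = \tfrac{1}{2} m \dot{x}^{2} + V(x) .
% Conservation follows from the equation of motion  m \ddot{x} = -V'(x):
\frac{dE}{dt} = m\dot{x}\ddot{x} + V'(x)\,\dot{x}
              = \dot{x}\bigl(m\ddot{x} + V'(x)\bigr) = 0 .
```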
Example 2: Conservation of center of momentum
Still considering 1-dimensional time, let
for Newtonian particles where the potential only depends pairwise upon the relative displacement.
For , consider the generator of Galilean transformations (i.e. a change in the frame of reference). In other words,
And
This has the form of so we can set
Then,
where P is the total momentum, M is the total mass and x_CM is the center of mass. Noether's theorem states:
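A restatement of the conserved quantity in standard notation, assuming particles of mass m_i at positions x_i, total mass M, total momentum P and center of mass x_CM as in this example:

```latex
% Noether charge associated with Galilean boosts (one spatial dimension).
Q_{\text{boost}} \;=\; \sum_i m_i x_i \;-\; t \sum_i m_i \dot{x}_i
                 \;=\; M\,x_{\mathrm{CM}} - t\,P .
% Its conservation, together with dP/dt = 0, gives  M \dot{x}_{CM} = P,
% i.e. the center of mass moves with the constant velocity P / M.
```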
Example 3: Conformal transformation
Both examples 1 and 2 are over a 1-dimensional manifold (time). An example involving spacetime is a conformal transformation of a massless real scalar field with a quartic potential in (3 + 1)-Minkowski spacetime.
For Q, consider the generator of a spacetime rescaling. In other words,
The second term on the right hand side is due to the "conformal weight" of . And
This has the form of
(where we have performed a change of dummy indices) so set
Then
Noether's theorem states that (as one may explicitly check by substituting the Euler–Lagrange equations into the left hand side).
If one tries to find the Ward–Takahashi analog of this equation, one runs into a problem because of anomalies.
Applications
Application of Noether's theorem allows physicists to gain powerful insights into any general theory in physics, by just analyzing the various transformations that would make the form of the laws involved invariant. For example:
Invariance of an isolated system with respect to spatial translation (in other words, that the laws of physics are the same at all locations in space) gives the law of conservation of linear momentum (which states that the total linear momentum of an isolated system is constant)
Invariance of an isolated system with respect to time translation (i.e. that the laws of physics are the same at all points in time) gives the law of conservation of energy (which states that the total energy of an isolated system is constant)
Invariance of an isolated system with respect to rotation (i.e., that the laws of physics are the same with respect to all angular orientations in space) gives the law of conservation of angular momentum (which states that the total angular momentum of an isolated system is constant)
Invariance of an isolated system with respect to Lorentz boosts (i.e., that the laws of physics are the same with respect to all inertial reference frames) gives the center-of-mass theorem (which states that the center-of-mass of an isolated system moves at a constant velocity).
In quantum field theory, the analog to Noether's theorem, the Ward–Takahashi identity, yields further conservation laws, such as the conservation of electric charge from the invariance with respect to a change in the phase factor of the complex field of the charged particle and the associated gauge of the electric potential and vector potential.
The Noether charge is also used in calculating the entropy of stationary black holes.
See also
Conservation law
Charge (physics)
Gauge symmetry
Gauge symmetry (mathematics)
Invariant (physics)
Goldstone boson
Symmetry (physics)
References
Further reading
Online copy.
External links
(Original in Gott. Nachr. 1918:235–257)
Noether's Theorem at MathPages.
Articles containing proofs
Calculus of variations
Conservation laws
Concepts in physics
Eponymous theorems of physics
Partial differential equations
Physics theorems
Quantum field theory
Symmetry | Noether's theorem | [
"Physics",
"Mathematics"
] | 6,437 | [
"Quantum field theory",
"Equations of physics",
"Conservation laws",
"Quantum mechanics",
"Eponymous theorems of physics",
"nan",
"Geometry",
"Articles containing proofs",
"Symmetry",
"Physics theorems"
] |
150,183 | https://en.wikipedia.org/wiki/Eschede%20train%20disaster | On 3 June 1998, part of an ICE 1 train on the Hannover–Hamburg railway near Eschede in Lower Saxony, Germany, derailed and crashed into an overpass that crossed the railroad, which then collapsed onto the train. 101 people were killed and at least 88 were injured, making it the second-deadliest railway disaster in German history after the 1939 Genthin rail disaster, and the world's worst ever high-speed rail disaster.
The cause of the derailment was a single fatigue crack in one wheel, which caused a part of the wheel to become caught in a railroad switch (points), changing the direction of the switch as the train passed over it. This led to the train's carriages going down two separate tracks, causing the train to derail and crash into the pillars of a concrete road bridge, which then collapsed and crushed two coaches. The remaining coaches and the rear power car crashed into the wreckage.
After the incident, many investigations into the wheel fracture took place. Analysis concluded that the accident was caused by poor wheel design which allowed a fatigue fracture to develop on the wheel rim.
Investigators also considered other contributing factors, including the failure to stop the train, and maintenance procedures.
The disaster had legal and technical consequences including trials, fines and compensation payments. The wheel design was modified and train windows were made easier to break in an emergency.
A memorial place was opened at the place of the disaster.
Background
The InterCity Express 1, abbreviated as ICE 1, is the first German high-speed train and was introduced into regular service in 1991.
Timeline
Wheel fracture
ICE 1 trainset 51 was travelling as ICE 884 "Wilhelm Conrad Röntgen" from Munich to Hamburg. The train was scheduled to stop at Augsburg, Nürnberg, Würzburg, Fulda, Kassel, Göttingen, and Hanover before reaching Hamburg. After stopping in Hanover at 10:30, the train continued its journey northwards. About forty minutes away from Hamburg, south of central Eschede and near Celle, the steel tyre on a wheel on the third axle of the first car split and peeled away from the wheel, having been weakened by metal fatigue. Freed from the wheel, the tyre straightened out and was catapulted upwards, penetrating the floor of the train carriage, where it remained stuck.
The tyre embedded in the carriage was seen by Jörg Dittmann, one of the passengers in Coach 1. The tyre went through an armrest in his compartment between the seats where his wife and son were sitting. Dittmann took his wife and son out of the damaged coach and went to inform a conductor in the third coach.
The conductor, who noticed vibrations in the train, told Dittmann that company policy required him to investigate the circumstances before pulling the emergency brake. The conductor took one minute to reach the site in Coach 1. According to Dittmann, the train had begun to sway from side to side by then. The conductor did not show willingness to stop the train immediately, and wished to first investigate the incident more thoroughly. Dittmann could not find an emergency brake in the corridor and had not noticed that there was an emergency brake handle in his own compartment. The train crashed just as Dittmann was about to show the armrest puncture to the conductor.
Derailment
As the train passed over the first of two points, the embedded tyre slammed against the guide rail of the points, pulling it from the railway ties. This guide rail also penetrated the floor of the car, becoming embedded in the vehicle and lifting the bogie off the rails. At 10:59 local time (08:59 UTC), one of the now-derailed wheels struck the points lever of the second switch, changing its setting. The rear axles of car number 3 were switched onto a parallel track, and the entire car was thereby thrown sideways into the piers supporting a roadway overpass, destroying them.
Car number 4, likewise derailed by the violent deviation of car number 3 and still travelling at speed, passed intact under the bridge and rolled onto the embankment immediately behind it, striking several trees before coming to a stop. Two Deutsche Bahn railway workers who had been working near the bridge were killed instantly when the derailed car crushed them. The breaking of the car couplings caused the automatic emergency brakes to engage, and the mostly undamaged first three cars came to a stop.
Bridge collapse
The front power car and coaches one and two cleared the bridge. The third carriage struck the bridge piers, causing the bridge to collapse, but itself cleared the structure. Coach four also cleared the bridge, moved away from the track onto an embankment, and hit a group of trees before stopping. The falling bridge sections crushed the rear half of coach five; the restaurant coach, number six, was crushed nearly flat. With the track now completely obstructed by the collapsed bridge, the remaining cars jackknifed into the rubble in a zig-zag pattern: car 7, the service car, the restaurant car, the three first-class cars numbered 10 to 12, and the rear power car all derailed and slammed into the pile. The resulting chaos was likened to a partially collapsed folding ruler. An automobile was also found in the wreckage; it belonged to the two railway technicians killed and had probably been parked on the bridge before the accident.
Separated from the rest of the carriages, the detached front power car coasted for a further three kilometers (two miles) until it came to a stop after passing Eschede railway station.
The crash produced a sound that witnesses later described as "startling", "horribly loud", and "like a plane crash". People living nearby, alerted by the sound, were the first to arrive at the scene; Erika Karl, the first, photographed the site. She said that, upon hearing the noise, her husband initially believed there had been an aircraft accident. After the accident, eight of the ICE carriages occupied an area slightly longer than the length of a single carriage.
At 11:02, the local police declared an emergency. At 11:07, as the magnitude of the disaster became apparent, this was elevated to a "major emergency". At 12:30 the Celle district government declared a "catastrophic emergency" (civil state of emergency). More than 1,000 rescue workers from regional emergency services, fire departments, rescue services, the police and the army were dispatched. Some 37 emergency physicians, who happened to be attending a professional conference in nearby Hanover, also provided assistance during the early hours of the rescue effort, as did units of the British Forces Germany.
While the driver and many passengers in the front part of the train survived with minor to moderate injuries, very few passengers survived in the rear carriages, which crashed into the pile of concrete bridge debris at speed. In total 101 people were killed, including the two railway workers who had been standing under the bridge.
ICE 787, travelling from Hamburg to Hanover, had passed under the bridge going in the opposite direction only two minutes earlier. That train had passed the bridge one minute ahead of schedule, while the accident train was one minute behind schedule. Had both been on time, ICE 787 may have also been impacted by the derailment.
By 13:45 authorities had given emergency treatment to 87 people, of whom the 27 most severely injured were airlifted to hospitals.
Causes
The disintegrated resilient wheel was the cause of the accident, but several factors contributed to the severity of the outcome, including the proximity of the points (switch) to the bridge and the fact that the failed wheel was on a car near the front of the train, which caused many cars behind it to derail.
Wheel design
The ICE 1 trains were originally equipped with single-cast wheelsets, known as monobloc wheels. Once in service it soon became apparent that this design could, as a result of metal fatigue and uneven wear, result in resonance and vibration at cruising speed. Passengers noticed this particularly in the restaurant car, where there were reports of loud vibrations in the dinnerware and of glasses "creeping" across tables.
Managers in the railway organisation had experienced these severe vibrations on a previous trip and asked to have the problem solved. In response, engineers decided that the suspension of the ICE cars could be improved by fitting a rubber damping ring between the rail-contacting steel tyre and the steel wheel body. A similar design (known as resilient wheels) had been employed successfully in trams around the world, at much lower speeds. This kind of wheel, dubbed a wheel–tyre design, consisted of a wheel body surrounded by a rubber damper and then a relatively thin metal tyre. The new design was not tested at high speed in Germany before it was made operational, but it was successful at resolving the vibration at cruising speeds. Decades of experience at high speed gathered by train manufacturers and railway companies in Italy, France and Japan were not taken into account.
At the time, there were no facilities in Germany that could test the actual failure limit of the wheels, and so complete prototypes were never tested physically. The design and specification relied greatly on available materials data and theory. The very few laboratory and rail tests that were performed did not measure wheel behaviour with extended wear conditions or speeds greater than normal cruising. Nevertheless, over several years the wheels had been reliable and, until the accident, had not caused any major problems.
In July 1997, nearly one year before the disaster, Üstra, the company that operates Hanover's tram network, discovered fatigue cracks in dual-block wheels on its trams, which run at much lower speeds. It began changing wheels before fatigue cracks could develop, much earlier than was legally required by the specification. Üstra reported its findings in a warning to all other users of wheels built to similar designs, including Deutsche Bahn, in late 1997. According to Üstra, Deutsche Bahn replied by stating that it had not noticed any problems in its trains.
The Fraunhofer Institute for Structural Durability (Fraunhofer LBF) in Darmstadt was charged with the task of determining the cause of the accident. It was revealed later that the institute had told the DB management as early as 1992 about its concerns about possible wheel–tyre failure.
It was soon apparent that dynamic repetitive forces had not been considered in the modelling done during the design phase, and the resulting design lacked an adequate margin of safety. The following factors, overlooked during design, were noted:
The tyres were flattened into an ellipse as the wheel turned through each revolution (approximately 500,000 times during a typical day in service on an ICE train), with corresponding fatigue effects.
In contrast to the monobloc wheel design, cracks could form on the inside as well as the outside of the tyre.
As the tyre wore thinner, dynamic forces increased, causing crack growth.
Flat spots and ridges or swells in the tyre dramatically increased the dynamic forces on the assembly and greatly accelerated wear.
Failure to stop train
Failing to stop the train resulted in a catastrophic series of events. Had the train been stopped immediately after the disintegration of the wheel, it is unlikely that the subsequent events would have occurred.
Valuable time was lost when the train manager refused to stop the train until he had investigated the problem himself, saying this was company policy. This decision was upheld in court, absolving the train manager of all charges: as a customer-service employee and not a train maintainer or engineer, he had no more ability to make an engineering judgment about whether or not to stop the train than any passenger did.
Maintenance
At the time of the disaster, the technicians at Deutsche Bahn's maintenance facility in Munich used only standard flashlights for visual inspection of the tyres, instead of metal-fatigue detection equipment. Advanced testing machines had been used previously; however, the equipment generated many false positive error messages, so it was considered unreliable and its use was discontinued.
During the week prior to the Eschede disaster, three separate automated checks indicated that a wheel was defective. Investigators discovered, from a maintenance report generated by the train's on-board computer, that two months prior to the Eschede disaster, conductors and other train staff filed eight separate complaints about the noises and vibrations generated from the bogie with the defective wheel; the company did not replace the wheel. Deutsche Bahn said that its inspections were proper at the time and that the engineers could not have predicted the wheel fracture.
Other factors
The design of the overbridge may also have contributed to the accident: it was supported by two thin piers on either side of the track rather than spanning between solid abutments. The bridge that collapsed in the Granville rail disaster of 1977 had a similar weakness. The bridge built after the disaster is of a cantilevered design that does not have this vulnerability.
Another factor contributing to the casualty rate was the use of welds in the carriage bodies that "unzipped" during the crash.
Consequences
Legal
Immediately after the accident, Deutsche Bahn paid 30,000 Deutsche Marks (about US$19,000) for each fatality to the applicable families. At a later time Deutsche Bahn settled with some victims. Deutsche Bahn stated that it paid the equivalent of more than 30 million U.S. dollars to survivors and the families of victims.
In August 2002, two Deutsche Bahn officials and one engineer were charged with manslaughter. The trial lasted 53 days, with expert witnesses from around the world testifying. The case ended in a plea bargain in April 2003. Under the German code of criminal procedure, if the defendant is not found to bear substantial guilt, and if the state attorney and the defendant agree, the defendant may pay a fine and the criminal proceedings are dismissed with prejudice and without a verdict. Each defendant paid €10,000 (around US$12,000).
Technical
Within weeks, all wheels of similar design were replaced with monobloc wheels. The entire German railway network was checked for similar arrangements of switches close to possible obstacles.
Rescue workers at the crash site experienced considerable difficulties in cutting their way through the train to gain access to the victims. Both the aluminium framework and the pressure-proof windows offered unexpected resistance to rescue equipment. As a result, all trains were refitted with windows that have breaking seams.
Memorial
Udo Bauch, a survivor who was left disabled by the accident, built his own memorial with his own money. Bauch said that the chapel received 5,000 to 6,000 visitors per year. One year after Bauch's memorial was built, an official memorial, funded partly by Deutsche Bahn, was established.
The official memorial was opened on 11 May 2001 in the presence of 400 relatives as well as many dignitaries, rescuers and residents of Eschede. The memorial consists of 101 wild cherry trees, with each representing one fatality. The trees have been planted along the rails near the bridge and with the switch in front. From the field, a staircase leads up to the street and a gate; on the other side of the street a number of stairs lead further up to nowhere. There is an inscription on the side of the stone gate and an inscription on a memorial wall that also lists the names of the fatalities placed at the centre of the trees.
Dramatization
The Eschede derailment, as well as the investigation into the incident, was covered as the fifth episode of the first season of the National Geographic TV documentary series Seconds from Disaster, entitled "Derailment at Eschede" which was filmed on the Ecclesbourne Valley Railway in Derbyshire, UK.
See also
National Geographic Seconds from Disaster episodes
Lathen train collision – 2006 maglev train crash in Germany
Lists of rail accidents
List of structural failures and collapses
List of accidents and disasters by death toll
References
Citations
General references
The Eschede Reports
ICE Train Accident in Eschede – Recent News Summary
Official Eschede Website showing memorial
Further reading
O'Connor, Bryan, (NASA), "Eschede Train Disaster", Leadership ViTS Meeting, 7 May 2007
External links
The ICE/ICT pages
Eschede – Zug 884("Eschede – Train 884"), a German documentary film about the disaster by Raymond Ley (2008, 90 minutes).
"Das ICE-Unglück von Eschede" ("The ICE accident in Eschede")
Derailments in Germany
Railway accidents in 1998
Intercity Express
1998 in Germany
June 1998 events in Germany
Transport in Lower Saxony
Bridge disasters in Germany
Bridge disasters caused by collision
20th century in Lower Saxony
Accidents and incidents involving Deutsche Bahn
Engineering failures | Eschede train disaster | [
"Technology",
"Engineering"
] | 3,340 | [
"Systems engineering",
"Reliability engineering",
"Railway accidents and incidents",
"Technological failures",
"Bridge disasters caused by collision",
"Engineering failures",
"Civil engineering"
] |
6,620,973 | https://en.wikipedia.org/wiki/BKL%20singularity | A Belinski–Khalatnikov–Lifshitz (BKL) singularity is a model of the dynamic evolution of the universe near the initial gravitational singularity, described by an anisotropic, chaotic solution of the Einstein field equation of gravitation. According to this model, the universe is chaotically oscillating around a gravitational singularity in which time and space become equal to zero or, equivalently, the spacetime curvature becomes infinitely big. This singularity is physically real in the sense that it is a necessary property of the solution, and will appear also in the exact solution of those equations. The singularity is not artificially created by the assumptions and simplifications made by the other special solutions such as the Friedmann–Lemaître–Robertson–Walker, quasi-isotropic, and Kasner solutions.
The model is named after its authors Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, then working at the Landau Institute for Theoretical Physics.
The picture developed by BKL has several important elements. These are:
Near the singularity the evolution of the geometry at different spatial points decouples so that the solutions of the partial differential equations can be approximated by solutions of ordinary differential equations with respect to time for appropriately defined spatial scale factors. This is called the BKL conjecture.
For most types of matter the effect of the matter fields on the dynamics of the geometry becomes negligible near the singularity. Or, in the words of John Wheeler, "matter doesn't matter" near a singularity. The original BKL work assumed a negligible effect for all matter, but later they theorized that "stiff matter" (equation of state p = ε), equivalent to a massless scalar field, can have a modifying effect on the dynamics near the singularity.
The ordinary differential equations describing the asymptotics come from a class of spatially homogeneous solutions which constitute the Mixmaster dynamics: a complicated oscillatory and chaotic model that exhibits properties similar to those discussed by BKL.
The study of the dynamics of the universe in the vicinity of the cosmological singularity has become a rapidly developing field of modern theoretical and mathematical physics. The generalization of the BKL model to the cosmological singularity in multidimensional (Kaluza–Klein type) cosmological models has a chaotic character in the spacetimes whose dimensionality is not higher than ten, while in the spacetimes of higher dimensionalities a universe after undergoing a finite number of oscillations enters into monotonic Kasner-type contracting regime.
The development of cosmological studies based on superstring models has revealed some new aspects of the dynamics in the vicinity of the singularity. In these models, mechanisms of changing of Kasner epochs are provoked not by the gravitational interactions but by the influence of other fields present. It was proved that the cosmological models based on six main superstring models plus eleven-dimensional supergravity model exhibit the chaotic BKL dynamics towards the singularity. A connection was discovered between oscillatory BKL-like cosmological models and a special subclass of infinite-dimensional Lie algebras – the so-called hyperbolic Kac–Moody algebras.
Introduction
Modern cosmology is based on the special solutions of the Einstein field equations found by Alexander Friedmann in 1922–1924. The Universe is assumed homogeneous (space has the same metric properties (measures) at all points) and isotropic (space has the same measures in all directions). Friedmann's solutions allow two possible geometries for space: a closed model with a ball-like, outwards-bowed space (positive curvature) and an open model with a saddle-like, inwards-bowed space (negative curvature). In both models, the Universe is not standing still, it is constantly either expanding (becoming larger) or contracting (shrinking, becoming smaller). This was confirmed by Edwin Hubble, who established the Hubble redshift of receding galaxies. The present consensus is that the isotropic model, in general, gives an adequate description of the present state of the Universe; however, isotropy of the present Universe by itself is not a reason to expect that it is adequate for describing the early stages of Universe evolution. At the same time, it is obvious that in the real world homogeneity is, at best, only an approximation. Even if one can speak about a homogeneous distribution of matter density at distances that are large compared to the intergalactic space, this homogeneity vanishes at smaller scales. On the other hand, the homogeneity assumption goes very far in a mathematical aspect: it makes the solution highly symmetric, which can impart specific properties that disappear when considering a more general case.
Another important property of the isotropic model is the inevitable existence of a time singularity: time flow is not continuous, but stops or reverses after time reaches some very large or very small value. Between singularities, time flows in one direction: away from the singularity (arrow of time). In the open model, there is one time singularity so time is limited at one end but unlimited at the other, while in the closed model there are two singularities that limit time at both ends (the Big Bang and Big Crunch).
The only physically interesting properties of spacetimes (such as singularities) are those which are stable, i.e., those properties which still occur when the initial data is perturbed slightly. It is possible for a singularity to be stable and yet be of no physical interest: stability is a necessary but not a sufficient condition for physical relevance. For example, a singularity could be stable only in a neighbourhood of initial data sets corresponding to highly anisotropic universes. Since the actual universe is now apparently almost isotropic such a singularity could not occur in our universe. A sufficient condition for a stable singularity to be of physical interest is the requirement that the singularity be generic (or general). Roughly speaking, a stable singularity is generic if it occurs near every set of initial conditions and the non-gravitational fields are restricted in some specified way to "physically realistic" fields so that the Einstein equations, various equations of state, etc., are assumed to hold on the evolved spacetimes. It might happen that a singularity is stable under small variations of the true gravitational degrees of freedom, and yet it is not generic because the singularity depends in some way on the coordinate system, or rather on the choice of the initial hypersurface from which the spacetime is evolved.
For a system of non-linear differential equations, such as the Einstein equations, a general solution is not unambiguously defined. In principle, there may be multiple general integrals, and each of those may contain only a finite subset of all possible initial conditions. Each of those integrals may contain all required independent functions which, however, may be subject to some conditions (e.g., some inequalities). Existence of a general solution with a singularity, therefore, does not preclude the existence of other additional general solutions that do not contain a singularity. For example, there is no reason to doubt the existence of a general solution without a singularity that describes an isolated body with a relatively small mass.
It is impossible to find a general integral for all space and for all time. However, this is not necessary for resolving the problem: it is sufficient to study the solution near the singularity. This would also resolve another aspect of the problem: the characteristics of spacetime metric evolution in the general solution when it reaches the physical singularity, understood as a point where matter density and invariants of the Riemann curvature tensor become infinite.
Existence of physical time singularity
One of the principal problems studied by the Landau group (to which BKL belong) was whether relativistic cosmological models necessarily contain a time singularity or whether the time singularity is an artifact of the assumptions used to simplify these models. The independence of the singularity on symmetry assumptions would mean that time singularities exist not only in the special, but also in the general solutions of the Einstein equations. It is reasonable to suggest that if a singularity is present in the general solution, there must be some indications that are based only on the most general properties of the Einstein equations, although those indications by themselves might be insufficient for characterizing the singularity.
A criterion for generality of solutions is the number of independent space coordinate functions that they contain. These include only the "physically independent" functions whose number cannot be reduced by any choice of reference frame. In the general solution, the number of such functions must be enough to fully define the initial conditions (distribution and movement of matter, distribution of gravitational field) at some moment of time chosen as initial. This number is four for an empty (vacuum) space, and eight for a matter and/or radiation-filled space.
Previous work by the Landau group led to the conclusion that the general solution does not contain a physical singularity. This search for a broader class of solutions with a singularity was done, essentially, by a trial-and-error method, since a systematic approach to the study of the Einstein equations was lacking. A negative result obtained in this way is not convincing by itself; a solution with the necessary degree of generality would invalidate it, and at the same time would confirm any positive results related to the specific solution.
At that time, the only known indication for the existence of physical singularity in the general solution was related to the form of the Einstein equations written in a synchronous frame, that is, in a frame in which the proper time x0 = t is synchronized throughout the whole space; in this frame the space distance element dl is separate from the time interval dt. The Einstein equation written in synchronous frame gives a result in which the metric determinant g inevitably becomes zero in a finite time irrespective of any assumptions about matter distribution.
However, the efforts to find a general physical singularity were foregone after it became clear that the singularity mentioned above is linked with a specific geometric property of the synchronous frame: the crossing of time line coordinates. This crossing takes place on some encircling hypersurfaces which are four-dimensional analogs of the caustic surfaces in geometrical optics; g becomes zero exactly at this crossing. Therefore, although this singularity is general, it is fictitious, and not a physical one; it disappears when the reference frame is changed. This, apparently, dissuaded the researchers for further investigations along these lines.
Several years passed before the interest in this problem waxed again, when Roger Penrose published his theorems that linked the existence of a singularity of unknown character with some very general assumptions that did not have anything in common with a choice of reference frame. Other similar theorems were found later on by Hawking and Geroch (see Penrose–Hawking singularity theorems). This revived interest in the search for singular solutions.
Generalized homogeneous solution
In a space that is both homogeneous and isotropic the metric is determined completely, leaving free only the sign of the curvature. Assuming only space homogeneity with no additional symmetry such as isotropy leaves considerably more freedom in choosing the metric. The following pertains to the space part of the metric at a given instant of time t assuming a synchronous frame so that t is the same synchronised time for the whole space.
The BKL conjecture
In their 1970 work, BKL stated that as one approaches a singularity, terms containing time derivatives in Einstein's equations dominate over those containing spatial derivatives. This has since been known as the BKL conjecture and implies that Einstein's partial differential equations (PDE) are well approximated by ordinary differential equations (ODEs), whence the dynamics of general relativity effectively become local and oscillatory. The time evolution of fields at each spatial point is well approximated by the homogeneous cosmologies in the Bianchi classification.
By separating the time and space derivatives in the Einstein equations, for example, in the way used for the classification of homogeneous spaces, and then setting the terms containing space derivatives equal to zero, one can define the so-called truncated theory of the system (truncated equations). Then, the BKL conjecture can be made more specific:
Weak conjecture: As the singularity is approached the terms containing space derivatives in the Einstein equations are negligible in comparison to the terms containing time derivatives. Thus, as the singularity is approached the Einstein equations approach those found by setting derivative terms to zero. Thus, the weak conjecture says that the Einstein equations can be well approximated by the truncated equations in the vicinity of the singularity. Note that this does not imply that the solutions of the full equations of motion will approach the solutions to the truncated equations as the singularity is approached. This additional condition is captured in the strong version as follows.
Strong conjecture: As the singularity is approached the Einstein equations approach those of the truncated theory and in addition the solutions to the full equations are well approximated by solutions to the truncated equations.
In the beginning, the BKL conjecture seemed to be coordinate-dependent and rather implausible. Barrow and Tipler, for example, among the ten criticisms of BKL studies, include the inappropriate (according to them) choice of synchronous frame as a means to separate time and space derivatives. The BKL conjecture was sometimes rephrased in the literature as a statement that near the singularity only the time derivatives are important. Such a statement, taken at face value, is wrong or at best misleading since, as shown in the BKL analysis itself, space-like gradients of the metric tensor cannot be neglected for generic solutions of pure Einstein gravity in four spacetime dimensions, and in fact play a crucial role in the appearance of the oscillatory regime. However, there exist reformulations of Einstein theory in terms of new variables involving the relevant gradients, for example in Ashtekar-like variables, for which the statement about the dominant role of the time derivatives is correct. It is true that one gets at each spatial point an effective description of the singularity in terms of a finite dimensional dynamical system described by ordinary differential equations with respect to time, but the spatial gradients do enter these equations non-trivially.
Subsequent analysis by a large number of authors has shown that the BKL conjecture can be made precise and by now there is an impressive body of numerical and analytical evidence in its support. It is fair to say that we are still quite far from a proof of the strong conjecture. But there has been outstanding progress in simpler models. In particular, Berger, Garfinkle, Moncrief, Isenberg, Weaver, and others showed that, in a class of models, as the singularity is approached the solutions to the full Einstein field equations approach the "velocity term dominated" (truncated) ones obtained by neglecting spatial derivatives. Andersson and Rendall showed that for gravity coupled to a massless scalar field or a stiff fluid, for every solution to the truncated equations there exists a solution to the full field equations that converges to the truncated solution as the singularity is approached, even in the absence of symmetries. These results were generalized to also include p-form gauge fields. In these truncated models the dynamics are simpler, allowing a precise statement of the conjecture that could be proven. In the general case, the strongest evidence to date comes from numerical evolutions. Berger and Moncrief began a program to analyze generic cosmological singularities. While the initial work focused on symmetry reduced cases, more recently Garfinkle performed numerical evolution of space-times with no symmetries in which, again, the mixmaster behavior is apparent. Finally, additional support for the conjecture has come from a numerical study of the behavior of test fields near the singularity of a Schwarzschild black hole.
Kasner solution
The BKL approach to anisotropic (as opposed to isotropic) homogeneous spaces starts with a generalization of an exact particular solution derived by Kasner for a field in vacuum, in which the space is homogeneous and has a Euclidean metric that depends on time according to the Kasner metric
dl2 = t2p1 dx2 + t2p2 dy2 + t2p3 dz2, ds2 = dt2 − dl2
(dl is the spatial line element; dx, dy, dz are infinitesimal displacements in the three spatial dimensions, and t is the time elapsed since some initial moment t0 = 0). Here, p1, p2, p3 are any three numbers that satisfy the two Kasner conditions
p1 + p2 + p3 = 1, p12 + p22 + p32 = 1.
Because of these relations, only one of the three numbers is independent (two equations with three unknowns). All three numbers are never the same; two numbers are equal only for the sets of values (−1/3, 2/3, 2/3) and (0, 0, 1). In all other cases the numbers are different: one number is negative and the other two are positive. This is partially proved by squaring both sides of the first condition and developing the square:
(p1 + p2 + p3)2 = p12 + p22 + p32 + 2(p1p2 + p1p3 + p2p3) = 1.
The term p12 + p22 + p32 is equal to 1 by dint of the second condition, and therefore the term with the mixed products must be zero. This is possible only if at least one of p1, p2, p3 is negative.
If the numbers are arranged in increasing order, p1 < p2 < p3, they vary in the intervals −1/3 ≤ p1 ≤ 0, 0 ≤ p2 ≤ 2/3, 2/3 ≤ p3 ≤ 1.
The Kasner metric corresponds to a flat, homogeneous but anisotropic space in which all volumes increase with time in such a way that the linear distances along two axes y and z increase while the distance along the axis x decreases. The moment t = 0 causes a singularity in the solution; the singularity in the metric at t = 0 cannot be avoided by any reference frame transformation. At the singularity, the invariants of the four-dimensional curvature tensor go to infinity. An exception is the case p1 = p2 = 0, p3 = 1; these values correspond to a flat spacetime: the transformation t sh z = ζ, t ch z = τ turns the Kasner metric into a Galilean (flat) one.
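This exceptional case can be verified directly (a short check added for orientation, using the Kasner form quoted above). For (p_1, p_2, p_3) = (0, 0, 1) the metric is
ds^{2} = dt^{2} - dx^{2} - dy^{2} - t^{2}\,dz^{2} ,
and with \zeta = t\,\mathrm{sh}\, z, \tau = t\,\mathrm{ch}\, z one has \tau^{2} - \zeta^{2} = t^{2} and
d\tau^{2} - d\zeta^{2} = dt^{2} - t^{2}\,dz^{2} ,
so that ds^{2} = d\tau^{2} - dx^{2} - dy^{2} - d\zeta^{2}, the flat (Galilean) metric; the singularity at t = 0 is then only an artifact of the coordinates.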
BKL parametrize the numbers p1, p2, p3 in terms of a single independent (real) parameter u (the Lifshitz–Khalatnikov parameter) as follows:
p1(u) = −u/(1 + u + u2), p2(u) = (1 + u)/(1 + u + u2), p3(u) = u(1 + u)/(1 + u + u2).
The Kasner index parametrization appears mysterious until one thinks about the two constraints on the indices. Both constraints fix the overall scale of the indices so that only their ratios can vary. It is natural to pick one of those ratios as a new parameter, which can be done in six different ways. Picking u = u32 = p3 / p2, for example, it is trivial to express all six possible ratios in terms of it. Eliminating p3 = up2 first, and then using the linear constraint to eliminate p1 = 1 − p2 − up2 = 1 − (1 + u)p2, the quadratic constraint reduces to a quadratic equation in p2,
(1 + u + u2)p22 − (1 + u)p2 = 0,
with roots p2 = 0 (obvious) and p2 = (1 + u) / (1 + u + u2), from which p1 and p3 are then obtained by back substitution. One can define six such parameters uab = pa / pb, for which pc ≤ pb ≤ pa when (c, b, a) is a cyclic permutation of (1, 2, 3).
All different values of p1, p2, p3 ordered as above are obtained with u running in the range u ≥ 1. The values u < 1 are brought into this range by the substitution u → 1/u, which leaves p1 unchanged and interchanges p2 and p3:
p1(1/u) = p1(u), p2(1/u) = p3(u), p3(1/u) = p2(u).
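As a quick numerical illustration of this parametrization (a worked example added for orientation): taking u = 2 gives
p_1 = -\tfrac{2}{7}, \qquad p_2 = \tfrac{3}{7}, \qquad p_3 = \tfrac{6}{7} ,
and indeed -\tfrac{2}{7} + \tfrac{3}{7} + \tfrac{6}{7} = 1 and \tfrac{4}{49} + \tfrac{9}{49} + \tfrac{36}{49} = 1, so both Kasner conditions are satisfied; the inverted value u = 1/2 reproduces the same three exponents with p_2 and p_3 interchanged.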
In the generalized solution, the Kasner form applies only to the asymptotic metric (the metric close to the singularity t = 0), that is, to the major terms of its series expansion in powers of t. In the synchronous reference frame it is written with a space distance element
dl2 = (a2 lα lβ + b2 mα mβ + c2 nα nβ) dxα dxβ,
where
a ~ tpl, b ~ tpm, c ~ tpn.
The three-dimensional vectors l, m, n define the directions along which the space distances change with time by the above power laws. These vectors, as well as the numbers pl, pm, pn which, as before, are related by pl + pm + pn = pl2 + pm2 + pn2 = 1, are functions of the space coordinates. The powers pl, pm, pn are not arranged in increasing order; the symbols p1, p2, p3 are reserved for the numbers that remain arranged in increasing order. The determinant of this metric is
g = a2b2c2v2,
where v = l[mn]. It is convenient to introduce the quantities
λ = (l rot l)/v, μ = (m rot m)/v, ν = (n rot n)/v.
The space metric is anisotropic because the powers of t cannot all have the same values. On approaching the singularity at t = 0, the linear distances in each space element decrease in two directions and increase in the third direction. The volume of the element decreases in proportion to t.
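The last statement is a direct consequence of the first Kasner condition (a one-line check added for orientation): since the linear scales along the three directions vary as t^{p_l}, t^{p_m}, t^{p_n}, the volume of a space element scales as
t^{p_l}\, t^{p_m}\, t^{p_n} = t^{\,p_l + p_m + p_n} = t ,
so it shrinks in proportion to t as t → 0.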
This metric is introduced into the Einstein equations by substituting the respective metric tensor γαβ without defining a priori the dependence of a, b, c on t:
where the dot above a symbol designates differentiation with respect to time. The Einstein equation takes the form
All its terms are of second order in the large (at t → 0) quantity 1/t. In the Einstein equations, terms of such order appear only from terms that are time-differentiated. If the components of Pαβ do not include terms of order higher than two, then
where the indices l, m, n designate tensor components along the directions l, m, n. These equations, together with the metric ansatz above, give expressions for the powers that satisfy the Kasner conditions.
However, the presence of one negative power among the 3 powers pl, pm, pn results in the appearance of terms from Pαβ with an order greater than t−2. If the negative power is pl (pl = p1 < 0), then Pαβ contains the coordinate function λ and the equations become
Here, the second terms are of order t−2(pm + pn − pl), whereby pm + pn − pl = 1 + 2 |pl| > 1. To remove these terms and restore the metric, it is necessary to impose on the coordinate functions the condition λ = 0.
The remaining three Einstein equations contain only first order time derivatives of the metric tensor. They give three time-independent relations that must be imposed as necessary conditions on the coordinate functions. This, together with the condition λ = 0, makes four conditions. These conditions bind ten different coordinate functions: three components of each of the vectors l, m, n, and one function in the powers of t (any one of the functions pl, pm, pn, which are bound by the Kasner conditions). When calculating the number of physically arbitrary functions, it must be taken into account that the synchronous system used here allows time-independent arbitrary transformations of the three space coordinates. Therefore, the final solution contains overall 10 − 4 − 3 = 3 physically arbitrary functions, which is one less than what is needed for the general solution in vacuum.
The degree of generality reached at this point is not lessened by introducing matter; matter is written into the metric and contributes four new coordinate functions necessary to describe the initial distribution of its density and the three components of its velocity. This makes it possible to determine matter evolution merely from the laws of its movement in an a priori given gravitational field, which are the hydrodynamic equations
where ui is the 4-dimensional velocity, and ε and σ are the densities of energy and entropy of matter. For the ultrarelativistic equation of state p = ε/3 the entropy is σ ~ ε3/4. The major terms are those that contain time derivatives. From the space components of these equations one has
resulting in
where 'const' are time-independent quantities. Additionally, from the identity uiui = 1 one has (because all covariant components of uα are to the same order)
where un is the velocity component along the direction of n that is connected with the highest (positive) power of t (supposing that pn = p3). From the above relations, it follows that
or
The above equations can be used to confirm that the components of the matter stress-energy-momentum tensor standing in the right hand side of the equations
are, indeed, of lower order in 1/t than the major terms on their left-hand sides. In these equations the presence of matter results only in a change of the relations imposed on their constituent coordinate functions.
The fact that ε becomes infinite by this law confirms that the solution describes a physical singularity at any values of the powers p1, p2, p3, with the sole exception of the values (0, 0, 1). For these last values, the singularity is non-physical and can be removed by a change of reference frame.
The fictitious singularity corresponding to the powers (0, 0, 1) arises as a result of time-line coordinates crossing over some 2-dimensional "focal surface". As pointed out above, a synchronous reference frame can always be chosen in such a way that this inevitable time-line crossing occurs exactly on such a surface (instead of a 3-dimensional caustic surface). Therefore, a solution with such a fictitious singularity, simultaneous for the whole space, must exist with a full set of arbitrary functions needed for the general solution. Close to the point t = 0 it allows a regular expansion by whole powers of t.
Oscillating mode towards the singularity
The general solution by definition is completely stable; otherwise the Universe would not exist. Any perturbation is equivalent to a change in the initial conditions at some moment of time; since the general solution allows arbitrary initial conditions, the perturbation is not able to change its character. Looked at from this angle, the four conditions imposed on the coordinate functions in the solution above are of different types: three conditions that arise from the Einstein field equations are "natural"; they are a consequence of the structure of the equations. However, the additional condition λ = 0 that causes the loss of one derivative function is of an entirely different type: instability caused by perturbations can break this condition. The action of such a perturbation must bring the model to another, more general, mode. The perturbation cannot be considered as small: a transition to a new mode exceeds the range of very small perturbations.
The analysis of the behavior of the model under perturbative action, performed by BKL, delineates a complex oscillatory mode on approaching the singularity. They could not give all details of this mode in the broad frame of the general case. However, BKL explained the most important properties and character of the solution on specific models that allow far-reaching analytical study.
These models are based on a homogeneous space metric of a particular type. Supposing a homogeneity of space without any additional symmetry leaves a great freedom in choosing the metric. All possible homogeneous (but anisotropic) spaces are classified, according to Bianchi, in several Bianchi types (Type I to IX). (see also Generalized homogeneous solution) BKL investigate only spaces of Bianchi Types VIII and IX.
If the metric has the form given above, then for each type of homogeneous space there exists some functional relation between the reference vectors l, m, n and the space coordinates. The specific form of this relation is not important. The important fact is that for Type VIII and IX spaces, the quantities λ, μ, ν are constants while all "mixed" products l rot m, l rot n, m rot l, etc. are zeros. For Type IX spaces, the quantities λ, μ, ν have the same sign and one can write λ = μ = ν = 1 (the simultaneous sign change of the 3 constants does not change anything). For Type VIII spaces, 2 constants have a sign that is opposite to the sign of the third constant; one can write, for example, λ = − 1, μ = ν = 1.
The study of the effect of the perturbation on the "Kasner mode" is thus confined to a study on the effect of the λ-containing terms in the Einstein equations. Type VIII and IX spaces are the most suitable models for such a study. Since all 3 quantities λ, μ, ν in those Bianchi types differ from zero, the condition λ = 0 does not hold irrespective of which direction l, m, n has negative power law time dependence.
The Einstein equations for the Type VIII and Type IX space models are
(the remaining components are identically zero). These equations contain only functions of time; this is a condition that has to be fulfilled in all homogeneous spaces. These equations are exact, and their validity does not depend on how near one is to the singularity at t = 0.
The time derivatives in these equations take a simpler form if a, b, c are substituted by their logarithms α, β, γ (a = eα, b = eβ, c = eγ),
substituting the variable t for τ according to dt = abc dτ:
Then (subscripts denote differentiation by τ):
Adding together the equations and substituting on the left-hand side the sum (α + β + γ)ττ, one obtains an equation containing only first derivatives, which is the first integral of the system:
This equation plays the role of a binding condition imposed on the initial state. The Kasner mode is a solution when all terms on the right-hand sides are ignored. But such a situation cannot go on (at t → 0) indefinitely because among those terms there are always some that grow. Thus, if the negative power is in the function a(t) (pl = p1), then the perturbation of the Kasner mode will arise from the terms λ2a4; the rest of the terms will decrease with decreasing t. If only the growing terms are left on the right-hand sides, one obtains the system:
(compare with the equations above; below it is substituted λ2 = 1). The solution of these equations must describe the metric evolution from the initial state, in which it is described by the Kasner metric with a given set of powers (with pl < 0); let pl = p1, pm = p2, pn = p3 so that
Then
where Λ is constant. The initial conditions are redefined as
The equations are easily integrated; the solution that satisfies the above condition is
where b0 and c0 are two more constants.
It can easily be seen how these functions behave asymptotically at t → 0. The asymptotic expressions of these functions and of the function t(τ) at τ → −∞ are
Expressing a, b, c as functions of t, one has
where
Then
The above shows that the perturbation acts in such a way that it replaces one Kasner mode with another Kasner mode, and in this process the negative power of t flips from direction l to direction m: if before it was pl < 0, now it is p'm < 0. During this change the function a(t) passes through a maximum and b(t) passes through a minimum; b, which before was decreasing, now increases; a, which was increasing, now decreases; and the decreasing c(t) decreases further. The perturbation itself (λ2a4 in the equation for α), which before was increasing, now begins to decrease and die away. Further evolution similarly causes an increase in the perturbation from the terms with μ2 (instead of λ2), the next change of the Kasner mode, and so on.
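For orientation, the effect of a single such transition on the Kasner exponents can be written in the form commonly quoted in the literature (it is consistent with the relation Λ' = Λ(1 − 2|p1(u)|) used below): if the old exponents are p_l = p_1 < 0, p_m = p_2, p_n = p_3, the new ones are
p_l' = \frac{|p_1|}{1 - 2|p_1|}, \qquad p_m' = -\,\frac{2|p_1| - p_2}{1 - 2|p_1|}, \qquad p_n' = \frac{p_3 - 2|p_1|}{1 - 2|p_1|}, \qquad \Lambda' = \left(1 - 2|p_1|\right)\Lambda .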
It is convenient to write this power substitution rule with the help of the parametrization: if
pl = p1(u), pm = p2(u), pn = p3(u),
then
p'l = p2(u − 1), p'm = p1(u − 1), p'n = p3(u − 1).
The greater of the two positive powers remains positive.
BKL call this flip of negative power between directions a Kasner epoch. The key to understanding the character of metric evolution on approaching the singularity is exactly this process of Kasner epoch alternation with flipping of the powers pl, pm, pn by this rule.
The successive alternations with flipping of the negative power p1 between directions l and m (Kasner epochs) continue by depletion of the whole part of the initial u until the moment at which u < 1. The value u < 1 transforms into u > 1 by the substitution u → 1/u given above; at this moment the negative power is pl or pm while pn becomes the lesser of the two positive numbers (pn = p2). The next series of Kasner epochs then flips the negative power between directions n and l or between n and m. At an arbitrary (irrational) initial value of u this process of alternation continues without limit.
In the exact solution of the Einstein equations, the powers pl, pm, pn lose their original precise sense. This circumstance introduces some "fuzziness" in the determination of these numbers (and, together with them, of the parameter u) which, although small, makes the analysis of any definite (for example, rational) values of u meaningless. Therefore, only those laws that concern arbitrary irrational values of u have any particular meaning.
The larger periods in which the scales of space distances along two axes oscillate while distances along the third axis decrease monotonously, are called eras; volumes decrease by a law close to ~ t. On transition from one era to the next, the direction in which distances decrease monotonously, flips from one axis to another. The order of these transitions acquires the asymptotic character of a random process. The same random order is also characteristic for the alternation of the lengths of successive eras (by era length, BKL understand the number of Kasner epoch that an era contains, and not a time interval).
To each era (the s-th era) corresponds a series of values of the parameter u starting from the greatest, u(s)max, and running through the values u(s)max − 1, u(s)max − 2, ..., down to the smallest, u(s)min < 1. Then
u(s)min = x(s), u(s)max = k(s) + x(s),
that is, k(s) = [u(s)max], where the brackets mean the whole (integer) part of the value. The number k(s) is the era length, measured by the number of Kasner epochs that the era contains. For the next era
u(s+1)max = 1/x(s).
In the limitless series of numbers u composed by these rules there are infinitesimally small (but never zero) values x(s) and correspondingly infinitely large lengths k(s).
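Because the epoch and era rules above amount to iterating a simple map of the single parameter u, they are easy to explore numerically. The following sketch (an illustrative aid only, not part of the original analysis; the starting value and names are arbitrary) prints the Kasner exponents of successive epochs using the parametrization quoted earlier.

from fractions import Fraction  # exact arithmetic; an irrational u would need, e.g., mpmath

def kasner_exponents(u):
    """Kasner exponents (p1, p2, p3) from the Lifshitz-Khalatnikov parameter u >= 1."""
    d = 1 + u + u * u
    return -u / d, (1 + u) / d, u * (1 + u) / d

u = Fraction(52, 15)   # arbitrary rational starting value, chosen only for illustration
for epoch in range(8):
    if u < 1:          # the era has ended: the rule u -> 1/u starts the next era
        u = 1 / u
    p1, p2, p3 = kasner_exponents(u)
    # both Kasner conditions hold identically at every step
    assert p1 + p2 + p3 == 1 and p1**2 + p2**2 + p3**2 == 1
    print(f"epoch {epoch}: u = {u}, p = ({p1}, {p2}, {p3})")
    u -= 1             # one Kasner epoch: u -> u - 1 (the negative power flips direction)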
The series of eras becomes denser on approaching t = 0. However, the natural variable for describing the time course of this evolution is not the world time t, but its logarithm, ln t, by which the whole process of reaching the singularity is extended to −∞.
According to the solution above, the function among a, b, c that passes through a maximum during a transition between Kasner epochs attains a peak value
where it is supposed that amax is large compared to b0 and c0; here u is the value of the parameter in the Kasner epoch before the transition. It can be seen from here that the peaks of consecutive maxima during each era are gradually lowered. Indeed, in the next Kasner epoch this parameter has the value u = u − 1, and Λ is substituted by Λ' = Λ(1 − 2|p1(u)|). Therefore, the ratio of two consecutive maxima is
and finally
The above are solutions of the Einstein equations in vacuum. As for the pure Kasner mode, matter does not change the qualitative properties of this solution and can be written into it disregarding its reaction on the field. However, if one does this for the model under discussion, understood as an exact solution of the Einstein equations, the resulting picture of matter evolution would not have a general character and would be specific to the high symmetry inherent in the present model. Mathematically, this specificity is related to the fact that for the homogeneous space geometry discussed here, the relevant Ricci tensor components are identically zero and therefore the Einstein equations would not allow movement of matter (which would give non-zero stress-energy-momentum tensor components). In other words, the synchronous frame must also be co-moving with respect to matter. If one substitutes uα = 0, u0 = 1, one obtains ε ~ (abc)−4/3 ~ t−4/3.
This difficulty is avoided if one includes in the model only the major terms of the limiting (at t → 0) metric and writes into it matter with an arbitrary initial distribution of densities and velocities. Then the course of evolution of matter is determined by its general laws of movement, which lead to the relations above. During each Kasner epoch, the density increases by the law
where p3 is, as above, the greatest of the numbers p1, p2, p3. Matter density increases monotonously during all evolution towards the singularity.
Metric evolution
Very large u values correspond to Kasner powers
p1 ≈ −1/u, p2 ≈ 1/u, p3 ≈ 1 − 1/u2,
which are close to the values (0, 0, 1). The two values that are close to zero are also close to each other, and therefore the changes in two out of the three types of "perturbations" (the terms with λ, μ and ν on the right-hand sides of the equations) are also very similar. If at the beginning of such a long era these terms are very close in absolute value at the moment of transition between two Kasner epochs (or are made so artificially by assigning initial conditions), then they will remain close during the greatest part of the length of the whole era. In this case (BKL call this the case of small oscillations), analysis based on the action of one type of perturbation becomes incorrect; one must take into account the simultaneous effect of two perturbation types.
Two perturbations
Consider a long era, during which two of the functions a, b, c (let them be a and b) undergo small oscillations while the third function (c) decreases monotonously. The latter function quickly becomes small; consider the solution just in the region where one can ignore c in comparison to a and b. The calculations are first done for the Type IX space model by substituting accordingly λ = μ = ν = 1.
After ignoring function c, the first 2 equations give
and can be used as a third equation, which takes the form
The solution of this equation is written in the form
where α0, ξ0 are positive constants, and τ0 is the upper limit of the era for the variable τ. It is convenient to introduce further a new variable (instead of τ)
Then
These equations are transformed by introducing the variable χ = α − β:
Decrease of τ from τ0 to −∞ corresponds to a decrease of ξ from ξ0 to 0. The long era with close a and b (that is, with small χ), considered here, is obtained if ξ0 is a very large quantity. Indeed, at large ξ the solution in the first approximation in 1/ξ is
where A is a constant; the multiplier makes χ a small quantity, so that one can substitute sh 2χ ≈ 2χ.
From one obtains
After determining α and β from these expressions and expanding eα and eβ in series according to the above approximation, one obtains finally:
The relation between the variable ξ and time t is obtained by integration of the definition dt = abc dτ which gives
The constant c0 (the value of c at ξ = ξ0) must be small, in accordance with the assumption that c can be ignored in comparison with a and b.
Let us now consider the domain ξ ≪ 1. Here the major terms in the solution are:
where k is a constant in the range −1 < k < 1; this condition ensures that the last term is small (sh 2χ contains ξ2k and ξ−2k). Then, after determining α, β, and t, one obtains
This is again a Kasner mode with the negative t power present in the function c(t).
These results picture an evolution that is qualitatively similar to that described above. During a long period of time that corresponds to a large decreasing ξ value, the two functions a and b oscillate while remaining close in magnitude; at the same time, both functions slowly decrease. The period of oscillations is constant in the variable ξ: Δξ = 2π (or, which is the same, with a constant period in logarithmic time: Δ ln t = 2πA2). The third function, c, decreases monotonously by a law close to c = c0t/t0.
This evolution continues until ξ ≈ 1, after which these formulas are no longer applicable. Its time duration corresponds to a change of t from t0 to the value t1, related to ξ0 according to
The relationship between ξ and t during this time can be presented in the form
After that, as seen from the solution above, the decreasing function c starts to increase while the functions a and b start to decrease. This Kasner epoch continues until the terms c2/a2b2 become ~ t2 and the next series of oscillations begins.
The law of density change during the long era under discussion is obtained by substitution into the formulas above:
When ξ changes from ξ0 to ξ ≈ 1, the density increases by the corresponding factor.
It must be stressed that although the function c(t) changes by a law close to c ~ t, the metric does not correspond to a Kasner metric with powers (0, 0, 1). The latter corresponds to an exact solution found by Taub, which is allowed by these equations and in which
where p, δ1, δ2 are constants. In the asymptotic region τ → −∞, one can obtain from here a = b = const, c = const·t after the substitution epτ = t. In this metric, the singularity at t = 0 is non-physical.
Let us now describe the analogous study of the Type VIII model, substituting λ = −1, μ = ν = 1 into the equations.
If during the long era the monotonically decreasing function is a, nothing changes in the foregoing analysis: ignoring a2 on the right-hand side of the equations brings one back to the same equations as before (with altered notation). Some changes occur, however, if the monotonically decreasing function is b or c; let it be c.
As before, one has equation with the same symbols, and, therefore, the former expressions for the functions a(ξ) and b(ξ), but equation is replaced by
The major term at large ξ now becomes
so that
The value of c as a function of time t is again c = c0t/t0 but the time dependence of ξ changes. The length of a long era depends on ξ0 according to
On the other hand, the value ξ0 determines the number of oscillations of the functions a and b during an era (equal to ξ0/2π). Given the length of an era in logarithmic time (i.e., with given ratio t0/t1) the number of oscillations for Type VIII will be, generally speaking, less than for Type IX. For the period of oscillations one gets now Δ ln t = πξ/2; contrary to Type IX, the period is not constant throughout the long era, and slowly decreases along with ξ.
The small-time domain
Long eras violate the "regular" course of evolution, which makes it difficult to study the evolution of time intervals spanning several eras. It can be shown, however, that such "abnormal" cases appear in the spontaneous evolution of the model to a singular point in the asymptotically small times t at sufficiently large distances from a start point with arbitrary initial conditions. Even in long eras, both oscillatory functions remain so different during transitions between Kasner epochs that the transition occurs under the influence of only one perturbation. All results in this section relate equally to models of the types VIII and IX.
During each Kasner epoch abc = Λt, i.e. α + β + γ = ln Λ + ln t. On changing over from one epoch (with a given value of the parameter u) to the next epoch, the constant Λ is multiplied by 1 + 2p1 = (1 − u + u²)/(1 + u + u²) < 1. Thus a systematic decrease in Λ takes place. But it is essential that the mean (with respect to the lengths k of eras) value of the entire variation of ln Λ during an era is finite. Actually the divergence of the mean value could be due only to a too rapid increase of this variation with increasing k. For large values of the parameter u, ln(1 + 2p1) ≈ −2/u. For a large k the maximal value u(max) = k + x ≈ k. Hence the entire variation of ln Λ during an era is given by a sum of the form
with only the terms that correspond to large values of u written down. When k increases this sum increases as ln k. But the probability for the appearance of an era of large length k decreases as 1/k² according to ; hence the mean value of the sum above is finite. Consequently, the systematic variation of the quantity ln Λ over a large number of eras will be proportional to this number. But it is seen in that with t → 0 the number s increases merely as ln |ln t|. Thus in the asymptotic limit of arbitrarily small t the term ln Λ can indeed be neglected as compared to ln t. In this approximation
where Ω denotes the "logarithmic time"
and the process of epoch transitions can be regarded as a series of brief time flashes. The magnitudes of maxima of the oscillating scale functions are also subject to a systematic variation. From for u ≫ 1 it follows that . In the same way as it was done above for the quantity ln Λ, one can hence deduce that the mean decrease in the height of the maxima during an era is finite and the total decrease over a large number of eras increases with t → 0 merely as ln Ω. At the same time the lowering of the minima, and by the same token the increase of the amplitude of the oscillations, proceed () proportional to Ω. In correspondence with the adopted approximation the lowering of the maxima is neglected in comparison with the increase of the amplitudes so for the maximal values of all oscillating functions and the quantities α, β, γ run only through negative values that are connected with one another at each instant of time by the relation .
Treating such epoch changes as instantaneous, the transition periods are ignored as small in comparison to the epoch length; this condition is actually fulfilled. Replacement of α, β, and γ maxima with zeros requires that the quantities ln (|p1|Λ) be small in comparison with the amplitudes of oscillations of the respective functions. As mentioned above, during transitions between eras |p1| values can become very small, while their magnitude and probability of occurrence are not related to the oscillation amplitudes at the respective moment. Therefore, in principle, it is possible to reach such small |p1| values that the above condition (zero maxima) is violated. Such a drastic drop of can lead to various special situations in which the transition between Kasner epochs by the rule becomes incorrect (including the situations described above). These "dangerous" situations could break the laws used for the statistical analysis below. As mentioned, however, the probability for such deviations converges asymptotically to zero; this issue will be discussed below.
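The factor 1 + 2p1 = (1 − u + u²)/(1 + u + u²) quoted above, and the finiteness of the mean variation of ln Λ per era, are easy to check numerically. The sketch below assumes the usual u-parametrization of the Kasner exponents, p1 = −u/(1 + u + u²), p2 = (1 + u)/(1 + u + u²), p3 = u(1 + u)/(1 + u + u²); the function and variable names are illustrative only.

```python
# Minimal numerical sketch, assuming the standard u-parametrization of the
# Kasner exponents (names are illustrative).
import math

def kasner_exponents(u):
    """Kasner exponents p1 <= p2 <= p3 in the u-parametrization (u >= 1)."""
    d = 1.0 + u + u * u
    return -u / d, (1.0 + u) / d, u * (1.0 + u) / d

# The multiplier of Lambda on an epoch change, 1 + 2*p1, equals
# (1 - u + u^2)/(1 + u + u^2), and for large u its logarithm behaves as -2/u.
for u in (1.5, 3.0, 10.0, 100.0):
    p1, p2, p3 = kasner_exponents(u)
    factor = 1.0 + 2.0 * p1
    assert abs(factor - (1.0 - u + u * u) / (1.0 + u + u * u)) < 1e-12
    print(u, math.log(factor), -2.0 / u)

# The variation of ln(Lambda) over an era of length k grows only like ln k, while
# the probability of an era of length k falls off like 1/k^2, so the mean variation
# per era stays finite; a crude check: sum_k ln(k)/k^2 converges (to about 0.94).
print(sum(math.log(k) / k**2 for k in range(2, 200000)))
```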
Consider an era that contains k Kasner epochs with a parameter u running through the values
and let α and β be the oscillating functions during this era (Fig. 4).
Initial moments of Kasner epochs with parameters un are Ωn. At each initial moment, one of the values α or β is zero, while the other has a minimum. The values of α or β at the consecutive minima, that is, at the moments Ωn, are
(not distinguishing minima of α and β). The values δn that measure those minima in the respective Ωn units can run between 0 and 1. The function γ decreases monotonically during this era; according to , its value at the moment Ωn is
During the epoch starting at moment Ωn and ending at moment Ωn+1 one of the functions α or β increases from −δnΩn to zero while the other decreases from 0 to −δn+1Ωn+1 by linear laws, respectively:
and
resulting in the recurrence relation
and for the logarithmic epoch length
where, for short, f(u) = 1 + u + u2. The sum of n epoch lengths is obtained by the formula
It can be seen from that |αn+1| > |αn|, i.e., the oscillation amplitudes of functions α and β increase during the whole era although the factors δn may be small. If the minimum at the beginning of an era is deep, the next minima will not become shallower; in other words, the residue |α − β| at the moment of transition between Kasner epochs remains large. This assertion does not depend upon era length k because transitions between epochs are determined by the common rule also for long eras.
The last oscillation amplitude of functions α or β in a given era is related to the amplitude of the first oscillation by the relationship |αk−1| = |α0| (k + x) / (1 + x). Even at ks as small as several units x can be ignored in comparison to k so that the increase of α and β oscillation amplitudes becomes proportional to the era length. For functions a = eα and b = eβ this means that if the amplitude of their oscillations in the beginning of an era was A0, at the end of this era the amplitude will become .
The length of Kasner epochs (in logarithmic time) also increases inside a given era; it is easy to calculate from that Δn+1 > Δn. The total era length is
(the term with 1/x arises from the last, k-th, epoch whose length is great at small x; cf. Fig. 2). Moment Ωk when the k-th epoch of a given era ends is at the same time moment Ω'0 of the beginning of the next era.
In the first Kasner epoch of the new era function γ is the first to rise from the minimal value γk = − Ωk (1 − δk) that it reached in the previous era; this value plays the role of a starting amplitude δ'0Ω'0 for the new series of oscillations. It is easily obtained that:
It is obvious that δ'0Ω'0 > δ0Ω0. Even at not very great k the amplitude increase is very significant: function c = eγ begins to oscillate from amplitude . The issue about the above-mentioned "dangerous" cases of drastic lowering of the upper oscillation limit is left aside for now.
According to the increase in matter density during the first (k − 1) epochs is given by the formula
For the last k epoch of a given era, at u = x < 1 the greatest power is p2(x) (not p3(x) ). Therefore, for the density increase over the whole era one obtains
Therefore, even at not very great k values, . During the next era (with a length k ' ) density will increase faster because of the increased starting amplitude A0': , etc. These formulae illustrate the steep increase in matter density.
Statistical analysis near the singularity
The sequence of era lengths k(s), measured by the number of Kasner epochs contained in them, acquires asymptotically the character of a random process. The same pertains also to the sequence of the interchanges of the pairs of oscillating functions on going over from one era to the next (it depends on whether the numbers k(s) are even or odd). A source of this stochasticity is the rule – according to which the transition from one era to the next is determined in an infinite numerical sequence of u values. This rule states, in other words, that if the entire infinite sequence begins with a certain initial value , then the lengths of the eras k(0), k(1), ..., are the numbers in the simple continued fraction expansion
This expansion corresponds to the mapping transformation of the interval [0, 1] onto itself by the formula Tx = {1/x}, i.e., xs+1 = {1/xs}. This transformation belongs to the so-called expanding transformations of the interval [0, 1], i.e., transformations x → f(x) with |f′(x)| > 1. Such transformations possess the property of exponential instability: if we take initially two close points their mutual distance increases exponentially under the iterations of the transformations. It is well known that the exponential instability leads to the appearance of strong stochastic properties.
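The stochastic character of this sequence and the exponential instability of the mapping Tx = {1/x} can be illustrated directly; the minimal sketch below iterates the map, reads off era lengths k(s) = [1/x(s)], and follows the separation of two initially close values of x (initial values and names are arbitrary, chosen only for illustration).

```python
# Illustration only: iterating x -> {1/x}, era lengths, and exponential instability.
import math

def gauss_step(x):
    """One step of Tx = {1/x}: returns (k, x_next) with k = [1/x]."""
    k = int(1.0 / x)
    return k, 1.0 / x - k

x = math.pi - 3.0                 # an arbitrary initial value x(0)
lengths = []
for _ in range(10):               # only the first several k are reliable in floating point
    k, x = gauss_step(x)
    lengths.append(k)
print("era lengths k(s):", lengths)

# Two nearby initial points separate roughly by the product of |T'(x)| = 1/x^2 > 1
# along the orbit, i.e. exponentially fast in the number of iterations.
x1, x2 = 0.3141592653589793, 0.3141592653589793 + 1e-12
for s in range(8):
    print(s, abs(x1 - x2))
    _, x1 = gauss_step(x1)
    _, x2 = gauss_step(x2)
```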
It is possible to change over to a probabilistic description of such a sequence by considering not a definite initial value x(0) but the values x(0) = x distributed in the interval from 0 to 1 in accordance with a certain probabilistic distributional law w0(x). Then the values of x(s) terminating each era will also have distributions that follow certain laws ws(x). Let ws(x)dx be the probability that the s-th era terminates with the value lying in a specified interval dx.
The value x(s) = x, which terminates the s-th era, can result from initial (for this era) values , where k = 1, 2, ...; these values of correspond to the values x(s–1) = 1/(k + x) for the preceding era. Noting this, one can write the following recurrence relation, which expresses the distribution of the probabilities ws(x) in terms of the distribution ws–1(x):
or
If the distribution ws(x) tends with increasing s to a stationary (independent of s) limiting distribution w(x), then the latter should satisfy an equation obtained from by dropping the indices of the functions ws−1(x) and ws(x). This equation has a solution
(normalized to unity and taken to the first order of x).
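The equation for the stationary distribution just described is the fixed-point condition of the recurrence above, and its normalized solution is presumably the classical Gauss density w(x) = 1/[(1 + x) ln 2]. A minimal numerical iteration of the recurrence, assuming its standard transfer-operator form (the grid, cutoff and names below are arbitrary choices):

```python
# Sketch: iterate w_s(x) = sum_k w_{s-1}(1/(k+x)) / (k+x)^2 on a grid.
import math
import numpy as np

xs = np.linspace(0.0, 1.0, 201)
w = np.ones_like(xs)                        # start from the uniform density w_0(x) = 1

def step(w_prev, kmax=2000):
    w_next = np.zeros_like(xs)
    for k in range(1, kmax + 1):
        y = 1.0 / (k + xs)                  # the preceding-era values x_(s-1) = 1/(k + x)
        w_next += np.interp(y, xs, w_prev) / (k + xs) ** 2
    return w_next

for _ in range(8):
    w = step(w)

gauss = 1.0 / ((1.0 + xs) * math.log(2.0))  # the Gauss density, normalized to unity
print("max |w - gauss| ≈", float(np.abs(w - gauss).max()))  # small, set by grid and cutoff
```

The iteration converges after a few steps, in line with the statement that ws(x) tends to a stationary limit independent of the initial distribution w0(x).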
In order for the s-th era to have a length k, the preceding era must terminate with a number x in the interval between 1/(k + 1) and 1/k. Therefore, the probability that the era will have a length k is equal to (in the stationary limit)
At large values of k
In relating the statistical properties of the cosmological model with the ergodic properties of the transformation xs+1 = {1/xs} an important point must be mentioned. In an infinite sequence of numbers x constructed in accordance with this rule, arbitrarily small (but never vanishing) values of x will be observed corresponding to arbitrarily large lengths k. Such cases can (by no means necessarily!) give rise to certain specific situations when the notion of eras, as of sequences of Kasner epochs interchanging each other according to the rule , loses its meaning (although the oscillatory mode of evolution of the model still persists). Such an "anomalous" situation can be manifested, for instance, in the necessity to retain in the right-hand side of terms not only with one of the functions a, b, c (say, a⁴), as is the case in the "regular" interchange of the Kasner epochs, but simultaneously with two of them (say, a⁴, b⁴, a²b²).
On emerging from an "anomalous" series of oscillations a succession of regular eras is restored. Statistical analysis of the behavior of the model which is entirely based on regular iterations of the transformations is corroborated by an important theorem: the probability of the appearance of anomalous cases tends asymptotically to zero as the number of iterations s → ∞ (i.e., the time t → 0) which is proved at the end of this section. The validity of this assertion is largely due to a very rapid rate of increase of the oscillation amplitudes during every era and especially in transition from one era to the next one.
The process of the relaxation of the cosmological model to the "stationary" statistical regime (with t → 0 starting from a given "initial instant") is less interesting, however, than the properties of this regime itself with due account taken for the concrete laws of the variation of the physical characteristics of the model during the successive eras.
An idea of the rate at which the stationary distribution sets in is obtained from the following example. Let the initial values x(0) be distributed in a narrow interval of width δx(0) about some definite number. From the recurrence relation (or directly from the expansion ) it is easy to conclude that the widths of the distributions ws(x) (about other definite numbers) will then be equal to
(this expression is valid only so long as it defines quantities δx(s) ≪ 1).
The mean value , calculated from this distribution, diverges logarithmically. For a sequence cut off at a very large but still finite number N, one has . The usefulness of the mean in this case is very limited because of its instability: owing to the slow decrease of W(k), fluctuations in k diverge faster than its mean. A more adequate characteristic of this sequence is the probability that a randomly chosen number from it belongs to an era of length K, where K is large. This probability is ln K / ln N. It is small if . In this respect one can say that a randomly chosen number from the given sequence belongs to a long era with a high probability.
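The instability of the mean era length can also be seen in a direct simulation: with the 1/k² tail, the sample mean of k creeps up roughly logarithmically with the sample size and is dominated by the few largest values. A rough sketch (the sampling formula assumes the Gauss density; all names are illustrative):

```python
# Rough illustration of the unstable mean of the era length k.
import random

def sample_k():
    # Draw x from the Gauss density w(x) = 1/((1+x) ln 2) by inverting its CDF
    # log2(1+x), then take k = [1/x]; the era-length distribution W(k) follows.
    x = 2.0 ** (1.0 - random.random()) - 1.0     # x in (0, 1]
    return int(1.0 / x)

random.seed(0)
for n in (10**3, 10**5, 10**6):
    ks = [sample_k() for _ in range(n)]
    print(n, "mean k =", sum(ks) / n, "  largest k =", max(ks))
# The mean grows slowly with the sample size and is dominated by a handful of
# very long eras, in line with the logarithmic divergence noted in the text.
```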
It is convenient to average expressions that depend simultaneously on k(s) and x(s). Since both these quantities are derived from the same quantity x(s–1) (which terminates the preceding era), in accordance with the formula k(s) + x(s) = 1/x(s–1), their statistical distributions cannot be regarded as independent. The joint distribution Ws(k,x)dx of both quantities can be obtained from the distribution ws–1(x)dx by making in the latter the substitution x → 1/(x + k). In other words, the function Ws(k,x) is given by the very expression under the summation sign on the right side of . In the stationary limit, taking w from , one obtains
Summation of this distribution over k brings us back to , and integration with respect to dx to .
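With the Gauss density, the substitution x → 1/(x + k) described above gives the stationary joint distribution explicitly as W(k, x) = w(1/(k + x))/(k + x)² = 1/[ln 2 · (k + x)(k + x + 1)]; summing over k returns w(x), and integrating over x returns the era-length probability W(k) = log2[(k + 1)²/(k(k + 2))], which falls off as 1/(k² ln 2) at large k. A short numerical check of these two marginals (assuming the Gauss density; names are illustrative):

```python
# Check the two marginals of the stationary joint distribution (illustration only).
import math

LN2 = math.log(2.0)

def w(x):
    return 1.0 / ((1.0 + x) * LN2)                  # stationary (Gauss) density

def W_joint(k, x):
    return w(1.0 / (k + x)) / (k + x) ** 2          # = 1/(ln2 * (k+x) * (k+x+1))

x = 0.3
print(sum(W_joint(k, x) for k in range(1, 10**5)), w(x))         # sum over k -> w(x)

k, n = 4, 10**4
integral = sum(W_joint(k, (i + 0.5) / n) for i in range(n)) / n  # integral over x in [0,1]
print(integral, math.log((k + 1) ** 2 / (k * (k + 2))) / LN2)    # -> W(k)

for k in (10, 100, 1000):                           # W(k) ~ 1/(k^2 ln 2) at large k
    print(k, k * k * math.log((k + 1) ** 2 / (k * (k + 2))) / LN2)  # -> 1/ln 2 ≈ 1.44
```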
The recurrent formulas defining transitions between eras are re-written with index s numbering the successive eras (not the Kasner epochs in a given era!), beginning from some era (s = 0) defined as initial. Ω(s) and ε(s) are, respectively, the initial moment and initial matter density in the s-th era; δ(s)Ω(s) is the initial oscillation amplitude of that pair of functions α, β, γ, which oscillates in the given era: k(s) is the length of s-th era, and x(s) determines the length (number of Kasner epochs) of the next era according to k(s+1) = [1/x(s)]. According to –
(ξ(s) is introduced in to be used further on).
The quantities δ(s) have a stable stationary statistical distribution P(δ) and a stable (small relative fluctuations) mean value. For their determination, KL, in coauthorship with Ilya Lifshitz (the brother of Evgeny Lifshitz), used (with due reservations) an approximate method based on the assumption of statistical independence of the random quantity δ(s) and of the random quantities k(s), x(s). For the function P(δ) an integral equation was set up which expressed the fact that the quantities δ(s+1) and δ(s), interconnected by the relation , have the same distribution; this equation was solved numerically. In a later work, Khalatnikov et al. showed that the distribution P(δ) can actually be found exactly by an analytical method (see Fig. 5).
For the statistical properties in the stationary limit, it is reasonable to introduce the so-called natural extension of the transformation Tx = {1/x} by continuing it without limit to negative indices. Otherwise stated, this is a transition from a one-sided infinite sequence of the numbers (x0, x1, x2, ...), connected by the equalities Tx = {1/x}, to a "doubly infinite" sequence X = (..., x−1, x0, x1, x2, ...) of the numbers which are connected by the same equalities for all –∞ < s < ∞. Of course, such expansion is not unique in the literal meaning of the word (since xs–1 is not determined uniquely by xs), but all statistical properties of the extended sequence are uniform over its entire length, i.e., are invariant with respect to arbitrary shift (and x0 loses its meaning of an "initial" condition). The sequence X is equivalent to a sequence of integers K = (..., k−1, k0, k1, k2, ...), constructed by the rule ks = [1/xs–1]. Inversely, every number of X is determined by the integers of K as an infinite continued fraction
(the convenience of introducing the notation with an index shifted by 1 will become clear in the following). For concise notation the continued fraction is denoted simply by enumeration (in square brackets) of its denominators; then the definition of can be written as
Reverse quantities are defined by a continued fraction with a retrograde (in the direction of diminishing indices) sequence of denominators
The recurrence relation is transformed by introducing temporarily the notation ηs = (1 − δs)/δs. Then can be rewritten as
By iteration an infinite continued fraction is obtained
Hence and finally
This expression for δs contains only two (instead of the three in ) random quantities and , each of which assumes values in the interval [0, 1].
It follows from the definition that . Hence the shift of the entire sequence X by one step to the right means a joint transformation of the quantities and according to
This is a one-to-one mapping in the unit square. Thus we now have a one-to-one transformation of two quantities instead of the non-one-to-one transformation Tx = {1/x} of one quantity.
The quantities and have a joint stationary distribution P(x+, x−). Since is a one-to-one transformation, the condition for the distribution to be stationary is expressed simply by a functional equation
where J is the Jacobian of the transformation.
A shift of the sequence X by one step gives rise to the following transformation T of the unit square:
(with , , cf. ). The density P(x, y) defines the invariant measure for this transformation. It is natural to suppose that P(x, y) is a symmetric function of x and y. This means that the measure is invariant with respect to the transformation S(x, y) = (y, x) and hence with respect to the product ST with ST(x, y) = (x″, y″) and
Evidently ST has a first integral H = 1/x + y. On the line H = const ≡ c the transformation has the form
Hence the invariant measure density of ST must be of the form
Accounting for the symmetry P(x, y) = P(y, x), this becomes f(c) = c⁻² and hence (after normalization)
(its integration over x+ or x– yields the function w(x) ). The reduction of the transformation to a one-to-one mapping was already used by Chernoff and Barrow; they obtained a formula of the form of but for other variables. Their paper does not contain applications to the problems which are considered in Khalatnikov et al.
The correctness of can also be verified by a direct calculation; the Jacobian of the transformation is
(in its calculation one must note that ).
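The invariant density obtained in this way is, explicitly, P(x⁺, x⁻) = 1/[ln 2 · (1 + x⁺x⁻)²], the classical invariant measure of the natural extension of the map x → {1/x}; its marginal over either variable is w(x). Both the invariance condition and the marginal can be checked numerically, assuming the square map takes the form (x, y) → ({1/x}, 1/([1/x] + y)) (names and test points below are illustrative):

```python
# Numerical check (sketch): P(x, y) = 1/(ln 2 * (1 + x*y)^2) is invariant under the
# square map (x, y) -> ({1/x}, 1/(k + y)), k = [1/x], and its marginal is w(x).
import math, random

LN2 = math.log(2.0)

def P(x, y):
    return 1.0 / (LN2 * (1.0 + x * y) ** 2)

def square_map(x, y):
    k = int(1.0 / x)
    return 1.0 / x - k, 1.0 / (k + y)

random.seed(1)
for _ in range(5):
    x, y = random.uniform(0.01, 0.99), random.uniform(0.01, 0.99)
    k = int(1.0 / x)
    jac = 1.0 / (x * x * (k + y) ** 2)        # |Jacobian| of the square map at (x, y)
    xn, yn = square_map(x, y)
    print(abs(P(x, y) - P(xn, yn) * jac))     # ~ 1e-16: invariance holds point by point

# Marginal: integrating P over y on [0, 1] gives 1/((1+x) ln 2) = w(x).
x, n = 0.37, 10**4
marginal = sum(P(x, (i + 0.5) / n) for i in range(n)) / n
print(marginal, 1.0 / ((1.0 + x) * LN2))
```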
Since, by , δs is expressed in terms of the random quantities x+ and x−, the knowledge of their joint distribution makes it possible to calculate the statistical distribution P(δ) by integrating P(x+, x−) over one of the variables at a constant value of δ. Due to the symmetry of the function with respect to the variables x+ and x−, P(δ) = P(1 − δ), i.e., the function P(δ) is symmetrical with respect to the point δ = 1/2. Then
On evaluating this integral (for 0 ≤ δ ≤ 1/2 and then making use of the aforementioned symmetry), finally
The mean value of δ is 1/2, simply as a result of the symmetry of the function P(δ). Thus the mean value of the initial (in every era) amplitude of oscillations of the functions α, β, γ increases as Ω/2.
The statistical relation between large time intervals Ω and the number of eras s contained in them is found by repeated application of :
Direct averaging of this equation, however, does not make sense: because of the slow decrease of function W(k) , the average values of the quantity exp ξ(s) are unstable in the above sense – the fluctuations increase even more rapidly than the mean value itself with increasing region of averaging. This instability is eliminated by taking the logarithm: the "doubly-logarithmic" time interval
is expressed by the sum of quantities ξ(p) which have a stable statistical distribution. The mean value of τ is . To calculate note that can be rewritten as
For the stationary distribution , and in virtue of the symmetry of the function P(δ) also . Hence
(w(x) from ). Thus
which determines the mean doubly-logarithmic time interval containing s successive eras.
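The averages entering this result are integrals of the form ∫₀¹ f(x) w(x) dx over the stationary distribution. Purely as an illustration of evaluating such a stationary average (the integrand below is a generic example, not the specific expression of the text), ∫₀¹ ln(1/x) w(x) dx = π²/(12 ln 2) ≈ 1.19:

```python
# Evaluating a representative stationary average over w(x) (illustration only).
import math

LN2 = math.log(2.0)
n = 2 * 10**5
avg = sum(math.log(n / (i + 0.5)) / ((1.0 + (i + 0.5) / n) * LN2) for i in range(n)) / n
print(avg, math.pi ** 2 / (12.0 * LN2))    # midpoint quadrature vs the closed form
```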
For large s the number of terms in the sum is large and according to general theorems of the ergodic theory the values of τs are distributed around according to Gauss's law with the density
Calculation of the variance Dτ is more complicated since not only the knowledge of and is needed but also that of the correlations . The calculation can be simplified by rearranging the terms in the sum . By using the sum can be rewritten as
The last two terms do not increase with increasing s; these terms can be omitted since the limiting laws for large s are dominant. Then
(the expression for δp is taken into account). To the same accuracy (i.e., up to the terms which do not increase with s) the equality
is valid. Indeed, in virtue of
and hence
By summing this identity over p, is obtained. Finally, again with the same accuracy, is replaced by xp under the summation sign, thus representing τs as
The variance of this sum in the limit of large s is
It is taken into account that in virtue of the statistical homogeneity of the sequence X the correlations depend only on the differences |p − p′|. The mean value ; the mean square
By taking into account also the values of correlations with p = 1, 2, 3
(calculated numerically) the final result Dτs = (3.5 ± 0.1)s is obtained.
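The linear growth of the variance with the number of eras, which underlies the Gaussian law above, can be illustrated by a Monte Carlo experiment on the map x → {1/x}: the variance of a sum of s terms of a fixed function of x(p) grows in proportion to s. The sketch below uses f(x) = ln(1/x) purely as a stand-in (it is not the specific quantity τs, and the resulting constant is not the 3.5 quoted above):

```python
# Monte Carlo sketch of the diffusive (variance ~ s) growth of sums along orbits
# of x -> {1/x}; the summand f(x) = ln(1/x) is an illustrative stand-in only.
import math, random

def orbit_sum(x, s):
    total = 0.0
    for _ in range(s):
        total += math.log(1.0 / x)
        x = 1.0 / x - int(1.0 / x)
        if x == 0.0:                    # guard against landing exactly on a rational value
            x = random.random()
    return total

random.seed(2)
for s in (10, 20, 40, 80):
    vals = []
    for _ in range(4000):
        x0 = 2.0 ** (1.0 - random.random()) - 1.0   # x(0) drawn from the Gauss density
        vals.append(orbit_sum(x0, s))
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    print(s, round(var, 1), round(var / s, 3))      # the ratio var/s settles to a constant
```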
With increasing s, the relative fluctuation tends to zero as s^(−1/2). In other words, the statistical relation becomes almost certain at large s. This makes it possible to invert the relation, i.e., to represent it as the dependence of the average number of eras sτ that are interchanged in a given interval τ of the double logarithmic time:
The statistical distribution of the exact values of sτ around its average is also Gaussian with the variance
The respective statistical distribution is given by the same Gaussian distribution in which the random variable is now sτ at a given τ:
From this point of view, the source of the statistical behavior is the arbitrariness in the choice of the starting point of the interval τ superimposed on the infinite sequence of the interchanging eras.
As regards the matter density, can be re-written with account of in the form
and then, for the total energy change during s eras,
The term with the sum over p gives the main contribution to this expression because it contains an exponent with a large power. Leaving only this term and averaging , one gets on its right-hand side the expression which coincides with ; all other terms in the sum (including terms with ηs in their exponents) lead only to corrections of relative order 1/s. Therefore,
By virtue of the almost certain character of the relation between τs and s can be written as
which determines the value of the double logarithm of density increase averaged by given double-logarithmic time intervals τ or by a given number of eras s.
These stable statistical relationships exist specifically for doubly-logarithmic time intervals and for the density increase. For other characteristics, e.g., ln (ε(s)/ε(0)) or Ω(s) / Ω(0) = exp τs, the relative fluctuations increase exponentially with the averaging range, thereby depriving the term "mean value" of a stable meaning.
The origin of the statistical relationship can be traced already from the initial law governing the variation of the density during the individual Kasner epochs. According to , during the entire evolution we have
with 1 − p3(t) changing from epoch to epoch, running through values in the interval from 0 to 1. The term ln Ω = ln ln (1/t) increases monotonically; on the other hand, the term ln2(1 − p3) can assume large values (comparable with ln Ω) only when values of p3 very close to unity appear (i.e., very small |p1|). These are precisely the "dangerous" cases that disturb the regular course of evolution expressed by the recurrent relationships –.
It remains to show that such cases actually do not arise in the asymptotic limiting regime. The spontaneous evolution of the model starts at a certain instant at which definite initial conditions are specified in an arbitrary manner. Accordingly, by "asymptotic" is meant a regime sufficiently far away from the chosen initial instant.
Dangerous cases are those in which excessively small values of the parameter u = x (and hence also |p1| ≈ x) appear at the end of an era. A criterion for selection of such cases is the inequality
where | α(s) | is the depth of the initial minima of the functions that oscillate in era s (it would be more appropriate to choose the final amplitude, but that would only strengthen the selection criterion).
The value of x(0) in the first era is determined by the initial conditions. Dangerous are values in the interval δx(0) ~ exp ( − |α(0)| ), and also values in intervals that could result in dangerous cases in the next eras. In order for x(s) to fall in the dangerous interval δx(s) ~ exp ( − | α(s) | ), the initial value x(0) should lie in an interval of width δx(0) ~ δx(s) / [k(1)^2 ... k(s)^2]. Therefore, from a unit interval of all possible values of x(0), dangerous cases will appear in parts λ of this interval:
(the inner sum is taken over all the values k(1), k(2), ... , k(s) from 1 to ∞). It is easy to show that this series converges to a value λ ≪ 1 whose order of magnitude is determined by the first term in . This can be shown by a strong majoration of the series, for which one substitutes | α(s) | = (s + 1) | α(0) |, regardless of the lengths of eras k(1), k(2), ... (In fact | α(s) | increase much faster; even in the most unfavorable case k(1) = k(2) = ... = 1 the values of | α(s) | increase as q^s | α(0) | with q > 1.) Noting that
one obtains
If the initial value of x(0) lies outside the dangerous region λ there will be no dangerous cases. If it lies inside this region dangerous cases occur, but upon their completion the model resumes a "regular" evolution with a new initial value which only occasionally (with a probability λ) may come into the dangerous interval. Repeated dangerous cases occur with probabilities λ², λ³, ... , asymptotically converging to zero.
General solution with small oscillations
In the above models, metric evolution near the singularity is studied on the example of homogeneous space metrics. It is clear from the characteristic of this evolution that the analytic construction of the general solution for a singularity of such type should be made separately for each of the basic evolution components: for the Kasner epochs, for the process of transitions between epochs caused by "perturbations", for long eras with two perturbations acting simultaneously. During a Kasner epoch (i.e. at small perturbations), the metric is given by without the condition λ = 0.
BKL further developed a matter distribution-independent model (homogeneous or non-homogeneous) for a long era with small oscillations. The time dependence of this solution turns out to be very similar to that in the particular case of homogeneous models; the latter can be obtained from the distribution-independent model by a special choice of the arbitrary functions contained in it.
It is convenient, however, to construct the general solution in a system of coordinates somewhat different from the synchronous reference frame: g0α = 0 as in the synchronous frame, but instead of g00 = 1 it is now g00 = −g33. Defining again the space metric tensor γαβ = −gαβ one has, therefore
The special space coordinate is written as x3 = z and the time coordinate is written as x0 = ξ (as distinct from the proper time t); it will be shown that ξ corresponds to the same variable defined in homogeneous models. Differentiation with respect to ξ and z is designated, respectively, by dot and prime. Latin indices a, b, c take values 1, 2, corresponding to the space coordinates x1, x2 which will be also written as x, y. Therefore, the metric is
The required solution should satisfy the inequalities
(these conditions specify that one of the functions a2, b2, c2 is small compared to the other two which was also the case with homogeneous models).
Inequality means that components γa3 are small in the sense that at any ratio of the shifts dxa and dz, terms with products dxadz can be omitted in the square of the spatial length element dl2. Therefore, the first approximation to a solution is a metric with γa3 = 0:
One can easily be convinced, by calculating the Ricci tensor components , , , using metric and the condition , that all terms containing derivatives with respect to the coordinates xa are small compared to terms with derivatives with respect to ξ and z (their ratio is ~ γ33 / γab). In other words, to obtain the equations of the main approximation, γ33 and γab in should be differentiated as if they did not depend on xa. Designating
one obtains the following equations:
Index raising and lowering is done here with the help of γab. The quantities and λ are the contractions and whereby
As to the Ricci tensor components , , by this calculation they are identically zero. In the next approximation (i.e., with account of the small γa3 and of derivatives with respect to x, y), they determine the quantities γa3 from the already known γ33 and γab.
Contraction of gives , and, hence,
Different cases are possible depending on the G variable. In the above case g00 = γ33 γab and . The case N > 0 (quantity N is time-like) leads to time singularities of interest. Substituting in f1 = 1/2 (ξ + z) sin y, f2 = 1/2 (ξ − z) sin y results in G of type
This choice does not diminish the generality of conclusions; it can be shown that generality is possible (in the first approximation) just on account of the remaining permissible transformations of variables. At N < 0 (quantity N is space-like) one can substitute G = z which generalizes the well-known Einstein–Rosen metric. At N = 0 one arrives at the Robinson–Bondi wave metric that depends only on ξ + z or only on ξ − z (cf. ). The factor sin y in is put for convenient comparison with homogeneous models. Taking into account , equations – become
The principal equations are defining the γab components; then, function ψ is found by a simple integration of –.
The variable ξ runs through the values from 0 to ∞. The solution of is considered in two limiting regions, ξ ≫ 1 and ξ ≪ 1. At large ξ values, one can look for a solution that takes the form of a 1 / decomposition:
whereby
(equation needs condition to be true). Substituting in , one obtains in the first order
where the quantities aac (with upper indices) constitute a matrix that is the inverse of the matrix aac (with lower indices). The solution of has the form
where la, ma, ρ are arbitrary functions of the coordinates x, y, bound by a condition derived from .
To find higher terms of this decomposition, it is convenient to write the matrix of required quantities γab in the form
where the symbol ~ means matrix transposition. The matrix H is symmetric and its trace is zero. The presentation ensures the symmetry of γab and the fulfillment of condition . If exp H is replaced by 1, one obtains from γab = ξaab with aab from . In other words, the first term of the γab decomposition corresponds to H = 0; higher terms are obtained from an expansion in powers of the matrix H, whose components are considered small.
The independent components of matrix H are written as σ and φ so that
Substituting in and leaving only terms linear by H, one derives for σ and φ
If one tries to find a solution to these equations as Fourier series in the z coordinate, then for the series coefficients, as functions of ξ, one obtains Bessel equations. The major asymptotic terms of the solution at large ξ are
Coefficients A and B are arbitrary complex functions of coordinates x, y and satisfy the necessary conditions for real σ and φ; the base frequency ω is an arbitrary real function of x, y. Now from – it is easy to obtain the first term of the function ψ:
(this term vanishes if ρ = 0; in this case the major term is the one linear in ξ from the decomposition: ψ = ξq (x, y), where q is a positive function).
Therefore, at large ξ values, the components of the metric tensor γab oscillate upon decreasing ξ against the background of a slow decrease caused by the decreasing ξ factor in . The component γ33 = eψ decreases quickly by a law close to exp (ρ²ξ²); this makes it possible to satisfy condition .
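The statement that the Fourier coefficients obey Bessel equations fixes the character of this large-ξ behaviour: a Bessel function of large argument oscillates under an envelope that decays slowly, like 1/√ξ, which is the kind of slow background decrease just described. A minimal check of the standard large-argument asymptotics of J0, computed here from its integral representation (names and the quadrature are illustrative):

```python
# Sketch: Bessel-type oscillations at large argument, J0(z) ≈ sqrt(2/(pi z)) cos(z - pi/4).
import math

def J0(z, n=20000):
    """J_0(z) from its integral representation (crude midpoint quadrature)."""
    h = math.pi / n
    return sum(math.cos(z * math.sin((i + 0.5) * h)) for i in range(n)) * h / math.pi

for xi in (20.0, 50.0, 100.0):
    asym = math.sqrt(2.0 / (math.pi * xi)) * math.cos(xi - math.pi / 4.0)
    print(xi, J0(xi), asym)     # close agreement; the relative correction is O(1/xi)
```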
Next BKL consider the case ξ ≪ 1. The first approximation to a solution of is found by the assumption (confirmed by the result) that in these equations terms with derivatives by coordinates can be left out:
This equation together with the condition gives
where λa, μa, s1, s2 are arbitrary functions of all 3 coordinates x, y, z, which are related by further conditions
Equations – give now
The derivatives , calculated by , contain terms ~ ξ^(4s1 − 2) and ~ ξ^(4s2 − 2) while the terms kept in are ~ ξ^(−2). Therefore, application of instead of is permitted under the conditions s1 > 0, s2 > 0; hence 1 − > 0.
Thus, at small ξ oscillations of functions γab cease while function γ33 begins to increase at decreasing ξ. This is a Kasner mode and when γ33 is compared to γab, the above approximation is not applicable.
In order to check the compatibility of this analysis, BKL studied the equations = 0, = 0, and, calculating from them the components γa3, confirmed that the inequality takes place. This study showed that in both asymptotic regions the components γa3 were ~ γ33. Therefore, correctness of inequality immediately implies correctness of inequality .
This solution contains, as it should for the general case of a field in vacuum, four arbitrary functions of the three space coordinates x, y, z. In the region ξ ≪ 1 these functions are, e.g., λ1, λ2, μ1, s1. In the region ξ ≫ 1 the four functions are defined by the Fourier series in the coordinate z from , with coefficients that are functions of x, y; although Fourier series decomposition (or integral?) characterizes a special class of functions, this class is large enough to encompass any finite subset of the set of all possible initial conditions.
The solution contains also a number of other arbitrary functions of the coordinates x, y. Such two-dimensional arbitrary functions appear, generally speaking, because the relationships between three-dimensional functions in the solutions of the Einstein equations are differential (and not algebraic), leaving aside the deeper problem about the geometric meaning of these functions. BKL did not calculate the number of independent two-dimensional functions because in this case it is hard to make unambiguous conclusions since the three-dimensional functions are defined by a set of two-dimensional functions (cf. for more details).
Finally, BKL go on to show that the general solution contains the particular solution obtained above for homogeneous models.
Substituting the basis vectors for Bianchi Type IX homogeneous space in the space-time metric of this model takes the form
When c² ≪ a², b², one can ignore c² everywhere except in the term c² dz². To move from the synchronous frame used in to a frame with conditions , the transformation dt = c dξ/2 and the substitution z → z/2 are done. Assuming also that χ ≡ ln (a/b) ≪ 1, one obtains from in the first approximation:
Similarly, with the basis vectors of Bianchi Type VIII homogeneous space, one obtains
According to the analysis of homogeneous spaces above, in both cases ab = ξ (simplifying = ξ0) and χ is from ; function c (ξ) is given by formulae and , respectively, for models of Types IX and VIII.
Identical metric for Type VIII is obtained from , , choosing two-dimensional vectors la and ma in the form
and substituting
To obtain the metric for Type IX, one should substitute
(for calculation of c (ξ) the approximation in is not sufficient and the term in ψ linear by ξ is calculated)
This analysis was done for empty space. Including matter does not make the solution less general and does not change its qualitative characteristics.
A limitation of great importance for the general solution is that all 3-dimensional functions contained in the metrics and should have a single and common characteristic interval of variation. Only this allows one to approximate, in the Einstein equations, all spatial derivatives of the metric components by simple products of these components with characteristic wave numbers, which results in ordinary differential equations of the type obtained for the Type IX homogeneous model. This is the reason for the coincidence between homogeneous and general solutions.
It follows that both the Type IX model and its generalisation contain an oscillatory mode with a single spatial scale of an arbitrary magnitude which is not selected among others by any physical conditions. However, it is known that in non-linear systems with infinite degrees of freedom such a mode is unstable and partially dissipates into smaller oscillations. In the general case of small perturbations with an arbitrary spectrum, there will always be some whose amplitudes increase, feeding upon the total energy of the process. As a result, a complicated picture arises of multi-scale movements with a certain distribution of energy and an exchange of energy between oscillations of different scales. The only case in which this does not occur is when the development of small-scale oscillations is impossible because of physical conditions. For the latter, some natural physical length must exist which determines the minimal scale at which energy exits from a system with dynamical degrees of freedom (which, for example, occurs in a liquid with a certain viscosity). However, there is no innate physical scale for a gravitational field in vacuum, and, therefore, there is no impediment for the development of oscillations of arbitrarily small scales.
Conclusions
BKL describe singularities in the cosmological solution of the Einstein equations that have a complicated oscillatory character. Although these singularities have been studied primarily on spatially homogeneous models, there are convincing reasons to assume that singularities in the general solution of the Einstein equations have the same characteristics; this circumstance makes the BKL model important for cosmology.
A basis for such a statement is the fact that the oscillatory mode in the approach to the singularity is caused by the single perturbation that also causes instability in the generalized Kasner solution. A confirmation of the generality of the model is the analytic construction for a long era with small oscillations. Although this latter behavior is not a necessary element of metric evolution close to the singularity, it has all the principal qualitative properties: metric oscillation in two spatial dimensions and monotonic change in the third dimension with a certain perturbation of this mode at the end of some time interval. However, the transitions between Kasner epochs in the general case of a non-homogeneous spatial metric have not been elucidated in detail.
The problem connected with the possible limitations upon space geometry caused by the singularity was left aside for further study. It is clear from the outset, however, that the original BKL model is applicable to both finite and infinite space; this is evidenced by the existence of oscillatory singularity models for both closed and open spacetimes.
The oscillatory mode of the approach to the singularity gives a new aspect to the term 'finiteness of time'. Between any finite moment of the world time t and the moment t = 0 there is an infinite number of oscillations. In this sense, the process acquires an infinite character. Instead of the time t, a more adequate variable for its description is ln t, by which the process is extended to −∞.
BKL consider metric evolution in the direction of decreasing time. The Einstein equations are symmetric with respect to the time sign, so that a metric evolution in the direction of increasing time is equally possible. However, these two cases are fundamentally different because past and future are not equivalent in the physical sense. A future singularity can be physically meaningful only if it is possible for arbitrary initial conditions existing at a previous moment. Matter distribution and fields at some moment in the evolution of the Universe do not necessarily correspond to the specific conditions required for the existence of a given special solution to the Einstein equations.
The choice of solutions corresponding to the real world is related to profound physical requirements which it is impossible to find using only the existing relativity theory and which can be found as a result of a future synthesis of physical theories. Thus, it may turn out that this choice singles out some special (e.g., isotropic) type of singularity. Nevertheless, it is more natural to assume that, because of its general character, the oscillatory mode should be the main characteristic of the initial evolutionary stages.
In this respect, of considerable interest is the property of the "Mixmaster" model shown by Misner, related to propagation of light signals. In the isotropic model, a "light horizon" exists, meaning that for each moment of time, there is some longest distance, at which exchange of light signals and, thus, a causal connection, is impossible: the signal cannot reach such distances for the time since the singularity t = 0.
Signal propagation is determined by the equation ds = 0. In the isotropic model near the singularity t = 0 the interval element is , where is a time-independent spatial differential form. Substituting yields
The "distance" reached by the signal is
Since η, like t, runs through values starting from 0, up to the "moment" η signals can propagate only over the distance , which fixes the farthest distance to the horizon.
The existence of a light horizon in the isotropic model poses a problem in the understanding of the origin of the presently observed isotropy in the relic radiation. According to the isotropic model, the observed isotropy means isotropic properties of radiation that comes to the observer from such regions of space that can not be causally connected with each other. The situation in the oscillatory evolution model near the singularity can be different.
For example, in the homogeneous model for Type IX space, a signal is propagated in a direction in which for a long era, scales change by a law close to ~ t. The square of the distance element in this direction is dl2 = t2, and the respective element of the four-dimensional interval is . The substitution puts this in the form
and for the signal propagation one has an equation of the type again. The important difference is that the variable η now runs through values starting from −∞ (if the metric is valid for all t starting from t = 0).
Therefore, for each given "moment" η there are intermediate intervals Δη sufficient for the signal to cover any finite distance.
In this way, during a long era a light horizon is opened in a given space direction. Although the duration of each long era is still finite, during the course of the world evolution eras change an infinite number of times in different space directions. This circumstance makes one expect that in this model a causal connection between events in the whole space is possible. Because of this property, Misner named this model the "Mixmaster universe", after the brand name of a dough-blending machine.
As time passes and one goes away from the singularity, the effect of matter on metric evolution, which was insignificant at the early stages of evolution, gradually increases and eventually becomes dominant. It can be expected that this effect will lead to a gradual "isotropisation" of space as a result of which its characteristics come closer to the Friedman model which adequately describes the present state of the Universe.
Finally, BKL pose the problem about the feasibility of considering a "singular state" of a world with infinitely dense matter on the basis of the existing relativity theory. The physical application of the Einstein equations in their present form in these conditions can be made clear only in the process of a future synthesis of physical theories and in this sense the problem can not be solved at present.
It is important that the gravitational theory itself does not lose its logical coherence (i.e., does not lead to internal contradictions) at any matter density. In other words, this theory is not limited by conditions of its own that could make its application at very large densities logically inadmissible or contradictory; limitations could, in principle, appear only as a result of factors that are "external" to the gravitational theory. This circumstance makes the study of singularities in cosmological models formally acceptable and necessary in the frame of the existing theory.
Notes
References
Bibliography
General relativity
Physical cosmology
Exact solutions in general relativity | BKL singularity | [
"Physics",
"Astronomy",
"Mathematics"
] | 19,193 | [
"Exact solutions in general relativity",
"Black holes",
"Physical phenomena",
"Astronomical sub-disciplines",
"Physical quantities",
"Theoretical physics",
"Unsolved problems in physics",
"Mathematical objects",
"Astrophysics",
"General relativity",
"Equations",
"Density",
"Theory of relativ... |
6,625,288 | https://en.wikipedia.org/wiki/Cradle-to-cradle%20design | Cradle-to-cradle design (also referred to as 2CC2, C2C, cradle 2 cradle, or regenerative design) is a biomimetic approach to the design of products and systems that models human industry on nature's processes, where materials are viewed as nutrients circulating in healthy, safe metabolisms. The term itself is a play on the popular corporate phrase "cradle to grave", implying that the C2C model is sustainable and considerate of life and future generations—from the birth, or "cradle", of one generation to the next generation, versus from birth to death, or "grave", within the same generation.
C2C suggests that industry must protect and enrich ecosystems and nature's biological metabolism while also maintaining a safe, productive technical metabolism for the high-quality use and circulation of organic and technical nutrients. It is a holistic, economic, industrial and social framework that seeks to create systems that are not only efficient but also essentially waste free. Building off the whole systems approach of John T. Lyle's regenerative design, the model in its broadest sense is not limited to industrial design and manufacturing; it can be applied to many aspects of human civilization such as urban environments, buildings, economics and social systems.
The term "Cradle to Cradle" is a registered trademark of McDonough Braungart Design Chemistry (MBDC) consultants. The Cradle to Cradle Certified Products Program began as a proprietary system; however, in 2012 MBDC turned the certification over to an independent non-profit called the Cradle to Cradle Products Innovation Institute. Independence, openness, and transparency are the Institute's first objectives for the certification protocols. The phrase "cradle to cradle" itself was coined by Walter R. Stahel in the 1970s. The current model is based on a system of "lifecycle development" initiated by Michael Braungart and colleagues at the Environmental Protection Encouragement Agency (EPEA) in the 1990s and explored through the publication A Technical Framework for Life-Cycle Assessment.
In 2002, Braungart and William McDonough published a book called Cradle to Cradle: Remaking the Way We Make Things, a manifesto for cradle-to-cradle design that gives specific details of how to achieve the model. The model has been implemented by many companies, organizations and governments around the world. Cradle-to-cradle design has also been the subject of many documentary films such as Waste = Food.
Introduction
In the cradle-to-cradle model, all materials used in industrial or commercial processes—such as metals, fibers, dyes—fall into one of two categories: "technical" or "biological" nutrients.
Technical nutrients are strictly limited to non-toxic, non-harmful synthetic materials that have no negative effects on the natural environment; they can be used in continuous cycles as the same product without losing their integrity or quality. In this manner these materials can be used over and over again instead of being "downcycled" into lesser products, ultimately becoming waste.
Biological nutrients are organic materials that, once used, can be disposed of in any natural environment and decompose into the soil, providing food for small life forms without affecting the natural environment. This is dependent on the ecology of the region; for example, organic material from one country or landmass may be harmful to the ecology of another country or landmass.
The two types of materials each follow their own cycle in the regenerative economy envisioned by Keunen and Huizing.
Structure
Initially defined by McDonough and Braungart, the Cradle to Cradle Products Innovation Institute's five certification criteria are:
Material health, which involves identifying the chemical composition of the materials that make up the product. Particularly hazardous materials (e.g. heavy metals, pigments, halogen compounds etc.) have to be reported whatever the concentration, and other materials reported where they exceed 100 ppm. For wood, the forest source is required. The risk for each material is assessed against criteria and eventually ranked on a scale, with green for materials of low risk, yellow for those with moderate risk that are acceptable to continue using, red for materials that have high risk and need to be phased out, and grey for materials with incomplete data (a schematic sketch of this screening step is given after this list). The method uses the term 'risk' in the sense of hazard (as opposed to consequence and likelihood).
Material reutilization, which is about recovery and recycling at the end of product life.
Assessment of energy required for production, which for the highest level of certification needs to be based on at least 40% renewable energy for all parts and subassemblies.
Water, particularly usage and discharge quality.
Social responsibility, which assesses fair labor practices.
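A purely hypothetical sketch of how the material-health screening described in the first criterion might be organised in code; the threshold, field names and the tiny rule set are illustrative assumptions, not the certification body's actual procedure:

```python
# Hypothetical material-health screening sketch (illustrative only).
REPORTING_THRESHOLD_PPM = 100                      # general reporting threshold noted above
ALWAYS_REPORT = {"heavy metal", "pigment", "halogen compound"}   # reported at any concentration

def reportable(material):
    return (material["class"] in ALWAYS_REPORT
            or material["concentration_ppm"] > REPORTING_THRESHOLD_PPM)

def assess(material):
    """Return the assessment colour for one reported ingredient."""
    if not material.get("hazard_data_complete", False):
        return "grey"                              # incomplete data
    return {"low": "green", "moderate": "yellow", "high": "red"}[material["hazard"]]

ingredient = {"name": "example dye", "class": "pigment", "concentration_ppm": 12,
              "hazard": "moderate", "hazard_data_complete": True}
if reportable(ingredient):
    print(ingredient["name"], "->", assess(ingredient))   # yellow: acceptable for now
```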
Health
Currently, many human beings come into contact or consume, directly or indirectly, many harmful materials and chemicals daily. In addition, countless other forms of plant and animal life are also exposed. C2C seeks to remove dangerous technical nutrients (synthetic materials such as mutagenic materials, heavy metals and other dangerous chemicals) from current life cycles. If the materials we come into contact with and are exposed to on a daily basis are not toxic and do not have long term health effects, then the health of the overall system can be better maintained. For example, a fabric factory can eliminate all harmful technical nutrients by carefully reconsidering what chemicals they use in their dyes to achieve the colours they need and attempt to do so with fewer base chemicals.
Economics
The C2C model shows high potential for reducing the financial cost of industrial systems. For example, in the redesign of the Ford River Rouge Complex, the planting of Sedum (stonecrop) vegetation on assembly plant roofs retains and cleanses rain water. It also moderates the internal temperature of the building in order to save energy. The roof is part of an $18 million rainwater treatment system designed to clean of rainwater annually. This saved Ford $30 million that would otherwise have been spent on mechanical treatment facilities. Following C2C design principles, product manufacture can be designed to cost less for the producer and consumer. Theoretically, they can eliminate the need for waste disposal such as landfills.
Definitions
Cradle to cradle is a play on the phrase "cradle to grave", implying that the C2C model is sustainable and considerate of life and future generations.
Technical nutrients are basically inorganic or synthetic materials manufactured by humans—such as plastics and metals—that can be used many times over without any loss in quality, staying in a continuous cycle.
Biological nutrients and materials are organic materials that can decompose into the natural environment, soil, water, etc. without affecting it in a negative way, providing food for bacteria and microbiological life.
Materials are usually referred to as the building blocks of other materials, such as the dyes used in colouring fibers or rubbers used in the sole of a shoe.
Downcycling is the reuse of materials into lesser products. For example, a plastic computer case could be downcycled into a plastic cup, which then becomes a park bench, etc.; this eventually leads to plastic waste. In conventional understanding, this is no different from recycling that produces a supply of the same product or material.
Waste = Food is a basic concept of organic waste materials becoming food for bugs, insects and other small forms of life who can feed on it, decompose it and return it to the natural environment which we then indirectly use for food ourselves.
Existing synthetic materials
The question of how to deal with the countless existing technical nutrients (synthetic materials) that cannot be recycled or reintroduced to the natural environment is dealt with in C2C design. The materials that can be reused and retain their quality can be used within the technical nutrient cycles while other materials are far more difficult to deal with, such as plastics in the Pacific Ocean.
Hypothetical examples
One potential example is a shoe that is designed and mass-produced using the C2C model. The sole might be made of "biological nutrients" while the upper parts might be made of "technical nutrients". The shoe is mass-produced at a manufacturing plant that utilizes its waste material by putting it back into the cycle, potentially by using off-cuts from the rubber soles to make more soles instead of merely disposing of them; this is dependent on the technical materials not losing their quality as they are reused. Once the shoes have been manufactured, they are distributed to retail outlets where the customer buys the shoe at a reduced price because the customer is only paying for the use of the materials in the shoe for the period of time that they will be wearing them. When they outgrow the shoe or it is damaged, they return it to the manufacturer. When the manufacturer separates the sole from the upper parts (separating the technical and biological nutrients), the biological nutrients are returned to the natural environment while the technical nutrients can be used to create the sole of another shoe.
Another example of C2C design is a disposable cup, bottle, or wrapper made entirely out of biological materials. When the user is finished with the item, it can be disposed of and returned to the natural environment; the cost of disposal of waste such as landfill and recycling is greatly reduced. The user could also potentially return the item for a refund so it can be used again.
Finished products
Rohner Textile AG Climatex-textile
Biofoam, a cradle-to-cradle alternative to expanded polystyrene
Sewage sludge treatment plants are facilities that may create fertiliser from sewage sludge. This approach is a green retrofit for the current (inefficient) system of organic waste disposal; composting toilets are a better approach in the long run.
Aquion Energy large scale batteries
Ecovative Design packaging and insulation made from waste by binding it together with mycelium
Implementation
The C2C model can be applied to almost any system in modern society: urban environments, buildings, manufacturing, social systems, etc. Five steps are outlined in Cradle to Cradle: Remaking the Way We Make Things:
Get "free of" known culprits
Follow informed personal preferences
Create "passive positive" lists—lists of materials used categorised according to their safety level
The X list—substances that must be phased out, such as teratogenic, mutagenic, carcinogenic
The gray list—problematic substances that are not so urgently in need of phasing out
The P list—the "positive" list, substances actively defined as safe for use
Activate the positive list
Reinvent—the redesign of the former system
Products that adhere to all steps may be eligible to receive C2C certification. Other certifications such as Leadership in Energy and Environmental Design (LEED) and Building Research Establishment Environmental Assessment Method (BREEAM) can be used to qualify for certification, and vice versa in the case of BREEAM.
C2C principles were first applied to systems in the early 1990s by Braungart's Hamburger Umweltinstitut (HUI) and The Environmental Institute in Brazil for biomass nutrient recycling of effluent to produce agricultural products and clean water as a byproduct.
In 2007, MBDC and the EPEA formed a strategic partnership with global materials consultancy Material ConneXion to help promote and disseminate C2C design principles by providing greater global access to C2C material information, certification and product development.
As of January 2008, Material ConneXion's Materials Libraries in New York, Milan, Cologne, Bangkok and Daegu, Korea, started to feature C2C assessed and certified materials and, in collaboration with MBDC and EPEA, the company now offers C2C Certification, and C2C product development.
While the C2C model has influenced the construction or redevelopment of smaller sites, several large organizations and governments have also implemented the C2C model and its ideas and concepts:
Major implementations
The Lyle Center for Regenerative Studies incorporates holistic and cyclic systems throughout the center. Regenerative design is arguably the foundation for the trademarked C2C.
The Government of China contributed to the construction of the city of Huangbaiyu based on C2C principles, utilising the rooftops for agriculture. This project is widely criticized as failing to meet the desires and constraints of the local people.
The Ford River Rouge Complex redevelopment, which cleans rainwater annually.
The Netherlands Institute of Ecology (NIOO-KNAW) planned to make its laboratory and office complex completely cradle-to-cradle compliant.
Several private houses and communal buildings in the Netherlands.
Fashion Positive, an initiative to assist the fashion world in implementing the cradle-to-cradle model in five areas: material health, material reuse, renewable energy, water stewardship and social fairness.
Coordination with other models
The cradle-to-cradle model can be viewed as a framework that considers systems as a whole or holistically. It can be applied to many aspects of human society, and is related to life-cycle assessment. See for instance the LCA-based model of the eco-costs, which has been designed to cope with analyses of recycling systems. The cradle-to-cradle model in some implementations is closely linked with the car-free movement, such as in the case of large-scale building projects or the construction or redevelopment of urban environments. It is closely linked with passive solar design in the building industry and with permaculture in agriculture within or near urban environments. An earthship is a prime example in which different re-use models are combined, including cradle-to-cradle design and permaculture.
Constraints
A major constraint on the optimal recycling of materials is that at civic amenity sites, products are not disassembled by hand with each individual part sorted into a bin; instead the entire product is sorted into a certain bin.
This makes the extraction of rare-earth elements and other materials uneconomical (at recycling sites, products are typically crushed, after which the materials are extracted by means of magnets, chemicals, special sorting methods, ...), and thus optimal recycling of metals, for example, is impossible (an optimal recycling method for metals would require sorting all similar alloys together rather than mixing plain iron with alloys).
Disassembling products is not feasible at civic amenity sites as currently designed; a better method would be to send broken products back to the manufacturer, so that the manufacturer can disassemble them. These disassembled products can then be used for making new products, or at least their components can be sent separately to recycling sites for proper recycling by exact type of material. At present, though, few countries have laws in place that oblige manufacturers to take back their products for disassembly, nor are there such obligations even for manufacturers of cradle-to-cradle products. One process where this is happening is in the EU with the Waste Electrical and Electronic Equipment Directive. Also, the European Training Network for the Design and Recycling of Rare-Earth Permanent Magnet Motors and Generators in Hybrid and Full Electric Vehicles (ETN-Demeter) produces designs of electric motors from which the magnets can be easily removed for recycling of the rare-earth metals.
Criticism and response
Criticism has been advanced on the fact that McDonough and Braungart previously kept C2C consultancy and certification in their inner circle. Critics argued that this lack of competition prevented the model from fulfilling its potential. Many critics pleaded for a public-private partnership overseeing the C2C concept, thus enabling competition and growth of practical applications and services.
McDonough and Braungart responded to this criticism by giving control of the certification protocol to a non-profit, independent Institute called the Cradle to Cradle Products Innovation Institute. McDonough said the new institute "will enable our protocol to become a public certification program and global standard". The new Institute announced the creation of a Certification Standards Board in June 2012. The new board, under the auspices of the Institute, will oversee the certification moving forward.
Experts in the field of environmental protection have questioned the practicability of the concept. Friedrich Schmidt-Bleek, head of the German Wuppertal Institute, called Braungart's assertion that the "old" environmental movement had hindered innovation with its pessimistic approach "pseudo-psychological humbug". Schmidt-Bleek said of the Cradle-to-Cradle seat cushions Braungart developed for the Airbus 380: "I can feel very nice on Michael's seat covers in the airplane. Nevertheless I am still waiting for a detailed proposal for a design of the other 99.99 percent of the Airbus 380 after his principles."
In 2009 Schmidt-Bleek stated that it is out of the question that the concept can be realized on a bigger scale.
Some claim that C2C certification may not be entirely sufficient in all eco-design approaches. Quantitative methodologies (LCAs) and tools better adapted to the product type under consideration could be used in tandem. The C2C concept ignores the use phase of a product. According to life-cycle assessment, the entire life cycle of a product or service has to be evaluated, not only the material itself. For many goods, e.g. in transport, the use phase has the most influence on the environmental footprint. For example, the more lightweight a car or a plane, the less fuel it consumes and consequently the less impact it has; this use phase is not addressed by Braungart's concept.
It is safe to say that every production step or resource-transformation step needs a certain amount of energy.
The C2C concept foresees its own certification of its analysis and is therefore in contradiction with the international standards for life-cycle assessment (ISO 14040 and ISO 14044), under which an independent external review is needed in order to obtain comparative and resilient results.
See also
Appropriate technology
Ellen MacArthur Foundation
List of environment topics
Modular construction systems
Planned obsolescence – the opposite of durable, no-waste design
The Blue Economy
Upcycling
References
External links
Sustainable design
Environmental design
Industrial ecology
Sustainable building | Cradle-to-cradle design | [
"Chemistry",
"Engineering"
] | 3,659 | [
"Environmental design",
"Sustainable building",
"Building engineering",
"Industrial engineering",
"Construction",
"Environmental engineering",
"Industrial ecology",
"Design"
] |
3,717,018 | https://en.wikipedia.org/wiki/Schoof%E2%80%93Elkies%E2%80%93Atkin%20algorithm | The Schoof–Elkies–Atkin algorithm (SEA) is an algorithm used for finding the order of or calculating the number of points on an elliptic curve over a finite field. Its primary application is in elliptic curve cryptography. The algorithm is an extension of Schoof's algorithm by Noam Elkies and A. O. L. Atkin to significantly improve its efficiency (under heuristic assumptions).
Details
The Elkies–Atkin extension to Schoof's algorithm works by restricting the set of primes $\ell$ considered to primes of a certain kind. These came to be called Elkies primes and Atkin primes respectively. A prime $\ell$ is called an Elkies prime if the characteristic equation of the Frobenius endomorphism, $\phi^2 - t\phi + q = 0$, splits over $\mathbb{F}_\ell$, while an Atkin prime is a prime that is not an Elkies prime. Atkin showed how to combine information obtained from the Atkin primes with the information obtained from Elkies primes to produce an efficient algorithm, which came to be known as the Schoof–Elkies–Atkin algorithm. The first problem to address is to determine whether a given prime is Elkies or Atkin. In order to do so, we make use of modular polynomials $\Phi_\ell(X, Y)$ that parametrize pairs of $\ell$-isogenous elliptic curves in terms of their j-invariants (in practice alternative modular polynomials may also be used but for the same purpose).
If the instantiated modular polynomial $\Phi_\ell(j(E), Y)$ has a root $j(E')$ in $\mathbb{F}_q$, then $\ell$ is an Elkies prime, and we may compute a polynomial $f_\ell(X)$ whose roots correspond to points in the kernel of the $\ell$-isogeny from $E$ to $E'$. The polynomial $f_\ell$ is a divisor of the corresponding division polynomial used in Schoof's algorithm, and it has significantly lower degree, $O(\ell)$ versus $O(\ell^2)$. For Elkies primes, this allows one to compute the number of points on $E$ modulo $\ell$ more efficiently than in Schoof's algorithm.
In the case of an Atkin prime, we can gain some information from the factorization pattern of $\Phi_\ell(j(E), Y)$ in $\mathbb{F}_q[Y]$, which constrains the possibilities for the number of points modulo $\ell$, but the asymptotic complexity of the algorithm depends entirely on the Elkies primes. Provided there are sufficiently many small Elkies primes (on average, we expect half the primes to be Elkies primes), this results in a reduction in the running time. The resulting algorithm is probabilistic (of Las Vegas type), and its expected running time is, heuristically, $\tilde{O}(\log^4 q)$, making it more efficient in practice than Schoof's algorithm. Here the notation $\tilde{O}$ is a variant of big O notation that suppresses terms that are logarithmic in the main term of an expression.
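The Elkies/Atkin dichotomy can be made concrete on a toy example. The following sketch (plain Python; the small curve and primes are hypothetical choices made only for illustration) brute-forces the point count so that the Frobenius trace $t$ is known, and then classifies small primes $\ell$ according to whether $x^2 - tx + q$ splits modulo $\ell$; the actual SEA algorithm does not know $t$ in advance and instead tests the modular polynomial $\Phi_\ell(j(E), Y)$ for roots in $\mathbb{F}_q$.

```python
# Toy illustration only, not the SEA algorithm itself.

def count_points(a, b, q):
    """Number of points on y^2 = x^3 + a*x + b over F_q (q prime), including infinity."""
    sqrt_count = [0] * q
    for y in range(q):
        sqrt_count[y * y % q] += 1
    n = 1  # point at infinity
    for x in range(q):
        n += sqrt_count[(x * x * x + a * x + b) % q]
    return n

def is_elkies(l, t, q):
    """True if x^2 - t*x + q splits over F_l, i.e. its discriminant t^2 - 4q is a square mod l."""
    d = (t * t - 4 * q) % l
    return any(y * y % l == d for y in range(l))

q, a, b = 101, 2, 3                 # hypothetical curve y^2 = x^3 + 2x + 3 over F_101
N = count_points(a, b, q)
t = q + 1 - N                       # trace of Frobenius
print(f"#E = {N}, trace t = {t}")
for l in (3, 5, 7, 11, 13):
    print(f"l = {l:2d}: {'Elkies' if is_elkies(l, t, q) else 'Atkin'}")
```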
Implementations
Schoof–Elkies–Atkin algorithm is implemented in the PARI/GP computer algebra system in the GP function ellap.
External links
"Schoof: Counting points on elliptic curves over finite fields"
article on Mathworld
"Remarks on the Schoof-Elkies-Atkin algorithm"
"The SEA Algorithm in Characteristic 2"
Asymmetric-key algorithms
Elliptic curve cryptography
Group theory
Finite fields
Number theory | Schoof–Elkies–Atkin algorithm | [
"Mathematics"
] | 632 | [
"Group theory",
"Fields of abstract algebra",
"Discrete mathematics",
"Number theory"
] |
3,718,379 | https://en.wikipedia.org/wiki/Biquadratic%20field | In mathematics, a biquadratic field is a number field of a particular kind, which is a Galois extension of the rational number field with Galois group isomorphic to the Klein four-group.
Structure and subfields
Biquadratic fields are all obtained by adjoining two square roots. Therefore in explicit terms they have the form
$K = \mathbb{Q}(\sqrt{a}, \sqrt{b})$
for rational numbers $a$ and $b$. There is no loss of generality in taking $a$ and $b$ to be non-zero and square-free integers.
According to Galois theory, there must be three quadratic fields contained in $K$, since the Galois group has three subgroups of index 2. The third subfield, to add to the evident $\mathbb{Q}(\sqrt{a})$ and $\mathbb{Q}(\sqrt{b})$, is $\mathbb{Q}(\sqrt{ab})$.
Biquadratic fields are the simplest examples of abelian extensions of $\mathbb{Q}$ that are not cyclic extensions.
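As a quick computational sanity check of this structure (a sketch using SymPy; the particular field $\mathbb{Q}(\sqrt{2}, \sqrt{3})$ is just an illustrative choice), one can verify that $\sqrt{2} + \sqrt{3}$ generates a degree-4 extension of $\mathbb{Q}$ and that the three quadratic subfields are generated by $\sqrt{2}$, $\sqrt{3}$ and $\sqrt{6}$:

```python
from sympy import sqrt, minimal_polynomial, Symbol

x = Symbol('x')

# K = Q(sqrt(2), sqrt(3)) is generated by the single element sqrt(2) + sqrt(3);
# its minimal polynomial over Q has degree 4, matching [K : Q] = 4.
alpha = sqrt(2) + sqrt(3)
print(minimal_polynomial(alpha, x))            # x**4 - 10*x**2 + 1

# Generators of the three quadratic subfields (sqrt(2)*sqrt(3) simplifies to sqrt(6)):
for beta in (sqrt(2), sqrt(3), sqrt(2) * sqrt(3)):
    print(beta, minimal_polynomial(beta, x))   # each satisfies a degree-2 polynomial
```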
References
Section 12 of
Algebraic number theory
Galois theory | Biquadratic field | [
"Mathematics"
] | 169 | [
"Algebraic number theory",
"Number theory"
] |
3,719,339 | https://en.wikipedia.org/wiki/1%2C2-Dichlorotetrafluoroethane | 1,2-Dichlorotetrafluoroethane, or R-114, also known as cryofluorane (INN), is a chlorofluorocarbon (CFC) with the molecular formula ClFCCFCl. Its primary use has been as a refrigerant. It is a non-flammable gas with a sweetish, chloroform-like odor with the critical point occurring at 145.6 °C and 3.26 MPa. When pressurized or cooled, it is a colorless liquid. It is listed on the Intergovernmental Panel on Climate Change's list of ozone depleting chemicals, and is classified as a Montreal Protocol Class I, group 1 ozone depleting substance.
Uses
When used as a refrigerant, R-114 is classified as a medium pressure refrigerant.
The U.S. Navy uses R-114 in its centrifugal chillers in preference to R-11 to avoid air and moisture leakage into the system. While the evaporator of an R-11 charged chiller runs at a vacuum during operation, R-114 yields approximately 0 psig operating pressure in the evaporator.
Manufactured and sold R-114 was usually mixed with the non symmetrical isomer 1,1-dichlorotetrafluoroethane (CFC-114a), as separation of the two isomers is difficult.
Dangers
Aside from its immense environmental impacts, R114, like most chlorofluoroalkanes, forms phosgene gas when exposed to a naked flame.
References
External links
Material Safety Data Sheet from Honeywell International Inc., dated 22 August 2007.
CDC - NIOSH Pocket Guide to Chemical Hazards
Chlorofluorocarbons
Refrigerants
Greenhouse gases
GABAA receptor positive allosteric modulators
Ozone-depleting chemical substances | 1,2-Dichlorotetrafluoroethane | [
"Chemistry",
"Environmental_science"
] | 398 | [
"Greenhouse gases",
"Harmful chemical substances",
"Environmental chemistry",
"Ozone-depleting chemical substances"
] |
3,719,827 | https://en.wikipedia.org/wiki/Porphine | Porphine or porphin is an organic compound of empirical formula . It is heterocyclic and aromatic. The molecule is a flat macrocycle, consisting of four pyrrole-like rings joined by four methine bridges, which makes it the simplest of the tetrapyrroles.
The nonpolar tetrapyrrolic ring structure of porphine means it is poorly soluble in most organic solvents and hardly water soluble. As a result, porphine is mostly of theoretical interest. It has been detected in GC-MS of certain fractions of Piper betle.
Porphine derivatives: porphyrins
Substituted derivatives of porphine are called porphyrins. Many porphyrins are found in nature with the dominant example being protoporphyrin IX. Many synthetic porphyrins are also known, including octaethylporphyrin and tetraphenylporphyrin.
Further reading
References
Biomolecules
Chelating agents
Macrocycles
Tetrapyrroles | Porphine | [
"Chemistry",
"Biology"
] | 210 | [
"Natural products",
"Organic compounds",
"Macrocycles",
"Structural biology",
"Biomolecules",
"Biochemistry",
"Chelating agents",
"Process chemicals",
"Molecular biology"
] |
3,720,055 | https://en.wikipedia.org/wiki/Road%20junction | A junction is where two or more roads meet.
History
Roads began as a means of linking locations of interest: towns, forts and geographic features such as river fords. Where roads met outside of an existing settlement, these junctions often led to a new settlement. Scotch Corner is an example of such a location.
In the United Kingdom and other countries, the practice of giving names to junctions emerged, to help travellers find their way. Junctions took the name of a prominent nearby business or a point of interest.
As road networks increased in density and traffic flows grew, managing the flow of traffic across junctions became increasingly important, to minimize delays and improve safety. The first innovation was to add traffic control devices, such as stop signs and traffic lights, that regulated traffic flow. Next came lane controls that limited what each lane of traffic was allowed to do while crossing. Turns across oncoming traffic might be prohibited, or allowed only when oncoming and crossing traffic was stopped.
This was followed by specialized junction designs that incorporated information about traffic volumes, speeds, driver intent and many other factors.
Types
The most basic distinction among junction types is whether or not the roads cross at the same or different elevations. More expensive, grade-separated interchanges generally offer higher throughput at higher cost. Single-grade intersections are lower cost and lower throughput. Each main type comes in many variants.
Interchange
At interchanges, roads pass above or below each other, using grade separation and slip roads. The terms motorway junction and highway interchange typically refer to this layout. They can be further subdivided into those with and without signal controls.
Signalized (traffic-light controlled) interchanges include such "diamond" designs as the diverging diamond, Michigan urban diamond, three-level diamond, and tight diamond. Others include center-turn overpass, contraflow left, single loop, and single-point urban overpass.
Non-signalized designs include the cloverleaf, contraflow left, dogbone (restricted dumbbell), double crossover merging, dumbbell (grade-separated bowtie), echelon, free-flow interchange, partial cloverleaf, raindrop, single and double roundabouts (grade-separated roundabout), single-point urban, stack, and windmill.
(literally "autobahn cross"), short form , and abbreviated as AK, is a four-way interchange on the German autobahn network. (literally "autobahn triangle"), short form , and abbreviated as AD, is a three-way interchange on the German autobahn network.
Intersection
At intersections, roads cross at-grade. They also can be further subdivided into those with and without signal controls.
Signalized designs include advanced stop line, bowtie, box junction, continuous-flow intersection, continuous Green-T, double-wide, hook turn, jughandle, median u-turn, Michigan left, paired, quadrant, seagulls, slip lane, split, staggered, superstreet, Texas T, Texas U-turn and turnarounds.
Non-signalized designs include unsignalized variations on continuous-flow 3 and 4-leg, median u-turn and superstreet, along with Maryland T/J, roundabout and traffic circle.
Safety
In the EU, it is estimated that around 5,000 of the 26,100 people killed in car crashes in 2015 died in junction collisions, compared with around 8,000 in 2006. Over the 2006–2015 decade, this means around 20% of road fatalities occurred at junctions.
By type of road user, junction fatalities break down as: car users, 34%; pedestrians, 23%; motorcyclists, 21%; pedal cyclists, 12%; and other road users, the remainder.
Causes of fatalities
It has been considered that several causes might lead to fatalities; for instance:
Observation missed – the largest category, encompassing all factors that cause a driver or rider to not notice something:
Physical factors:
Temporary obstruction to view
Permanent obstruction to view
Permanent sight obstruction
Human factors:
Faulty diagnosis – a misunderstanding of another road user's actions or the road conditions
Distraction
Inadequate plan – the details of the situation, as interpreted by the road user, are lacking in quantity and/or quality (including their correspondence to reality)
Inattention
Faulty diagnosis (not leading to observation missed)
Information failure – the road user judged the situation incorrectly and made a decision based upon the incorrect judgement (e.g. thinking that another vehicle is moving when it is not, and thus colliding with it)
Communication failure – a miscommunication between road users
Inadequate plan (not leading to observation missed)
Insufficient knowledge
Protected intersections
Bicycles
A number of features make this protected intersection much safer: a corner refuge island; a setback crossing for pedestrians and cyclists, generally 1.5–7 metres of setback; and a forward stop bar, which allows cyclists to stop for a traffic light well ahead of motor traffic, which must stop behind the crosswalk. Separate signal staging, or at least an advance green for cyclists and pedestrians, is used to give cyclists and pedestrians no conflicts or a head start over traffic. The design makes a right turn on red (and sometimes a left on red, depending on the geometry of the intersection in question) possible in many cases, often without stopping.
Cyclists ideally have a protected bike lane on the approach to the intersection, separated by a concrete median with splay kerbs if possible, and have a protected bike lane width of at least 2 metres if possible (one way). In the Netherlands, most one way cycle paths are at least 2.5 metres wide.
Bicycle traffic can be accommodated with the low grade bike lanes in the roadway or higher grade and much safer protected bicycle paths that are physically separated from the roadway.
In Manchester, UK, traffic engineers have designed a protected junction known as the Cycle-Optimised Signal (CYCLOPS) Junction. This design places a circulatory cycle track around the edge of the junction, with pedestrian crossings on the inside. This design allows for an all-red pedestrian/cyclist phase with reduced conflicts. Traffic signals are timed to allow cyclists to make a right turn (across oncoming traffic) in one turn. It also allows for diagonal crossings (pedestrian scramble) and reduces crossing distances for pedestrians.
Pedestrians
Intersections generally must manage pedestrian as well as vehicle traffic. Pedestrian aids include crosswalks, pedestrian-directed traffic signals ("walk light") and over/underpasses. Walk lights may be accompanied by audio signals to aid the visually impaired. Medians can offer pedestrian islands, allowing pedestrians to divide their crossings into a separate segment for each traffic direction, possibly with a separate signal for each.
See also
List of road junctions in the United Kingdom
Junction (traffic)
References
Transport infrastructure | Road junction | [
"Physics"
] | 1,372 | [
"Physical systems",
"Transport",
"Transport infrastructure"
] |
3,720,236 | https://en.wikipedia.org/wiki/Boundary%20conformal%20field%20theory | In theoretical physics, boundary conformal field theory (BCFT) is a conformal field theory defined on a spacetime with a boundary (or boundaries). Different kinds of boundary conditions for the fields may be imposed on the fundamental fields; for example, Neumann boundary condition or Dirichlet boundary condition is acceptable for free bosonic fields. BCFT was developed by John Cardy.
In the context of string theory, physicists are often interested in two-dimensional BCFTs. The specific types of boundary conditions in a specific CFT describe different kinds of D-branes.
BCFT is also used in condensed matter physics - it can be used to study boundary critical behavior and to solve quantum impurity models.
See also
Conformal field theory
Operator product expansion
Critical point
References
Further reading
Conformal field theory | Boundary conformal field theory | [
"Physics"
] | 164 | [
"Quantum mechanics",
"Quantum physics stubs"
] |
3,720,440 | https://en.wikipedia.org/wiki/Back%20pressure | Back pressure (or backpressure) is the term for a resistance to the desired flow of fluid through pipes. Obstructions or tight bends create backpressure via friction loss and pressure drop.
In distributed systems, in particular event-driven architectures, back pressure is a technique to regulate the flow of data, ensuring that components do not become overwhelmed.
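A minimal sketch of the software sense of the term (not tied to any particular framework; the buffer size and timings below are arbitrary illustration values): a bounded queue applies back pressure by blocking a fast producer until a slower consumer catches up.

```python
import queue
import threading
import time

buf = queue.Queue(maxsize=3)       # bounded buffer: the source of back pressure

def producer():
    for i in range(10):
        buf.put(i)                 # blocks while the queue is full, throttling the producer
        print(f"produced {i}")

def consumer():
    for _ in range(10):
        item = buf.get()
        time.sleep(0.1)            # simulate a slow downstream component
        print(f"consumed {item}")

threading.Thread(target=producer).start()
consumer()
```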
Explanation
A common example of backpressure is that caused by the exhaust system (consisting of the exhaust manifold, catalytic converter, muffler and connecting pipes) of an automotive four-stroke engine, which has a negative effect on engine efficiency, resulting in a decrease of power output that must be compensated by increasing fuel consumption.
In a piston-ported two-stroke engine, however, the situation is more complicated, due to the need to prevent unburned fuel/air mixture from passing right through the cylinders into the exhaust. During the exhaust phase of the cycle, backpressure is even more undesirable than in a four-stroke engine, as there is less time available for exhaust and the lack of pumping action from the piston to force the exhaust out of the cylinder. However, since the exhaust port necessarily remains open for a time after scavenging is completed, unburned mixture can follow the exhaust out of the cylinder, wasting fuel and increasing pollution. This can only be prevented if the pressure at the exhaust port is greater than that in the cylinder. Since the timing of this process is determined mainly by exhaust system geometry, which is extremely difficult to make variable, correct timing and therefore optimum engine efficiency can typically only be achieved over a small part of the engine's range of operating speed.
Liquid chromatography
Back pressure is the term used for the hydraulic pressure required to create a flow through a chromatography column in high-performance liquid chromatography, the term deriving from the fact that it is generated by the resistance of the column, and exerts its influence backwards on the pump that must supply the flow. Back-pressure is a useful diagnostic feature of problems with the chromatography column. Rapid chromatography is favoured by columns packed with very small particles, which create high back-pressures. Column designers use "kinetic plots" to show the performance of a column at a constant back-pressure, usually selected as the maximum that a system's pump can reliably produce.
See also
Exhaust pulse pressure charging
Expansion chamber
Scalar quantity
References
Engine technology
Pressure
Piping
Two-stroke engine technology | Back pressure | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 507 | [
"Scalar physical quantities",
"Mechanical quantities",
"Physical quantities",
"Engines",
"Building engineering",
"Chemical engineering",
"Pressure",
"Engine technology",
"Mechanical engineering",
"Piping",
"Wikipedia categories named after physical quantities"
] |
3,720,810 | https://en.wikipedia.org/wiki/Yang%E2%80%93Baxter%20equation | In physics, the Yang–Baxter equation (or star–triangle relation) is a consistency equation which was first introduced in the field of statistical mechanics. It depends on the idea that in some scattering situations, particles may preserve their momentum while changing their quantum internal states. It states that a matrix , acting on two out of three objects, satisfies
where is followed by a swap of the two objects. In one-dimensional quantum systems, is the scattering matrix and if it satisfies the Yang–Baxter equation then the system is integrable. The Yang–Baxter equation also shows up when discussing knot theory and the braid groups where corresponds to swapping two strands. Since one can swap three strands in two different ways, the Yang–Baxter equation enforces that both paths are the same.
History
According to Jimbo, the Yang–Baxter equation (YBE) manifested itself in the works of J. B. McGuire in 1964 and C. N. Yang in 1967. They considered a quantum mechanical many-body problem on a line having a delta-function interaction, $c\sum_{i<j}\delta(x_i - x_j)$, as the potential. Using Bethe's Ansatz techniques, they found that the scattering matrix factorized to that of the two-body problem, and determined it exactly. Here YBE arises as the consistency condition for the factorization.
In statistical mechanics, the source of YBE probably goes back to Onsager's star-triangle relation, briefly mentioned in the introduction to his solution of the Ising model in 1944. The hunt for solvable lattice models has been actively pursued since then, culminating in Baxter's solution of the eight vertex model in 1972.
Another line of development was the theory of factorized S-matrix in two dimensional quantum field theory. Zamolodchikov pointed out that the algebraic mechanics working here is the same as that in Baxter's and others' works.
The YBE has also manifested itself in a study of Young operators in the group algebra of the symmetric group in the work of A. A. Jucys in 1966.
General form of the parameter-dependent Yang–Baxter equation
Let $A$ be a unital associative algebra. In its most general form, the parameter-dependent Yang–Baxter equation is an equation for $R(u, u')$, a parameter-dependent element of the tensor product $A \otimes A$ (here, $u$ and $u'$ are the parameters, which usually range over the real numbers ℝ in the case of an additive parameter, or over positive real numbers ℝ+ in the case of a multiplicative parameter).
Let $R_{ij}(u, u') = \phi_{ij}(R(u, u'))$ for $1 \le i < j \le 3$, with algebra homomorphisms $\phi_{ij}: A \otimes A \to A \otimes A \otimes A$ determined by
$\phi_{12}(a \otimes b) = a \otimes b \otimes 1, \quad \phi_{13}(a \otimes b) = a \otimes 1 \otimes b, \quad \phi_{23}(a \otimes b) = 1 \otimes a \otimes b.$
The general form of the Yang–Baxter equation is
$R_{12}(u_1, u_2)\, R_{13}(u_1, u_3)\, R_{23}(u_2, u_3) = R_{23}(u_2, u_3)\, R_{13}(u_1, u_3)\, R_{12}(u_1, u_2)$
for all values of $u_1$, $u_2$ and $u_3$.
Parameter-independent form
Let $A$ be a unital associative algebra. The parameter-independent Yang–Baxter equation is an equation for $R$, an invertible element of the tensor product $A \otimes A$. The Yang–Baxter equation is
$R_{12}\, R_{13}\, R_{23} = R_{23}\, R_{13}\, R_{12},$
where $R_{12} = \phi_{12}(R)$, $R_{13} = \phi_{13}(R)$, and $R_{23} = \phi_{23}(R)$.
With respect to a basis
Often the unital associative algebra is the algebra of endomorphisms of a vector space over a field , that is, . With respect to a basis of , the components of the matrices are written , which is the component associated to the map . Omitting parameter dependence, the component of the Yang–Baxter equation associated to the map reads
Alternate form and representations of the braid group
Let be a module of , and . Let be the linear map satisfying for all . The Yang–Baxter equation then has the following alternate form in terms of on .
.
Alternatively, we can express it in the same notation as above, defining , in which case the alternate form is
In the parameter-independent special case where does not depend on parameters, the equation reduces to
,
and (if is invertible) a representation of the braid group, , can be constructed on by for . This representation can be used to determine quasi-invariants of braids, knots and links.
Symmetry
Solutions to the Yang–Baxter equation are often constrained by requiring the $R$-matrix to be invariant under the action of a Lie group $G$. For example, in the case $G = GL(V)$ and $R(u, u'): V \otimes V \to V \otimes V$, the only $G$-invariant maps in $\operatorname{End}(V \otimes V)$ are the identity $\mathbf{1}$ and the permutation map $P$. The general form of the $R$-matrix is then $R(u, u') = \alpha(u, u')\,\mathbf{1} + \beta(u, u')\,P$ for scalar functions $\alpha$ and $\beta$.
The Yang–Baxter equation is homogeneous in parameter dependence in the sense that if one defines $R'(u_i, u_j) = f(u_i, u_j)\, R(u_i, u_j)$, where $f$ is a scalar function, then $R'$ also satisfies the Yang–Baxter equation.
The argument space itself may have symmetry. For example translation invariance enforces that the dependence on the arguments must be only through the translation-invariant difference $u - u'$, while scale invariance enforces that $R$ is a function of the scale-invariant ratio $u/u'$.
Parametrizations and example solutions
A common ansatz for computing solutions is the difference property, $R(u, u') = R(u - u')$, where $R$ depends only on a single (additive) parameter. Equivalently, taking logarithms, we may choose the parametrization $R(z, z') = R(z/z')$, in which case $R$ is said to depend on a multiplicative parameter. In those cases, we may reduce the YBE to two free parameters in a form that facilitates computations:
$R_{12}(u)\, R_{13}(u + v)\, R_{23}(v) = R_{23}(v)\, R_{13}(u + v)\, R_{12}(u)$
for all values of $u$ and $v$. For a multiplicative parameter, the Yang–Baxter equation is
$R_{12}(z)\, R_{13}(z z')\, R_{23}(z') = R_{23}(z')\, R_{13}(z z')\, R_{12}(z)$
for all values of $z$ and $z'$.
The braided forms read as:
In some cases, the determinant of can vanish at specific values of the spectral parameter . Some matrices turn into a one dimensional projector at
. In this case a quantum determinant can be defined .
Example solutions of the parameter-dependent YBE
A particularly simple class of parameter-dependent solutions can be obtained from solutions of the parameter-independent YBE satisfying , where the corresponding braid group representation is a permutation group representation. In this case, (equivalently, ) is a solution of the (additive) parameter-dependent YBE. In the case where and , this gives the scattering matrix of the Heisenberg XXX spin chain.
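Such a solution can be checked numerically. The sketch below (plain NumPy; the conventions and parameter values are chosen only for illustration) verifies the additive-parameter Yang–Baxter equation for the rational R-matrix $R(u) = u\,\mathbf{1} + P$ on $\mathbb{C}^2 \otimes \mathbb{C}^2$, where $P$ is the permutation operator; up to normalization this is the R-matrix associated with the Heisenberg XXX spin chain.

```python
import numpy as np

# Permutation operator P on C^2 (x) C^2:  P(v (x) w) = w (x) v.
P = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        P[2 * j + i, 2 * i + j] = 1.0

I2, I4 = np.eye(2), np.eye(4)

def R(u):
    """Rational R-matrix R(u) = u*Id + P on C^2 (x) C^2."""
    return u * I4 + P

def R12(u):   # acts on factors 1 and 2 of C^2 (x) C^2 (x) C^2
    return np.kron(R(u), I2)

def R23(u):   # acts on factors 2 and 3
    return np.kron(I2, R(u))

def R13(u):   # acts on factors 1 and 3: conjugate R12 by the swap of factors 2 and 3
    swap23 = np.kron(I2, P)
    return swap23 @ R12(u) @ swap23

# Additive-parameter YBE:  R12(u) R13(u+v) R23(v) = R23(v) R13(u+v) R12(u)
u, v = 0.7, -1.3
lhs = R12(u) @ R13(u + v) @ R23(v)
rhs = R23(v) @ R13(u + v) @ R12(u)
print("Yang-Baxter equation satisfied:", np.allclose(lhs, rhs))
```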
The -matrices of the evaluation modules of the quantum group are given explicitly by the matrix
Then the parametrized Yang-Baxter equation (in braided form) with the multiplicative parameter is satisfied:
Classification of solutions
There are broadly speaking three classes of solutions: rational, trigonometric and elliptic. These are related to quantum groups known as the Yangian, affine quantum groups and elliptic algebras respectively.
Set-theoretic Yang–Baxter equation
Set-theoretic solutions were studied by Drinfeld. In this case, there is an -matrix invariant basis for the vector space in the sense that the -matrix maps the induced basis on to itself. This then induces a map given by restriction of the -matrix to the basis. The set-theoretic Yang–Baxter equation is then defined using the 'twisted' alternate form above, asserting
as maps on . The equation can then be considered purely as an equation in the category of sets.
Examples
.
where , the transposition map.
If is a (right) shelf, then is a set-theoretic solution to the YBE.
Classical Yang–Baxter equation
Solutions to the classical YBE were studied and to some extent classified by Belavin and Drinfeld. Given a 'classical $r$-matrix' $r \in A \otimes A$, which may also depend on a pair of arguments $(u, v)$, the classical YBE is (suppressing parameters)
$[r_{12}, r_{13}] + [r_{12}, r_{23}] + [r_{13}, r_{23}] = 0.$
This is quadratic in the $r$-matrix, unlike the usual quantum YBE which is cubic in $R$.
This equation emerges from so called quasi-classical solutions to the quantum YBE, in which the $R$-matrix admits an asymptotic expansion in terms of an expansion parameter $\hbar$,
$R_\hbar = 1 + \hbar\, r + \mathcal{O}(\hbar^2).$
The classical YBE then comes from reading off the coefficient of $\hbar^2$ in the quantum YBE (and the equation trivially holds at orders $1$ and $\hbar$).
See also
Lie bialgebra
Yangian
Reidemeister move
Quasitriangular Hopf algebra
Yang–Baxter operator
References
H.-D. Doebner, J.-D. Hennig, eds, Quantum groups, Proceedings of the 8th International Workshop on Mathematical Physics, Arnold Sommerfeld Institute, Clausthal, FRG, 1989, Springer-Verlag Berlin, .
Vyjayanthi Chari and Andrew Pressley, A Guide to Quantum Groups, (1994), Cambridge University Press, Cambridge .
Jacques H.H. Perk and Helen Au-Yang, "Yang–Baxter Equations", (2006), .
External links
Eponymous equations of physics
Yang Chen-Ning
Monoidal categories
Statistical mechanics
Exactly solvable models
Conformal field theory | Yang–Baxter equation | [
"Physics",
"Mathematics"
] | 1,699 | [
"Mathematical structures",
"Equations of physics",
"Monoidal categories",
"Eponymous equations of physics",
"Category theory",
"Statistical mechanics"
] |
3,721,032 | https://en.wikipedia.org/wiki/Supersonic%20wind%20tunnel | A supersonic wind tunnel is a wind tunnel that produces supersonic speeds (1.2<M<5)
The Mach number and flow are determined by the nozzle geometry. The Reynolds number is varied by changing the density level (pressure in the settling chamber). Therefore, a high pressure ratio is required (for a supersonic regime at M=4, this ratio is of the order of 10). Apart from that, condensation of moisture or even gas liquefaction can occur if the static temperature becomes cold enough. This means that a supersonic wind tunnel usually needs a drying or a pre-heating facility.
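The link between nozzle geometry and test-section Mach number is the isentropic area–Mach relation. The short sketch below (plain Python, assuming a calorically perfect gas with ratio of specific heats γ = 1.4) computes the nozzle area ratio A/A* needed to reach a given Mach number; at M = 4 it is about 10.7.

```python
def area_ratio(M, gamma=1.4):
    """Isentropic area-Mach relation A/A* for a calorically perfect gas."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return (1.0 / M) * term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

for M in (1.5, 2.0, 3.0, 4.0, 5.0):
    print(f"M = {M:3.1f}  ->  A/A* = {area_ratio(M):6.2f}")
```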
A supersonic wind tunnel has a large power demand, so most are designed for intermittent instead of continuous operation.
The first supersonic wind tunnel (with a cross section of 2 cm) was built in National Physical Laboratory in England, and started working in 1922.
Power requirements
The power required to run a supersonic wind tunnel is enormous, of the order of 50 MW per square meter of test section cross-sectional area. For this reason most wind tunnels operate intermittently using energy stored in high-pressure tanks. These wind tunnels are also called intermittent supersonic blowdown wind tunnels (of which a schematic preview is given below). Another way of achieving the huge power output is with the use of a vacuum storage tank. These tunnels are called indraft supersonic wind tunnels, and are seldom used because they are restricted to low Reynolds numbers. Some large countries have built major supersonic tunnels that run continuously; one is shown in the photo.
Other problems operating a supersonic wind tunnel include:
starting and unstart of the test section (related to maintaining at least a minimum pressure ratio)
adequate supply of dry air
wall interference effects due to shock wave reflection and (sometimes) blockage
instrumentation with high data acquisition speeds is required due to the short run times in intermittent tunnels
Tunnels such as a Ludwieg tube have short test times (usually less than one second), relatively high Reynolds number, and low power requirements.
Further reading
See also
Low speed wind tunnel
High speed wind tunnel
Hypersonic wind tunnel
Ludwieg tube
Shock tube
External links
Supersonic wind tunnel test demonstration (Mach 2.5) with flat plate and wedge creating an oblique shock(Video)
Fluid dynamics
Aerodynamics
Wind tunnels | Supersonic wind tunnel | [
"Chemistry",
"Engineering"
] | 473 | [
"Chemical engineering",
"Aerodynamics",
"Aerospace engineering",
"Piping",
"Fluid dynamics"
] |
3,721,767 | https://en.wikipedia.org/wiki/Biophysical%20Society | The Biophysical Society is an international scientific society whose purpose is to lead the development and dissemination of knowledge in biophysics. Founded in 1958, the Society currently consists of over 7,000 members in academia, government, and industry. Although the Society is based in the United States, it is an international organization. Overseas members currently comprise over one third of the total.
Origins
The Biophysical Society was founded in response to the growth of the field of biophysics after World War Two, as well as concerns that the American Physiological Society had become too large to serve the community of biophysicists. Discussions between prominent biophysicists in 1955 and 1956 led to the planning of the society's first meeting in Columbus, Ohio in 1957, with about 500 attendees. Among the scientists involved in the early effort were Ernest C. Pollard, Samuel Talbot, Otto Schmitt, Kenneth Stewart Cole, W. A. Selle, Max Lauffer, Ralph Stacy, Herman P. Schwan, and Robley C. Williams. This meeting was described by Cole as "a biophysics meeting with the ulterior motive of finding out if there was such a thing as biophysics and, if so, what sort of thing this biophysics might be."
Organization
The Biophysical Society is governed by four officers: the President, President-elect, Past-President Secretary, and Treasurer, as well as by a Council of twelve members in addition to the officers. These offices are elected by the membership of the society. The Council appoints an executive officer to oversee the functions and staff of the society. The society has a number of committees that help to implement its mission. The committees are: Awards, Early Careers, Education, Finance, Member Services, Membership, Committee for Inclusion and Diversity, Nominating, Professional Opportunities for Women, Program, Public Affairs, Publications, and Thematic Meetings.
The Biophysical Society also supports subgroups focusing on smaller areas within biophysics. The current subgroups are: Bioenergetics, Mitochondria, and Metabolism, Bioengineering, Biological Fluorescence, Biopolymers in vivo, Cell Biophysics, Cryo-EM, Exocytosis and Endocytosis, Intrinsically Disordered Proteins, Membrane Biophysics, Membrane Structure & Function, Mechanobiology, Molecular Biophysics, Motility & Cytoskeleton, Nanoscale Biophysics, and Permeation & Transport.
Activities
Since 1960 the Biophysical Society has published the Biophysical Journal, which is currently semi-monthly, as a specialized journal in the field of biophysics. This was started because the Society perceived other scientific journals as unsympathetic to submissions by biophysicists. The society also publishes a monthly newsletter, an annual Membership Directory, and a Products Guide.
The Biophysical Society sponsors an Annual Meeting which brings together more than 6,000 scientists for symposia, workshops, industrial and educational exhibits, subgroup meetings, and awards presentations. The meeting features a talk by that year's Biophysical Society Lecturer, chosen for significance in biophysical research and excellence in presentation; the lectures are published in the Biophysical Journal, and those since 2003 are available on video. Starting in 2010 with "Calcium Signaling" in Beijing, the society now also sponsors 3-4 smaller thematic meetings annually across the world.
Since 2016, the Society has sponsored Biophysics Week each March. The week is a global event aimed at encouraging connections within the biophysics community and raising awareness of the field and its impact among the general public, policy makers, students, and scientists in the field.
Awards
The Society currently offers eleven Society Awards each year to distinguished biophysicists in different categories. The awards are:
Anatrace Membrane Protein Award
Avanti Award in Lipids
Rosalba Kampman Distinguished Service Award
Innovation Award
Emily M. Gray Award
Fellow of the Biophysical Society Award
Founders Award
Margaret Oakley Dayhoff Award
Michael and Kate Bárány Award
The Kazuhiko Kinosita Award in Single Molecule Biophysics
BPS Award in the Biophysics of Health and Disease
Ignacio Tinoco Award
The Society also offers travel awards to its annual meeting, poster awards at Society-sponsored meetings, as well as other scientific conferences. The society sponsors "Biophysics Awards" at high school science fairs across the nation.
Public policy
The Biophysical Society's Public Affairs committee responds to science policy issues such as research, careers, and science education, and has adopted a number of positions. In February 2004, the society released a statement supporting freedom of communication of scientific data, supporting the existing policy that prior classification strictly for national security reasons should be the only reason communication of scientific data should be restricted. The society also urged a reexamination of visa policy in the wake of several foreign-born scientists being denied permission to travel to the United States, citing their importance to the economy and security of the United States. In May 2005, the society released a statement opposing the teaching of intelligent design in science classrooms, calling it an "effort to blur the distinction between science and theology".
The society is also active in supporting federal funding of science, and provides materials to assist scientists in communicating with elected officials. The society participates in the annual Science-Engineering-Technology Congressional Visits Day, in which scientists, engineers and business leaders meet with elected officials in the United States Congress.
Beginning with the 2015-2016 year, the Biophysical Society has sponsored a congressional fellow through the AAAS Technology and Policy Fellowship Program. The purpose of the program is to provide an opportunity for BPS members to gain practical experience and insights into public policy by working on Capitol Hill while also allowing scientists to contribute their knowledge and analytical skills in the federal policy realm.
See also
British Biophysical Society
References
External links
Biophysical Society records, 1955-2009 at the University of Maryland, Baltimore County
Scientific organizations established in 1957
Biophysics
Biology societies
Physics societies
Biophysics organizations
1957 establishments in Ohio
Scientific societies based in the United States | Biophysical Society | [
"Physics",
"Biology"
] | 1,213 | [
"Applied and interdisciplinary physics",
"Biophysics"
] |
3,721,845 | https://en.wikipedia.org/wiki/Rotational%E2%80%93vibrational%20coupling | In physics, rotational–vibrational coupling occurs when the rotation frequency of a system is close to or identical to a natural frequency of internal vibration. The animation on the right shows ideal motion, with the force exerted by the spring and the distance from the center of rotation increasing together linearly with no friction.
In rotational-vibrational coupling, angular velocity oscillates. By pulling the circling masses closer together, the spring transfers its stored strain energy into the kinetic energy of the circling masses, increasing their angular velocity. The spring cannot bring the circling masses together, since the spring's pull weakens as the circling masses approach. At some point, the increasing angular velocity of the circling masses overcomes the pull of the spring, causing the circling masses to increasingly distance themselves. This increasingly strains the spring, strengthening its pull and causing the circling masses to transfer their kinetic energy into the spring's strain energy, thereby decreasing the circling masses' angular velocity. At some point, the pull of the spring overcomes the angular velocity of the circling masses, restarting the cycle.
In helicopter design, helicopters must incorporate damping devices, because at specific angular velocities, the rotorblade vibrations can reinforce themselves by rotational-vibrational coupling, and build up catastrophically. Without damping, these vibrations would cause the rotorblades to break loose.
Energy conversions
The animation on the right provides a clearer view on the oscillation of the angular velocity. There is a close analogy with harmonic oscillation.
When a harmonic oscillation is at its midpoint then all the energy of the system is kinetic energy. When the harmonic oscillation is at the points furthest away from the midpoint all the energy of the system is potential energy. The energy of the system is oscillating back and forth between kinetic energy and potential energy.
In the animation with the two circling masses there is a back and forth oscillation of kinetic energy and potential energy. When the spring is at its maximal extension then the potential energy is largest, when the angular velocity is at its maximum the kinetic energy is at largest.
With a real spring there is friction involved. With a real spring the vibration will be damped and the final situation will be that the masses circle each other at a constant distance, with a constant tension of the spring.
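The oscillation of the angular velocity can be reproduced with a few lines of numerical integration. The sketch below (plain Python; the mass, spring constant and starting ellipse are hypothetical unit-scale values) integrates the planar motion of a mass under a harmonic central force and prints the instantaneous angular velocity, which oscillates at twice the orbital frequency while the angular momentum stays constant:

```python
import math

m, k = 1.0, 1.0                      # hypothetical unit mass and spring constant
dt, steps = 1.0e-4, 80000

# Start on an elliptical trajectory: r = (a, 0), v = (0, b*omega), omega = sqrt(k/m).
a_axis, b_axis = 1.5, 1.0
omega = math.sqrt(k / m)
x, y = a_axis, 0.0
vx, vy = 0.0, b_axis * omega

for n in range(steps):
    # Velocity-Verlet step for the harmonic central force F = -k r.
    ax, ay = -k * x / m, -k * y / m
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = -k * x / m, -k * y / m
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    if n % 10000 == 0:
        L = m * (x * vy - y * vx)            # angular momentum: stays constant
        ang_vel = L / (m * (x * x + y * y))  # instantaneous angular velocity: oscillates
        print(f"t = {n * dt:5.2f}   L = {L:.4f}   angular velocity = {ang_vel:.4f}")
```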
Mathematical derivation
This discussion applies the following simplifications: the spring itself is taken as being weightless, and the spring is taken as being a perfect spring; the restoring force increases in a linear way as the spring is stretched out. That is, the restoring force is exactly proportional to the distance from the center of rotation. A restoring force with this characteristic is called a harmonic force.
The following parametric equations of the position as a function of time describe the motion of the circling masses:
$x(t) = a \cos(\omega t) \qquad (1)$
$y(t) = b \sin(\omega t) \qquad (2)$
where
$a$ is the major radius
$b$ is the minor radius
$\omega$ is 360° divided by the duration of one revolution
The motion as a function of time can also be seen as a vector combination of two uniform circular motions. The parametric equations (1) and (2) can be rewritten as:
$\big(x(t), y(t)\big) = \tfrac{a+b}{2}\,\big(\cos \omega t,\ \sin \omega t\big) + \tfrac{a-b}{2}\,\big(\cos(-\omega t),\ \sin(-\omega t)\big)$
A transformation to a coordinate system that subtracts the overall circular motion leaves the eccentricity of the ellipse-shaped trajectory. The center of the eccentricity is located at a distance of $\tfrac{a+b}{2}$ from the main center:
$\big(x'(t), y'(t)\big) = \big(\tfrac{a+b}{2},\ 0\big) + \tfrac{a-b}{2}\,\big(\cos(2\omega t),\ -\sin(2\omega t)\big)$
That is in fact what is seen in the second animation, in which the motion is mapped to a coordinate system that is rotating at a constant angular velocity. The angular velocity of the motion with respect to the rotating coordinate system is 2ω, twice the angular velocity of the overall motion.
The spring is continuously doing work. More precisely, the spring is oscillating between doing positive work (increasing the weight's kinetic energy) and doing negative work (decreasing the weight's kinetic energy)
Discussion using vector notation
The centripetal force is a harmonic force:
$m\, \ddot{\mathbf{r}} = -C\, \mathbf{r}.$
The set of all solutions to the above equation of motion consists of both circular trajectories and ellipse-shaped trajectories. All the solutions have the same period of revolution. This is a distinctive feature of motion under the influence of a harmonic force; all trajectories take the same amount of time to complete a revolution.
When a rotating coordinate system is used the centrifugal term and the Coriolis term are added to the equation of motion. The following equation gives the acceleration, with respect to a rotating system, of an object in inertial motion:
$\mathbf{a}_{\mathrm{rot}} = -\boldsymbol{\Omega} \times (\boldsymbol{\Omega} \times \mathbf{r}) - 2\, \boldsymbol{\Omega} \times \mathbf{v}$
Here, Ω is the angular velocity of the rotating coordinate system with respect to the inertial coordinate system. v is velocity of the moving object with respect to the rotating coordinate system. It is important to note that the centrifugal term is determined by the angular velocity of the rotating coordinate system; the centrifugal term does not relate to the motion of the object.
In all, this gives the following three terms in the equation of motion for motion with respect to a coordinate system rotating with angular velocity Ω:
$m\, \mathbf{a}_{\mathrm{rot}} = -C\, \mathbf{r} - m\, \boldsymbol{\Omega} \times (\boldsymbol{\Omega} \times \mathbf{r}) - 2m\, \boldsymbol{\Omega} \times \mathbf{v}$
Both the centripetal force and the centrifugal term in the equation of motion are proportional to r. The angular velocity of the rotating coordinate system is adjusted to have the same period of revolution as the object following an ellipse-shaped trajectory. Hence the vector of the centripetal force and the vector of the centrifugal term are at every distance from the center equal to each other in magnitude and opposite in direction, so those two terms drop away against each other.
It is only in very special circumstances that the vector of the centripetal force and the centrifugal term drop away against each other at every distance from the center of rotation. This is the case if and only if the centripetal force is a harmonic force.
In this case, only the Coriolis term remains in the equation of motion.
Since the vector of the Coriolis term always points perpendicular to the velocity with respect to the rotating coordinate system, it follows that in the case of a restoring force that is a harmonic force, the eccentricity in the trajectory will show up as a small circular motion with respect to the rotating coordinate system. The factor 2 of the Coriolis term corresponds to a period of revolution that is half the period of the overall motion.
As expected, the analysis using vector notation results in a straight confirmation of the previous analysis:
The spring is continuously doing work. More precisely, the spring is oscillating between doing positive work (increasing the weight's kinetic energy) and doing negative work (decreasing the weight's kinetic energy).
Conservation of angular momentum
In the earlier section titled 'Energy conversions', the dynamics is followed by keeping track of the energy conversions. The increase of angular velocity on contraction is in accordance with the principle of conservation of angular momentum. Since there is no torque acting on the circling weights, angular momentum is conserved. However, this disregards the causal mechanism, which is the force of the extended spring, and the work done during its contraction and extension.
Similarly, when a cannon is fired, the projectile will shoot out of the barrel toward the target, and the barrel will recoil, in accordance with the principle of conservation of momentum. This does not mean that the projectile leaves the barrel at high velocity because the barrel recoils. While recoil of the barrel must occur, as described by Newton's third law, it is not a causal agent.
The causal mechanism is in the energy conversions: the explosion of the gunpowder converts potential chemical energy to the potential energy of a highly compressed gas. As the gas expands, its high pressure exerts a force on both the projectile and the interior of the barrel. It is through the action of that force that potential energy is converted to kinetic energy of both projectile and barrel.
In the case of rotational-vibrational coupling, the causal agent is the force exerted by the spring. The spring is oscillating between doing work and doing negative work. (The work is taken to be negative when the direction of the force is opposite to the direction of the motion.)
See also
Rotational–vibrational spectroscopy
References
Dynamical systems | Rotational–vibrational coupling | [
"Physics",
"Mathematics"
] | 1,651 | [
"Mechanics",
"Dynamical systems"
] |
3,722,654 | https://en.wikipedia.org/wiki/TASF%20reagent | The TASF reagent or tris(dimethylamino)sulfonium difluorotrimethylsilicate is a reagent in organic chemistry with structural formula [((CH3)2N)3S]+[F2Si(CH3)3]−. It is an anhydrous source of fluoride and is used to cleave silyl ether protective groups. Many other fluoride reagents are known, but few are truly anhydrous, because of the extraordinary basicity of "naked" F−. In TASF, the fluoride is masked as an adduct with the weak Lewis acid trimethylsilylfluoride (FSi(CH3)3). The sulfonium cation ((CH3)2N)3S+ is unusually non-electrophilic due to the electron-donating properties of the three (CH3)2N substituents.
This compound is prepared from sulfur tetrafluoride:
3 (CH3)2NSi(CH3)3 + SF4 → 2 (CH3)3SiF + [((CH3)2N)3S]+[F2Si(CH3)3]−
The colorless salt precipitates from the reaction solvent, diethyl ether.
Structure
The cation [((CH3)2N)3S]+ is a sulfonium ion. The S–N distances are 161.2 and 167.5 pm. The N–S–N angles are 99.6°. The anion is [F2Si(CH3)3]−. It is trigonal bipyramidal with mutually trans fluorides. The Si–F distances are 176 picometers. The Si–C distances are 188 pm.
References
Reagents for organic chemistry
Fluorides
Dimethylamino compounds
Sulfonium compounds | TASF reagent | [
"Chemistry"
] | 411 | [
"Reagents for organic chemistry",
"Fluorides",
"Salts"
] |
22,682,042 | https://en.wikipedia.org/wiki/Magnetolithography | Magnetolithography (ML) is a photoresist-less and photomaskless lithography method for patterning wafer surfaces. ML is based on applying a magnetic field on the substrate using paramagnetic metal masks named "magnetic masks" placed on either topside or backside of the wafer. Magnetic masks are analogous to a photomask in photolithography, in that they define the spatial distribution and shape of the applied magnetic field. The fabrication of the magnetic masks involves the use of conventional photolithography and photoresist however. The second component of the process is ferromagnetic nanoparticles (analogous to the photoresist in photolithography, e.g. cobalt nanoparticles) that are assembled over the substrate according to the field induced by the mask which blocks its areas from reach of etchants or depositing materials (e.g. dopants or metallic layers).
ML can be used for applying either a positive or negative approach. In the positive approach, the magnetic nanoparticles react chemically or interact via chemical recognition with the substrate. Hence, the magnetic nanoparticles are immobilized at selected locations, where the mask induces a magnetic field, resulting in a patterned substrate. In the negative approach, the magnetic nanoparticles are inert to the substrate. Hence, once they pattern the substrate, they block their binding site on the substrate from reacting with another reacting agent. After the adsorption of the reacting agent, the nanoparticles are removed, resulting in a negatively patterned substrate.
ML is also a backside lithography, which has the advantage of ease in producing multilayer structures with high accuracy of alignment and with the same efficiency for all layers.
References
Lithography (microfabrication)
Nanotechnology | Magnetolithography | [
"Materials_science",
"Engineering"
] | 377 | [
"Nanotechnology",
"Materials science",
"Microtechnology",
"Lithography (microfabrication)"
] |
22,686,225 | https://en.wikipedia.org/wiki/Concurrent%20tandem%20catalysis | Concurrent tandem catalysis (CTC) is a technique in chemistry where multiple catalysts (usually two) produce a product otherwise not accessible by a single catalyst. It is usually practiced as homogeneous catalysis.
Scheme 1 illustrates this process. Molecule A enters this catalytic system to produce the comonomer, B, which along with A enters the next catalytic process to produce the final product, P. This one-pot approach can decrease product loss from isolation or purification of intermediates. Reactions with relatively unstable products can be generated as intermediates because they are only transient species and are immediately used in a consecutive reaction.
Introduction
The major advantage of using CTC is it requires a single molecule; however, the required reaction conditions and catalyst compatibility are major hurdles. The system must be thoroughly studied to find the optimal conditions for both the catalysis and reactant to produce the desired product. Occasionally, a trade-off must be made between several competing effects.
Better yields and selectivity are of interest to many in academia and industry. Because no intermediate purification is performed in this one-pot system, unwanted products and side reactions are more likely. Matching compatible catalysts reduces the likelihood of one catalyst starving or saturating the system, which may cause a catalyst to decompose or generate unwanted side reactions. Any side products that do form may interfere with the catalytic system. In-depth knowledge of the mechanism of both catalytic processes and of the activity of the catalysts is therefore required, and kinetic measurements are a crucial instrument in the development of CTC processes.
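The interplay of the two cycles can be illustrated with a toy kinetic model of Scheme 1, in which catalyst 1 converts A into the comonomer B and catalyst 2 couples A and B into the product P. The rate laws, rate constants and starting concentration below are illustrative assumptions chosen to show the competition between the two cycles, not data for any real CTC system.

```python
# Toy kinetic sketch of concurrent tandem catalysis (Scheme 1):
#   cycle 1:  A     -> B       rate r1 = k1*[A]      (comonomer formation)
#   cycle 2:  A + B -> P       rate r2 = k2*[A]*[B]  (product formation)
# k1, k2 and A0 are illustrative assumptions, not measured values.

def simulate(k1=0.05, k2=0.5, A0=1.0, dt=0.01, t_end=200.0):
    A, B, P = A0, 0.0, 0.0
    t, history = 0.0, []
    while t <= t_end:
        r1 = k1 * A          # comonomer made by catalyst 1
        r2 = k2 * A * B      # product made by catalyst 2 (consumes one A and one B)
        A += (-r1 - r2) * dt
        B += (r1 - r2) * dt
        P += r2 * dt
        t += dt
        history.append((t, A, B, P))
    return history

if __name__ == "__main__":
    t, A, B, P = simulate()[-1]
    print("t = %.0f  [A] = %.3f  [B] = %.3f  [P] = %.3f" % (t, A, B, P))
```

Raising k1 relative to k2 floods the second cycle with comonomer, while lowering it starves that cycle of B, mirroring the catalyst-matching problem described above.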
Scope
Polymerization
One of the simplest and most thoroughly studied polymers arises from the polymerization of ethylene. Linear low-density polyethylene (LLDPE) is of industrial importance and is currently produced at a scale of millions of tons per year. Branching of the polyethylene involves oligomerization of ethylene into α-olefins, carried out by one catalyst, followed by ethylene polymerization using the α-olefins as comonomer, carried out by a second catalyst. This system suffers from practical difficulties.
Electrophilic boranes activate the chelated nickel catalyst to oligomerize ethylene into 1-butene (α-butylene). In the same pot, a titanium catalyst copolymerizes ethylene and the α-olefin to form LLDPE. The degree of branching was found to increase linearly with the concentration of the nickel catalyst.
Metathesis
Metathesis has been a powerful tool for the coupling of olefins for several decades. The ability to rearrange carbon-carbon double bonds has provided great utility in all areas of organic chemistry. Cossy et al. report a simple synthesis of substituted five- and six-membered lactones from the cross metathesis of an allylic or homoallylic alcohol with acrylic acid using a ruthenium-based metathesis catalyst. Lactones are good synthetic starting points for many natural products and are prevalent structures in biology; they are therefore widely utilized in pharmaceuticals.
Carbonylation
One of the most studied and commercially important transition-metal-catalyzed reactions is alkene hydroformylation. This type of catalysis converts simple alkenes into aldehydes and provides a remarkably useful handle for generating other functional groups. The transformation can be carried out using a cobalt or rhodium catalyst under a hydrogen/carbon monoxide atmosphere and consists of four stages: metal insertion, migratory insertion, heterolytic cleavage, and ligand exchange. Breit et al. generated extended alkane functionality by hydroformylation, olefination, and then hydrogenation.
Orthogonal tandem catalysis
Orthogonal tandem catalysis is a "one-pot reaction in which sequential catalytic processes occur through two or more functionally distinct, and preferably non-interfering, catalytic cycles". This technique has been deployed in tandem alkane-dehydrogenation/olefin-metathesis catalysis.
Photoprotection tandem catalysis
Recently, a tandem catalysis mechanism has been proposed to explain the significant photoprotection of dyes, pigments, and polymers in paints and coatings when cerium carbonate is used together with a photoactive metal oxide such as titanium dioxide.
See also
Chain shuttling polymerization
References
Catalysis | Concurrent tandem catalysis | [
"Chemistry"
] | 893 | [
"Catalysis",
"Chemical kinetics"
] |
22,690,116 | https://en.wikipedia.org/wiki/Moving%20particle%20semi-implicit%20method | The moving particle semi-implicit (MPS) method is a computational method for the simulation of incompressible free surface flows. It is a macroscopic, deterministic particle method (Lagrangian mesh-free method) developed by Koshizuka and Oka (1996).
Method
The MPS method is used to solve the Navier-Stokes equations in a Lagrangian framework. A fractional step method is applied, splitting each time step into a prediction step and a correction step. The fluid is represented by particles, and the motion of each particle is calculated from its interactions with neighboring particles by means of a kernel function. The MPS method is similar to the SPH (smoothed-particle hydrodynamics) method (Gingold and Monaghan, 1977; Lucy, 1977) in that both methods provide approximations to the strong form of the partial differential equations (PDEs) on the basis of integral interpolants. However, the MPS method applies simplified differential operator models based solely on a local weighted-averaging process, without taking the gradient of a kernel function. In addition, the solution process of the MPS method differs from that of the original SPH method in that the solutions to the PDEs are obtained through a semi-implicit prediction-correction process rather than the fully explicit scheme of the original SPH method.
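The weighted-averaging idea can be sketched in a few lines of code. The kernel and gradient model below follow commonly quoted forms of the original MPS operators (Koshizuka and Oka, 1996); the particle layout, effective radius and test field are illustrative assumptions, and a production solver would add the Laplacian model, the semi-implicit pressure correction and free-surface detection.

```python
import numpy as np

# Sketch of the MPS kernel and weighted-average gradient model.
# Note that, unlike SPH, no derivative of the kernel is taken.

def weight(r, r_e):
    """Standard MPS kernel: w(r) = r_e/r - 1 for 0 < r < r_e, else 0."""
    return np.where((r > 0) & (r < r_e), r_e / np.maximum(r, 1e-12) - 1.0, 0.0)

def number_density(pos, i, r_e):
    """Particle number density n_i = sum_j w(|r_j - r_i|)."""
    r = np.linalg.norm(pos - pos[i], axis=1)
    return weight(r, r_e).sum()

def gradient(phi, pos, i, r_e, n0, dim=2):
    """Gradient model: (d/n0) * sum_j (phi_j - phi_i)/|r_ij|^2 * r_ij * w(|r_ij|)."""
    rij = pos - pos[i]
    dist = np.linalg.norm(rij, axis=1)
    w = weight(dist, r_e)
    mask = w > 0
    terms = ((phi[mask] - phi[i]) / dist[mask] ** 2)[:, None] * rij[mask] * w[mask][:, None]
    return dim / n0 * terms.sum(axis=0)

# Illustration: particles on a regular lattice carrying the linear field phi = x,
# whose exact gradient is (1, 0).
dx = 0.1
xs, ys = np.meshgrid(np.arange(0, 1 + dx, dx), np.arange(0, 1 + dx, dx))
pos = np.column_stack([xs.ravel(), ys.ravel()])
phi = pos[:, 0]
r_e = 2.1 * dx
center = int(np.argmin(np.linalg.norm(pos - 0.5, axis=1)))
n0 = number_density(pos, center, r_e)
print(gradient(phi, pos, center, r_e, n0))   # approximately [1, 0]
```

On a symmetric neighborhood the model recovers the exact gradient of a linear field, which is one reason the simple weighted average suffices without a kernel derivative.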
Applications
Over the past years, the MPS method has been applied in a wide range of engineering applications including Nuclear Engineering (e.g. Koshizuka et al., 1999; Koshizuka and Oka, 2001; Xie et al., 2005), Coastal Engineering (e.g. Gotoh et al., 2005; Gotoh and Sakai, 2006), Environmental Hydraulics (e.g. Shakibaeinia and Jin, 2009; Nabian and Farhadi, 2016), Ocean Engineering (Shibata and Koshizuka, 2007; Sueyoshi et al., 2008; Zuo et al., 2022), Structural Engineering (e.g. Chikazawa et al., 2001), Mechanical Engineering (e.g. Heo et al., 2002; Sun et al., 2009), Bioengineering (e.g. Tsubota et al., 2006) and Chemical Engineering (e.g. Sun et al., 2009; Xu and Jin, 2018).
Improvements
Improved versions of the MPS method have been proposed for enhancement of numerical stability (e.g. Koshizuka et al., 1998; Zhang et al., 2005; Ataie-Ashtiani and Farhadi, 2006; Shakibaeinia and Jin, 2009; Jandaghian and Shakibaeinia, 2020; Cheng et al., 2021), momentum conservation (e.g. Hamiltonian MPS by Suzuki et al., 2007; Corrected MPS by Khayyer and Gotoh, 2008; Enhanced MPS by Jandaghian and Shakibaeinia, 2020), mechanical energy conservation (e.g. Hamiltonian MPS by Suzuki et al., 2007), pressure calculation (e.g. Khayyer and Gotoh, 2009; Kondo and Koshizuka, 2010; Khayyer and Gotoh, 2010; Xu and Jin, 2019), and for the simulation of multiphase and granular flows (Nabian and Farhadi, 2016; Xu and Jin, 2021; Xu and Li, 2022).
References
R.A. Gingold and J.J. Monaghan, "Smoothed particle hydrodynamics: theory and application to non-spherical stars," Mon. Not. R. Astron. Soc., Vol 181, pp. 375–89, 1977.
L.B. Lucy, "A numerical approach to the testing of the fission hypothesis," Astron. J., Vol 82, pp. 1013–1024, 1977.
S. Koshizuka and Y. Oka, "Moving particle semi-implicit method for fragmentation of incompressible fluid," Nuclear Science and Engineering, Vol 123, pp. 421–434, 1996.
S. Koshizuka, A. Nobe and Y. Oka, "Numerical Analysis of Breaking Waves Using the Moving Particle Semi-implicit Method," Int. J. Numer. Meth. Fluid, Vol 26, pp. 751–769, 1998.
S. Koshizuka, H. Ikeda and Y. Oka, "Numerical analysis of fragmentation mechanisms in vapor explosions," Nuclear Engineering and Design, Vol 189, pp. 423–433, 1999.
Y. Chikazawa, S. Koshizuka, and Y. Oka, "A particle method for elastic and visco-plastic structures and fluid-structure interactions," Comput. Mech. 27, pp. 97–106, 2001.
S. Koshizuka, S. and Y. Oka, "Application of Moving Particle Semi-implicit Method to Nuclear Reactor Safety," Comput. Fluid Dyn. J., Vol 9, pp. 366–375, 2001.
S. Heo, S. Koshizuka and Y. Oka, "Numerical analysis of boiling on high heat-flux and high subcooling condition using MPS-MAFL," International Journal of Heat and Mass Transfer, Vol 45, pp. 2633–2642, 2002.
H. Gotoh, H. Ikari, T. Memita and T. Sakai, "Lagrangian particle method for simulation of wave overtopping on a vertical seawall," Coast. Eng. J., Vol 47, No 2–3, pp. 157–181, 2005.
H. Xie, S. Koshizuka and Y. Oka, "Simulation of drop deposition process in annular mist flow using three-dimensional particle method," Nuclear Engineering and Design, Vol 235, pp. 1687–1697, 2005.
S. Zhang, K. Morita, K. Fukuda and N. Shirakawa, "An improved MPS method for numerical simulations of convective heat transfer problems," Int. J. Numer. Meth. Fluid, 51, 31–47, 2005.
B. Ataie-Ashtiani and L. Farhadi, "A stable moving particle semi-implicit method for free surface flows," Fluid Dynamics Research 38, pp. 241–256, 2006.
H. Gotoh and T. Sakai, "Key issues in the particle method for computation of wave breaking," Coastal Engineering, Vol 53, No 2–3, pp. 171–179, 2006.
K. Tsubota, S. Wada, H. Kamada, Y. Kitagawa, R. Lima and T. Yamaguchi, "A Particle Method for Blood Flow Simulation – Application to Flowing Red Blood Cells and Platelets–," Journal of the Earth Simulator, Vol 5, pp. 2–7, 2006.
K. Shibata and S. Koshizuka, "Numerical analysis of shipping water impact on a deck using a particle method," Ocean Engineering, Vol 34, pp. 585–593, 2007.
Y. Suzuki, S. Koshizuka, Y. Oka, "Hamiltonian moving-particle semi-implicit (HMPS) method for incompressible fluid flows," Computer Methods in Applied Mechanics and Engineering, Vol 196, pp. 2876-2894, 2007.
A. Khayyer and H. Gotoh, "Development of CMPS method for accurate water-surface tracking in breaking waves," Coast. Eng. J., Vol 50, No 2, pp. 179–207, 2008.
M. Sueyoshi, M. Kashiwagi and S. Naito, "Numerical simulation of wave-induced nonlinear motions of a two-dimensional floating body by the moving particle semi-implicit method," Journal of Marine Science and Technology, Vol 13, pp. 85–94, 2008.
A. Khayyer and H. Gotoh, "Modified Moving Particle Semi-implicit methods for the prediction of 2D wave impact pressure," Coastal Engineering, Vol 56, pp. 419–440, 2009.
A. Shakibaeinia and Y.C. Jin "A weakly compressible MPS method for simulation open-boundary free-surface flow." Int. J. Numer. Methods Fluids, 63 (10):1208–1232 (Published Online: 7 Aug 2009 ).
A. Shakibaeinia and Y.C. Jin "Lagrangian Modeling of flow over spillways using moving particle semi-implicit method." Proc. 33rd IAHR Congress, Vancouver, Canada, 2009, 1809–1816.
Z. Sun, G. Xi and X. Chen, "A numerical study of stir mixing of liquids with particle method," Chemical Engineering Science, Vol 64, pp. 341–350, 2009.
Z. Sun, G. Xi and X. Chen, "Mechanism study of deformation and mass transfer for binary droplet collisions with particle method," Phys. Fluids, Vol 21, 032106, 2009.
A. Khayyer and H. Gotoh, "A higher order Laplacian model for enhancement and stabilization of pressure calculation by the MPS method," Applied Ocean Research, Vol 32, pp. 124-131, 2010.
A. Shakibaeinia and Y.C. Jin "A mesh-free particle model for simulation of mobile-bed dam break." Advances in Water Resources, 34 (6):794–807 .
A. Shakibaeinia and Y.C. Jin "A MPS Based Mesh-free Particle Method for Open Channel flow." Journal of Hydraulic Engineering ASCE. 137(11): 1375–1384. 2011.
M. Kondo and S. Koshizuka, "Improvement of stability in moving particle semi-implicit method", Int. J. Numer. Meth. Fluid, Vol. 65, pp. 638-654, 2011.
A. Shakibaeinia and Y.C. Jin "MPS Mesh-Free Particle Method for Multiphase Flows." Computer methods in Applied Mechanics and Engineering. 229–232: 13–26. 2012.
K.S. Kim, M.H. Kim and J.C. Park, "Development of MPS (Moving Particle Simulation) method for Multi-liquid-layer Sloshing," Mathematical Problems in Engineering, Vol. 2014.
M.A. Nabian and L. Farhadi, "Multiphase Mesh-Free Particle Method for Simulating Granular Flows and Sediment Transport," Journal of Hydraulic Engineering, 2016.
T. Xu, Y. C. Jin, Simulation the convective mixing of CO2 in geological formations with a meshless model. Chemical Engineering Science, 192, 187-198, 2018.
T. Xu, Y. C. Jin, Improvement of a projection-based particle method in free-surface flows by improved Laplacian model and stabilization techniques. Computers & Fluids, 191, 104235, 2019.
M. Jandaghian and A. Shakibaeinia "An enhanced weakly-compressible MPS method for free-surface flows," Computer Methods in Applied Mechanics and Engineering, vol. 360, p. 112771, 2020/03/01/ 2020, doi: https://doi.org/10.1016/j.cma.2019.112771.
L. Y. Cheng, R. A. Amaro Jr., E. H. Favero, "Improving stability of moving particle semi-implicit method by source terms based on time-scale correction of particle-level impulses," Engineering Analysis with Boundary Elements, Vol. 131, pp. 118-145, 2021.
T. Xu, Y. C. Jin, Two-dimensional continuum modelling granular column collapse by non-local peridynamics in a mesh-free method with rheology. Journal of Fluid Mechanics, 917, A51, 2021.
T. Xu, S. S. Li, Development of a non-local partial Peridynamic explicit mesh-free incompressible method and its validation for simulating dry dense granular flows. Acta Geotechnica, 1-20, 2022.
J. Zuo, T. Xu, D. Z. Zhu, H. Gu, Impact pressure of dam-break waves on a vertical wall with various downstream conditions by an explicit mesh-free method. Ocean Engineering, 256, 111569, 2022.
Specific
External links
Laboratory of Professor Seiichi Koshizuka at the University of Tokyo, Japan
Laboratory of Professor Hitoshi Gotoh at Kyoto University, Japan
MPS-RYUJIN by Fuji Technical Research
Fluid dynamics
Numerical differential equations | Moving particle semi-implicit method | [
"Chemistry",
"Engineering"
] | 2,664 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
20,124,078 | https://en.wikipedia.org/wiki/Grome | Grome is an environmental modeling package developed by Quad Software dedicated for procedural and manual generation of large virtual outdoor worlds suitable for games and other 3D real-time simulation applications.
History
After more than two years of internal development, the program was first launched as version 1.0 and immediately started to be used by various professional game studios and independent developers. Version 1.1 followed, adding various optimizations, pen tablet support and more flexibility through user-defined data sets. Version 1.2 then optimized the existing fractal tools and added new ones, extended import and export with new data formats, and introduced the Grome scripting language for automated tasks.
The major 2.0 release introduced many new additions. With this version, Grome became a more complete outdoor editing platform by introducing editing of water surfaces, vegetation and other decoration layers. 64-bit processing was introduced to take advantage of large amounts of RAM and 64-bit CPUs. With this version, Quad Software also announced the availability of customization work for professional studios that need a specific Grome version for their projects.
Grome 3.0 brought road network editing, support for per-pixel materials, optimization tools for mobile and low-end hardware, and other new tools. A new update, version 3.1, was released on October 12, 2011 with a focus on optimization. A further update, version 3.11, was released on September 12, 2012, fixing various issues and improving the SDK.
Main features
General
Easy to use user interface, with shortcuts available for every operation. Real-time preview on multiple customizable hardware accelerated viewports. Presets system for tool parameters.
Terrain editor
Supports unlimited terrain size using custom data swapping mechanism to hard disk space. Terrain made of multiple terrain zones supporting variable resolution and automatic border stitching.
Procedural heightmap generation using fractal, terracing and erosion tools. Possibility to create and assign multiple layers of heightmaps for each zone.
Manual editing of heightmap layers using selection layers and brushes. Flexible brush system that allows custom orientation, mask, strength, directions and pen tablet pressure. Elevation, terracing and fractals brushes.
Novel texture layering system that allows different shading methods, variable resolutions and images per terrain zones. Creation of images based on brushes or procedural generation based on slope, direction, altitude, external shape files and erosion flowmaps.
Object editing
User defined format system using Grome SDK plugins.
Spawn objects individually or using brushes with custom orientation and area of effect. Tag, categorize, search and replace tools for objects.
Gizmo and numeric object transformation tools for translation, scaling and rotation. Possibility to group objects for transformation around a common pivot. Groups can be saved to disk and later restored.
Object-terrain linking mechanism that allows snapping objects to terrain and moving objects along ground surface.
Road editing
Starting with version 3.0 road placement and manipulation was introduced. Control of road geometry generation, resolution, banking and intersections is now possible while real-time road/terrain interaction computation is done using the GPU.
Water editing
Real-time water rendering using pixel lighting, bump mapping and masking for smooth shoreline transitions. Generation of water layers masks with brushes or based on terrain heights. Coloring and lightmapping of water based on terrain texture layers.
Decoration layers
Allow rendering of massive populations of decoration objects and grass blades. Animated grass effects simulate wind and characters walking through. Generation with brushes or automatically based on terrain slope, orientation and altitude.
Software development kit (SDK)
Offered to every Grome client to allow integration with their engine/application pipeline. Allows saving and loading from custom formats, defining new mesh formats and adding automatic responses to various editing events. Comes with source code for all default plugins and documentation. Out of the box, Grome includes default plugins supporting COLLADA, XML, DTED, shape files, 16-bit RAW, BT and all major image formats.
Licensing
Grome 3.0 is currently offered at two different price points, with the same base version available to individuals and small developers at a reduced price. Professional companies benefit from premium support and dongle-based protection. Normal builds use file-based activation per computer.
Customization program
Starting with version 2.0, Quad Software publicly announced the availability of customization work for companies, whereby professional studios can obtain customized builds as part of under-contract projects. Examples of previously developed customized modules were presented, among them road editing, AI navigation mesh generation and a more advanced lightmapper.
Graphite Renderer
Graphite is a real-time outdoor rendering library created by the Grome developers to allow easy integration of Grome scenes into any 3D engine and graphical application. The renderer is designed to be flexible and supports the DirectX rendering API. It is offered free of charge in binary form for non-commercial usage to all Grome clients, while paid licenses allowing source access and commercial usage are offered to professional studios.
Grome scenes gallery
See also
3D computer graphics software
Heightmap
List of level editors
List of 3D graphics software
List of supported engines:
OGRE
OpenSceneGraph
Torque (game engine)
Unity (game engine)
Unreal Engine
References
External links
Grome Editor Homepage
Customization Program
Graphite Library
2007 software
3D graphics software
Environmental science software | Grome | [
"Environmental_science"
] | 1,093 | [
"Environmental science software"
] |
20,124,373 | https://en.wikipedia.org/wiki/Andreas%20Acrivos | Andreas Acrivos (born 13 June 1928) is the Albert Einstein Professor of Science and Engineering, emeritus at the City College of New York. He is also the director of the Benjamin Levich Institute for Physicochemical Hydrodynamics.
Education and career
Born in Athens, Greece, Acrivos moved to the United States to pursue an engineering education. He received a bachelor's degree from Syracuse University in 1950; a master's degree from the University of Minnesota in 1951; and a Ph.D. from the University of Minnesota in 1954; all in chemical engineering.
Acrivos is considered to be one of the leading fluid dynamicists of the 20th century. In 1954, Acrivos joined the faculty at the University of California, Berkeley. In 1962, he moved to Stanford University, where he worked with Professor David Mason to build chemical engineering programs. In 1977, he was elected to the National Academy of Engineering for contributions to the application of mathematical analysis to the understanding of fundamental phenomena in chemical engineering processes. In 1987, Acrivos became the Albert Einstein Professor of Science and Engineering at The City College of the City University of New York, succeeding Veniamin Levich.
From 1982 to 1997, Acrivos served as the editor-in-chief of Physics of Fluids.
Awards and honors
National Medal of Science, 2001
Fellow of the American Academy of Arts and Sciences, 1993
Fluid Dynamics Prize, 1991
G. I. Taylor Medal, Society of Engineering Science, 1988
Elected as a member into the National Academy of Engineering, 1977
Acrivos has been listed as an ISI Highly Cited Author in Engineering by the ISI Web of Knowledge, Thomson Scientific Company.
References
External links
1928 births
Living people
City College of New York faculty
Fluid dynamicists
Greek emigrants to the United States
20th-century Greek physicists
Members of the United States National Academy of Sciences
Members of the United States National Academy of Engineering
National Medal of Science laureates
Engineers from Athens
Stanford University School of Engineering faculty
University of Minnesota College of Science and Engineering alumni
UC Berkeley College of Engineering faculty
Fellows of the American Physical Society
Fellows of the American Academy of Arts and Sciences
American chemical engineers
Fellows of Clare Hall, Cambridge
Minnesota CEMS
Physics of Fluids editors | Andreas Acrivos | [
"Chemistry"
] | 452 | [
"Fluid dynamicists",
"Fluid dynamics"
] |
20,125,479 | https://en.wikipedia.org/wiki/Collision%20cascade | In condensed-matter physics, a collision cascade (also known as a displacement cascade or a displacement spike) is a set of nearby adjacent energetic (much higher than ordinary thermal energies) collisions of atoms induced by an energetic particle in a solid or liquid.
If the maximum atom or ion energies in a collision cascade are higher than the threshold displacement energy of the material (tens of eVs or more), the collisions can permanently displace atoms from their lattice sites and produce defects. The initial energetic atom can be, e.g., an ion from a particle accelerator, an atomic recoil produced by a passing high-energy neutron, electron or photon, or be produced when a radioactive nucleus decays and gives the atom a recoil energy.
The nature of collision cascades can vary strongly depending on the energy and mass of the recoil/incoming ion and density of the material (stopping power).
Linear cascades
When the initial recoil/ion mass is low, and the material where the cascade occurs has a low density (i.e. the recoil-material combination has a low stopping power), the collisions between the initial recoil and sample atoms occur rarely, and can be understood well as a sequence of independent binary collisions between atoms. This kind of a cascade can be theoretically well treated using the binary collision approximation (BCA) simulation approach. For instance, H and He ions with energies below 10 keV can be expected to lead to purely linear cascades in all materials.
The most commonly used BCA code, SRIM, can be used to simulate linear collision cascades in disordered materials for all ions in all materials up to ion energies of 1 GeV. Note, however, that SRIM does not treat effects such as damage due to electronic energy deposition or damage produced by excited electrons. The nuclear and electronic stopping powers used are averaged fits to experiments, and are thus not perfectly accurate either. The electronic stopping power can be readily included in binary collision approximation or molecular dynamics (MD) simulations. In MD simulations it can be included either as a frictional force or, in a more advanced manner, by also following the heating of the electronic system and coupling the electronic and atomic degrees of freedom. However, uncertainties remain about the appropriate low-energy limit of the electronic stopping power and about the strength of the electron-phonon coupling.
In linear cascades the set of recoils produced in the sample can be described as a sequence of recoil generations depending on how many collision steps have passed since the original collision: primary knock-on atoms (PKA), secondary knock-on atoms (SKA), tertiary knock-on atoms (TKA), etc. Since it is extremely unlikely that all energy would be transferred to a knock-on atom, each generation of recoil atoms has on average less energy than the previous, and eventually the knock-on atom energies go below the threshold displacement energy for damage production, at which point no more damage can be produced.
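The thinning-out of energy from one recoil generation to the next can be illustrated with a toy branching model. The random energy-sharing rule, the 50 keV primary energy and the 40 eV threshold displacement energy below are illustrative assumptions, and this is not a binary collision approximation calculation.

```python
import random

# Toy model of recoil generations in a linear cascade: a moving atom shares a
# random fraction of its energy with each atom it strikes and stops producing
# new recoils once its energy drops near the threshold displacement energy E_d.

def cascade(E_pka=50e3, E_d=40.0, seed=1):
    random.seed(seed)
    generations = [[E_pka]]                # generation 0: the primary knock-on atom
    while generations[-1]:
        next_gen = []
        for E in generations[-1]:
            while E > 2 * E_d:             # still energetic enough to displace atoms
                T = random.uniform(0, E)   # energy given to the struck atom
                E -= T
                if T > E_d:
                    next_gen.append(T)     # a new knock-on atom (SKA, TKA, ...)
        generations.append(next_gen)
    return generations[:-1]

for g, atoms in enumerate(cascade()):
    print("generation %d: %4d recoils, mean energy %9.1f eV"
          % (g, len(atoms), sum(atoms) / len(atoms)))
```

Successive generations typically contain more recoils with a lower mean energy, until no recoil exceeds the displacement threshold and the cascade dies out.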
Heat spikes (thermal spikes)
When the ion is heavy and energetic enough, and the material is dense, the collisions between the ions may occur so near to each other that they can not be considered independent of each other. In this case the process becomes a complicated process of many-body interactions between hundreds and tens of thousands of atoms, which can not be treated with the BCA, but can be modelled using molecular dynamics methods.
Typically, a heat spike is characterized by the formation of a transient underdense region in the center of the cascade, and an overdense region around it. After the cascade, the overdense region becomes interstitial defects, and the underdense region typically becomes a region of vacancies.
If the kinetic energy of the atoms in the region of dense collisions is recalculated into temperature (using the basic equation E = 3/2·N·kBT), one finds that the kinetic energy in units of temperature is initially of the order of 10,000 K. Because of this, the region can be considered to be very hot, and is therefore called a heat spike or thermal spike (the two terms are usually considered to be equivalent). The heat spike cools down to the ambient temperature in 1–100 ps, so the "temperature" here does not correspond to thermodynamic equilibrium temperature. However, it has been shown that after about 3 lattice vibrations, the kinetic energy distribution of the atoms in a heat spike has the Maxwell–Boltzmann distribution, making the use of the concept of temperature somewhat justified. Moreover, experiments have shown that a heat spike can induce a phase transition which is known to require a very high temperature, showing that the concept of a (non-equilibrium) temperature is indeed useful in describing collision cascades.
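As a quick order-of-magnitude check of the quoted figure, the kinetic energy per atom in the cascade core can be converted into a nominal temperature with E = 3/2·N·kBT. The value of 1 eV per atom used below is an illustrative assumption of the order found in dense cascade cores.

```python
# Nominal "temperature" of a heat spike from E = 3/2 * N * kB * T.

K_B = 8.617333262e-5          # Boltzmann constant in eV/K

def spike_temperature(energy_per_atom_eV):
    """Temperature of N atoms sharing kinetic energy N * energy_per_atom_eV."""
    return 2.0 * energy_per_atom_eV / (3.0 * K_B)

print("%.0f K" % spike_temperature(1.0))   # roughly 7,700 K, i.e. of order 10^4 K
```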
In many cases, the same irradiation condition is a combination of linear cascades and heat spikes. For example, 10 MeV Cu ions bombarding Cu would initially move in the lattice in a linear cascade regime, since the nuclear stopping power is low. But once the Cu ion would slow down enough, the nuclear stopping power would increase and a heat spike would be produced. Moreover, many of the primary and secondary recoils of the incoming ions would likely have energies in the keV range and thus produce a heat spike.
For instance, for copper irradiation of copper, recoil energies of around 5–20 keV are almost guaranteed to produce heat spikes. At lower energies, the cascade energy is too low to produce a liquid-like zone. At much higher energies, the Cu ions would most likely lead initially to a linear cascade, but the recoils could produce heat spikes, as would the initial ion once it has slowed down enough. The subcascade breakdown threshold energy is the energy above which a recoil in a material is likely to produce several isolated heat spikes rather than a single dense one.
Computer simulation-based animations of collision cascades in the heat spike regime are available on YouTube.
Swift heavy ion thermal spikes
Swift heavy ions, i.e. MeV and GeV heavy ions which produce damage by a very strong electronic stopping, can also be considered to produce thermal spikes in the sense that they lead to strong lattice heating and a transient disordered atom zone. However, at least the initial stage of the damage might be better understood in terms of a Coulomb explosion mechanism. Regardless of what the heating mechanism is, it is well established that swift heavy ions in insulators typically produce ion tracks forming long cylindrical damage zones of reduced density.
Time scale
To understand the nature of a collision cascade, it is very important to know the associated time scales. The ballistic phase of the cascade, when the initial ion/recoil and its primary and lower-order recoils have energies well above the threshold displacement energy, typically lasts 0.1–0.5 ps. If a heat spike is formed, it can live for some 1–100 ps until the spike temperature has cooled down essentially to the ambient temperature. The cooling of the cascade occurs via lattice heat conductivity and via electronic heat conductivity after the hot ionic subsystem has heated up the electronic one via electron-phonon coupling. Unfortunately, the rate of electron-phonon coupling from the hot and disordered ionic system is not well known, as it cannot be treated in the same way as the fairly well known process of heat transfer from hot electrons to an intact crystal structure. Finally, the relaxation phase of the cascade, when the defects formed possibly recombine and migrate, can last from a few ps to indefinitely long times, depending on the material, its defect migration and recombination properties, and the ambient temperature.
Effects
Damage production
Since the kinetic energies in a cascade can be very high, it can drive the material locally far outside thermodynamic equilibrium. Typically this results in defect production. The defects can be, e.g., point defects such as Frenkel pairs, ordered or disordered dislocation loops, stacking faults, or amorphous zones. Prolonged irradiation of many materials can lead to their full amorphization, an effect which occurs regularly during the ion implantation doping of silicon chips.
The defect production can be harmful, as in nuclear fission and fusion reactors where the neutrons slowly degrade the mechanical properties of the materials, or it can be a useful and desired materials modification effect, e.g., when ions are introduced into semiconductor quantum well structures to speed up the operation of a laser or to strengthen carbon nanotubes.
A curious feature of collision cascades is that the final amount of damage produced may be much less than the number of atoms initially affected by the heat spikes. Especially in pure metals, the final damage production after the heat spike phase can be orders of magnitude smaller than the number of atoms displaced in the spike. On the other hand, in semiconductors and other covalently bonded materials the damage production is usually similar to the number of displaced atoms. Ionic materials can behave like either metals or semiconductors with respect to the fraction of damage recombined.
Other consequences
Collision cascades in the vicinity of a surface often lead to sputtering, both in the linear cascade and heat spike regimes. Heat spikes near surfaces also frequently lead to crater formation. This cratering is caused by liquid flow of atoms, but if the projectile size is above roughly 100,000 atoms, the crater production mechanism switches to the same mechanism as that of macroscopic craters produced by bullets or asteroids.
The fact that many atoms are displaced by a cascade means that ions can be used to deliberately mix materials, even for materials that are normally thermodynamically immiscible. This effect is known as ion beam mixing.
The non-equilibrium nature of irradiation can also be used to drive materials out of thermodynamic equilibrium, and thus form new kinds of alloys.
See also
Particle shower, a set of binary collisions between high-energy particles often involving nuclear reactions
Radiation material science
COSIRES conference
REI conference
References
External links
Condensed matter physics
Radiation effects | Collision cascade | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,033 | [
"Physical phenomena",
"Phases of matter",
"Materials science",
"Radiation",
"Condensed matter physics",
"Radiation effects",
"Matter"
] |
20,127,208 | https://en.wikipedia.org/wiki/Tabimorelin | Tabimorelin (INN) (developmental code name NN-703) is a drug which acts as a potent, orally-active agonist of the ghrelin/growth hormone secretagogue receptor (GHSR) and growth hormone secretagogue, mimicking the effects of the endogenous peptide agonist ghrelin as a stimulator of growth hormone (GH) release. It was one of the first GH secretagogues developed and is largely a modified polypeptide, but it is nevertheless orally-active in vivo. Tabimorelin produced sustained increases in levels of GH and insulin-like growth factor 1 (IGF-1), along with smaller transient increases in levels of other hormones such as adrenocorticotropic hormone (ACTH), cortisol, and prolactin. However actual clinical effects in adults with growth hormone deficiency were limited, with only the most severely GH-deficient patients showing significant benefit, and tabimorelin was also found to act as a CYP3A4 inhibitor which could cause it to have undesirable interactions with other drugs.
See also
List of growth hormone secretagogues
References
Ghrelin receptor agonists
Growth hormone secretagogues
Peptides
Experimental drugs | Tabimorelin | [
"Chemistry"
] | 268 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
20,130,487 | https://en.wikipedia.org/wiki/EF-Ts | EF-Ts (elongation factor thermo stable) is one of the prokaryotic elongation factors. It is found in human mitochondria as TSFM. It is similar to eukaryotic EF-1B.
EF-Ts serves as the guanine nucleotide exchange factor for EF-Tu (elongation factor thermo unstable), catalyzing the release of guanosine diphosphate from EF-Tu. This enables EF-Tu to bind to a new guanosine triphosphate molecule, release EF-Ts, and go on to catalyze another aminoacyl tRNA addition.
Structure
The protein Qβ-replicase is a tetrameric protein, meaning it contains four subunits. These subunits are the two elongation factors, EF-Tu and EF-Ts, the ribosomal protein S1, and the RNA-dependent RNA polymerase (RdRp) β-subunit. The two elongation factors form a heterodimeric structure known as the elongation factor complex, which is necessary for the polymerization activity of the RdRp β-subunit. Its secondary structural components consist of α-helices, β-sheets and β-barrels.
EF-Ts comprises the majority of the top portion of the protein, while EF-Tu makes up the bottom half, where the β-barrels are seen. The conformation is considered to be open when no guanine nucleotide is bound to the active site in EF-Tu. The EF-Ts chain contains four important domains, the C-terminal domain, the N-terminal domain, the dimerization domain, and the core domain, which all play specific roles in the protein's structure and functionality. The dimerization domain contains four anti-parallel α-helices, which are the main source of contact between EF-Tu and EF-Ts in forming the dimer structure.
Domains
The N-terminal domain spans residues 1-54 (n1-n54), the core domain spans n55-n179, the dimerization domain spans n180-n228, and lastly the C-terminal domain spans n264-n282. The core domain contains two subdomains, C and N, which interact with domains 3 and 1 of EF-Tu, respectively.
Elongation process pathway
EF-Ts functions as a guanine nucleotide exchange factor: it catalyzes the conversion of EF-Tu*GDP (the inactive form) to EF-Tu*GTP (the active form). Active EF-Tu then delivers the aminoacyl-tRNA to the ribosome. The main role of EF-Ts is therefore to recycle EF-Tu back to its active state so that another elongation cycle can be completed.
The majority of this pathway proceeds through conformational changes of EF-Tu domain 1, which contains the active site, and through manipulation of the switch 1 and 2 regions by the ribosome and the tRNA. First, in domain 1 of EF-Tu, the GTPase active site is blocked by a series of hydrophobic residues that shield the catalytic residue His84 in the inactive form, prior to activation via EF-Ts. Once the tRNA is bound to EF-Tu, it is delivered to the ribosome, which promotes hydrolysis of the GTP, leaving EF-Tu with a lower affinity for the tRNA. The ribosome does this through manipulation of the switch 1 region: after GTP hydrolysis the secondary structure switches from primarily α-helical to a β-hairpin. EF-Tu is then released from the ribosome in the inactive state, completing the cycle until it is activated once again by EF-Ts.
Helix D of EF-Tu must interact with the N-terminal domain of EF-Ts for guanine nucleotide exchange. A recent study investigated the reaction kinetics of the guanine nucleotide exchange by mutating residues on helix D of EF-Tu in order to identify the primary residues involved in the pathway. Mutation of Leu148 and Glu152 significantly decreased the rate at which the EF-Ts N-terminal domain binds to helix D, indicating that these two residues play an important role in the reaction pathway.
Amino acid conservation between organisms
This article focuses on EF-Ts as it exists in the Qβ bacteriophage; however, many organisms use a similar elongation process with proteins that have nearly the same function as EF-Ts. EF-Ts belongs both to the group of proteins known as guanine nucleotide exchange factors, found in many different biochemical pathways, and to the tsf superfamily. The majority of the amino acid conservation seen between other organisms is located in the N-terminal domain, where EF-Ts binds to EF-Tu and the guanine nucleotide exchange occurs. Below is an alignment of the important N-terminal domain of EF-Ts as it exists in several organisms.
E. coli: 8-LVKELRERTGAGMMDCKKALT-20
LacBS: 8-LVAELRKRTEVSITKAREALS-20 (fungus, mitochondrion)
Bos taurus: 8-LLMKLRRKTGYSFINCKKALE-20 (mammal, mitochondrion)
Drosophila: 8-ALAALRKKTGYTFANCKKALE-20 (insect, mitochondrion)
conservation : **.:* : ..::**
Amino acids conserved in all four sequences include Leu12 and Arg18. It can be concluded that these two residues play an important role in the guanine nucleotide exchange since they are completely conserved. In eukaryotes, EF-1 performs the same function, and the mechanism of guanine nucleotide exchange is nearly identical to that of EF-Ts despite the structural dissimilarities between the two elongation factors.
See also
Prokaryotic elongation factors
EF-Tu (elongation factor thermo unstable)
EF-G (elongation factor G)
EF-P (elongation factor P)
Protein translation
GTPase
References
Protein biosynthesis | EF-Ts | [
"Chemistry"
] | 1,331 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
20,131,133 | https://en.wikipedia.org/wiki/Quartic%20reciprocity | Quartic or biquadratic reciprocity is a collection of theorems in elementary and algebraic number theory that state conditions under which the congruence x4 ≡ p (mod q) is solvable; the word "reciprocity" comes from the form of some of these theorems, in that they relate the solvability of the congruence x4 ≡ p (mod q) to that of x4 ≡ q (mod p).
History
Euler made the first conjectures about biquadratic reciprocity. Gauss published two monographs on biquadratic reciprocity. In the first one (1828) he proved Euler's conjecture about the biquadratic character of 2. In the second one (1832) he stated the biquadratic reciprocity law for the Gaussian integers and proved the supplementary formulas. He said that a third monograph would be forthcoming with the proof of the general theorem, but it never appeared. Jacobi presented proofs in his Königsberg lectures of 1836–37. The first published proofs were by Eisenstein.
Since then a number of other proofs of the classical (Gaussian) version have been found, as well as alternate statements. Lemmermeyer states that there has been an explosion of interest in the rational reciprocity laws since the 1970s.
Integers
A quartic or biquadratic residue (mod p) is any number congruent to the fourth power of an integer (mod p). If x4 ≡ a (mod p) does not have an integer solution, a is a quartic or biquadratic nonresidue (mod p).
As is often the case in number theory, it is easiest to work modulo prime numbers, so in this section all moduli p, q, etc., are assumed to be positive, odd primes.
Gauss
The first thing to notice when working within the ring Z of integers is that if the prime number q is ≡ 3 (mod 4) then a residue r is a quadratic residue (mod q) if and only if it is a biquadratic residue (mod q). Indeed, the first supplement of quadratic reciprocity states that −1 is a quadratic nonresidue (mod q), so that for any integer x, one of x and −x is a quadratic residue and the other one is a nonresidue. Thus, if r ≡ a2 (mod q) is a quadratic residue, then if a ≡ b2 is a residue, r ≡ a2 ≡ b4 (mod q) is a biquadratic residue, and if a is a nonresidue, −a is a residue, −a ≡ b2, and again, r ≡ (−a)2 ≡ b4 (mod q) is a biquadratic residue.
Therefore, the only interesting case is when the modulus p ≡ 1 (mod 4).
Gauss proved that if p ≡ 1 (mod 4) then the nonzero residue classes (mod p) can be divided into four sets, each containing (p−1)/4 numbers. Let e be a quadratic nonresidue. The first set is the quartic residues; the second one is e times the numbers in the first set, the third is e2 times the numbers in the first set, and the fourth one is e3 times the numbers in the first set. Another way to describe this division is to let g be a primitive root (mod p); then the first set is all the numbers whose indices with respect to this root are ≡ 0 (mod 4), the second set is all those whose indices are ≡ 1 (mod 4), etc. In the vocabulary of group theory, the first set is a subgroup of index 4 (of the multiplicative group Z/pZ×), and the other three are its cosets.
The first set is the biquadratic residues, the third set is the quadratic residues that are not quartic residues, and the second and fourth sets are the quadratic nonresidues. Gauss proved that −1 is a biquadratic residue if p ≡ 1 (mod 8) and a quadratic, but not biquadratic, residue, when p ≡ 5 (mod 8).
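This division into four classes is easy to reproduce numerically. The sketch below brute-forces the quartic residues modulo a small prime p ≡ 1 (mod 4) and builds the other three classes as the cosets eQ, e2Q, e3Q for a quadratic nonresidue e; the choice p = 13 and the exhaustive search are purely illustrative.

```python
# Partition the nonzero residues mod a prime p ≡ 1 (mod 4) into the quartic
# residues Q and the cosets e*Q, e^2*Q, e^3*Q, with e a quadratic nonresidue.

p = 13                                                         # illustrative prime
quartic = {pow(x, 4, p) for x in range(1, p)}                  # biquadratic residues
quadratic = {pow(x, 2, p) for x in range(1, p)}                # quadratic residues
e = next(a for a in range(2, p) if a not in quadratic)         # a quadratic nonresidue

classes = [sorted({(pow(e, k, p) * q) % p for q in quartic}) for k in range(4)]
for k, c in enumerate(classes):
    print("class %d (e^%d * Q): %s" % (k, k, c))

# The first and third classes together are exactly the quadratic residues,
# the second and fourth together are the quadratic nonresidues.
assert set(classes[0]) | set(classes[2]) == quadratic
```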
2 is a quadratic residue mod p if and only if p ≡ ±1 (mod 8). Since p is also ≡ 1 (mod 4), this means p ≡ 1 (mod 8). Every such prime is the sum of a square and twice a square.
Gauss proved
Let q = a2 + 2b2 ≡ 1 (mod 8) be a prime number. Then
2 is a biquadratic residue (mod q) if and only if a ≡ ±1 (mod 8), and
2 is a quadratic, but not a biquadratic, residue (mod q) if and only if a ≡ ±3 (mod 8).
Every prime p ≡ 1 (mod 4) is the sum of two squares. If p = a2 + b2 where a is odd and b is even, Gauss proved that
2 belongs to the first (respectively second, third, or fourth) class defined above if and only if b ≡ 0 (resp. 2, 4, or 6) (mod 8). The first case of this is one of Euler's conjectures:
2 is a biquadratic residue of a prime p ≡ 1 (mod 4) if and only if p = a2 + 64b2.
Dirichlet
For an odd prime number p and a quadratic residue a (mod p), Euler's criterion states that a^((p−1)/2) ≡ 1 (mod p), so if p ≡ 1 (mod 4), a^((p−1)/4) ≡ ±1 (mod p).
Define the rational quartic residue symbol for prime p ≡ 1 (mod 4) and quadratic residue a (mod p) as (a/p)_4 = ±1 ≡ a^((p−1)/4) (mod p). It is easy to prove that a is a biquadratic residue (mod p) if and only if (a/p)_4 = 1.
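Both the symbol just defined and Euler's conjecture about 2 quoted above are easy to check by computer. The brute-force fourth-power search and the bound of 500 in the sketch below are purely illustrative.

```python
# For small primes p ≡ 1 (mod 4), check that a^((p-1)/4) mod p equals +1 exactly
# when a is a fourth power mod p, and that 2 is a biquadratic residue of p if and
# only if p = a^2 + 64*b^2 (Euler's conjecture, proved by Gauss).

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def quartic_symbol(a, p):
    """a^((p-1)/4) mod p as +1 or -1; a must be a quadratic residue mod p."""
    return 1 if pow(a, (p - 1) // 4, p) == 1 else -1

for p in (n for n in range(5, 500, 4) if is_prime(n)):
    squares = {pow(x, 2, p) for x in range(1, p)}
    fourths = {pow(x, 4, p) for x in range(1, p)}
    for a in squares:
        assert (quartic_symbol(a, p) == 1) == (a in fourths)
    representable = any(p == a * a + 64 * b * b
                        for a in range(1, 25) for b in range(1, 5))
    assert (2 in fourths) == representable
print("all checks passed")
```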
Dirichlet simplified Gauss's proof of the biquadratic character of 2 (his proof only requires quadratic reciprocity for the integers) and put the result in the following form:
Let p = a2 + b2 ≡ 1 (mod 4) be prime, and let i ≡ b/a (mod p). Then
(Note that i2 ≡ −1 (mod p).)
In fact, let p = a2 + b2 = c2 + 2d2 = e2 − 2f2 ≡ 1 (mod 8) be prime, and assume a is odd. Then
where is the ordinary Legendre symbol.
Going beyond the character of 2, let the prime p = a2 + b2 where b is even, and let q be a prime such that Quadratic reciprocity says that where Let σ2 ≡ p (mod q). Then
This implies that
The first few examples are:
Euler had conjectured the rules for 2, −3 and 5, but did not prove any of them.
Dirichlet also proved that if p ≡ 1 (mod 4) is prime and then
This has been extended from 17 to 17, 73, 97, and 193 by Brown and Lehmer.
Burde
There are a number of equivalent ways of stating Burde's rational biquadratic reciprocity law.
They all assume that p = a2 + b2 and q = c2 + d2 are primes where b and d are even, and that
Gosset's version is
Letting i2 ≡ −1 (mod p) and j2 ≡ −1 (mod q), Frölich's law is
Burde stated his in the form:
Note that
Miscellany
Let p ≡ q ≡ 1 (mod 4) be primes and assume . Then e2 = p f2 + q g2 has non-trivial integer solutions, and
Let p ≡ q ≡ 1 (mod 4) be primes and assume p = r2 + q s2. Then
Let p = 1 + 4x2 be prime, let a be any odd number that divides x, and let Then a* is a biquadratic residue (mod p).
Let p = a2 + 4b2 = c2 + 2d2 ≡ 1 (mod 8) be prime. Then all the divisors of c4 − p a2 are biquadratic residues (mod p). The same is true for all the divisors of d4 − p b2.
Gaussian integers
Background
In his second monograph on biquadratic reciprocity Gauss displays some examples and makes conjectures that imply the theorems listed above for the biquadratic character of small primes. He makes some general remarks, and admits there is no obvious general rule at work. He goes on to say
The theorems on biquadratic residues gleam with the greatest simplicity and genuine beauty only when the field of arithmetic is extended to imaginary numbers, so that without restriction, the numbers of the form a + bi constitute the object of study ... we call such numbers integral complex numbers. [bold in the original]
These numbers are now called the ring of Gaussian integers, denoted by Z[i]. Note that i is a fourth root of 1.
In a footnote he adds
The theory of cubic residues must be based in a similar way on a consideration of numbers of the form a + bh where h is an imaginary root of the equation h3 = 1 ... and similarly the theory of residues of higher powers leads to the introduction of other imaginary quantities.
The numbers built up from a cube root of unity are now called the ring of Eisenstein integers. The "other imaginary quantities" needed for the "theory of residues of higher powers" are the rings of integers of the cyclotomic number fields; the Gaussian and Eisenstein integers are the simplest examples of these.
Facts and terminology
Gauss develops the arithmetic theory of the "integral complex numbers" and shows that it is quite similar to the arithmetic of ordinary integers. This is where the terms unit, associate, norm, and primary were introduced into mathematics.
The units are the numbers that divide 1. They are 1, i, −1, and −i. They are similar to 1 and −1 in the ordinary integers, in that they divide every number. The units are the powers of i.
Given a number λ = a + bi, its conjugate is a − bi and its associates are the four numbers
λ = +a + bi
iλ = −b + ai
−λ = −a − bi
−iλ = +b − ai
If λ = a + bi, the norm of λ, written Nλ, is the number a2 + b2. If λ and μ are two Gaussian integers, Nλμ = Nλ Nμ; in other words, the norm is multiplicative. The norm of zero is zero, the norm of any other number is a positive integer. ε is a unit if and only if Nε = 1. The square root of the norm of λ, a nonnegative real number which may not be a Gaussian integer, is the absolute value of lambda.
Gauss proves that Z[i] is a unique factorization domain and shows that the primes fall into three classes:
2 is a special case: 2 = i3 (1 + i)2. It is the only prime in Z divisible by the square of a prime in Z[i]. In algebraic number theory, 2 is said to ramify in Z[i].
Positive primes in Z ≡ 3 (mod 4) are also primes in Z[i]. In algebraic number theory, these primes are said to remain inert in Z[i].
Positive primes in Z ≡ 1 (mod 4) are the product of two conjugate primes in Z[i]. In algebraic number theory, these primes are said to split in Z[i].
Thus, inert primes are 3, 7, 11, 19, ... and a factorization of the split primes is
5 = (2 + i) × (2 − i),
13 = (2 + 3i) × (2 − 3i),
17 = (4 + i) × (4 − i),
29 = (2 + 5i) × (2 − 5i), ...
The associates and conjugate of a prime are also primes.
Note that the norm of an inert prime q is Nq = q2 ≡ 1 (mod 4); thus the norm of all primes other than 1 + i and its associates is ≡ 1 (mod 4).
Gauss calls a number in Z[i] odd if its norm is an odd integer. Thus all primes except 1 + i and its associates are odd. The product of two odd numbers is odd and the conjugate and associates of an odd number are odd.
In order to state the unique factorization theorem, it is necessary to have a way of distinguishing one of the associates of a number. Gauss defines an odd number to be primary if it is ≡ 1 (mod (1 + i)3). It is straightforward to show that every odd number has exactly one primary associate. An odd number λ = a + bi is primary if a + b ≡ a − b ≡ 1 (mod 4); i.e., a ≡ 1 and b ≡ 0, or a ≡ 3 and b ≡ 2 (mod 4). The product of two primary numbers is primary and the conjugate of a primary number is also primary.
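Finding the primary associate is mechanical, and the following short script checks it. Gaussian integers are represented as integer pairs (a, b) meaning a + bi; the test values are illustrative.

```python
# Return the unique primary associate of an odd Gaussian integer a + bi,
# i.e. the associate with a + b ≡ a - b ≡ 1 (mod 4).

def is_odd(g):
    a, b = g
    return (a * a + b * b) % 2 == 1            # odd means the norm is odd

def is_primary(g):
    a, b = g
    return (a + b) % 4 == 1 and (a - b) % 4 == 1

def associates(g):
    a, b = g
    return [(a, b), (-b, a), (-a, -b), (b, -a)]    # g, i*g, -g, -i*g

def primary_associate(g):
    assert is_odd(g), "only odd Gaussian integers have a primary associate"
    matches = [h for h in associates(g) if is_primary(h)]
    assert len(matches) == 1                   # exactly one associate is primary
    return matches[0]

print(primary_associate((2, 3)))   # (3, -2): the primary associate of 2 + 3i is 3 - 2i
print(primary_associate((4, 1)))   # (1, -4): the primary associate of 4 + i is 1 - 4i
```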
The unique factorization theorem for Z[i] is: if λ ≠ 0, then

λ = i^μ (1 + i)^ν π1^α1 π2^α2 ⋯ πk^αk

where 0 ≤ μ ≤ 3, ν ≥ 0, the πi are distinct primary primes and the αi ≥ 1, and this representation is unique, up to the order of the factors.
The notions of congruence and greatest common divisor are defined the same way in Z[i] as they are for the ordinary integers Z. Because the units divide all numbers, a congruence (mod λ) is also true modulo any associate of λ, and any associate of a GCD is also a GCD.
Quartic residue character
Gauss proves the analogue of Fermat's theorem: if α is not divisible by an odd prime π, then

α^(Nπ−1) ≡ 1 (mod π)

Since Nπ ≡ 1 (mod 4), α^((Nπ−1)/4) makes sense, and α^((Nπ−1)/4) ≡ i^k (mod π) for a unique unit i^k.
This unit is called the quartic or biquadratic residue character of α (mod π) and is denoted by [α/π].
It has formal properties similar to those of the Legendre symbol.
The congruence x^4 ≡ α (mod π) is solvable in Z[i] if and only if [α/π] = 1.
where the bar denotes complex conjugation.
if π and θ are associates, [α/π] = [α/θ]
if α ≡ β (mod π), [α/π] = [β/π]
The biquadratic character can be extended to odd composite numbers in the "denominator" in the same way the Legendre symbol is generalized into the Jacobi symbol. As in that case, if the "denominator" is composite, the symbol can equal one without the congruence being solvable:
where
If a and b are ordinary integers, a ≠ 0, |b| > 1, gcd(a, b) = 1, then
Statements of the theorem
Gauss stated the law of biquadratic reciprocity in this form:
Let π and θ be distinct primary primes of Z[i]. Then
if either π or θ or both are ≡ 1 (mod 4), then [π/θ] = [θ/π], but
if both π and θ are ≡ 3 + 2i (mod 4), then [π/θ] = −[θ/π].
Just as the quadratic reciprocity law for the Legendre symbol is also true for the Jacobi symbol, the requirement that the numbers be prime is not needed; it suffices that they be odd relatively prime nonunits. Probably the most well-known statement is:
Let π and θ be primary relatively prime nonunits. Then

[π/θ] = [θ/π] · (−1)^(((Nπ−1)/4)·((Nθ−1)/4))
There are supplementary theorems for the units and the half-even prime 1 + i.
if π = a + bi is a primary prime, then
and thus
Also, if π = a + bi is a primary prime, and b ≠ 0 then
(if b = 0 the symbol is 0).
Jacobi defined π = a + bi to be primary if a ≡ 1 (mod 4). With this normalization, the law takes the form
Let α = a + bi and β = c + di where a ≡ c ≡ 1 (mod 4) and b and d are even be relatively prime nonunits. Then
The following version was found in Gauss's unpublished manuscripts.
Let α = a + 2bi and β = c + 2di where a and c are odd be relatively prime nonunits. Then
The law can be stated without using the concept of primary:
If λ is odd, let ε(λ) be the unique unit congruent to λ (mod (1 + i)3); i.e., ε(λ) = ik ≡ λ (mod 2 + 2i), where 0 ≤ k ≤ 3. Then for odd and relatively prime α and β, neither one a unit,
For odd λ, let Then if λ and μ are relatively prime nonunits, Eisenstein proved
See also
Quadratic reciprocity
Cubic reciprocity
Octic reciprocity
Eisenstein reciprocity
Artin reciprocity
Notes
A.Here, "rational" means laws that are stated in terms of ordinary integers rather than in terms of the integers of some algebraic number field.
References
Literature
The references to the original papers of Euler, Dirichlet, and Eisenstein were copied from the bibliographies in Lemmermeyer and Cox, and were not used in the preparation of this article.
Euler
This was actually written 1748–1750, but was only published posthumously; It is in Vol V, pp. 182–283 of
Gauss
The two monographs Gauss published on biquadratic reciprocity have consecutively numbered sections: the first contains §§ 1–23 and the second §§ 24–76. Footnotes referencing these are of the form "Gauss, BQ, § n". Footnotes referencing the Disquisitiones Arithmeticae are of the form "Gauss, DA, Art. n".
These are in Gauss's Werke, Vol II, pp. 65–92 and 93–148
German translations are in pp. 511–533 and 534–586 of the following, which also has the Disquisitiones Arithmeticae and Gauss's other papers on number theory.
Eisenstein
These papers are all in Vol I of his Werke.
Dirichlet
both of these are in Vol I of his Werke.
Modern authors
External links
These two papers by Franz Lemmermeyer contain proofs of Burde's law and related results:
Rational Quartic Reciprocity
Rational Quartic Reciprocity II
Algebraic number theory
Modular arithmetic
Theorems in number theory | Quartic reciprocity | [
"Mathematics"
] | 3,968 | [
"Theorems in number theory",
"Arithmetic",
"Algebraic number theory",
"Mathematical problems",
"Mathematical theorems",
"Modular arithmetic",
"Number theory"
] |
526,227 | https://en.wikipedia.org/wiki/Uhuru%20%28satellite%29 | Uhuru was the first satellite launched specifically for the purpose of X-ray astronomy. It was also known as the X-ray Explorer Satellite, SAS-A (for Small Astronomy Satellite A, the first of the three-spacecraft SAS series), SAS 1, or Explorer 42. The NASA observatory was launched on 12 December 1970 into an initial orbit of about 560 km apogee, 520 km perigee, 3 degrees inclination, with a period of 96 minutes. The mission ended in March 1973. Uhuru was a scanning mission, with a spin period of ~12 minutes. It performed the first comprehensive survey of the entire sky for X-ray sources, with a sensitivity of about 0.001 times the intensity of the Crab nebula.
Objectives
The main objectives of the mission were:
To survey the sky for cosmic X-ray sources in the 2–20 keV range to a limiting sensitivity of 1.5 × 10−18 joule/(cm2-sec), 5 × 10−4 the flux from the Crab Nebula
To determine discrete source locations with a precision of a few square minutes of arc for strong sources and a few tenths of a square degree at the sensitivity limit
To study the structure of extended sources or complex regions with a resolution of about 30 arc minutes
To determine gross spectral features and variability of X-ray sources
To perform, wherever possible, coordinated and/or simultaneous observations of X-ray objects with other observers.
Instrumentation
The payload consisted of two sets of proportional counters, each with ~0.084 m2 effective area.
The counters were sensitive with more than 10% efficiency to X-ray photons in the ~2–20 keV range.
The lower energy limit was determined by the attenuation of the beryllium windows of the counter plus a thin thermal shroud that was needed to maintain temperature stability of the spacecraft.
The upper energy limit was determined by the transmission properties of the counter filling gas.
Pulse-shape discrimination and anticoincidence techniques were used to filter out emissions of particles and undesirable high-energy photons in the background.
Pulse-height analysis in eight energy channels was used to obtain information on the energy spectrum of the incident photons.
The two sets of counters were placed back to back and were collimated to 0.52° × 0.52° and 5.2° × 5.2° (full width at half maximum) respectively.
While the 0.52° detector gave finer angular resolution, the 5.2° detector had higher sensitivity for isolated sources.
Results
Uhuru achieved several outstanding scientific advances, including the discovery and detailed study of the pulsing accretion-powered binary X-Ray sources such as Cen X-3, Vela X-1, and Her X-1, the identification of Cygnus X-1, the first strong candidate for an astrophysical black hole, and many important extragalactic sources.
The Uhuru Catalog, issued in four successive versions, the last being the 4U catalog, was the first comprehensive X-ray catalog, contains 339 objects and covers the whole sky in the 2–6 keV band.
The final version of the source catalog is known as the 4U Catalog; earlier versions were the 2U and 3U catalogs. Sources are referenced as, e.g., "4U 1700-37".
Naming
The satellite's name, "Uhuru", is the Swahili word for "freedom". It was named in recognition of the hospitality of Kenya from where it was launched, using the Italian San Marco launch platform near Mombasa.
See also
Timeline of artificial satellites and space probes
Small Astronomy Satellite 2
Small Astronomy Satellite 3
References
External links
SAS-A (Explorer 42) Press Kit
Uhuru Satellite (GSFC, NASA)
NSSDC Master Catalog: Uhuru
Explorers Program
X-ray telescopes
Space telescopes
Satellites formerly orbiting Earth
Spacecraft launched in 1970
Spacecraft which reentered in 1979 | Uhuru (satellite) | [
"Astronomy"
] | 809 | [
"Space telescopes"
] |
526,616 | https://en.wikipedia.org/wiki/Elder%20abuse | Elder abuse (also called elder mistreatment, senior abuse, abuse in later life, abuse of older adults, abuse of older women, and abuse of older men) is a single or repeated act, or lack of appropriate action, occurring within any relationship where there is an expectation of trust, which causes harm or distress to an older person. This definition has been adopted by the World Health Organization (WHO) from a definition put forward by Hourglass (formerly Action on Elder Abuse) in the UK. Laws protecting the elderly from abuse are similar to and related to laws protecting dependent adults from abuse.
Elder abuse includes harms by people an older person knows or has a relationship with, such as a spouse, partner, or family member, a friend or neighbor, or people an older person relies on for services. Many forms of elder abuse are recognized as types of domestic violence or family violence since they are committed by family members. Paid caregivers have also been known to prey on elderly patients.
While a variety of circumstances are considered elder abuse, it does not include general criminal activities against older persons, such as home break-ins, robbery or muggings in the street, or "distraction burglary," where a stranger distracts an older person at the doorstep while another person enters the property to steal.
Over the years, government agencies and community professional groups worldwide have specified elder abuse as a social problem. In 2002, WHO brought international attention to the issue of elder abuse. In 2006, the International Network for Prevention of Elder Abuse (INPEA) designated June 15 as World Elder Abuse Awareness Day (WEAAD). An increasing number of events are held across the globe on this day to raise awareness of elder abuse and highlight ways to challenge it.
Types
In essence, elder abuse involves the use of power and control to harm the well-being and status of an older person. Although there are common themes of elder abuse across nations, elder abuse differs within nations according to the history, culture, and economic strength of older people, as well as the way older people are perceived.
Several types of elder abuse are generally recognized, including:
Physical abuse: e.g. hitting, punching, slapping, burning, pushing, kicking, restraining, falsely imprisoning or confining, or giving excessive or improper medication, or withholding treatment or medication.
Psychological, emotional abuse: e.g. humiliating a person. A perpetrator may identify something that matters to an older person and then use that knowledge to coerce an older person into a particular action. The abuse may take verbal forms such as yelling, blaming, accusing, name-calling, ridiculing, or constantly criticizing, or may take nonverbal forms such as ignoring, shunning, treating with silence, or withdrawing affection.
Financial abuse: also known as financial exploitation or economic abuse, involving misappropriation of financial resources by family members, caregivers, or strangers, or the use of financial means to control the person or facilitate other types of abuse. Also, failure to pay financial support to impoverished elders in jurisdictions which have filial responsibility laws, such as France, Germany, and most of the United States.
Sexual abuse: e.g. forcing a person to take part in any sexual activity without consent, including forcing them to participate in conversations of a sexual nature against their will; which may also include situations where the person is no longer able to give consent (dementia).
Neglect: e.g. depriving a person of proper medical treatment, food, heat, clothing, comfort, or essential medication, or depriving a person of needed services, to force certain kinds of actions, financial and otherwise. Neglect can include leaving unattended an elder person who is at risk (for example, from a fall). The deprivation may be intentional (active neglect) or happen out of lack of knowledge or resources (passive neglect).
In addition, some U.S. state laws also recognize the following as elder abuse:
Abandonment: deserting a dependent person with the intent to abandon them or leave them unattended long enough to endanger their health or welfare.
Rights abuse: denying the civil and constitutional rights of a person who is old but not declared by a court to be mentally incompetent. This is an aspect of elder abuse increasingly being recognized by nations.
Self-neglect: neglecting oneself by not caring about one's own health, well-being or safety. Self-neglect (harm by self) is treated as conceptually different than abuse (harm by others). Elder self-neglect can lead to illness, injury, or even death. Common needs that older adults may deny themselves or ignore include the following: sustenance (food or water); cleanliness (bathing and personal hygiene); adequate clothing for climate protection; proper shelter; adequate safety; clean and healthy surroundings; medical attention for serious illness; and essential medications. Self-neglect is often created by an individual's declining mental awareness or capability. Some older adults may choose to deny themselves some health or safety benefits, which may not be self-neglect. This may simply be their personal choice. Caregivers and other responsible individuals must honor these choices if the older adult is sound of mind. In other instances, the older adult may lack the needed resources, as a result of poverty, or other social conditions. This is also not considered "self-neglect."
Institutional abuse refers to physical or psychological harm, as well as rights violations, in settings where care and assistance are provided to dependent older adults or others, such as nursing homes. Recent studies of approximately 2,000 nursing home residents in the United States reported an abuse rate of 44% and neglect rates of up to 95%, making elder abuse in nursing homes a growing danger. Exact statistics are rare because elder abuse in general, and abuse in nursing homes in particular, tends to go unreported.
Warning signs
The key to prevention and intervention of elder abuse is the ability to recognize the warning signs of its occurrence. Signs of elder abuse differ depending on the type of abuse the victim is suffering. Each type of abuse has distinct signs associated with it.
Physical abuse can be detected by visible signs on the body including bruises, scratches, scars, sprains, or broken bones. More subtle indications of physical abuse include signs of restraint such as rope marks on the wrist or broken eyeglasses.
Emotional abuse often accompanies other types of abuse and can usually be detected by changes in an elder person's personality or behavior. The elder may also exhibit behavior mimicking dementia, such as rocking or mumbling. Emotional abuse occurs when a person fails to treat an elder with respect, and it may include verbal abuse; the elder may experience social isolation or a lack of acknowledgement. One indicator of emotional abuse is the elder adult's being unresponsive or uncommunicative. They can also be unreasonably suspicious or fearful, more isolated, and not wanting to be as social as they may have been before. Emotional abuse is the most underreported form of elder abuse, yet it can have the most damaging effects because it leads to further physical and mental health problems.
Financial exploitation is a more subtle form of abuse and may be more challenging to notice. Signs of financial exploitation include unpaid bills, purchases of unnecessary goods or services, significant withdrawals from accounts, and belongings or money missing from the home.
Sexual abuse, like physical abuse, can be detected by visible signs on the body, especially around the breasts or genital area. Other signs include inexplicable infections, bleeding, and torn underclothing.
Neglect can be inflicted by either a caregiver or oneself. Signs of neglect include malnutrition and dehydration, poor hygiene, noncompliance with a medical prescription, and unsafe living conditions.
In addition to observing signs in the elderly individual, one can also detect abuse by monitoring changes in the caregiver's behavior. For example, the caregiver may not allow them to speak to or receive visitors or may exhibit indifference or a lack of affection towards the elder or refer to the elder as "a burden." Caregivers who have a history of substance abuse or mental illness are more likely to commit elder abuse than other individuals.
Abuse can sometimes be subtle and therefore difficult to detect. Regardless, awareness organizations and research advise that one take any suspicion seriously and address concerns adequately and immediately.
Signs
Lack of medical aids such as glasses, walker, hearing aids.
Signs of emotional trauma.
Broken eyeglasses/frames, or physical signs of punishment or being restrained.
Signs of insufficient care or unpaid bills despite adequate financial resources.
Broken bones (fractures)
Poor physical appearance
Changes in mental status
Frequent infections
Bruising, scratches, welts, or cuts.
Unexplained weight loss
Refusal to speak
Signs of dehydration
Lack of cleanliness
Health consequences
The health consequences of elder abuse are serious. Elder abuse can destroy an elderly person's quality of life in the forms of:
Declining functional abilities
Increased dependency, sense of helplessness, and stress.
Worsening psychological decline
Premature mortality and morbidity
Depression and dementia
Malnutrition
Bedsores
Death
The risk of death for elder abuse victims is three times higher than for non-victims.
Perpetrators
An abuser can be a caregiver, spouse, partner, relative, friend, neighbor, volunteer worker, paid worker, practitioner, solicitor, or any other individual with the intent to deprive a vulnerable person of their resources. Relatives include adult children and their spouses or partners, their offspring, and other extended family members. Children and living relatives who have a history of substance abuse or have had other life troubles are of particular concern. For example, perpetrators of hybrid financial exploitation (HFE) are more likely to be a relative, chronically unemployed, and dependent on the elderly person. Additionally, past studies have estimated that between 16 percent and 38 percent of all elder abusers have a history of mental illness. Elder abuse perpetrated by individuals with mental illnesses can be decreased by lessening the level of dependency that persons with serious mental illness have on family members. This can be done by funneling more resources into housing assistance programs, intensive care management services, and better welfare benefits for individuals with serious mental illness. People with substance abuse and mental health disorders typically have very small social networks, and this confinement contributes to the overall occurrence of elder abuse.
Perpetrators of elder abuse can include anyone in a position of trust, control or authority over the individual. Family relationships, neighbors and friends, are all socially considered relationships of trust, whether or not the older adult actually thinks of the people as "trustworthy." Some perpetrators may "groom" an older person (befriend or build a relationship with them) in order to establish a relationship of trust. Older people living alone who have no adult children living nearby are particularly vulnerable to "grooming" by neighbors and friends who would hope to gain control of their estates.
The majority of abusers are relatives, typically the older adult's spouse/partner or sons and daughters, although the type of abuse differs according to the relationship. In some situations the abuse is "domestic violence grown old," a situation in which the abusive behavior of a spouse or partner continues into old age. In some situations, an older couple may be attempting to care and support each other and failing, in the absence of external support. In the case of sons and daughters, it tends to be that of financial abuse, justified by a belief that it is nothing more than the "advance inheritance" of property, valuables, and money.
Though corporate abusers, such as brokerage firms and bank trust companies have been considered too regulated to be able to abuse the elderly, cases of such abuse have been reported. Such corporate abuse might escape notice both because they have more aptitude at methods of abuse that can go undetected and because they are protected by attorneys and the government in ways that individuals are not.
Within paid care environments, abuse can occur for a variety of reasons. Some abuse is the willful act of cruelty inflicted by a single individual upon an older person. In fact, a case study in Canada suggests that the high elder abuse statistics are from repeat offenders who, like in other forms of abuse, practice elder abuse for the schadenfreude associated with the act. More commonly, institutional abuses or neglect may reflect lack of knowledge, lack of training, lack of support, or insufficient resourcing. Institutional abuse may be the consequence of common practices or processes that are part of running of a care institution or service. Sometimes this type of abuse is referred to as "poor practice," although this term reflects the motive of the perpetrator (the causation) rather than the impact upon the older person.
Elder abuse is not a direct parallel to child maltreatment, as perpetrators of elder abuse do not have the same legal protection of rights as parents of children do. For example, a court order is needed to remove a child from their home but not to remove a victim of elder abuse from theirs.
Risk factors
Various risk factors increase the likelihood that an elderly person will become a victim of elder abuse, including an elderly person who:
Has memory problems (such as dementia).
Has a mental illness, either long-standing or recent.
Has physical disabilities.
Has depression, loneliness, or lack of social support.
Abuses alcohol or other substances.
Takes prescribed medications that impair judgment.
Is verbally or physically combative with the caregiver.
Has a shared living situation.
Has a criminal history.
Several other risk factors increase the likelihood that a caregiver will participate in elder abuse, including a caregiver who:
Feels overwhelmed or resentful.
Has a history of substance abuse or a history of abusing others.
Is dependent on the older person for housing, finances, or other needs.
Has mental health problems.
Is unemployed.
Has a criminal history.
Has a shared living situation.
In addition:
Lower income or poverty has been found to be associated with elder abuse. Low economic resources have been conceptualized as a contextual or situational stressor contributing to elder abuse.
Living with a large number of household members other than a spouse is associated with an increased risk of abuse, especially financial abuse.
Risk factors can also be categorized into individual, relationship, community, and sociocultural levels. At the individual level, elders who have poor physical and mental health are at higher risk. At the relationship level, a shared living situation is a huge risk factor for the elderly, and living in the same area as the abuser is more likely to result in abuse. At the community level, caregivers may knowingly or inadvertently cause social isolation of the elderly. At the sociocultural level, being represented as weak and dependent, having a lack of funds to pay for care, needing assistance but living alone, and having bonds between generations of a family destroyed are possible factors in elder abuse.
Research and statistics
There has been a general lack of reliable data in this area and it is often argued that the absence of data is a reflection of the low priority given to work associated with older people. However, over the past decade there has been a growing amount of research into the nature and extent of elder abuse. The research still varies considerably in the definitions being used, who is being asked, and what is being asked. As a result, the statistics used in this area vary considerably.
One study suggests that around 25% of vulnerable older adults will report abuse in the previous month, totaling up to 6% of the general elderly population. However, some consistent themes are beginning to emerge from interactions with abused elders, and through limited and small scale research projects. Work undertaken in Canada suggests that approximately 70% of elder abuse is perpetrated against women and this is supported by evidence from the Hourglass helpline in the UK, which identifies women as victims in 67% of calls. Also domestic violence in later life may be a continuation of long term partner abuse and in some cases, abuse may begin with retirement or the onset of a health condition. Certainly, abuse increases with age, with 78 percent of victims being over 70 years of age.
The higher proportion of spousal homicides supports the suggestion that abuse of older women is often a continuation of long term spousal abuse against women. In contrast, the risk of homicide for older men was far greater outside the family than within. This is an important point because the domestic violence of older people is often not recognized and consequently strategies, which have proved effective within the domestic violence arena, have not been routinely transferred into circumstances involving the family abuse of older people.
According to the Hourglass helpline in the UK, abuse occurs primarily in the family home (64%), followed by residential care (23%), and then hospitals (5%), although a helpline does not necessarily provide a true reflection of such situations as it is based upon the physical and mental ability of people to utilize such a resource.
Research conducted in New Zealand broadly supports the above findings, with some variations. Of 1288 cases in 2002–2004, 1201 individuals, 42 couples, and 45 groups were found to have been abused. Of these, 70 percent were female. Psychological abuse (59%), followed by material/financial (42%), and physical abuse (12%) were the most frequently identified types of abuse. Sexual abuse occurred in 2% of reported cases.
Age Concern New Zealand found that most abusers are family members (70%), most commonly sons or daughters (40%). Older abusers (those over 65 years) are more likely to be husbands.
In 2007, 4766 cases of suspected abuse, neglect, or financial exploitation involving older adults were reported, an increase of 9 percent over 2006. 19 incidents were related to a death, and a total of 303 incidents were considered life-threatening. About one in 11 incidents involved a life-threatening or fatal situation.
In 2012, the study called Pure Financial Exploitation vs. Hybrid Exploitation Co-Occurring With Physical Abuse and/or Neglect of Elderly Persons by Shelly L. Jackson and Thomas L. Hafemeister brought attention to the hybrid abuse that elderly persons can experience. This study revealed that victims of hybrid financial exploitation (HFE) lost an average of $185,574 (range: $20 to $750,000).
Barriers to obtaining statistics
Several conditions make it hard for researchers to obtain accurate statistics on elder abuse. Researchers may have difficulty obtaining accurate elder abuse statistics for the following reasons:
Elder abuse is largely a hidden problem and tends to be committed in the privacy of the elderly person's home, mostly by his or her family members.
Elder abuse victims are often unwilling to report their abuse for fear of others' disbelief, fear of loss of independence, fear of being institutionalized, fear of losing their only social support (especially if the perpetrator is a relative), and fear of being subject to future retaliation by the perpetrator(s).
Elder abuse victims' cognitive decline and ill health may prevent them from reporting their abuse.
Lack of proper training of service providers, such as social workers, law enforcement, nurses, etc., about elder abuse, therefore the number of cases reported tends to be low.
The subjective nature of elder abuse, which largely depends on one's interpretation.
Another reason why there is a lack of accurate statistics is the debate of whether to include self-neglect or not. Many are unsure if it should be included since it does not involve another person as an abuser. Those opposed to the inclusion of self-neglect make the claim that it is a different form of abuse and thus, should not be included in the statistics. Due to this discrepancy and the others mentioned above, it is difficult to get accurate data concerning the abuse of the elderly.
Prevention
Doctors, nurses, and other medical personnel can play a vital role in assisting elder abuse victims. Studies have shown that elderly individuals, on average, make 13.9 visits per year to a physician. Although there has been an increase in awareness of elder abuse over the years, physicians tend to only report 2% of elder abuse cases. Reasons for lack of reporting by physicians include a lack of current knowledge concerning state laws on elder abuse, concern about angering the abuser and ruining the relationship with the elderly patient, possible court appearances, lack of cooperation from elderly patients or families, and lack of time and reimbursement. Through education and training about elder abuse, health care professionals can better assist elder abuse victims.
Educating and training those in the criminal justice system, such as police, prosecutors, and the judiciary, on elder abuse, together with increased legislation to protect elders, will help to minimize elder abuse and provide improved assistance to its victims.
In addition, community involvement in responding to elder abuse can contribute to elderly persons' safety. In general, preventing the occurrence or recurrence of elder abuse helps not only the elder but may also ease the anxiety and depression of their caregivers. Communities can develop programs that are structured around meeting the needs of elderly persons. For example, several communities throughout the United States have created Financial Abuse Specialist Teams, which are multidisciplinary groups that consist of public and private professionals who volunteer their time to advise Adult Protective Services (APS), law enforcement, and private attorneys on matters of vulnerable adult financial abuse.
False accusations
It is important to recognize that false accusations of elder abuse are very common. An elderly person who has dementia or a mental illness may falsely claim to be a victim of abuse. By one estimate, 70% of elderly people with mental impairments such as dementia, delusions, or paranoia falsely accuse caregivers of stealing. Mentally impaired elders may claim that a caregiver is feeding them poisoned food or holding them prisoner. Websites such as Alzlive.com and DailyCaring.com offer advice for caregivers who are falsely accused of elder abuse or other crimes.
Examples
Stephen Akinmurele
Juana Barraza
Kenneth Erskine
Dana Sue Gray
Delroy Grant
Reta Mays
Thierry Paulin
Kaspars Petrovs
Dorothea Puente
Irina Gaidamachuk
John Wayne Glover
Charles Rogers (murder suspect)
Murder of Esther Brown
Murder of Valerie Graves
Murder of Patricia O'Connor
Murder of Janie Perrin
Murders of William and Patricia Wycherley
Stan Lee
Murders of Claudia Maupin and Oliver Northup
See also
Abusive power and control
Adult Protective Services
Aging in place
Assisted living
Child abuse
Cruelty to animals
Elder financial abuse
Elder rights
Elderly care
Institutional abuse
Isolation to facilitate abuse
Parental abuse by children (equivalent abuse but for the next generation)
Psychological abuse
Psychological manipulation
The Weinberg Center for Elder Abuse Prevention
References
Further reading
Nerenberg, Lisa Elder Abuse Prevention: Emerging Trends and Promising Strategies (2007)
External links
World Health Organization website
https://ncea.acl.gov/whatwedo/research/statistics.html
National Center on Elder Abuse (NCEA)
Senior Abuse Awareness & Prevention Infographic
End of Life Care – Dying with Dignity at Home
Elder abuse Centers for Disease Control and Prevention
Abuse
Domestic violence
Gerontology
Elder law
Institutional abuse | Elder abuse | [
"Biology"
] | 4,742 | [
"Behavior",
"Abuse",
"Gerontology",
"Aggression",
"Human behavior"
] |
526,941 | https://en.wikipedia.org/wiki/Chargaff%27s%20rules | Chargaff's rules state that in the DNA of any species and any organism, the amount of guanine should be equal to the amount of cytosine and the amount of adenine should be equal to the amount of thymine. Further, a 1:1 stoichiometric ratio of purine and pyrimidine bases (i.e., A+G = T+C) should exist. This pattern is found in both strands of the DNA. The rules were discovered by Austrian-born chemist Erwin Chargaff in the late 1940s.
Definitions
First parity rule
The first rule holds that a double-stranded DNA molecule globally has percentage base pair equality: %A = %T and %G = %C. The rigorous validation of the rule constitutes the basis of Watson–Crick base pairs in the DNA double helix model.
Second parity rule
The second rule holds that both %A ≈ %T and %G ≈ %C are valid for each of the two DNA strands. This describes only a global feature of the base composition in a single DNA strand.
Research
The second parity rule was discovered in 1968. It states that, in single-stranded DNA, the number of adenine units is approximately equal to that of thymine (%A ≈ %T), and the number of cytosine units is approximately equal to that of guanine (%C ≈ %G).
The first empirical generalization of Chargaff's second parity rule, called the Symmetry Principle, was proposed by Vinayakumar V. Prabhu in 1993. This principle states that for any given oligonucleotide, its frequency is approximately equal to the frequency of its complementary reverse oligonucleotide. A theoretical generalization was mathematically derived by Michel E. B. Yamagishi and Roberto H. Herai in 2011.
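Both parity statements are straightforward to check computationally. The following sketch (Python; an illustration added here, not part of the cited work) counts bases and k-mers in a single strand and compares each k-mer with its reverse complement. The random stand-in strand satisfies the rules trivially, so a meaningful test would substitute a real chromosome-scale sequence.

```python
import random
from collections import Counter

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(COMPLEMENT)[::-1]

def kmer_counts(seq, k):
    """Counts of every overlapping k-mer in the strand."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Stand-in strand: replace with a real chromosome-scale sequence
# (e.g. read from a FASTA file) to test the rules on actual data.
random.seed(0)
strand = "".join(random.choice("ACGT") for _ in range(200_000))

# Second parity rule: %A ~ %T and %C ~ %G within a single strand.
mono = kmer_counts(strand, 1)
print(f"A/T ratio: {mono['A'] / mono['T']:.4f}")
print(f"C/G ratio: {mono['C'] / mono['G']:.4f}")

# Symmetry principle: a k-mer is about as frequent as its reverse
# complement (shown for k = 3).
tri = kmer_counts(strand, 3)
for kmer in ("TCG", "CTA", "GGC"):
    rc = reverse_complement(kmer)
    print(f"{kmer}: {tri[kmer]}  {rc}: {tri[rc]}")
```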
In 2006, it was shown that this rule applies to four of the five types of double stranded genomes; specifically it applies to the eukaryotic chromosomes, the bacterial chromosomes, the double stranded DNA viral genomes, and the archaeal chromosomes. It does not apply to organellar genomes (mitochondria and plastids) smaller than ~20-30 kbp, nor does it apply to single stranded DNA (viral) genomes or any type of RNA genome. The basis for this rule is still under investigation, although genome size may play a role.
The rule itself has consequences. In most bacterial genomes (which are generally 80-90% coding) genes are arranged in such a fashion that approximately 50% of the coding sequence lies on either strand. Wacław Szybalski, in the 1960s, showed that in bacteriophage coding sequences purines (A and G) exceed pyrimidines (C and T). This rule has since been confirmed in other organisms and should probably now be termed "Szybalski's rule". While Szybalski's rule generally holds, exceptions are known to exist. The biological basis for Szybalski's rule is not yet known.
The combined effect of Chargaff's second rule and Szybalski's rule can be seen in bacterial genomes where the coding sequences are not equally distributed. The genetic code has 64 codons of which 3 function as termination codons: there are only 20 amino acids normally present in proteins. (There are two uncommon amino acids—selenocysteine and pyrrolysine—found in a limited number of proteins and encoded by the stop codons—TGA and TAG respectively.) The mismatch between the number of codons and amino acids allows several codons to code for a single amino acid—such codons normally differ only at the third codon base position.
Multivariate statistical analysis of codon use within genomes with unequal quantities of coding sequences on the two strands has shown that codon use in the third position depends on the strand on which the gene is located. This seems likely to be the result of Szybalski's and Chargaff's rules. Because of the asymmetry in pyrimidine and purine use in coding sequences, the strand with the greater coding content will tend to have the greater number of purine bases (Szybalski's rule). Because the number of purine bases will, to a very good approximation, equal the number of their complementary pyrimidines within the same strand and, because the coding sequences occupy 80–90% of the strand, there appears to be (1) a selective pressure on the third base to minimize the number of purine bases in the strand with the greater coding content; and (2) that this pressure is proportional to the mismatch in the length of the coding sequences between the two strands.
The origin of the deviation from Chargaff's rule in the organelles has been suggested to be a consequence of the mechanism of replication. During replication the DNA strands separate. In single-stranded DNA, cytosine slowly and spontaneously deaminates to adenosine (a C-to-A transversion). The longer the strands are separated, the greater the quantity of deamination. For reasons that are not yet clear, the strands tend to exist longer in single form in mitochondria than in chromosomal DNA. This process tends to yield one strand that is enriched in guanine (G) and thymine (T) with its complement enriched in cytosine (C) and adenosine (A), and this process may have given rise to the deviations found in the mitochondria.
Chargaff's second rule appears to be the consequence of a more complex parity rule: within a single strand of DNA any oligonucleotide (k-mer or n-gram; length ≤ 10) is present in equal numbers to its reverse complementary nucleotide. Because of the computational requirements this has not been verified in all genomes for all oligonucleotides. It has been verified for triplet oligonucleotides for a large data set. Albrecht-Buehler has suggested that this rule is the consequence of genomes evolving by a process of inversion and transposition. This process does not appear to have acted on the mitochondrial genomes. Chargaff's second parity rule appears to be extended from the nucleotide-level to populations of codon triplets, in the case of whole single-stranded Human genome DNA.
A kind of "codon-level second Chargaff's parity rule" is proposed as follows:
Intra-strand relation among percentages of codon populations:

| First codon | Second codon | Relation proposed | Details |
|---|---|---|---|
| Twx (1st base position is T) | yzA (3rd base position is A) | % Twx ≈ % yzA | Twx and yzA are mirror codons, e.g. TCG and CGA |
| Cwx (1st base position is C) | yzG (3rd base position is G) | % Cwx ≈ % yzG | Cwx and yzG are mirror codons, e.g. CTA and TAG |
| wTx (2nd base position is T) | yAz (2nd base position is A) | % wTx ≈ % yAz | wTx and yAz are mirror codons, e.g. CTG and CAG |
| wCx (2nd base position is C) | yGz (2nd base position is G) | % wCx ≈ % yGz | wCx and yGz are mirror codons, e.g. TCT and AGA |
| wxT (3rd base position is T) | Ayz (1st base position is A) | % wxT ≈ % Ayz | wxT and Ayz are mirror codons, e.g. CTT and AAG |
| wxC (3rd base position is C) | Gyz (1st base position is G) | % wxC ≈ % Gyz | wxC and Gyz are mirror codons, e.g. GGC and GCC |
Examples — counting codons over the whole human genome in the first reading frame gives:
36,530,115 TTT and 36,381,293 AAA (ratio = 1.00409); 2,087,242 TCG and 2,085,226 CGA (ratio = 1.00096); etc.
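A minimal sketch of how such codon-level tallies can be computed is shown below; the random strand is a placeholder, since the figures quoted above come from the whole human genome, and the codons checked are mirror pairs from the table.

```python
import random
from collections import Counter

MIRROR = str.maketrans("ACGT", "TGCA")

def codon_counts(seq):
    """Counts of non-overlapping triplets in the first reading frame."""
    return Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3))

def mirror(codon):
    """Reverse complement, i.e. the 'mirror codon' of the proposed rule."""
    return codon.translate(MIRROR)[::-1]

# Placeholder strand; a real computation would read the genome from disk.
random.seed(1)
strand = "".join(random.choice("ACGT") for _ in range(300_000))

counts = codon_counts(strand)
for codon in ("TTT", "TCG", "CTA"):
    m = mirror(codon)
    print(f"{codon}: {counts[codon]:>6}  {m}: {counts[m]:>6}  "
          f"ratio = {counts[codon] / counts[m]:.5f}")
```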
In 2020, it was suggested that the physical properties of double-stranded DNA (dsDNA) and the tendency of all physical systems toward maximum entropy are the cause of Chargaff's second parity rule. The symmetries and patterns present in dsDNA sequences can emerge from the physical peculiarities of the dsDNA molecule and the maximum entropy principle alone, rather than from biological or environmental evolutionary pressure.
Percentages of bases in DNA
The following table is a representative sample of Erwin Chargaff's 1952 data, listing the base composition of DNA from various organisms; it supports both of Chargaff's rules. An organism such as φX174, with significant deviation of A/T and G/C from one, is indicative of single-stranded DNA.
| Organism | Taxon | %A | %G | %C | %T | A / T | G / C | %GC | %AT |
|---|---|---|---|---|---|---|---|---|---|
| Maize | Zea | 26.8 | 22.8 | 23.2 | 27.2 | 0.99 | 0.98 | 46.1 | 54.0 |
| Octopus | Octopus | 33.2 | 17.6 | 17.6 | 31.6 | 1.05 | 1.00 | 35.2 | 64.8 |
| Chicken | Gallus | 28.0 | 22.0 | 21.6 | 28.4 | 0.99 | 1.02 | 43.7 | 56.4 |
| Rat | Rattus | 28.6 | 21.4 | 20.5 | 28.4 | 1.01 | 1.00 | 42.9 | 57.0 |
| Human | Homo | 29.3 | 20.7 | 20.0 | 30.0 | 0.98 | 1.04 | 40.7 | 59.3 |
| Grasshopper | Orthoptera | 29.3 | 20.5 | 20.7 | 29.3 | 1.00 | 0.99 | 41.2 | 58.6 |
| Sea urchin | Echinoidea | 32.8 | 17.7 | 17.3 | 32.1 | 1.02 | 1.02 | 35.0 | 64.9 |
| Wheat | Triticum | 27.3 | 22.7 | 22.8 | 27.1 | 1.01 | 1.00 | 45.5 | 54.4 |
| Yeast | Saccharomyces | 31.3 | 18.7 | 17.1 | 32.9 | 0.95 | 1.09 | 35.8 | 64.4 |
| E. coli | Escherichia | 24.7 | 26.0 | 25.7 | 23.6 | 1.05 | 1.01 | 51.7 | 48.3 |
| φX174 | PhiX174 | 24.0 | 23.3 | 21.5 | 31.2 | 0.77 | 1.08 | 44.8 | 55.2 |
See also
Genetic codes
References
Further reading
External links
CBS Genome Atlas Database — contains hundreds of examples of base skews.
The Z curve database of genomes — a 3-dimensional visualization and analysis tool of genomes.
DNA
Genetics techniques
History of genetics
Biotechnology
Medical research
Biology experiments
Laboratory techniques
Molecular biology | Chargaff's rules | [
"Chemistry",
"Engineering",
"Biology"
] | 2,674 | [
"Genetics techniques",
"Biological engineering",
"Genetic engineering",
"Biotechnology",
"nan",
"Molecular biology",
"Biochemistry"
] |
527,046 | https://en.wikipedia.org/wiki/Hyperfine%20structure | In atomic physics, hyperfine structure is defined by small shifts in otherwise degenerate electronic energy levels and the resulting splittings in those electronic energy levels of atoms, molecules, and ions, due to electromagnetic multipole interaction between the nucleus and electron clouds.
In atoms, hyperfine structure arises from the energy of the nuclear magnetic dipole moment interacting with the magnetic field generated by the electrons and the energy of the nuclear electric quadrupole moment in the electric field gradient due to the distribution of charge within the atom. Molecular hyperfine structure is generally dominated by these two effects, but also includes the energy associated with the interaction between the magnetic moments associated with different magnetic nuclei in a molecule, as well as between the nuclear magnetic moments and the magnetic field generated by the rotation of the molecule.
Hyperfine structure contrasts with fine structure, which results from the interaction between the magnetic moments associated with electron spin and the electrons' orbital angular momentum. Hyperfine structure, with energy shifts typically orders of magnitude smaller than those of a fine-structure shift, results from the interactions of the nucleus (or nuclei, in molecules) with internally generated electric and magnetic fields.
History
The first theory of atomic hyperfine structure was given in 1930 by Enrico Fermi for an atom containing a single valence electron with an arbitrary angular momentum. The Zeeman splitting of this structure was discussed by S. A. Goudsmit and R. F. Bacher later that year.
In 1935, H. Schüler and Theodor Schmidt proposed the existence of a nuclear quadrupole moment in order to explain anomalies in the hyperfine structure of europium, cassiopeium (an older name for lutetium), indium, antimony, and mercury.
Theory
The theory of hyperfine structure comes directly from electromagnetism, consisting of the interaction of the nuclear multipole moments (excluding the electric monopole) with internally generated fields. The theory is derived first for the atomic case, but can be applied to each nucleus in a molecule. Following this there is a discussion of the additional effects unique to the molecular case.
Atomic hyperfine structure
Magnetic dipole
The dominant term in the hyperfine Hamiltonian is typically the magnetic dipole term. Atomic nuclei with a non-zero nuclear spin $\mathbf{I}$ have a magnetic dipole moment, given by:
$$\boldsymbol{\mu}_I = g_I \mu_\mathrm{N} \mathbf{I},$$
where $g_I$ is the g-factor and $\mu_\mathrm{N}$ is the nuclear magneton.
There is an energy associated with a magnetic dipole moment in the presence of a magnetic field. For a nuclear magnetic dipole moment, $\boldsymbol{\mu}_I$, placed in a magnetic field, $\mathbf{B}$, the relevant term in the Hamiltonian is given by:
$$\hat{H}_\mathrm{D} = -\boldsymbol{\mu}_I \cdot \mathbf{B}.$$
In the absence of an externally applied field, the magnetic field experienced by the nucleus is that associated with the orbital (ℓ) and spin (s) angular momentum of the electrons:
$$\mathbf{B} = \mathbf{B}_\mathrm{el} = \mathbf{B}_\mathrm{el}^{\ell} + \mathbf{B}_\mathrm{el}^{s}.$$
Electron orbital magnetic field
Electron orbital angular momentum results from the motion of the electron about some fixed external point that we shall take to be the location of the nucleus. The magnetic field at the nucleus due to the motion of a single electron, with charge −e at a position r relative to the nucleus, is given by:
$$\mathbf{B}_\mathrm{el}^{\ell} = \frac{\mu_0}{4\pi} \frac{-e\,\mathbf{v} \times (-\mathbf{r})}{r^3},$$
where −r gives the position of the nucleus relative to the electron. Written in terms of the Bohr magneton, this gives:
$$\mathbf{B}_\mathrm{el}^{\ell} = -2\mu_\mathrm{B} \frac{\mu_0}{4\pi} \frac{1}{r^3} \frac{\mathbf{r} \times m_e \mathbf{v}}{\hbar}.$$
Recognizing that $m_e\mathbf{v}$ is the electron momentum, $\mathbf{p}$, and that $\mathbf{r}\times\mathbf{p}/\hbar$ is the orbital angular momentum in units of ħ, $\boldsymbol{\ell}$, we can write:
$$\mathbf{B}_\mathrm{el}^{\ell} = -2\mu_\mathrm{B} \frac{\mu_0}{4\pi} \frac{1}{r^3} \boldsymbol{\ell}.$$
For a many-electron atom this expression is generally written in terms of the total orbital angular momentum, $\mathbf{L}$, by summing over the electrons and using the projection operator, $\phi_i^{\ell}$, where $\sum_i \boldsymbol{\ell}_i = \sum_i \phi_i^{\ell}\,\mathbf{L}$. For states with a well defined projection of the orbital angular momentum, $L_z$, we can write $\phi_i^{\ell} = \hat{l}_{zi}/L_z$, giving:
$$\mathbf{B}_\mathrm{el}^{\ell} = -2\mu_\mathrm{B} \frac{\mu_0}{4\pi} \sum_i \frac{1}{r_i^3}\, \phi_i^{\ell}\, \mathbf{L}.$$
Electron spin magnetic field
The electron spin angular momentum is a fundamentally different property that is intrinsic to the particle and therefore does not depend on the motion of the electron. Nonetheless, it is angular momentum and any angular momentum associated with a charged particle results in a magnetic dipole moment, which is the source of a magnetic field. An electron with spin angular momentum, s, has a magnetic moment, $\boldsymbol{\mu}_s$, given by:
$$\boldsymbol{\mu}_s = -g_s \mu_\mathrm{B} \mathbf{s},$$
where $g_s$ is the electron spin g-factor and the negative sign is because the electron is negatively charged (consider that negatively and positively charged particles with identical mass, travelling on equivalent paths, would have the same angular momentum, but would result in currents in the opposite direction).
The magnetic field of a point dipole moment, $\boldsymbol{\mu}_s$, is given by:
$$\mathbf{B}_\mathrm{el}^{s} = \frac{\mu_0}{4\pi} \frac{1}{r^3} \left[ 3(\boldsymbol{\mu}_s \cdot \hat{\mathbf{r}})\hat{\mathbf{r}} - \boldsymbol{\mu}_s \right] + \frac{2\mu_0}{3}\, \boldsymbol{\mu}_s\, \delta^3(\mathbf{r}).$$
Electron total magnetic field and contribution
The complete magnetic dipole contribution to the hyperfine Hamiltonian, $\hat{H}_\mathrm{D}$, is thus the sum of three terms: an orbital term, a spin dipole term, and a contact term.
The first term gives the energy of the nuclear dipole in the field due to the electronic orbital angular momentum. The second term gives the energy of the "finite distance" interaction of the nuclear dipole with the field due to the electron spin magnetic moments. The final term, often known as the Fermi contact term relates to the direct interaction of the nuclear dipole with the spin dipoles and is only non-zero for states with a finite electron spin density at the position of the nucleus (those with unpaired electrons in s-subshells). It has been argued that one may get a different expression when taking into account the detailed nuclear magnetic moment distribution. The inclusion of the delta function is an admission that the singularity in the magnetic induction B owing to a magnetic dipole moment at a point is not integrable. It is B which mediates the interaction between the Pauli spinors in non-relativistic quantum mechanics. Fermi (1930) avoided the difficulty by working with the relativistic Dirac wave equation, according to which the mediating field for the Dirac spinors is the four-vector potential (V,A). The component V is the Coulomb potential. The component A is the three-vector magnetic potential (such that B = curl A), which for the point dipole is integrable.
For states with $\ell \neq 0$ this can be expressed in the form
$$\hat{H}_\mathrm{D} = 2 g_I \mu_\mathrm{B} \mu_\mathrm{N} \frac{\mu_0}{4\pi} \frac{\mathbf{N} \cdot \mathbf{I}}{r^3},$$
where:
$$\mathbf{N} = \boldsymbol{\ell} - \mathbf{s} + \frac{3(\mathbf{s} \cdot \mathbf{r})\mathbf{r}}{r^2}.$$
If hyperfine structure is small compared with the fine structure (sometimes called IJ-coupling by analogy with LS-coupling), I and J are good quantum numbers and matrix elements of $\hat{H}_\mathrm{D}$ can be approximated as diagonal in I and J. In this case (generally true for light elements), we can project N onto J (where $\mathbf{J} = \mathbf{L} + \mathbf{S}$ is the total electronic angular momentum) and we have:
$$\hat{H}_\mathrm{D} = 2 g_I \mu_\mathrm{B} \mu_\mathrm{N} \frac{\mu_0}{4\pi} \frac{(\mathbf{N} \cdot \mathbf{J})(\mathbf{I} \cdot \mathbf{J})}{J^2\, r^3}.$$
This is commonly written as
$$\hat{H}_\mathrm{D} = \hat{A}\,\mathbf{I} \cdot \mathbf{J},$$
with $\hat{A}$ the hyperfine-structure constant, which is determined by experiment. Since $\mathbf{I} \cdot \mathbf{J} = \tfrac{1}{2}\left[F(F+1) - I(I+1) - J(J+1)\right]$ (where $\mathbf{F} = \mathbf{I} + \mathbf{J}$ is the total angular momentum), this gives an energy of:
$$\Delta E_\mathrm{D} = \tfrac{1}{2}\hat{A}\left[F(F+1) - I(I+1) - J(J+1)\right].$$
In this case the hyperfine interaction satisfies the Landé interval rule.
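As a worked example of the energy formula and the interval rule, the sketch below evaluates E(F) for the hydrogen 1s ground state (I = J = 1/2), using the well-known value A ≈ 1420.405751 MHz for hydrogen, which corresponds to the 21 cm line; the function itself is generic in I and J.

```python
from fractions import Fraction

def hyperfine_energies(A, I, J):
    """E(F) = (A/2) [F(F+1) - I(I+1) - J(J+1)] for F = |I-J| .. I+J."""
    I, J = Fraction(I), Fraction(J)
    energies = {}
    F = abs(I - J)
    while F <= I + J:
        K = F * (F + 1) - I * (I + 1) - J * (J + 1)
        energies[F] = A / 2 * float(K)
        F += 1
    return energies

# Hydrogen 1s: I = J = 1/2; A is the 21 cm hyperfine constant in MHz.
A_H = 1420.405751
levels = hyperfine_energies(A_H, Fraction(1, 2), Fraction(1, 2))
for F in sorted(levels):
    print(f"F = {F}: E = {levels[F]:+9.3f} MHz")

# Landé interval rule check: E(F) - E(F-1) should equal A * F.
Fs = sorted(levels)
for low, high in zip(Fs, Fs[1:]):
    print(f"E({high}) - E({low}) = {levels[high] - levels[low]:.3f} MHz"
          f"  (A*F = {A_H * float(high):.3f})")
```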
Electric quadrupole
Atomic nuclei with spin $I \geq 1$ have an electric quadrupole moment. In the general case this is represented by a rank-2 tensor, $Q_{ij}$, with components given by:
$$Q_{ij} = \int \left( 3x_i x_j - r^2 \delta_{ij} \right) \rho(\mathbf{r})\, d^3\mathbf{r},$$
where i and j are the tensor indices running from 1 to 3, $x_i$ and $x_j$ are the spatial variables x, y and z depending on the values of i and j respectively, $\delta_{ij}$ is the Kronecker delta and ρ(r) is the charge density. Being a 3-dimensional rank-2 tensor, the quadrupole moment has 3² = 9 components. From the definition of the components it is clear that the quadrupole tensor is a symmetric matrix ($Q_{ij} = Q_{ji}$) that is also traceless ($\textstyle\sum_i Q_{ii} = 0$), giving only five components in the irreducible representation. Expressed using the notation of irreducible spherical tensors we have:
The energy associated with an electric quadrupole moment in an electric field depends not on the field strength, but on the electric field gradient, confusingly labelled $\nabla\mathbf{E}$, another rank-2 tensor given by the outer product of the del operator with the electric field vector:
$$\nabla\mathbf{E} = \nabla \otimes \mathbf{E},$$
with components given by:
$$(\nabla\mathbf{E})_{ij} = \frac{\partial E_j}{\partial x_i}.$$
Again it is clear this is a symmetric matrix and, because the source of the electric field at the nucleus is a charge distribution entirely outside the nucleus, this can be expressed as a 5-component spherical tensor, , with:
where:
The quadrupolar term in the Hamiltonian is thus given by:
A typical atomic nucleus closely approximates cylindrical symmetry and therefore all off-diagonal elements are close to zero. For this reason the nuclear electric quadrupole moment is often represented by $Q_{zz}$.
Molecular hyperfine structure
The molecular hyperfine Hamiltonian includes those terms already derived for the atomic case, with a magnetic dipole term for each nucleus with $I > 0$ and an electric quadrupole term for each nucleus with $I \geq 1$. The magnetic dipole terms were first derived for diatomic molecules by Frosch and Foley, and the resulting hyperfine parameters are often called the Frosch and Foley parameters.
In addition to the effects described above, there are a number of effects specific to the molecular case.
Direct nuclear spin–spin
Each nucleus with $I > 0$ has a non-zero magnetic moment that is both the source of a magnetic field and has an associated energy due to the presence of the combined field of all of the other nuclear magnetic moments. A summation over each magnetic moment dotted with the field due to each other magnetic moment gives the direct nuclear spin–spin term in the hyperfine Hamiltonian.
where α and α′ are indices representing the nucleus contributing to the energy and the nucleus that is the source of the field, respectively. Substituting in the expressions for the dipole moment in terms of the nuclear angular momentum and the magnetic field of a dipole, both given above, we have
Nuclear spin–rotation
The nuclear magnetic moments in a molecule exist in a magnetic field due to the angular momentum, T (R is the internuclear displacement vector), associated with the bulk rotation of the molecule, thus
Small molecule hyperfine structure
A typical simple example of the hyperfine structure due to the interactions discussed above is in the rotational transitions of hydrogen cyanide (1H12C14N) in its ground vibrational state. Here, the electric quadrupole interaction is due to the 14N-nucleus, the hyperfine nuclear spin-spin splitting is from the magnetic coupling between nitrogen, 14N ($I_\mathrm{N} = 1$), and hydrogen, 1H ($I_\mathrm{H} = \tfrac{1}{2}$), and a hydrogen spin-rotation interaction due to the 1H-nucleus. These contributing interactions to the hyperfine structure in the molecule are listed here in descending order of influence. Sub-Doppler techniques have been used to discern the hyperfine structure in HCN rotational transitions.
The dipole selection rules for HCN hyperfine structure transitions are $\Delta J = \pm 1$ and $\Delta F = 0, \pm 1$, where J is the rotational quantum number and F is the total rotational quantum number inclusive of nuclear spin ($\mathbf{F} = \mathbf{J} + \mathbf{I}_\mathrm{N} + \mathbf{I}_\mathrm{H}$). The lowest transition ($J = 1 \rightarrow 0$) splits into a hyperfine triplet. Using the selection rules, the hyperfine pattern of the $J = 2 \rightarrow 1$ and higher dipole transitions is in the form of a hyperfine sextet. However, one of these components carries only 0.6% of the rotational transition intensity in the case of $J = 2 \rightarrow 1$. This contribution drops for increasing J. So, from $J = 2 \rightarrow 1$ upwards the hyperfine pattern consists of three very closely spaced stronger hyperfine components together with two widely spaced components; one on the low frequency side and one on the high frequency side relative to the central hyperfine triplet. Each of these outliers carries only a small fraction of the intensity of the entire transition, a fraction that decreases with the upper rotational quantum number of the allowed dipole transition. For consecutively higher-J transitions, there are small but significant changes in the relative intensities and positions of each individual hyperfine component.
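The count of hyperfine sublevels for a given HCN rotational level follows from sequentially coupling J with the two nuclear spins (J with I_N gives F1, then F1 with I_H gives F). The sketch below is an added illustration of that bookkeeping; it enumerates the allowed (F1, F) pairs only and says nothing about line intensities.

```python
from fractions import Fraction

def couple(j1, j2):
    """Allowed total angular momenta |j1 - j2| .. j1 + j2 in unit steps."""
    j = abs(j1 - j2)
    while j <= j1 + j2:
        yield j
        j += 1

def hcn_hyperfine_levels(J):
    """(F1, F) pairs from J + I_N -> F1, then F1 + I_H -> F."""
    I_N = Fraction(1)      # 14N nuclear spin
    I_H = Fraction(1, 2)   # 1H nuclear spin
    return [(F1, F)
            for F1 in couple(Fraction(J), I_N)
            for F in couple(F1, I_H)]

for J in range(4):
    levels = hcn_hyperfine_levels(J)
    labels = ", ".join(f"(F1={F1}, F={F})" for F1, F in levels)
    print(f"J = {J}: {len(levels)} sublevels: {labels}")
```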
Measurements
Hyperfine interactions can be measured, among other ways, in atomic and molecular spectra and in electron paramagnetic resonance spectra of free radicals and transition-metal ions.
Applications
Astrophysics
As the hyperfine splitting is very small, the transition frequencies are usually not located in the optical, but are in the range of radio- or microwave (also called sub-millimeter) frequencies.
Hyperfine structure gives the 21 cm line observed in H I regions in interstellar medium.
Carl Sagan and Frank Drake considered the hyperfine transition of hydrogen to be a sufficiently universal phenomenon so as to be used as a base unit of time and length on the Pioneer plaque and later Voyager Golden Record.
In submillimeter astronomy, heterodyne receivers are widely used in detecting electromagnetic signals from celestial objects such as star-forming core or young stellar objects. The separations among neighboring components in a hyperfine spectrum of an observed rotational transition are usually small enough to fit within the receiver's IF band. Since the optical depth varies with frequency, strength ratios among the hyperfine components differ from that of their intrinsic (or optically thin) intensities (these are so-called hyperfine anomalies, often observed in the rotational transitions of HCN). Thus, a more accurate determination of the optical depth is possible. From this we can derive the object's physical parameters.
Nuclear spectroscopy
In nuclear spectroscopy methods, the nucleus is used to probe the local structure in materials. The methods mainly base on hyperfine interactions with the surrounding atoms and ions. Important methods are nuclear magnetic resonance, Mössbauer spectroscopy, and perturbed angular correlation.
Nuclear technology
The atomic vapor laser isotope separation (AVLIS) process uses the hyperfine splitting between optical transitions in uranium-235 and uranium-238 to selectively photo-ionize only the uranium-235 atoms and then separate the ionized particles from the non-ionized ones. Precisely tuned dye lasers are used as the sources of the necessary exact wavelength radiation.
Use in defining the SI second and meter
The hyperfine structure transition can be used to make a microwave notch filter with very high stability, repeatability and Q factor, which can thus be used as a basis for very precise atomic clocks. The term transition frequency denotes the frequency of radiation corresponding to the transition between the two hyperfine levels of the atom, and is equal to $\nu = \Delta E / h$, where $\Delta E$ is the difference in energy between the levels and $h$ is the Planck constant. Typically, the transition frequency of a particular isotope of caesium or rubidium atoms is used as a basis for these clocks.
Due to the accuracy of hyperfine structure transition-based atomic clocks, they are now used as the basis for the definition of the second. One second is now defined to be exactly 9,192,631,770 cycles of the hyperfine structure transition frequency of caesium-133 atoms.
On October 21, 1983, the 17th CGPM defined the meter as the length of the path travelled by light in a vacuum during a time interval of 1/299,792,458 of a second.
Precision tests of quantum electrodynamics
The hyperfine splitting in hydrogen and in muonium have been used to measure the value of the fine-structure constant α. Comparison with measurements of α in other physical systems provides a stringent test of QED.
Qubit in ion-trap quantum computing
The hyperfine states of a trapped ion are commonly used for storing qubits in ion-trap quantum computing. They have the advantage of having very long lifetimes, experimentally exceeding ~10 minutes (compared to ~1s for metastable electronic levels).
The frequency associated with the states' energy separation is in the microwave region, making it possible to drive hyperfine transitions using microwave radiation. However, at present no emitter is available that can be focused to address a particular ion from a sequence. Instead, a pair of laser pulses can be used to drive the transition, by having their frequency difference (detuning) equal to the required transition's frequency. This is essentially a stimulated Raman transition. In addition, near-field gradients have been exploited to individually address two ions separated by approximately 4.3 micrometers directly with microwave radiation.
See also
Dynamic nuclear polarization
Electron paramagnetic resonance
References
External links
The Feynman Lectures on Physics Vol. III Ch. 12: The Hyperfine Splitting in Hydrogen
Nuclear Magnetic and Electric Moments lookup—Nuclear Structure and Decay Data at the IAEA
Atomic physics
Foundational quantum physics | Hyperfine structure | [
"Physics",
"Chemistry"
] | 3,213 | [
"Foundational quantum physics",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
527,232 | https://en.wikipedia.org/wiki/Hydrostatics | Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at hydrostatic equilibrium and "the pressure in a fluid or exerted by a fluid on an immersed body".
It encompasses the study of the conditions under which fluids are at rest in stable equilibrium as opposed to fluid dynamics, the study of fluids in motion. Hydrostatics is a subcategory of fluid statics, which is the study of all fluids, both compressible and incompressible, at rest.
Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to geophysics and astrophysics (for example, in understanding plate tectonics and the anomalies of the Earth's gravitational field), to meteorology, to medicine (in the context of blood pressure), and many other fields.
Hydrostatics offers physical explanations for many phenomena of everyday life, such as why atmospheric pressure changes with altitude, why wood and oil float on water, and why the surface of still water is always level according to the curvature of the earth.
History
Some principles of hydrostatics have been known in an empirical and intuitive sense since antiquity, by the builders of boats, cisterns, aqueducts and fountains. Archimedes is credited with the discovery of Archimedes' Principle, which relates the buoyancy force on an object that is submerged in a fluid to the weight of fluid displaced by the object. The Roman engineer Vitruvius warned readers about lead pipes bursting under hydrostatic pressure.
The concept of pressure and the way it is transmitted by fluids was formulated by the French mathematician and philosopher Blaise Pascal in 1647.
Hydrostatics in ancient Greece and Rome
Pythagorean Cup
The "fair cup" or Pythagorean cup, which dates from about the 6th century BC, is a hydraulic technology whose invention is credited to the Greek mathematician and geometer Pythagoras. It was used as a learning tool.
The cup consists of a line carved into the interior of the cup, and a small vertical pipe in the center of the cup that leads to the bottom. The height of this pipe is the same as the line carved into the interior of the cup. The cup may be filled to the line without any fluid passing into the pipe in the center of the cup. However, when the amount of fluid exceeds this fill line, fluid will overflow into the pipe in the center of the cup. Due to the drag that molecules exert on one another, the cup will be emptied.
Heron's fountain
Heron's fountain is a device invented by Heron of Alexandria that consists of a jet of fluid being fed by a reservoir of fluid. The fountain is constructed in such a way that the height of the jet exceeds the height of the fluid in the reservoir, apparently in violation of principles of hydrostatic pressure. The device consisted of an opening and two containers arranged one above the other. The intermediate pot, which was sealed, was filled with fluid, and several cannula (a small tube for transferring fluid between vessels) connecting the various vessels. Trapped air inside the vessels induces a jet of water out of a nozzle, emptying all water from the intermediate reservoir.
Pascal's contribution in hydrostatics
Pascal made contributions to developments in both hydrostatics and hydrodynamics. Pascal's Law is a fundamental principle of fluid mechanics that states that any pressure applied to the surface of a fluid is transmitted uniformly throughout the fluid in all directions, in such a way that initial variations in pressure are not changed.
Pressure in fluids at rest
Due to the fundamental nature of fluids, a fluid cannot remain at rest under the presence of a shear stress. However, fluids can exert pressure normal to any contacting surface. If a point in the fluid is thought of as an infinitesimally small cube, then it follows from the principles of equilibrium that the pressure on every side of this unit of fluid must be equal. If this were not the case, the fluid would move in the direction of the resulting force. Thus, the pressure on a fluid at rest is isotropic; i.e., it acts with equal magnitude in all directions. This characteristic allows fluids to transmit force through the length of pipes or tubes; i.e., a force applied to a fluid in a pipe is transmitted, via the fluid, to the other end of the pipe. This principle was first formulated, in a slightly extended form, by Blaise Pascal, and is now called Pascal's law.
Hydrostatic pressure
In a fluid at rest, all frictional and inertial stresses vanish and the state of stress of the system is called hydrostatic. When this condition of zero flow velocity is applied to the Navier–Stokes equations for viscous fluids or to the Euler equations for an ideal inviscid fluid, the gradient of pressure becomes a function of body forces only.
The Navier–Stokes momentum equations are:
$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla\mathbf{u}\right) = -\nabla p + \mu\nabla^2\mathbf{u} + \mathbf{f}.$$
By setting the flow velocity $\mathbf{u} = 0$, they become simply:
$$0 = -\nabla p + \mathbf{f},$$
or:
$$\nabla p = \mathbf{f}.$$
This is the general form of Stevin's law: the pressure gradient equals the body-force density field.
Let us now consider two particular cases of this law. In case of a conservative body force with scalar potential $\phi$:
$$\mathbf{f} = -\nabla\phi,$$
the Stevin equation becomes:
$$\nabla p = -\nabla\phi.$$
That can be integrated to give:
$$\Delta p = -\Delta\phi.$$
So in this case the pressure difference is the opposite of the difference of the scalar potential associated to the body force.
In the other particular case of a body force of constant direction along z:
$$\mathbf{f} = -\rho(z)\, g(z)\, \hat{\mathbf{e}}_z,$$
the generalised Stevin's law above becomes:
$$\frac{\partial p}{\partial z} = -\rho(z)\, g(z).$$
That can be integrated to give another (less-)generalised Stevin's law:
$$p(x, y, z) = p_0(x, y) - \int_{z_0}^{z} \rho(z')\, g(z')\, dz',$$
where:
$p$ is the hydrostatic pressure (Pa),
$\rho$ is the fluid density (kg/m³),
$g$ is gravitational acceleration (m/s²),
$z$ is the height (parallel to the direction of gravity) of the test area (m),
$z_0$ is the height of the zero reference point of the pressure (m),
$p_0$ is the hydrostatic pressure field (Pa) along x and y at the zero reference point.
For water and other liquids, this integral can be simplified significantly for many practical applications, based on the following two assumptions. Since many liquids can be considered incompressible, a reasonably good estimate can be made by assuming a constant density throughout the liquid. The same assumption cannot be made within a gaseous environment. Also, since the height of the fluid column between $z$ and $z_0$ is often reasonably small compared to the radius of the Earth, one can neglect the variation of $g$. Under these circumstances, the density and the gravitational acceleration can be taken out of the integral, and the law is simplified into the formula

$$p = p_0 + \rho g h$$

where $h = z - z_0$ is the height of the liquid column between the test volume and the zero reference point of the pressure. This formula is often called Stevin's law.
One could also arrive at the above formula by considering the first particular case of the equation for a conservative body force field: in fact the body force field of uniform intensity and direction

$$\mathbf{f} = \rho g\, \hat{\mathbf{z}}$$

is conservative, so one can write the body force density as:

$$\mathbf{f} = -\nabla\phi$$

Then the body force density has a simple scalar potential:

$$\phi = -\rho g z$$

and the pressure difference once again follows Stevin's law:

$$\Delta p = -\Delta\phi = \rho g\, \Delta z = \rho g h$$
The reference point should lie at or below the surface of the liquid. Otherwise, one has to split the integral into two (or more) terms, one with the constant density of the liquid and one with the density of the medium above it. For example, the absolute pressure compared to vacuum is

$$p = \rho g H + p_\mathrm{atm}$$

where $H$ is the total height of the liquid column above the test area to the surface, and $p_\mathrm{atm}$ is the atmospheric pressure, i.e., the pressure calculated from the remaining integral over the air column from the liquid surface to infinity. This can easily be visualized using a pressure prism.
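As a concrete illustration of the simplified formula, the following Python sketch evaluates the absolute pressure $p = p_\mathrm{atm} + \rho g h$ at a given depth in an incompressible liquid. The density, surface pressure and depth used are assumed example values (fresh water under one atmosphere), not figures specified above.

```python
# Minimal sketch of the simplified Stevin's law p = p0 + rho * g * h for an
# incompressible liquid. All numeric inputs are assumed example values.

RHO_WATER = 1000.0      # kg/m^3, assumed constant (incompressible liquid)
G = 9.81                # m/s^2, assumed constant over the fluid column
P_ATM = 101_325.0       # Pa, pressure at the zero reference point (the free surface)

def hydrostatic_pressure(depth_m: float) -> float:
    """Absolute pressure at a given depth below the free surface."""
    return P_ATM + RHO_WATER * G * depth_m

print(hydrostatic_pressure(10.0))  # ~199,425 Pa, roughly double atmospheric pressure
```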
Hydrostatic pressure has been used in the preservation of foods in a process called pascalization.
Medicine
In medicine, hydrostatic pressure in blood vessels is the pressure of the blood against the wall. It is the opposing force to oncotic pressure. In capillaries, hydrostatic pressure (also known as capillary blood pressure) is higher than the opposing “colloid osmotic pressure” in blood—a “constant” pressure primarily produced by circulating albumin—at the arteriolar end of the capillary. This pressure forces plasma and nutrients out of the capillaries and into surrounding tissues. Fluid and the cellular wastes in the tissues enter the capillaries at the venule end, where the hydrostatic pressure is less than the osmotic pressure in the vessel.
Atmospheric pressure
Statistical mechanics shows that, for a pure ideal gas at constant temperature T in a gravitational field, its pressure p will vary with height h as

$$p(h) = p(0)\, e^{-mgh/kT}$$
where
$g$ is the acceleration due to gravity,
$T$ is the absolute temperature,
$k$ is the Boltzmann constant,
$m$ is the molecular mass of the gas,
$p$ is the pressure, and
$h$ is the height.
This is known as the barometric formula, and may be derived from assuming the pressure is hydrostatic.
If there are multiple types of molecules in the gas, the partial pressure of each type will be given by this equation. Under most conditions, the distribution of each species of gas is independent of the other species.
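A minimal numeric sketch of the barometric formula follows. The sea-level pressure, the temperature and the mean molecular mass of air are assumed example values, not constants taken from this article.

```python
# Minimal sketch of the barometric formula p(h) = p(0) * exp(-m*g*h / (k*T))
# for an isothermal ideal gas. Sea-level pressure, temperature and the mean
# molecular mass of air are assumed example values.

import math

K_B = 1.380649e-23      # J/K, Boltzmann constant
G = 9.81                # m/s^2
M_AIR = 4.81e-26        # kg, approximate mean mass of one "air molecule" (~29 u)
T = 288.0               # K, assumed constant temperature
P0 = 101_325.0          # Pa, assumed pressure at h = 0

def pressure_at_height(h_m: float) -> float:
    return P0 * math.exp(-M_AIR * G * h_m / (K_B * T))

print(pressure_at_height(5500.0))  # roughly half of sea-level pressure
```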
Buoyancy
Any body of arbitrary shape which is immersed, partly or fully, in a fluid will experience the action of a net force in the opposite direction of the local pressure gradient. If this pressure gradient arises from gravity, the net force is in the vertical direction opposite that of the gravitational force. This vertical force is termed buoyancy or buoyant force and is equal in magnitude, but opposite in direction, to the weight of the displaced fluid. Mathematically,

$$F_b = \rho g V$$

where $\rho$ is the density of the fluid, $g$ is the acceleration due to gravity, and $V$ is the volume of the displaced fluid. In the case of a ship, for instance, its weight is balanced by pressure forces from the surrounding water, allowing it to float. If more cargo is loaded onto the ship, it sinks deeper into the water, displacing more water and thus receiving a larger buoyant force to balance the increased weight.
Discovery of the principle of buoyancy is attributed to Archimedes.
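The following Python sketch illustrates Archimedes' principle as stated above: the available buoyant force is the weight of the displaced fluid, and a body floats when that force can balance its weight. The seawater density, hull volume and ship mass are invented example numbers.

```python
# Minimal sketch of Archimedes' principle: the buoyant force equals the weight
# of the displaced fluid, F_b = rho_fluid * g * V_displaced. The hull volume
# and ship mass below are invented example numbers.

RHO_SEAWATER = 1025.0   # kg/m^3, assumed fluid density
G = 9.81                # m/s^2

def buoyant_force(displaced_volume_m3: float) -> float:
    return RHO_SEAWATER * G * displaced_volume_m3

def floats(ship_mass_kg: float, max_submerged_volume_m3: float) -> bool:
    """A body floats if the maximum available buoyant force can balance its weight."""
    return buoyant_force(max_submerged_volume_m3) >= ship_mass_kg * G

print(floats(ship_mass_kg=5.0e6, max_submerged_volume_m3=6000.0))  # True: 6000 m^3 displaces ~6.15e6 kg
```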
Hydrostatic force on submerged surfaces
The horizontal and vertical components of the hydrostatic force acting on a submerged surface are given by:

$$F_h = p_c A$$
$$F_v = \rho g V$$

where:
$p_c$ is the pressure at the centroid of the vertical projection of the submerged surface,
$A$ is the area of the same vertical projection of the surface,
$\rho$ is the density of the fluid,
$g$ is the acceleration due to gravity,
$V$ is the volume of fluid directly above the curved surface.
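A brief Python sketch of these two components follows. The centroid depth, projected area and overlying fluid volume are assumed example inputs, and the pressure at the centroid is taken to be the gauge pressure ρg times the centroid depth below the free surface.

```python
# Minimal sketch of the component formulas above, F_h = p_c * A and
# F_v = rho * g * V, for a submerged curved surface. All inputs are
# assumed example values.

RHO = 1000.0   # kg/m^3, fluid density
G = 9.81       # m/s^2

def horizontal_component(centroid_depth_m: float, projected_area_m2: float) -> float:
    p_centroid = RHO * G * centroid_depth_m          # gauge pressure at the projection's centroid
    return p_centroid * projected_area_m2

def vertical_component(fluid_volume_above_m3: float) -> float:
    return RHO * G * fluid_volume_above_m3           # weight of the fluid directly above the surface

print(horizontal_component(3.0, 2.0), vertical_component(4.5))
```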
Liquids (fluids with free surfaces)
Liquids can have free surfaces at which they interface with gases, or with a vacuum. In general, the inability of a liquid to sustain a shear stress means that free surfaces rapidly adjust towards an equilibrium. However, on small length scales, there is an important balancing force from surface tension.
Capillary action
When liquids are constrained in vessels whose dimensions are small, compared to the relevant length scales, surface tension effects become important leading to the formation of a meniscus through capillary action. This capillary action has profound consequences for biological systems as it is part of one of the two driving mechanisms of the flow of water in plant xylem, the transpirational pull.
Hanging drops
Without surface tension, drops would not be able to form. The dimensions and stability of drops are determined by surface tension. The drop's surface tension is directly proportional to the cohesion property of the fluid.
See also
References
Further reading
External links
The Flow of Dry Water - The Feynman Lectures on Physics
Pressure
Underwater diving physics | Hydrostatics | [
"Physics"
] | 2,351 | [
"Scalar physical quantities",
"Mechanical quantities",
"Applied and interdisciplinary physics",
"Physical quantities",
"Underwater diving physics",
"Pressure",
"Wikipedia categories named after physical quantities"
] |
527,369 | https://en.wikipedia.org/wiki/Lists%20of%20mathematics%20topics | Lists of mathematics topics cover a variety of topics related to mathematics. Some of these lists link to hundreds of articles; some link only to a few. The template to the right includes links to alphabetical lists of all mathematical articles. This article brings together the same content organized in a manner better suited for browsing.
Lists cover aspects of basic and advanced mathematics, methodology, mathematical statements, integrals, general concepts, mathematical objects, and reference tables.
They also cover equations named after people, societies, mathematicians, journals, and meta-lists.
The purpose of this list is not similar to that of the Mathematics Subject Classification formulated by the American Mathematical Society. Many mathematics journals ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The subject codes so listed are used by the two major reviewing databases, Mathematical Reviews and Zentralblatt MATH. This list has some items that would not fit in such a classification, such as list of exponential topics and list of factorial and binomial topics, which may surprise the reader with the diversity of their coverage.
Basic mathematics
This branch is typically taught in secondary education or in the first year of university.
Outline of arithmetic
Outline of discrete mathematics
List of calculus topics
List of geometry topics
Outline of geometry
List of trigonometry topics
Outline of trigonometry
List of trigonometric identities
List of logarithmic identities
List of integrals of logarithmic functions
List of set identities and relations
List of topics in logic
Areas of advanced mathematics
As a rough guide, this list is divided into pure and applied sections although in reality, these branches are overlapping and intertwined.
Pure mathematics
Algebra
Algebra includes the study of algebraic structures, which are sets and operations defined on these sets satisfying certain axioms. The field of algebra is further divided according to which structure is studied; for instance, group theory concerns an algebraic structure called group.
Outline of algebra
Glossary of field theory
Glossary of group theory
Glossary of linear algebra
Glossary of ring theory
List of abstract algebra topics
List of algebraic structures
List of Boolean algebra topics
List of category theory topics
List of cohomology theories
List of commutative algebra topics
List of homological algebra topics
List of group theory topics
List of representation theory topics
List of linear algebra topics
List of reciprocity laws
Calculus and analysis
Calculus studies the computation of limits, derivatives, and integrals of functions of real numbers, and in particular studies instantaneous rates of change. Analysis evolved from calculus.
Glossary of tensor theory
List of complex analysis topics
List of functional analysis topics
List of vector spaces in mathematics
List of integration and measure theory topics
List of harmonic analysis topics
List of Fourier analysis topics
List of mathematical series
List of multivariable calculus topics
List of q-analogs
List of real analysis topics
List of variational topics
See also Dynamical systems and differential equations section below.
Geometry and topology
Geometry is initially the study of spatial figures like circles and cubes, though it has been generalized considerably. Topology developed from geometry; it looks at those properties that do not change even when the figures are deformed by stretching and bending, like dimension.
Glossary of differential geometry and topology
Glossary of general topology
Glossary of Riemannian and metric geometry
Glossary of scheme theory
List of algebraic geometry topics
List of algebraic surfaces
List of algebraic topology topics
List of cohomology theories
List of circle topics
List of topics related to pi
List of curves topics
List of differential geometry topics
List of general topology topics
List of geometric shapes
List of geometric topology topics
List of geometry topics
List of knot theory topics
List of Lie group topics
List of mathematical properties of points
List of topology topics
List of topologies
Topological property
List of triangle topics
Combinatorics
Combinatorics concerns the study of discrete (and usually finite) objects. Aspects include "counting" the objects satisfying certain criteria (enumerative combinatorics), deciding when the criteria can be met, and constructing and analyzing objects meeting the criteria (as in combinatorial designs and matroid theory), finding "largest", "smallest", or "optimal" objects (extremal combinatorics and combinatorial optimization), and finding algebraic structures these objects may have (algebraic combinatorics).
Outline of combinatorics
Glossary of graph theory
List of graph theory topics
Logic
Logic is the foundation that underlies mathematical logic and the rest of mathematics. It tries to formalize valid reasoning. In particular, it attempts to define what constitutes a proof.
List of Boolean algebra topics
List of first-order theories
List of large cardinal properties
List of mathematical logic topics
List of set theory topics
Glossary of order theory
Number theory
This branch of mathematics deals with the properties and relationships of numbers, especially positive integers.
Number theory is a branch of pure mathematics devoted primarily to the study of the integers and integer-valued functions. German mathematician Carl Friedrich Gauss said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics."
Number theory also studies the natural, or whole, numbers. One of the central concepts in number theory is that of the prime number, and there are many questions about primes that appear simple but whose resolution continues to elude mathematicians.
List of algebraic number theory topics
List of number theory topics
List of recreational number theory topics
Glossary of arithmetic and Diophantine geometry
List of prime numbers—not just a table, but a list of various kinds of prime numbers (each with an accompanying table)
List of zeta functions
Applied mathematics
Dynamical systems and differential equations
A differential equation is an equation involving an unknown function and its derivatives.
In a dynamical system, a fixed rule describes the time dependence of a point in a geometrical space. The mathematical models used to describe the swinging of a clock pendulum, the flow of water in a pipe, or the number of fish each spring in a lake are examples of dynamical systems.
List of dynamical systems and differential equations topics
List of nonlinear partial differential equations
List of partial differential equation topics
Mathematical physics
Mathematical physics is concerned with "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories".
List of mathematical topics in classical mechanics
List of mathematical topics in quantum theory
List of mathematical topics in relativity
List of string theory topics
Index of wave articles
Theory of computation
The fields of mathematics and computing intersect both in computer science, the study of algorithms and data structures, and in scientific computing, the study of algorithmic methods for solving problems in mathematics, science, and engineering.
List of algorithm general topics
List of computability and complexity topics
Lists for computational topics in geometry and graphics
List of combinatorial computational geometry topics
List of computer graphics and descriptive geometry topics
List of numerical computational geometry topics
List of computer vision topics
List of formal language and literal string topics
List of numerical analysis topics
List of terms relating to algorithms and data structures
Information theory and signal processing
Information theory is a branch of applied mathematics and social science involving the quantification of information. Historically, information theory was developed to find fundamental limits on compressing and reliably communicating data.
Signal processing is the analysis, interpretation, and manipulation of signals. Signals of interest include sound, images, biological signals such as ECG, radar signals, and many others. Processing of such signals includes filtering, storage and reconstruction, separation of information from noise, compression, and feature extraction.
List of algebraic coding theory topics
List of information theory topics
List of cryptography topics
Probability and statistics
Probability theory is the formalization and study of the mathematics of uncertain events or knowledge. The related field of mathematical statistics develops statistical theory with mathematics. Statistics, the science concerned with collecting and analyzing data, is an autonomous discipline (and not a subdiscipline of applied mathematics).
Catalog of articles in probability theory
List of probability topics
List of stochastic processes topics
List of probability distributions
List of statistics topics
Outline of regression analysis
Game theory
Game theory is a branch of mathematics that uses models to study interactions with formalized incentive structures ("games"). It has applications in a variety of fields, including economics, anthropology, political science, social psychology and military strategy.
Glossary of game theory
List of games in game theory
Operations research
Operations research is the study and use of mathematical models, statistics, and algorithms to aid in decision-making, typically with the goal of improving or optimizing the performance of real-world systems.
List of knapsack problems
List of network theory topics
Methodology
List of graphical methods
List of mathematics-based methods
List of rules of inference
Mathematical statements
A mathematical statement amounts to a proposition or assertion of some mathematical fact, formula, or construction. Such statements include axioms and the theorems that may be proved from them, conjectures that may be unproven or even unprovable, and also algorithms for computing the answers to questions that can be expressed mathematically.
List of algorithms
List of axioms
List of conjectures
List of conjectures by Paul Erdős
Combinatorial principles
List of equations
List of formulae involving pi
List of representations of e
List of inequalities
List of lemmas
List of mathematical identities
List of mathematical proofs
List of theorems
General concepts
List of convexity topics
List of dualities
List of exceptional set concepts
List of exponential topics
List of factorial and binomial topics
List of fractal topics
List of logarithm topics
List of mathematical properties of points
List of numeral system topics
List of order topics
List of partition topics
List of permutation topics
List of polynomial topics
List of properties of sets of reals
List of transforms
Mathematical objects
Among mathematical objects are numbers, functions, sets, a great variety of things called "spaces" of one kind or another, algebraic structures such as rings, groups, or fields, and many other things.
List of mathematical examples
List of algebraic surfaces
List of curves
List of complex reflection groups
List of complexity classes
List of examples in general topology
List of finite simple groups
List of Fourier-related transforms
List of manifolds
List of mathematical constants
List of mathematical functions
List of mathematical knots and links
List of mathematical shapes
List of mathematical spaces
List of matrices
List of numbers
List of polygons, polyhedra and polytopes
List of regular polytopes
List of simple Lie groups
List of small groups
List of special functions and eponyms
List of surfaces
Table of Lie groups
Equations named after people
Scientific equations named after people
About mathematics
List of letters used in mathematics and science
List of mathematical societies
List of mathematics competitions
List of mathematics history topics
List of publications in mathematics
List of mathematics journals
Mathematicians
Mathematicians study and research in all the different areas of mathematics. The publication of new discoveries in mathematics continues at an immense rate in hundreds of scientific journals, many of them devoted to mathematics and many devoted to subjects to which mathematics is applied (such as theoretical computer science and theoretical physics).
List of films about mathematicians
List of game theorists
List of geometers
List of logicians
List of mathematicians
List of mathematical probabilists
List of statisticians
Work of particular mathematicians
List of things named after Niels Henrik Abel
List of things named after George Airy
List of things named after Jean d'Alembert
List of things named after Archimedes
List of things named after Vladimir Arnold
List of things named after Emil Artin
List of things named after Stefan Banach
List of things named after Thomas Bayes
List of things named after members of the Bernoulli family
List of things named after Jakob Bernoulli
List of things named after Friedrich Bessel
List of things named after Élie Cartan
List of things named after Augustin-Louis Cauchy
List of things named after Arthur Cayley
List of things named after Pafnuty Chebyshev
List of things named after John Horton Conway
List of things named after Richard Dedekind
List of things named after Pierre Deligne
List of things named after Peter Gustav Lejeune Dirichlet
List of things named after Albert Einstein
List of things named after Euclid
List of things named after Leonhard Euler
List of things named after Paul Erdős
List of things named after Pierre de Fermat
List of things named after Fibonacci
List of things named after Joseph Fourier
List of things named after Erik Fredholm
List of things named after Ferdinand Georg Frobenius
List of things named after Carl Friedrich Gauss
List of things named after Évariste Galois
List of things named after Hermann Grassmann
List of things named after Alexander Grothendieck
List of things named after Jacques Hadamard
List of things named after William Rowan Hamilton
List of things named after Erich Hecke
List of things named after Eduard Heine
List of things named after Charles Hermite
List of things named after David Hilbert
List of things named after W. V. D. Hodge
List of things named after Carl Gustav Jacob Jacobi
List of things named after Johannes Kepler
List of things named after Felix Klein
List of things named after Joseph-Louis Lagrange
List of things named after Johann Lambert
List of things named after Pierre-Simon Laplace
List of things named after Adrien-Marie Legendre
List of things named after Gottfried Leibniz
List of things named after Sophus Lie
List of things named after Joseph Liouville
List of things named after Andrey Markov
List of things named after John Milnor
List of things named after Hermann Minkowski
List of things named after John von Neumann
List of things named after Isaac Newton
List of things named after Emmy Noether
List of things named after Henri Poincaré
List of things named after Siméon Denis Poisson
List of things named after Pythagoras
List of things named after Srinivasa Ramanujan
List of things named after Bernhard Riemann
List of things named after Issai Schur
List of things named after Anatoliy Skorokhod
List of things named after George Gabriel Stokes
List of things named after Jean-Pierre Serre
List of things named after James Joseph Sylvester
List of things named after Alfred Tarski
List of things named after Alan Turing
List of things named after Stanislaw Ulam
List of things named after Karl Weierstrass
List of things named after André Weil
List of things named after Hermann Weyl
List of things named after Norbert Wiener
List of things named after Ernst Witt
Reference tables
List of mathematical reference tables
List of moments of inertia
Table of derivatives
Integrals
In calculus, the integral of a function is a generalization of area, mass, volume, sum, and total. The following pages list the integrals of many different functions.
Lists of integrals
List of integrals of exponential functions
List of integrals of hyperbolic functions
List of integrals of inverse hyperbolic functions
List of integrals of inverse trigonometric functions
List of integrals of irrational functions
List of integrals of logarithmic functions
List of integrals of rational functions
List of integrals of trigonometric functions
Journals
List of mathematics journals
List of mathematics education journals
:Category:History of science journals
:Category:Philosophy of science literature
Meta-lists
Glossary of mathematical symbols
List of important publications in mathematics
List of important publications in statistics
List of mathematical theories
List of mathematics categories
List of mathematical symbols by subject
Table of logic symbols
Table of mathematical symbols
See also
Areas of mathematics
Glossary of areas of mathematics
Outline of mathematics
Timeline of women in mathematics
Others
Lists of unsolved problems in mathematics
List of order theory topics
List of topics related to π
Notes
Definition from the Journal of Mathematical Physics.
External links and references
2000 Mathematics Subject Classification from the American Mathematical Society, a scheme that many mathematics research journals ask authors to use when classifying their submissions; published articles then include these classifications.
The Mathematical Atlas
Maths Formula
Outlines of mathematics and logic
Outlines
Lists of topics | Lists of mathematics topics | [
"Mathematics"
] | 3,182 | [
"nan"
] |
527,390 | https://en.wikipedia.org/wiki/Hilbert%27s%20eighth%20problem | Hilbert's eighth problem is one of David Hilbert's list of open mathematical problems posed in 1900. It concerns number theory, and in particular the Riemann hypothesis, although it is also concerned with the Goldbach conjecture. It asks for more work on the distribution of primes and generalizations of Riemann hypothesis to other rings where prime ideals take the place of primes.
Riemann hypothesis and generalizations
Hilbert calls for a solution to the Riemann hypothesis, which has long been regarded as the deepest open problem in mathematics. Given the solution, he calls for more thorough investigation into Riemann's zeta function and the prime number theorem.
Goldbach conjecture
Hilbert calls for a solution to the Goldbach conjecture, as well as more general problems, such as finding infinitely many pairs of primes solving a fixed linear diophantine equation.
Generalized Riemann conjecture
Finally, Hilbert calls for mathematicians to generalize the ideas of the Riemann hypothesis to counting prime ideals in a number field.
External links
English translation of Hilbert's original address
References | Hilbert's eighth problem | [
"Mathematics"
] | 212 | [
"Hilbert's problems",
"Mathematical problems"
] |
528,155 | https://en.wikipedia.org/wiki/Solar%20radius | Solar radius is a unit of distance used to express the size of stars in astronomy relative to the Sun. The solar radius is usually defined as the radius to the layer in the Sun's photosphere where the optical depth equals 2/3:
The solar radius is approximately 10 times the average radius of Jupiter, 109 times the radius of the Earth, and 1/215th of an astronomical unit, the approximate distance between Earth and the Sun. The solar radius to either pole and that to the equator differ slightly due to the Sun's rotation, which induces an oblateness on the order of 10 parts per million.
Measurements
The uncrewed SOHO spacecraft was used to measure the radius of the Sun by timing transits of Mercury across the surface during 2003 and 2006. The result was a measured radius of .
Haberreiter, Schmutz & Kosovichev (2008) determined the radius corresponding to the solar photosphere to be . This new value is consistent with helioseismic estimates; the same study showed that previous estimates using inflection point methods had been overestimated by approximately .
Nominal solar radius
In 2015, the International Astronomical Union passed Resolution B3, which defined a set of nominal conversion constants for stellar and planetary astronomy. Resolution B3 defined the nominal solar radius (symbol ) to be equal to exactly . The nominal value, which is the rounded value, within the uncertainty, given by Haberreiter, Schmutz & Kosovichev (2008), was adopted to help astronomers avoid confusion when quoting stellar radii in units of the Sun's radius, even when future observations will likely refine the Sun's actual photospheric radius (which is currently only known to about an accuracy of ±).
Examples
Solar radii as a unit are common when describing spacecraft moving close to the sun. Two spacecraft in the 2010s include:
Solar Orbiter (as close as )
Parker Solar Probe (as close as )
See also
Astronomical unit
Earth radius
Jupiter radius
List of largest stars
Orders of magnitude (length)
Solar luminosity
Solar mass
Solar parallax
References
External links
Radius
Stellar astronomy
Units of length
Radii | Solar radius | [
"Astronomy",
"Mathematics"
] | 442 | [
"Astronomical sub-disciplines",
"Units of length",
"Quantity",
"Units of measurement",
"Stellar astronomy"
] |
528,236 | https://en.wikipedia.org/wiki/Legacy%20Virus | The Legacy Virus is a fictional plague appearing in American comic books featuring the X-Men published by Marvel Comics. It first appeared in an eponymous storyline in Marvel Comics titles, from 1993 to 2001, during which it swept through the mutant population of the Marvel Universe, killing hundreds, as well as mutating so that it affected non-mutant humans as well.
Description
The Legacy Virus, contrary to the name, was a viroid and was released by Stryfe, a terrorist (and clone of Cable raised by Apocalypse) from approximately 2,000 years in the future. It originally existed in two forms, Legacy-1 and Legacy-2, but later mutated into a third form, Legacy-3; all were airborne agents.
Legacy-1 and Legacy-2 searched for a target organism's "X-factor," the sequence of mutant genes that gave a mutant their superpowers. If it did not find an activated X-factor in the target, the viroid would die off, leaving the person completely unaffected. If, however, it did detect the X-factor, it would begin inserting introns into the transcription codings of the victim's mutant RNA, the process commonly being triggered after the patient used their powers for the first time after contracting the disease. The result was a major compromise of the replication and transcription process so disruptive that it eventually rendered the body incapable of creating healthy cells, ultimately resulting in the death of the victim. Prior to death, the viroid causes its host's powers to flare out of control.
Legacy-1 attacked general transcription and replication of all cells, a messy and non-selective process that resulted in a condition akin to a fast-replicating cancer. This is the version that infected Illyana "Magik" Rasputin, sister of Piotr "Colossus" Rasputin.
Legacy-2 was much closer to Stryfe's original template and more in tune with his desire to stir a species war between non-mutant humans and mutants. Its attacks were selective, working only on the X-factor genes. The net result was that a victim would eventually lose control of his superhuman powers. In addition to developing at a far slower rate than Legacy-1, victims of Legacy-2 developed skin lesions, fever, cough and overall weakness (symptoms displayed by the telepathic X-Man Revanche). The slow nature of Legacy-2 is why St. John "Pyro" Allerdyce survived for years following his initial infection.
Legacy-3 was accidentally created in the body of the mutant woman Infectia. Her powers allowed her to scan and visualize the genetic structure of a living being, then alter it according to her own whims. When Infectia was infected with the Legacy-2 Virus, her powers caused a replication error that removed the viroid's conditioning to infect individuals only if the X-gene was present. Legacy-3 was capable of infecting any hominid.
The Legacy Virus is strongly suggested to be an allegory for the AIDS epidemic. Although all strains of the Legacy Virus were more dangerous than HIV, they shared similar symptoms such as skin lesions, fever, fatigue, and coughing. In addition, comics featuring the Legacy Virus illustrated the similar social impact of the further isolation of a stigmatized group.
History
The Legacy Virus first appeared in X-Force #18. It was based on a virus created by Apocalypse in the distant future, which was intended to kill the remaining non-mutants. At the time that this alternate version of Apocalypse was killed, the virus had not been perfected, and much like Legacy-3, it targeted all humans indiscriminately. As a result, this virus was never deployed, until Stryfe acquired it and altered it for his own purposes.
During the X-Cutioner's Song crossover, the villain Stryfe gave Mister Sinister a canister that he claimed contained 2,000 years worth of genetic material from the Summers bloodline. When Gordon Lefferts, a scientist working for Sinister, opened the canister after Stryfe was apparently killed by Cable, they found nothing inside. Far worse than that, the canister actually contained a plague, Stryfe's "legacy" to the world.
When Colossus' sister Illyana fell ill and died of the Legacy Virus in The Uncanny X-Men #303 (Aug. 1993), he left the X-Men and joined Magneto's Acolytes.
Eventually, reporter Trish Tilby, Beast's former lover, reported to the general public the existence of the Legacy Virus. Later, Xavier and Beast call a press conference to assuage fears in the general populace. While watching the press conference, Moira MacTaggert has an insight that the virus worked as a "designer gene".
The virus raged on for some time in the mutant population, until Mystique, in an effort to make the world safe for mutants, modified the virus to affect only humans. When Moira found out about this strain of the virus, she finally grasped what the key to the cure was. Unfortunately, she was mortally wounded by Mystique during the Brotherhood of Mutants' attack on Muir Island and did not live to complete the cure. Professor X did manage to telepathically retrieve the critical information before Moira died.
With this information, Beast was able to synthesize the cure a few weeks later, though one that had a price; the virus had first been released by the death of the first victim, and the release of the cure would have the same effect. Colossus, who did not want any more people to suffer his sister's fate, snuck into McCoy's lab and injected the cure into himself and activated his mutant powers, transforming his body into organic steel. This "supercharged" the Legacy cure, simultaneously killing him and stopping the spread of the Legacy virus, instantaneously curing even those dying of the virus at that moment (Although it was later revealed Colossus had been resurrected by alien technology and was being used as a test subject for an experimental formula that would reverse mutations before he was rescued by the X-Men).
Unfortunately, this rapid cure had unforeseen geopolitical effects. Thousands of Legacy-infected mutants had been quarantined on the island nation of Genosha, which was controlled by Magneto at the time. The instant cure gave Magneto a vast army overnight and allowed him to begin carrying out his plans for world conquest in the Eve of Destruction crossover.
In X-Factor vol. 3 #10, it was revealed that Singularity Investigations was creating a virus designed to kill mutants. While Jamie Madrox referred to this as the Legacy Virus, it is unclear whether Singularity is actually recreating Stryfe's virus, creating what is to later be Stryfe's virus, or merely engineering a new one with a similar purpose.
In X-Force #7, the Vanisher is seen to be in possession of a mutated strain of the Legacy Virus. It was later destroyed by Elixir in X-Force #10.
During the Skrull Invasion of Earth, Beast discovers that the Legacy Virus can infect Skrulls as well. Beast ponders whether to use it against the invading aliens. Cyclops decides to use it to get the Skrulls to surrender.
The Legacy Virus has returned once more as it turned out there were other samples that fell into the hands of Bastion. Samples have been injected into Beautiful Dreamer and Fever Pitch by the Leper Queen in order to cause their powers to go berserk and kill themselves and thousands of humans during an anti-mutant rally held by the Friends of Humanity. It was later revealed that Hellion and Surge were also injected with the Legacy Virus.
Dark Beast is also rumored to have a sample of the Legacy Virus.
Infection list
Listed below in alphabetical order are the characters infected by the Legacy Virus:
Other versions
The Ultimate Marvel universe version of the Legacy Virus is created by Nick Fury, in an attempt to replicate the Super Soldier experiment that created Captain America, using Beast's blood. The virus turns normal humans into super-strong beings, but is fatal to mutants, prompting Fury to hold Beast in S.H.I.E.L.D. custody to coerce him to find a cure for it, in the event that there is ever an outbreak.
In other media
Television
In X-Men: The Animated Series, a variation of the Legacy Virus was used in a brief storyline where it was the creation of Apocalypse, who had created the virus with the aid of Graydon Creed and the Friends of Humanity, infecting innocent people and claiming that mutants were the ones who had caused the plague. In an attempt to stop the plague, Bishop came back from the future to destroy Apocalypse's work before the virus could move on to mutants, but as a result vital antibodies that would allow the mutant race to survive future plagues were never created. Traveling back from even further in the future, Cable was able to come up with a compromise that allowed both Bishop's and his own missions to succeed; although the plague never made the jump to mutants on a large-scale basis, Cable nevertheless ensured that Wolverine would be infected, thus creating the necessary antibodies while not killing any mutants thanks to Wolverine's healing factor.
Film
In Logan, a variation of the Legacy Virus is created by Zander Rice to wipe out the world's mutant population to avenge his father's death by Wolverine during the Weapon X program. By the year 2029, mutants are on the brink of extinction and Rice is working for Alkali-Transigen to engineer child mutants to be used as soldiers.
Video games
In X-Men 2: Game Master's Legacy, the game's plot centers around the X-Men and the Legacy Virus.
In the video game Marvel: Ultimate Alliance, a side mission includes saving a S.H.I.E.L.D. Omega Base Computer, which contains research data on the Legacy Virus. If the computer is saved, the data will be used to create a cure for the virus. If not, the Legacy Virus will become a plague that drives the mutant race nearly to extinction.
References
External links
Legacy Virus at Marvel Wiki
X-Men
Fictional viruses
Fictional microorganisms
X-Men storylines
Viral outbreaks in comics | Legacy Virus | [
"Biology"
] | 2,141 | [
"Viruses",
"Microorganisms",
"Fictional viruses",
"Fictional microorganisms"
] |
528,425 | https://en.wikipedia.org/wiki/Half-month | The half-month is a calendar subdivision used in astronomy. Each calendar month is separated into two parts:
days 1 to 15
day 16 until the end of the month.
Newly identified small Solar System bodies, such as comets and asteroids, are given systematic designations that contain the half-month encoded as a letter of the English alphabet. For example, an object discovered in the second half of January would be identified with the letter B; if found in the first half of February, the letter would be C. The letter I is not used, to prevent confusion with the number 1. Instead, the letters proceed directly from H (April 16–30) to J (May 1–15). The letter appears in the provisional designation, then when the object is confirmed the letter is incorporated into the comet designation (for comets) or minor planet designation (for asteroids and other minor planets).
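The letter assignment itself is a simple mapping from the discovery date. The Python sketch below is an illustrative implementation of the scheme described above (two letters per month, with I skipped), not an official reference implementation.

```python
# Illustrative sketch of how a discovery date maps to the half-month letter
# used in provisional designations: two letters per month, "I" omitted, so
# December 16-31 is "Y".

import datetime

HALF_MONTH_LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXY"  # 24 letters, "I" skipped

def half_month_letter(date: datetime.date) -> str:
    index = 2 * (date.month - 1) + (0 if date.day <= 15 else 1)
    return HALF_MONTH_LETTERS[index]

print(half_month_letter(datetime.date(2024, 1, 20)))  # 'B' (second half of January)
print(half_month_letter(datetime.date(2024, 5, 3)))   # 'J' (first half of May)
```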
See also
Naming of comets
Fortnight
References
External links
Astronomy.com explains the usage of "half-month"
Units of time
Astronomy | Half-month | [
"Physics",
"Astronomy",
"Mathematics"
] | 206 | [
"Physical quantities",
"Time",
"Time stubs",
"Units of time",
"Quantity",
"nan",
"Spacetime",
"Units of measurement"
] |
529,245 | https://en.wikipedia.org/wiki/Karl%20Fischer%20titration | In analytical chemistry, Karl Fischer titration is a classic titration method that uses coulometric or volumetric titration to determine trace amounts of water in a sample. It was invented in 1935 by the German chemist Karl Fischer. Today, the titration is done with an automated Karl Fischer titrator.
Chemical principle
The elementary reaction responsible for water quantification in the Karl Fischer titration is the oxidation of sulfur dioxide (SO2) with iodine:
H2O + SO2 + I2 → SO3 + 2 HI
This elementary reaction consumes exactly one molar equivalent of water per molar equivalent of iodine. Iodine is added to the solution until it is present in excess, marking the end point of the titration, which can be detected by potentiometry. The reaction is run in an alcohol solution containing a base, which consumes the sulfur trioxide and hydroiodic acid produced.
Coulometric titration
The main compartment of the titration cell contains the anode solution plus the analyte. The anode solution consists of an alcohol (ROH), a base (B), sulfur dioxide (SO2) and KI. Typical alcohols that may be used include ethanol, diethylene glycol monoethyl ether, or methanol, sometimes referred to as Karl Fischer grade. A common base is imidazole.
The titration cell also consists of a smaller compartment with a cathode immersed in the anode solution of the main compartment. The two compartments are separated by an ion-permeable membrane.
The Pt anode generates I2 from the KI when current is provided through the electric circuit. The net reaction, as shown below, is the oxidation of SO2 by I2. One mole of I2 is consumed for each mole of H2O. In other words, 2 moles of electrons are consumed per mole of water.
2 I− → I2 + 2 e−
B·I2 + B·SO2 + B + H2O → 2 BH+I− + BSO3
BSO3 + ROH → BHRSO4
The end point is detected most commonly by a bipotentiometric titration method. A second pair of Pt electrodes is immersed in the anode solution. The detector circuit maintains a constant current between the two detector electrodes during titration. Prior to the equivalence point, the solution contains I− but little I2. At the equivalence point, excess I2 appears and an abrupt voltage drop marks the end point. The amount of charge needed to generate the I2 and reach the end point can then be used to calculate the amount of water in the original sample.
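Because exactly two moles of electrons are consumed per mole of water, the measured charge converts directly into a water mass via Faraday's law. The Python sketch below illustrates this bookkeeping; the charge used in the example is an assumed value.

```python
# Minimal sketch of the coulometric bookkeeping described above: 2 moles of
# electrons per mole of water, so the water content follows from the total
# charge via Faraday's law. The example charge is an assumed value.

FARADAY = 96485.332   # C per mole of electrons
M_WATER = 18.015      # g/mol

def water_mass_ug(charge_coulombs: float) -> float:
    moles_water = charge_coulombs / (2 * FARADAY)   # 2 e- consumed per H2O
    return moles_water * M_WATER * 1e6              # grams -> micrograms

print(water_mass_ug(10.72))  # ~1000 ug of water for ~10.72 C of charge
```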
Volumetric titration
The volumetric titration is based on the same principles as the coulometric titration, except that the anode solution above is now used as the titrant solution. The titrant consists of an alcohol (ROH), a base (B), SO2, and a known concentration of I2. Pyridine has been used as the base in this case.
One mole of I2 is consumed for each mole of H2O. The titration reaction proceeds as above, and the end point may be detected by a bipotentiometric method as described above.
Disadvantages and advantages
The popularity of the Karl Fischer titration (henceforth referred to as KF) is due in large part to several practical advantages that it holds over other methods of moisture determination, such as accuracy, speed and selectivity.
KF is selective for water, because the titration reaction itself consumes water. In contrast, measurement of mass loss on drying will detect the loss of any volatile substance. However, the strong redox chemistry of the reagent (the iodine/sulfur dioxide couple) means that redox-active sample constituents may react with the reagents. For this reason, KF is unsuitable for solutions containing e.g. dimethyl sulfoxide.
KF has a high accuracy and precision, typically within 1% of available water, e.g. 3.00% appears as 2.97–3.03%. Although KF is a destructive analysis, the sample quantity is small and is typically limited by the accuracy of weighing. For example, in order to obtain an accuracy of 1% using a scale with the typical accuracy of 0.2 mg, the sample must contain 20 mg water, which is e.g. 200 mg for a sample with 10% water. For coulometers, the measuring range is from 1–5 ppm to about 5%. Volumetric KF readily measures samples up to 100%, but requires impractically large amounts of sample for analytes with less than 0.05% water. The KF response is linear. Therefore, single-point calibration using a calibrated 1% water standard is sufficient and no calibration curves are necessary.
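The sample-size reasoning above can be written as a small calculation. The Python sketch below reproduces that arithmetic (a balance error of 0.2 mg and a 1% target relative error, as in the example); the function name and defaults are illustrative, not part of any standard.

```python
# Minimal sketch of the sample-size arithmetic above: how much sample must be
# weighed so that the balance error stays within a target relative error of
# the water determination. The numbers mirror the example in the text.

def required_sample_mass_mg(water_fraction: float,
                            balance_error_mg: float = 0.2,
                            target_relative_error: float = 0.01) -> float:
    required_water_mg = balance_error_mg / target_relative_error   # e.g. 0.2 mg / 1% = 20 mg of water
    return required_water_mg / water_fraction                      # sample mass containing that much water

print(required_sample_mass_mg(0.10))    # 200 mg of a 10% water sample
print(required_sample_mass_mg(0.0005))  # 40,000 mg (40 g) for a 0.05% water sample
```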
Little sample preparation is needed: a liquid sample can usually be directly injected using a syringe. The analysis is typically complete within a minute. However, KF suffers from an error called drift, which is an apparent water input that can confuse the measurement. The glass walls of the vessel adsorb water, and if any water leaks into the cell, the slow release of water into the titration solution can continue for a long time. Therefore, before measurement, it is necessary to carefully dry the vessel and run a 10–30-minute "dry run" in order to calculate the rate of drift. The drift is then subtracted from the result.
KF is suitable for measuring liquids and, with special equipment, gases. The major disadvantage with solids is that the water has to be accessible and easily brought into methanol solution. Many common substances, especially foods such as chocolate, release water slowly and with difficulty, requiring additional efforts to reliably bring the total water content into contact with the Karl Fischer reagents. For example, a high-shear mixer may be installed to the cell in order to break the sample. KF has problems with compounds with strong binding to water, as in water of hydration, for example with lithium chloride, so KF is unsuitable for the special solvent LiCl/DMAc.
KF is suitable for automation. Generally, KF is conducted using a separate KF titrator or, for volumetric titration, a KF titration cell installed in a general-purpose titrator. There are also oven attachments for materials that cannot be analyzed directly in the cell; the important requirement is that the material does not decompose and generate additional water when heated to drive off its moisture. The oven attachment also supports automation of sample handling.
Volumetric titration of coloured samples, for which visual detection of the titration endpoint is impractical, is also possible by using UV/VIS spectrophotometric detection.
See also
Titration
Moisture analysis
Literature
Water determination by Karl Fischer Titration by Peter A. Bruttel, Regina Schlink, Metrohm AG
References
External links
EMD Chemicals AQUASTAR Tech Notes
Titration
German inventions of the Nazi period | Karl Fischer titration | [
"Chemistry"
] | 1,446 | [
"Instrumental analysis",
"Titration"
] |