id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
14,764,190 | https://en.wikipedia.org/wiki/EPHB6 | Ephrin type-B receptor 6 is a protein that in humans is encoded by the EPHB6 gene.
Ephrin receptors and their ligands, the ephrins, mediate numerous developmental processes, particularly in the nervous system. Based on their structures and sequence relationships, ephrins are divided into the ephrin-A (EFNA) class, which are anchored to the membrane by a glycosylphosphatidylinositol linkage, and the ephrin-B (EFNB) class, which are transmembrane proteins. The Eph family of receptors are divided into 2 groups based on the similarity of their extracellular domain sequences and their affinities for binding ephrin-A and ephrin-B ligands. Ephrin receptors make up the largest subgroup of the receptor tyrosine kinase (RTK) family. The ephrin receptor encoded by this gene lacks the kinase activity of most receptor tyrosine kinases and binds to ephrin-B ligands.
References
Further reading
Tyrosine kinase receptors | EPHB6 | [
"Chemistry"
] | 228 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
14,765,577 | https://en.wikipedia.org/wiki/TRIM24 | Tripartite motif-containing 24 (TRIM24) also known as transcriptional intermediary factor 1α (TIF1α) is a protein that, in humans, is encoded by the TRIM24 gene.
Function
The protein encoded by this gene mediates transcriptional control by interaction with the activation function 2 (AF2) region of several nuclear receptors, including the estrogen, retinoic acid, and vitamin D3 receptors. The protein localizes to nuclear bodies and is thought to associate with chromatin and heterochromatin-associated factors. The protein is a member of the tripartite motif (TRIM) family. The TRIM motif includes three zinc-binding domains – a RING, a B-box type 1 and a B-box type 2 – and a coiled-coil region. Two alternatively spliced transcript variants encoding different isoforms have been described for this gene.
Interactions
TRIM24 has been shown to interact with Mineralocorticoid receptor, TRIM33, Estrogen receptor alpha and Retinoid X receptor alpha.
See also
Transcription coregulator
References
Further reading
External links
Gene expression
Transcription coregulators | TRIM24 | [
"Chemistry",
"Biology"
] | 233 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
14,766,164 | https://en.wikipedia.org/wiki/EPHA8 | Ephrin type-A receptor 8 is a protein that in humans is encoded by the EPHA8 gene.
Function
This gene encodes a member of the ephrin receptor subfamily of the protein-tyrosine kinase family. EPH and EPH-related receptors have been implicated in mediating developmental events, particularly in the nervous system. Receptors in the EPH subfamily typically have a single kinase domain and an extracellular region containing a Cys-rich domain and 2 fibronectin type III repeats. The ephrin receptors are divided into 2 groups based on the similarity of their extracellular domain sequences and their affinities for binding ephrin-A and ephrin-B ligands. The protein encoded by this gene functions as a receptor for ephrin A2, A3 and A5 and plays a role in short-range contact-mediated axonal guidance during development of the mammalian nervous system.
Interactions
EPHA8 has been shown to interact with FYN.
References
Further reading
Tyrosine kinase receptors | EPHA8 | [
"Chemistry"
] | 211 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
14,766,178 | https://en.wikipedia.org/wiki/ERV3 | HERV-R_7q21.2 provirus ancestral envelope (Env) polyprotein is a protein that in humans is encoded by the ERV3 gene.
Function
The human genome includes many retroelements including the human endogenous retroviruses (HERVs), which compose about 7-8% of the human genome. ERV3, one of the most studied HERVs, is thought to have integrated 30 to 40 million years ago and is present in higher primates with the exception of gorillas. Taken together, the observation of genome conservation, the detection of transcript expression, and the presence of conserved ORFs constitute circumstantial evidence for a functional role. Similar endogenous retroviral Env genes like syncytin-1 have important roles in placental formation and embryonic development by enabling cell-cell fusion. Despite its origin as an Env gene, ERV3 has a premature stop codon that precludes any cell-cell fusion functionality. However, it does have an immunosuppressive function that helps the fetus evade a damaging maternal immune response, which may explain its high expression in the placenta.
There is speculation that ERV3 originally did have cell-cell fusion functionality in the placenta, but that it was eventually supplanted by other Env genes like syncytin, leading to a loss of this function.
Another functional role is suggested by the observation that downregulation of ERV3 is reported in choriocarcinoma.
References
Further reading
External links
Transcription factors | ERV3 | [
"Chemistry",
"Biology"
] | 322 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
1,620,216 | https://en.wikipedia.org/wiki/Algebraic%20graph%20theory | Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.
Branches of algebraic graph theory
Using linear algebra
The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Laplacian matrix of a graph (this part of algebraic graph theory is also called spectral graph theory). For the Petersen graph, for example, the spectrum of the adjacency matrix is (−2, −2, −2, −2, 1, 1, 1, 1, 1, 3). Several theorems relate properties of the spectrum to other graph properties. As a simple example, a connected graph with diameter D will have at least D+1 distinct values in its spectrum. Aspects of graph spectra have been used in analysing the synchronizability of networks.
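As a minimal illustration of the spectral approach, the following Python sketch (assuming the networkx and numpy libraries are available) computes the adjacency spectrum of the Petersen graph and recovers the eigenvalues listed above.

```python
# Sketch: compute the adjacency spectrum of the Petersen graph (assumes networkx and numpy).
import networkx as nx
import numpy as np

G = nx.petersen_graph()                       # 10 vertices, 15 edges, diameter 2
A = nx.to_numpy_array(G)                      # adjacency matrix
eigenvalues = np.sort(np.linalg.eigvalsh(A))  # symmetric matrix -> real eigenvalues

print(np.round(eigenvalues, 6))
# Expected: [-2. -2. -2. -2.  1.  1.  1.  1.  1.  3.]
# Three distinct values, consistent with diameter 2 (at least D+1 = 3 distinct values).
```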
Using group theory
The second branch of algebraic graph theory involves the study of graphs in connection to group theory, particularly automorphism groups and geometric group theory. The focus is placed on various families of graphs based on symmetry (such as symmetric graphs, vertex-transitive graphs, edge-transitive graphs, distance-transitive graphs, distance-regular graphs, and strongly regular graphs), and on the inclusion relationships between these families. Certain of such categories of graphs are sparse enough that lists of graphs can be drawn up. By Frucht's theorem, all groups can be represented as the automorphism group of a connected graph (indeed, of a cubic graph). Another connection with group theory is that, given any group, symmetrical graphs known as Cayley graphs can be generated, and these have properties related to the structure of the group.
This second branch of algebraic graph theory is related to the first, since the symmetry properties of a graph are reflected in its spectrum. In particular, the spectrum of a highly symmetrical graph, such as the Petersen graph, has few distinct values (the Petersen graph has 3, which is the minimum possible, given its diameter). For Cayley graphs, the spectrum can be related directly to the structure of the group, in particular to its irreducible characters.
Studying graph invariants
Finally, the third branch of algebraic graph theory concerns algebraic properties of invariants of graphs, and especially the chromatic polynomial, the Tutte polynomial and knot invariants. The chromatic polynomial of a graph, for example, counts the number of its proper vertex colorings. For the Petersen graph, this polynomial is t(t - 1)(t - 2)(t^7 - 12t^6 + 67t^5 - 230t^4 + 529t^3 - 814t^2 + 775t - 352). In particular, this means that the Petersen graph cannot be properly colored with one or two colors, but can be colored in 120 different ways with 3 colors. Much work in this area of algebraic graph theory was motivated by attempts to prove the four color theorem. However, there are still many open problems, such as characterizing graphs which have the same chromatic polynomial, and determining which polynomials are chromatic.
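The chromatic polynomial can be checked numerically for small graphs. The sketch below is a brute-force illustration (assuming networkx is available); it simply counts proper colorings of the Petersen graph for k = 1, 2, 3 colors and reproduces the values 0, 0, and 120 quoted above.

```python
# Sketch: count proper vertex colorings of the Petersen graph by brute force
# (assumes networkx; iterating over k^10 color assignments is small enough here).
from itertools import product
import networkx as nx

G = nx.petersen_graph()
edges = list(G.edges())

def count_proper_colorings(k: int) -> int:
    count = 0
    for coloring in product(range(k), repeat=G.number_of_nodes()):
        if all(coloring[u] != coloring[v] for u, v in edges):
            count += 1
    return count

for k in (1, 2, 3):
    print(k, count_proper_colorings(k))
# Expected: 1 -> 0, 2 -> 0, 3 -> 120, matching the chromatic polynomial at these points.
```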
See also
Spectral graph theory
Algebraic combinatorics
Algebraic connectivity
Dulmage–Mendelsohn decomposition
Graph property
Adjacency matrix
References
External links | Algebraic graph theory | [
"Mathematics"
] | 677 | [
"Mathematical relations",
"Graph theory",
"Algebra",
"Algebraic graph theory"
] |
1,620,643 | https://en.wikipedia.org/wiki/Strain%20energy | In physics, the elastic potential energy gained by a wire during elongation with a tensile (stretching) or compressive (contractile) force is called strain energy. For linearly elastic materials, strain energy is:
where is stress, is strain, is volume, and is Young's modulus:
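A minimal numerical sketch of the formula above; the stress, volume, and Young's modulus values are illustrative assumptions, not values from the article.

```python
# Sketch: strain energy of a linearly elastic bar, U = 0.5*V*sigma*eps with eps = sigma/E.
# The material and load values below are assumed for illustration (a steel bar).

def strain_energy(stress_pa: float, volume_m3: float, youngs_modulus_pa: float) -> float:
    strain = stress_pa / youngs_modulus_pa        # eps = sigma/E (linear elasticity)
    return 0.5 * volume_m3 * stress_pa * strain   # U = 0.5*V*sigma*eps

E_steel = 200e9   # Pa, typical Young's modulus of steel (assumed)
sigma = 100e6     # Pa, applied stress (assumed)
V = 1e-4          # m^3, bar volume (assumed)

print(f"U = {strain_energy(sigma, V, E_steel):.2f} J")   # 2.5 J for these assumed values
```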
Molecular strain
In a molecule, strain energy is released when the constituent atoms are allowed to rearrange themselves in a chemical reaction. The external work done on an elastic member in causing it to distort from its unstressed state is transformed into strain energy which is a form of potential energy. The strain energy in the form of elastic deformation is mostly recoverable in the form of mechanical work.
For example, the heat of combustion per CH2 unit of cyclopropane (696 kJ/mol) is higher than that of propane (657 kJ/mol), the difference reflecting the ring strain of cyclopropane. Compounds with unusually large strain energy include tetrahedranes, propellanes, cubane-type clusters, fenestranes and cyclophanes.
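As a rough worked example of the arithmetic behind these numbers, the sketch below estimates the ring strain of cyclopropane from the per-CH2 heats of combustion quoted above; the three-CH2 count is the only additional assumption, and the result is only an illustrative estimate.

```python
# Sketch: estimate cyclopropane ring strain from the per-CH2 heats of combustion
# quoted above (696 vs 657 kJ/mol); an illustrative estimate, not a tabulated value.

per_ch2_cyclopropane = 696.0   # kJ/mol per CH2 unit (from text)
per_ch2_unstrained = 657.0     # kJ/mol per CH2 unit in an unstrained alkane (from text)
n_ch2 = 3                      # cyclopropane has three CH2 groups

strain_estimate = n_ch2 * (per_ch2_cyclopropane - per_ch2_unstrained)
print(f"Estimated ring strain: {strain_estimate:.0f} kJ/mol")  # 3 * 39 = 117 kJ/mol
```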
References
Chemical bonding
Structural analysis | Strain energy | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 228 | [
"Structural engineering",
"Aerospace engineering",
"Structural analysis",
"Condensed matter physics",
"Mechanical engineering",
"nan",
"Chemical bonding"
] |
1,621,152 | https://en.wikipedia.org/wiki/Spiritual%20materialism | Spiritual materialism is a term coined by Chögyam Trungpa in his book Cutting Through Spiritual Materialism. The book is a compendium of his talks explaining Buddhism given while opening the Karma Dzong meditation center in Boulder, Colorado. He expands on the concept in later seminars that became books such as Work, Sex, Money. He uses the term to describe mistakes spiritual seekers commit which turn the pursuit of spirituality into an ego-building and confusion-creating endeavor, based on the idea that ego development is counter to spiritual progress.
Conventionally, it is used to describe capitalist and spiritual narcissism, commercial efforts such as "new age" bookstores and wealthy lecturers on spirituality; it might also mean the attempt to build up a list of credentials or accumulate teachings in order to present oneself as a more realized or holy person. Author Jorge Ferrer equates the terms "Spiritual materialism" and "Spiritual Narcissism", though others draw a distinction: spiritual narcissism is the belief that one deserves love and respect, or is better than another, because one has accumulated spiritual training, rather than the belief that accumulating training will bring an end to suffering.
Lords of Materialism
In Trungpa's presentation, spiritual materialism can fall into three categories — what he calls the three "Lords of Materialism" (Tibetan: lalo literally "barbarian") — in which a form of materialism is misunderstood as bringing long-term happiness but instead brings only short-term entertainment followed by long-term suffering:
Physical materialism is the belief that possessions can bring release from suffering. In Trungpa's view, they may bring temporary happiness but then more suffering in the endless pursuit of creating one's environment to be just right. Or on another level it may cause a misunderstanding like, "I am rich because I have this or that" or "I am a teacher (or whatever) because I have a diploma (or whatever)."
Psychological materialism is the belief that a particular philosophy, belief system, or point of view will bring release from suffering. So seeking refuge by strongly identifying with a particular religion, philosophy, political party or viewpoint, for example, would be psychological materialism. From this the conventional usage of spiritual materialism arises: by identifying oneself as a Buddhist or with some other label, or by collecting initiations and spiritual accomplishments, one further constructs a solidified view of ego. Trungpa characterizes the goal of psychological materialism as using external concepts, pretexts, and ideas to prove that the ego-driven self exists, which manifests in a particular competitive attitude.
Spiritual materialism is the belief that a certain temporary state of mind is a refuge from suffering. An example would be using meditation practices to create a peaceful state of mind, or using drugs or alcohol to remain in a numbed out or a euphoric state. According to Trungpa, these states are temporary and merely heighten the suffering when they cease. So attempting to maintain a particular emotional state of mind as a refuge from suffering, or constantly pursuing particular emotional states of mind like being in love, will actually lead to more long-term suffering.
Ego
The underlying source of these three approaches to finding happiness is based, according to Trungpa, on the mistaken notion that one's ego is inherently existent and a valid point of view. He claims that is incorrect, and therefore the materialistic approaches have an invalid basis to begin with. The message in summary is, "Don't try to reinforce your ego through material things, belief systems like religion, or certain emotional states of mind." In his view, the point of religion is to show you that your ego doesn't really exist inherently. Ego is something you build up to make you think you exist, but it is not necessary and in the long run causes more suffering.
References
Carson, Richard David (2003) Taming Your Gremlin: A Surprisingly Simple Method for Getting Out of Your Own Way
Ferrer, Jorge Noguera (2001) Revisioning Transpersonal Theory: A Participatory Vision of Human Spirituality
Hart, Tobin (2004) The Secret Spiritual World of Children
Potter, Richard and Potter, Jan (2006) Spiritual Development for Beginners: A Simple Guide to Leading a Purpose Filled Life
Trungpa, Chögyam (1973). Cutting Through Spiritual Materialism. Boston, Massachusetts: Shambhala Publications, Inc.
Trungpa, Chögyam (2011). Work, Sex, Money: Real Life on the Path of Mindfulness. Boston, Massachusetts: Shambhala Publications, Inc. Based on a series of talks given between 1971 and 1981.
External links
Cutting Through Spiritual Materialism excerpts
Work, Sex, Money excerpts
Spiritual Finances
Video of Boulder talks on the subject by Chögyam Trungpa
Materialism
Spiritual philosophy
Tibetan Buddhist philosophical concepts | Spiritual materialism | [
"Physics"
] | 992 | [
"Materialism",
"Matter"
] |
1,621,242 | https://en.wikipedia.org/wiki/Pythagorean%20hammers | According to legend, Pythagoras discovered the foundations of musical tuning by listening to the sounds of four blacksmith's hammers, which produced consonance and dissonance when they were struck simultaneously. According to Nicomachus in his 2nd-century CE Enchiridion harmonices, Pythagoras noticed that hammer A produced consonance with hammer B when they were struck together, and hammer C produced consonance with hammer A, but hammers B and C produced dissonance with each other. Hammer D produced such perfect consonance with hammer A that they seemed to be "singing" the same note. Pythagoras rushed into the blacksmith shop to discover why, and found that the explanation was in the weight ratios. The hammers weighed 12, 9, 8, and 6 pounds respectively. Hammers A and D were in a ratio of 2:1, which is the ratio of the octave. Hammers B and C weighed 8 and 9 pounds. Their ratios with hammer D were (12:8 = 3:2 = perfect fifth) and (12:9 = 4:3 = perfect fourth). The space between B and C is a ratio of 9:8, which is equal to the musical whole tone, or whole step interval ().
The legend is, at least with respect to the hammers, demonstrably false. It is probably a Middle Eastern folk tale. These proportions are indeed relevant to string length (e.g. that of a monochord) — using these founding intervals, it is possible to construct the chromatic scale and the basic seven-tone diatonic scale used in modern music, and Pythagoras might well have been influential in the discovery of these proportions (hence, sometimes referred to as Pythagorean tuning) — but the proportions do not have the same relationship to hammer weight and the tones produced by them. However, hammer-driven chisels with equal cross-section do show an exact (inverse) proportionality between length or weight and natural frequency.
Earlier sources mention Pythagoras' interest in harmony and ratio. Xenocrates (4th century BCE), while not as far as we know mentioning the blacksmith story, described Pythagoras' interest in general terms: "Pythagoras discovered also that the intervals in music do not come into being apart from number; for they are an interrelation of quantity with quantity. So he set out to investigate under what conditions concordant intervals come about, and discordant ones, and everything well-attuned and ill-tuned." Whatever the details of the discovery of the relationship between music and ratio, it is regarded as historically the first empirically secure mathematical description of a physical fact. As such, it is symbolic of, and perhaps leads to, the Pythagorean conception of mathematics as nature's modus operandi. As Aristotle was later to write, "the Pythagoreans construct the whole universe out of numbers". The Micrologus of Guido of Arezzo repeats the legend in Chapter XX.
Contents of the legend
According to the oldest recorded version of the legend, Pythagoras, who lived in the 6th century BC, sought a tool to measure acoustic perceptions, similar to how geometric quantities are measured with a compass or weights with a scale. As he passed by a forge where four (according to a later version, five) craftsmen were working with hammers, he noticed that each strike produced tones of different pitch, which resulted in harmonies when paired. He was able to distinguish the octave, the fifth, and the fourth. Only one pair, which formed the interval between the fourth and the fifth (a major second), did he perceive as dissonant. Excitedly, he ran into the forge to conduct experiments. There, he discovered that the difference in pitch was not dependent on the shape of the hammer, the position of the struck iron, or the force of the blow. Rather, he could associate the pitches with the weights of the hammers, which he measured precisely. He then returned home to continue the experiments.
He hung four equally long, equally strong, and equally twisted strings in succession on a peg attached diagonally to the corner of the walls, weighting them differently by attaching different weights at the bottom. Then he struck the strings in pairs, and the same harmonies resonated as in the forge. The string with the heaviest load of twelve units, when paired with the least burdened string carrying six units, produced an octave. Thus, it was evident that the octave was based on the ratio 12:6, or 2:1. The most tense string yielded a fifth with the second loosest string (eight units), and a fourth with the second tightest string (nine units). From this, it followed that the fifth was based on the ratio 12:8, or 3:2, and the fourth on the ratio 12:9, or 4:3. Again, the ratio of the second tightest string to the loosest, with 9:6, or 3:2, yielded a fifth, and the ratio of the second loosest to the loosest, with 8:6, or 4:3, yielded a fourth. For the dissonant interval between fifth and fourth, it was revealed that it was based on the ratio 9:8, which coincided with the weight measurements carried out in the forge. The octave proved to be the product of the fifth and fourth:

2:1 = (3:2) · (4:3).
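A small sketch in Python (the Fraction type keeps the ratios exact) reduces the legendary weights 12, 9, 8, and 6 to the interval ratios described above and verifies that the octave is the product of the fifth and the fourth.

```python
# Sketch: reduce the legendary weights 12, 9, 8, 6 to the interval ratios named in the text.
from fractions import Fraction

weights = {"A": 12, "B": 9, "C": 8, "D": 6}
interval_names = {Fraction(2, 1): "octave", Fraction(3, 2): "fifth",
                  Fraction(4, 3): "fourth", Fraction(9, 8): "whole tone"}

for a in weights:
    for b in weights:
        if weights[a] > weights[b]:
            ratio = Fraction(weights[a], weights[b])
            name = interval_names.get(ratio, "dissonant/other")
            print(f"{a}:{b} = {ratio} -> {name}")

# Octave as product of fifth and fourth: 3/2 * 4/3 = 2/1
print(Fraction(3, 2) * Fraction(4, 3))
```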
Pythagoras then extended the experiment to various instruments, experimented with vessels, flutes, triangles, the monochord, etc., always finding the same numerical ratios. Finally, he introduced the commonly used terminology for relative pitch.
Further traditions
With the invention of the monochord to investigate and demonstrate the harmonies of pairs of strings with different integer length ratios, Pythagoras is said to have introduced a convenient means of illustrating the mathematical foundation of music theory that he discovered. The monochord, called κανών (kanōn) in Greek and regula in Latin, is a resonating box with a string stretched over it. A measurement scale is attached to the box. The device is equipped with a movable bridge, which allows the vibrating length of the string to be divided; the division can be precisely determined using the measurement scale. This enables measurement of intervals. Despite the name "monochord", which means "one-stringed", there were also multi-stringed monochords that could produce simultaneous intervals. However, it is unclear when the monochord was invented. Walter Burkert dates this achievement to a time after the era of Aristotle, who apparently did not know the device; thus, it was introduced long after Pythagoras' death. On the other hand, Leonid Zhmud suggests that Pythagoras probably conducted his experiment, which led to the discovery of numerical ratios, using the monochord.
Hippasus of Metapontum, an early Pythagorean (late 6th and early 5th centuries BCE), conducted quantitative investigations into musical intervals. The experiment attributed to Hippasus, involving freely oscillating circular plates of varying thicknesses, is physically correct, unlike the alleged experiments of Pythagoras. It is unclear whether Archytas of Tarentum, an important Pythagorean of the 5th/4th centuries BCE, conducted relevant experiments. He was probably more of a theoretician than a practitioner in music, but he referred to the acoustic observations of his predecessors. The musical examples he cites in support of his acoustic theory involve wind instruments; he does not mention experiments with stringed instruments or individual strings. Archytas proceeded from the mistaken hypothesis that pitch depends on the speed of sound propagation and the force of impact on the sound-producing body; in reality, the speed of sound is constant in a given medium, and the force only affects the volume.
Interpretation of the legend
Walter Burkert is of the opinion that despite its physical impossibility, the legend should not be regarded as an arbitrary invention, but rather as having a meaning that can be found in Greek mythology. The Idaean Dactyls, the mythical inventors of blacksmithing, were also, according to myth, the inventors of music. Thus, there already existed a very ancient tradition associating blacksmithing with music, in which the mythical blacksmiths were depicted as possessors of the secret of magical music. Burkert sees the legend of Pythagoras in the blacksmiths as a late transformation and rationalization of the ancient Dactyl myth: In the legend of Pythagoras, the blacksmiths no longer appear as possessors of ancient magical knowledge, but rather, without intending to, they become - albeit unknowing - "teachers" of Pythagoras.
In the Early Middle Ages, Isidore of Seville referred to the biblical blacksmith Tubal as the inventor of music; later authors followed him in this. This tradition once again shows the idea of a relationship between blacksmithing and music, which also appears in non-European myths and legends. Tubal was the half-brother of Jubal, who was considered the ancestor of all musicians. Both were sons of Lamech and thus grandsons of Cain. In some Christian traditions of the Middle Ages, Jubal, who observed his brother Tubal, was equated with Pythagoras.
Another explanation is suggested by Jørgen Raasted, following Leonid Zhmud. Raasted's hypothesis states that the starting point of the legend formation was a report on the experiments of Hippasus. Hippasus used vessels called "sphaírai". This word was mistakenly confused with "sphýrai" (hammers) due to a scribal error, and instead of Hippasus' name, that of Pythagoras was used as the originator of the experiments. From this, the legend of the forge emerged.
Basis of music theory
The whole numbers 6, 8, 9, and 12, in relation to the lowest tone (number 12), correspond to the pure intervals fourth (number 9), fifth (number 8), and octave (number 6) upwards:
Such pure intervals are perceived by the human ear as beat-free, as the volume of the tones does not vary. In sheet music, these four Pythagorean tones can, for example, be expressed with the melodic sequence c' – f' – g' – c":
If this sequence of tones is not considered from the lowest, but from the highest tone (number 6), the following intervals also result: a fourth (number 8), a fifth (number 9), and an octave (number 12) - in this case, however, downward:
The fifth and the octave appear in relation to the fundamental tone in the natural harmonic series, but not the fourth or its octave equivalent. This interval of a fourth occurs in the valveless brass instruments known since ancient times and in the harmonic overtones of stringed instruments.
Significance for the later development of tonal systems
The further investigation of intervals consisting of octaves, fifths, and fourths, and their multiples, eventually led from diatonic scales with seven different tones (heptatonic scale) in Pythagorean tuning to a chromatic scale with twelve tones. The wolf intervals in Pythagorean tuning posed a problem: instead of the pure fifths A♭-E♭ and D♭-A♭, the fifths G♯-E♭ and C♯-A♭, which were detuned by the Pythagorean comma, sounded.
With the advent of polyphony in the second half of the 15th century, in addition to the octave and fifth, the pure third became crucial for major and minor triads. Although this tuning could not be realized on a twelve-note keyboard, it could be well achieved in the meantone temperament. Its disadvantage was that not all keys of the circle of fifths were playable. To remedy this deficiency, tempered tunings were introduced, albeit with the trade-off that the pure third sounded harsher in some keys. Nowadays, most instruments are tuned in equal temperament with 12 keys, so that the octaves are perfectly pure, the fifths are almost pure, and the thirds sound rough.
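As a numerical aside, the following sketch computes the Pythagorean comma (twelve pure fifths against seven octaves) and compares the pure fifth with the equal-tempered fifth; the cents conversion is the standard 1200·log2 of a frequency ratio.

```python
# Sketch: the Pythagorean comma (twelve pure fifths vs. seven octaves) and the
# deviation of the equal-tempered fifth from the pure 3:2 fifth, in cents.
import math

def cents(ratio: float) -> float:
    return 1200 * math.log2(ratio)

pythagorean_comma = (3 / 2) ** 12 / 2 ** 7
print(f"Pythagorean comma: {pythagorean_comma:.6f} ({cents(pythagorean_comma):.2f} cents)")
# ~1.013643, about 23.46 cents

pure_fifth = cents(3 / 2)    # ~701.96 cents
equal_fifth = 700.0          # 7 semitones in 12-tone equal temperament
print(f"Pure fifth: {pure_fifth:.2f} cents; equal-tempered fifth: {equal_fifth:.2f} cents")
```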
The four Pythagorean tones in music
In music, the four harmonic Pythagorean tones play a prominent role in the pentatonic scale, particularly on the first, fourth, fifth, and eighth degrees of diatonic scales (especially in major and minor) and in the composition of cadences as fundamental tones of tonic, subdominant, and dominant. This sequence of tones often appears in cadences with the corresponding chords:
The four Pythagorean tones appear in many compositions. The first tones of the medieval antiphons "Ad te levavi" and "Factus est repente" consist essentially of the four Pythagorean tones, apart from some ornaments and high notes.
Another example is the beginning of the Passacaglia in C minor by Johann Sebastian Bach. The theme consists of fifteen tones, of which a total of ten tones and especially the last four tones are derived from the sequence.
Refutation
Absolute pitch of hammers
The resonance frequency of steel hammers that can be moved by human hands is usually in the ultrasonic range and therefore inaudible. Pythagoras could not have perceived these tones, let alone hammers whose pitches differed by an octave.
Pitch depending on hammer weight
The vibration frequency of a freely oscillating solid body (for a longitudinal vibration) is usually not proportional to its weight or volume; rather, it is inversely proportional to its length, which for similar geometry changes only with the cube root of the volume.
For the Pythagorean hammers, the following ratio numbers apply for similar geometry (values in arbitrary units):
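As a rough sketch of that comparison (the exact table is not reproduced here), the following code computes, under the similar-geometry assumption just described (length proportional to the cube root of the weight, frequency inversely proportional to length), the relative frequencies implied by the legendary weights and shows that they do not yield the 2:1, 3:2, and 4:3 ratios of the legend.

```python
# Sketch (not the article's original table): under the similar-geometry assumption,
# length scales with the cube root of the weight and the natural frequency with its
# inverse, so the legendary weights 12, 9, 8, 6 do NOT give ratios of 2:1, 3:2, 4:3.
weights = [12, 9, 8, 6]

relative_length = [w ** (1 / 3) for w in weights]       # l proportional to m^(1/3)
relative_frequency = [1 / l for l in relative_length]   # f proportional to m^(-1/3)

base = relative_frequency[0]                            # normalize to the 12-unit hammer
for w, f in zip(weights, relative_frequency):
    print(f"weight {w:2d}: relative frequency {f / base:.3f}")
# The 6-unit hammer ends up only about 2^(1/3) ~ 1.26 times higher, not an octave (2.0).
```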
Pitch in relation to string tension
The assumption that the vibration frequency of a string is proportional to the tension is not correct. Rather, the vibration frequency is proportional to the square root of the tension. To double the vibration frequency, four times the tension must be applied and thus a weight four times as heavy must be hung on a string.
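A short sketch of the square-root law for an ideal string: the string length and mass per unit length below are illustrative assumptions, and the point is only that doubling the tension raises the frequency by a factor of √2, so that quadrupling it is needed for the octave.

```python
# Sketch: frequency of an ideal string, f = (1/(2L))*sqrt(F/mu); doubling f needs 4x tension.
# The string parameters below are illustrative assumptions, not values from the text.
import math

def string_frequency(tension_n: float, length_m: float, mass_per_length: float) -> float:
    return math.sqrt(tension_n / mass_per_length) / (2 * length_m)

L = 0.65     # m, string length (assumed)
mu = 0.006   # kg/m, mass per unit length (assumed)

for F in (50.0, 100.0, 200.0):   # doubling the tension each time
    print(f"F = {F:6.1f} N -> f = {string_frequency(F, L, mu):7.2f} Hz")
# Frequency grows only by sqrt(2) per doubling of tension; 4x tension gives the octave.
```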
Physical considerations
Consonance
Integer frequency ratios
The fact that a tone with the fundamental frequency f1 is in consonance with a second tone of frequency f2 = n·f1, an integer multiple (with n = 2, 3, 4, …) of this fundamental frequency, is immediately evident from the fact that the maxima and minima of the tone vibrations are synchronous in time, but can also be explained as follows:
The beat frequency of the two simultaneously sounding tones is mathematically calculated from the difference between the frequencies of these two tones and can be heard as a combination tone:

f_beat = f2 - f1 = n·f1 - f1 = (n - 1)·f1

(see Mathematical description of the beat).
This difference is itself in an integer ratio to the fundamental frequency f1:

f_beat / f1 = n - 1
For all integer multiples of the fundamental frequency in the second tone, there are also integer multiples for the beat frequency (see the table on the right), so that all tones sound consonant.
Rational frequency ratios
Even for two tones whose frequencies are in a rational ratio of n to m, there is a consonance. The frequency of the second tone is given by:

f2 = (n / m) · f1

Consequently, the beat frequency of the two simultaneously sounding tones is given by:

f_beat = f2 - f1 = ((n - m) / m) · f1
Under this condition, the fundamental frequency is always an integer multiple of the beat frequency (see the table on the right). Therefore, no dissonance occurs.
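The following sketch tabulates the difference ("beat") frequency for the interval ratios of the legend above an arbitrary fundamental; the fundamental of 200 Hz is an illustrative assumption, and exact fractions are used so the rational relationships remain visible.

```python
# Sketch: difference ("beat") frequency for simple interval ratios above a fundamental f1,
# and its ratio to f1; for the intervals of the legend these are simple rational numbers.
from fractions import Fraction

f1 = Fraction(200)   # fundamental frequency in Hz (illustrative assumption)

for name, ratio in [("octave 2:1", Fraction(2, 1)), ("fifth 3:2", Fraction(3, 2)),
                    ("fourth 4:3", Fraction(4, 3)), ("whole tone 9:8", Fraction(9, 8))]:
    f2 = ratio * f1
    f_beat = f2 - f1                      # difference of the two frequencies
    print(f"{name:14s} f2 = {float(f2):6.1f} Hz, beat = {float(f_beat):6.2f} Hz, "
          f"beat/f1 = {f_beat / f1}")
```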
Longitudinal oscillations and natural frequency of solid bodies
To estimate a metal block, let's consider a homogeneous rectangular prism with a maximum length L, made of a material with a speed of sound c. For the vibration mode along its longest side (longitudinal oscillation), it has the lowest natural frequency f, with antinodes at both ends and a node in the middle:

f = c / (2·L).
Therefore, the pitch is independent of the mass and cross-sectional area of the prism; the cross-section may even vary along the length. Moreover, the force and velocity when striking the body also play no role. At least this fact corresponds to the observation attributed to Pythagoras that the perceived pitch was not dependent on the hands (and thus the forces) of the craftsmen.
Bodies with more complex geometry, such as bells, cups, or bowls, which may even be filled with liquids, have natural frequencies that require considerably more elaborate physical descriptions since not only the shape but also the wall thickness or even the striking location must be considered. In these cases, transverse oscillations may also be excited and audible.
Hammers
A very large sledgehammer (the speed of sound in steel is approximately c = 5000 meters per second) with a hammer head length L = 0.2 meters has a natural frequency of 12.5 kilohertz. With a square cross-section of 0.1 meters by 0.1 meters (0.01 square meters), it would have an unusually large mass of almost 16 kilograms at a density of 7.86 grams per cubic centimeter. Frequencies above approximately 15 kilohertz cannot be perceived by many people anymore (see auditory threshold); therefore, the natural frequency of such a large hammer is hardly audible. Hammers with shorter heads have even higher natural frequencies that are therefore inaudible.
Anvils
A large steel anvil with a length L = 0.5 meters has a natural frequency of only 5 kilohertz and is therefore easily audible.
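A minimal sketch of the estimate f = c/(2L) used in the two preceding paragraphs, reproducing the 12.5 kHz and 5 kHz figures for the hammer head and the anvil.

```python
# Sketch: lowest longitudinal natural frequency f = c/(2L) for the examples in the text
# (speed of sound in steel c ~ 5000 m/s).
def longitudinal_f0(speed_of_sound: float, length_m: float) -> float:
    return speed_of_sound / (2 * length_m)

c_steel = 5000.0   # m/s

print(f"Hammer head, L = 0.2 m: {longitudinal_f0(c_steel, 0.2) / 1000:.1f} kHz")  # 12.5 kHz
print(f"Anvil,       L = 0.5 m: {longitudinal_f0(c_steel, 0.5) / 1000:.1f} kHz")  # 5.0 kHz
```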
There are a variety of compositions in which the composer specifies the use of anvils as musical instruments. Particularly well-known are the two operas from the music drama Der Ring des Nibelungen by Richard Wagner:
Das Rheingold, Scene 3, 18 anvils in F in three octaves
Siegfried, Act 1, Siegfried's smithing song Nothung! Nothung! Neidliches Schwert!
Materials with a lower speed of sound than steel, such as granite or brass, produce even lower frequencies with congruent geometry. In any case, anvils are not mentioned in the early accounts and audible sounds of anvils are attributed to hammers in the later versions of the legend.
Metal rods
It is possible to compare metal rods, such as chisels used by stonemasons or splitting wedges for stone breaking, in order to arrive at an observation similar to the one attributed to Pythagoras, namely that the vibration frequency of the tools stands in an exact relationship to their weight. If the metal rods, neglecting the tapering cutting edges, all have the same uniform cross-sectional area A but different lengths l, then their weight is proportional to the length, and the vibration frequency is thus inversely proportional to the weight, provided that the metal rods are excited to longitudinal vibrations by blows along the longitudinal axis (sound examples can be found in the box on the right).
For bending oscillators, such as tuning forks or the plates of metallophones, different conditions and laws apply; therefore, these considerations do not apply to them.
String vibrations
Strings can be fixed at two ends, each on a bridge. Unlike a solid with longitudinal vibrations, the two bridges establish the boundary conditions of two vibration nodes at the ends; hence, the vibrational antinode is located in the middle.
The natural frequency f and thus the pitch of a string of length l are not proportional to the tension F, but to the square root of the tension. Moreover, the frequency increases with a heavier tensioning weight and thus higher tension, rather than decreasing:

f = (1 / (2·l)) · √(F / μ)

where μ is the mass of the string per unit length.
Nevertheless, the vibration frequency is inversely proportional to the length of the string at constant tension, which can be directly demonstrated with the monochord—allegedly invented by Pythagoras.
Reception
Antiquity
The earliest mention of Pythagoras' discovery of the mathematical basis of musical intervals is found in the Platonist Xenocrates (4th century BC); as it is only a quote from a lost work of this thinker, it is unclear whether he knew the forge legend. In the 4th century BC, criticism of the Pythagorean theory of intervals was already expressed, although without reference to the Pythagoras legend; the philosopher and music theorist Aristoxenus considered it to be false.
The oldest recorded version of the legend was presented centuries after the time of Pythagoras by the Neopythagorean Nicomachus of Gerasa, who in the 1st or 2nd century AD documented the story in his Harmonikḗ Encheirídion ("Handbook of Harmony"). He relied on the philosopher Philolaus, a Pythagorean of the 5th century BC, for his representation of the numerical ratios in music theory.
The famous mathematician and music theorist Ptolemy (2nd century AD) was aware of the weight method transmitted by the legend but rejected it. However, he did not recognize the falsity of the weight experiments; he only criticized their inaccuracies compared to the precise measurements on the monochord. It is probable that he obtained his knowledge of the legendary tradition not from Nicomachus but from an older source, now lost.
The chronologically difficult to place music theorist of the Imperial era, Gaudentius, described the legend in his Harmonikḗ Eisagōgḗ ("Introduction to Harmony"), in a version slightly shorter than that of Nicomachus. The Neoplatonist philosopher Iamblichus of Chalcis, who worked as a philosophy teacher in the late 3rd and early 4th centuries, wrote a Pythagoras biography titled On the Pythagorean Life, in which he reproduced the blacksmith legend in the version of Nicomachus.
In the first half of the 5th century, the writer Macrobius extensively discussed the blacksmith legend in his commentary on Cicero's Somnium Scipionis, describing it in a similar manner to Nicomachus.
The strongest repercussion among the ancient music theorists who took up the narrative was achieved by Boethius with his textbook De institutione musica ("Introduction to Music"), written in the early 6th century, in which he initially describes Pythagoras' efforts of understanding in the forge and then at home. It is unclear whether he relied on Nikomachus' account or another source. In contrast to the entire earlier tradition, he reports five hammers instead of the four assumed by earlier authors. He claims that Pythagoras rejected the fifth hammer because it resulted in dissonance with all the other hammers. According to Boethius' account (as with Macrobius), Pythagoras tested his initial assumption that the difference in sound was due to different strength in the arms of the men by having the smiths exchange hammers, which led to its refutation. Regarding the experiments at Pythagoras' home, Boethius writes that the philosopher first hung strings with weights equal to those of the hammers in the forge and then experimented with pipes and cups, with all the experiments yielding the same results as the initial ones with the hammers. Using the legend as a basis, Boethius addresses the question of the reliability of sensory perceptions in terms of science and epistemology. The crucial point is that Pythagoras was initially prompted by sensory perception to formulate his question and hypotheses, and through empirical testing of hypotheses, he arrived at irrefutable certainty. The path to knowledge went from sensory perception to the initial hypothesis, which turned out to be erroneous, then to the formation of a correct opinion, and finally to its verification. Boethius acknowledges the necessity and value of sensory perception and opinion formation on the path to insight, although as a Platonist, he is inherently skeptical of sensory perception due to its proneness to error. Genuine knowledge, for him, arises only when the regularity is grasped, allowing the researcher to emancipate themselves from their initial dependence on unreliable sensory perception. The judgment of the researcher must not be based solely on sensory judgment derived from empirical experience, but rather it should only be made once they have found a rule through deliberation that enables them to position themselves beyond the realm of possible sensory deception.
In the 6th century, the scholar Cassiodorus wrote in his Institutiones that Gaudentius attributed the beginnings "of music" to Pythagoras in his account of the legend of the blacksmith. He was referring to music theory, as Iamblichus had also done, who, with reference to the blacksmith narrative and the experiments described there, had referred to Pythagoras as the inventor "of music".
Middle Ages
In the Early Middle Ages, Isidore of Seville mentioned the legend of the blacksmith in his Etymologiae, which became a fundamental reference work for the educated in the Middle Ages. He briefly mentioned the legend, adopting Cassiodorus' wording and also designating Pythagoras as the inventor of music. As Cassiodorus and Isidore were first-rate authorities in the Middle Ages, the notion spread that Pythagoras had discovered the fundamental law of music and thus had been its founder. Despite such sweeping statements, medieval music theorists assumed that music had existed before Pythagoras and that the "invention of music" referred to the discovery of its principles.
In the 9th century, the musicologist Aurelian of Réomé recounted the legend in his Musica disciplina ("Music Theory"). Aurelian's account was followed in the 10th century by Regino of Prüm in his work De harmonica institutione ("Introduction to Harmonic Theory"). Both emphasized that Pythagoras had been given the opportunity to make his discovery in the blacksmith's forge through a divine providence. In antiquity, Nicomachus and Iamblichus had already spoken of a daimonic providence, and Boethius had transformed it into a divine decree.
In the 11th century, the legendary material was processed in the Carmina Cantabrigiensia.
In the first half of the 11th century, Guido of Arezzo, the most famous music theorist of the Middle Ages, recounted the legend of the blacksmith in the final chapter of his Micrologus, basing it on the version of Boethius, whom he named specifically. Guido remarked at the outset: Nor would anyone ever have discovered anything certain about this art (music) if, in the end, divine goodness had not brought about the following event at its behest. He attributed the fact that the hammers weighed 12, 9, 8, and 6 units and thus produced harmonious sound to God's providence. He also mentioned that Pythagoras, starting from his discovery, had invented the monochord, but did not go into detail about its properties.
The work De musica by Johannes Cotto (also known as John Cotton or Johannes Afflighemensis) was illustrated with the blacksmith scene around 1250 by an anonymous book illuminator in the Cistercian Abbey of Aldersbach.
Among the medieval music theorists who told the legend of the forge according to Boethius' version, were also Juan Gil de Zámora (Johannes Aegidius von Zamora), active in the late 13th and early 14th centuries, Johannes de Muris and Simon Tunstede in the 14th century, and Adam von Fulda on the threshold of the early modern period in the 15th century.
As an opponent of the Pythagorean conception, which held that consonances were based on certain numerical ratios, Johannes de Grocheio emerged in the 13th century, starting from an Aristotelian perspective. Although he explicitly stated that Pythagoras had discovered the principles of music, and he told the legend of the forge citing Boethius, whom he considered trustworthy, he rejected the Pythagorean theory of consonance, which he wanted to reduce to a merely metaphorical expression.
Early modern period
Franchino Gaffurio published his work Theoricum opus musice discipline ("Theoretical Music Theory") in Naples in 1480, which was revised and republished in 1492 under the title Theorica musice ("Music Theory"). In it, he presented a version of the legend of the forge that surpassed all previous accounts in detail. He based his version on that of Boethius and added a sixth hammer in order to include as many tones of the octave as possible in the narrative. In four pictorial representations, he presented musical instruments or sound generators, each with six harmonic tones, and indicated the numbers 4, 6, 8, 9, 12, and 16 associated with the tones in the labels. In addition to the four traditional ratios of the legend (6, 8, 9, and 12), he added 4 and 16, which represent a tone a fifth lower and another tone a fourth higher. The entire sequence of tones now extends not only over one, but over two octaves. These numbers correspond, for example, to the tones f – c' – f' – g' – c" – f":
The painter Erhard Sanßdorffer was commissioned in 1546 to create a fresco in the Hessian Büdingen Castle, which is well preserved and represents the history of music starting from the forge of Pythagoras like a compendium.
Gioseffo Zarlino also recounted the legend in his work Le istitutioni harmoniche ("The Foundations of Harmony"), which he published in 1558; like Gaffurio, he based his account on Boethius' version.
The music theorist Vincenzo Galilei, the father of Galileo Galilei, published his treatise Discorso intorno all'opere di messer Gioseffo Zarlino ("Discourse on the Works of Mr. Gioseffo Zarlino") in 1589, which was directed against the views of his teacher Zarlino. In it, he pointed out that the information in the legend about the loading of strings with weights is not accurate.[1]
In 1626, the Thesaurus philopoliticus by Daniel Meisner featured a copper engraving titled "Duynkirchen" by Eberhard Kieser, depicting only three blacksmiths at an anvil. The Latin and German caption reads:[2]
Triplicibus percussa sonat varie ictibus incus.
Musica Pythagoras struit hinc fundamina princ(eps).
The anvil sounds with triple strikes, producing three different tones.
Music is the foundation built by Pythagoras, which no donkey's head could have achieved.
A few years later, the matter was definitively clarified after Galileo Galilei and Marin Mersenne discovered the laws of string vibrations. In 1636, Mersenne published his Harmonie universelle, in which he explained the physical error in the legend: the vibration frequency is not proportional to the tension, but to its square root.[3]
Several composers incorporated this subject matter into their works, including Georg Muffat at the end of the 17th century and Rupert Ignaz Mayr.
Modern era
Even in the 19th century, Hegel assumed the physical accuracy of the alleged measurements mentioned in the Pythagoras legend in his lectures on the history of philosophy.
Werner Heisenberg emphasized in an essay first published in 1937 that the Pythagorean "discovery of the mathematical determinacy of harmony" is based on "the idea of the meaningful power of mathematical structures", a "fundamental idea that modern exact science has inherited from antiquity"; the discovery attributed to Pythagoras belongs "to the strongest impulses of human science in general".
Even more recently, accounts have been published in which the legend is uncritically reproduced without reference to its physical and historical falsehood, for example in the non-fiction book The Fifth Hammer: Pythagoras and the Disharmony of the World by Daniel Heller-Roazen.
Sources
Gottfried Friedlein (ed.): Anicii Manlii Torquati Severini Boetii de institutione arithmetica libri duo, de institutione musica libri quinque. Minerva, Frankfurt am Main 1966 (reprint of the Leipzig 1867 edition; online, with German translation online)
Michael Hermesdorff (trans.): Micrologus Guidonis de disciplina artis musicae, d. i. Kurze Abhandlung Guidos über die Regeln der musikalischen Kunst. Trier 1876 (online)
Ilde Illuminati, Fabio Bellissima (eds.): Franchino Gaffurio: Theorica musice. Edizioni del Galluzzo, Firenze 2005, pp. 66–71 (Latin text and Italian translation)
Further reading
Walter Burkert: Weisheit und Wissenschaft. Studien zu Pythagoras, Philolaos und Platon (= Erlanger Beiträge zur Sprach- und Kunstwissenschaft, vol. 10). Hans Carl, Nürnberg 1962
Anja Heilmann: Boethius' Musiktheorie und das Quadrivium. Eine Einführung in den neuplatonischen Hintergrund von "De institutione musica". Vandenhoeck & Ruprecht, Göttingen 2007, pp. 203–222
Werner Keil (ed.): Basistexte Musikästhetik und Musiktheorie. Wilhelm Fink, Paderborn 2007, pp. 342–346
Barbara Münxelhaus: Pythagoras musicus. Zur Rezeption der pythagoreischen Musiktheorie als quadrivialer Wissenschaft im lateinischen Mittelalter (= Orpheus-Schriftenreihe zu Grundfragen der Musik, vol. 19). Verlag für systematische Musikwissenschaft, Bonn–Bad Godesberg 1976
Jørgen Raasted: A neglected version of the anecdote about Pythagoras's hammer experiment. In: Cahiers de l'Institut du Moyen-Âge grec et latin, vol. 31a, 1979, pp. 1–9
Leonid Zhmud: Wissenschaft, Philosophie und Religion im frühen Pythagoreismus. Akademie Verlag, Berlin 1997
See also
Just intonation
References
Acoustics
Ancient Greek science
Pythagoreanism
Musical tuning | Pythagorean hammers | [
"Physics"
] | 7,151 | [
"Classical mechanics",
"Acoustics"
] |
1,621,535 | https://en.wikipedia.org/wiki/Woodstock%20of%20physics | The Woodstock of physics was the popular name given by physicists to the marathon session of the American Physical Society’s meeting on March 18, 1987, which featured 51 presentations of recent discoveries in the science of high-temperature superconductors. Various presenters anticipated that these new materials would soon result in revolutionary technological applications, but in the three subsequent decades, this proved to be overly optimistic. The name is a reference to the 1969 Woodstock Music and Art Festival.
Leading up to the meeting
Before a series of breakthroughs in the mid-1980s, most scientists believed that the extremely low temperature requirements of superconductors rendered them impractical for everyday use. However, in June 1986, K. Alex Müller and Georg Bednorz, working at IBM Zurich, raised the record critical temperature for superconductivity to 35 K above absolute zero in lanthanum barium copper oxide (LBCO); the previous record of 23 K had stood for 17 years. Their discovery stimulated a great deal of additional research in high-temperature superconductivity.
By March 1987, a flurry of recent research on ceramic superconductors had succeeded in creating ever-higher superconducting temperatures, including the discovery by Maw-Kuen Wu and Jim Ashburn at the University of Alabama of superconductivity in yttrium barium copper oxide (YBCO) at a critical temperature above 77 K, the boiling point of liquid nitrogen. This result was followed by Paul C. W. Chu's announcement at the University of Houston of a superconductor that operated at a temperature that could be achieved by cooling with liquid nitrogen. The scientific community was abuzz with excitement.
Events
The discoveries were so recent that no papers on them had been submitted by the deadline. However, the Society added a last-minute session to their annual meeting to discuss the new research. The session was chaired by physicist M. Brian Maple, a superconductor researcher himself, who was one of the meeting's organizers. It was scheduled to start at 7:30 pm in the Sutton ballroom of the New York Hilton, but excited scientists started lining up at 5:30. Key researchers such as Chu and Müller were given 10 minutes to describe their research; other physicists were given five minutes. Nearly 2,000 scientists tried to squeeze into the ballroom. Those who could not find a seat filled the aisles or watched outside the room on television monitors. The session ended at 3:15 am, but many lingered until dawn to discuss the presentations. The meeting caused a surge in mainstream media interest in superconductors, and laboratories around the world raced to pursue breakthroughs in the field.
In October of the same year, Bednorz and Müller were awarded the Nobel Prize in Physics "for their important break-through in the discovery of superconductivity in ceramic materials", setting a record for the shortest time between the discovery and the prize award for any scientific Nobel Prize category.
Sequels
Woodstock of physics II
By the following year (1988), two new families of copper-oxide superconductors, the bismuth-based (so-called BSCCO) and the thallium-based (TBCCO) materials, had been discovered. Both of these have superconducting transitions above 100 K. So in the follow-up March APS meeting in New Orleans, a special evening session called Woodstock of Physics-II was hastily organized to highlight the synthesis and properties of these new, first-ever 'triple digit superconductors'. The format of the session was the same as in New York. Some of the panelists were repeats from the original "Woodstock" session. Additional researchers, including Allen M. Hermann (at that time at the University of Arkansas), the co-discoverer of the thallium system, and Laura H. Greene (then with AT&T Labs), were panelists. The 1988 session was chaired by Timir Datta from the University of South Carolina.
20 year anniversary
On March 5, 2007, many of the original participants reconvened in Denver to recognize and review the session on its 20-year anniversary; the "reunion" was again chaired by Maple.
See also
List of physics conferences
Notes
References
External links
Video recordings (published in 2016 by the American Physical Society, announcement: Experience the 1987 "Woodstock of Physics" Online)
Superconductivity
1987 in science
History of physics
Physics conferences | Woodstock of physics | [
"Physics",
"Materials_science",
"Engineering"
] | 883 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
1,621,705 | https://en.wikipedia.org/wiki/National%20Ambient%20Air%20Quality%20Standards | The U.S. National Ambient Air Quality Standards (NAAQS, pronounced ) are limits on atmospheric concentration of six pollutants that cause smog, acid rain, and other health hazards. Established by the United States Environmental Protection Agency (EPA) under authority of the Clean Air Act (42 U.S.C. 7401 et seq.), NAAQS is applied for outdoor air throughout the country.
The six criteria air pollutants (CAP), or criteria pollutants, for which limits are set in the NAAQS are ozone (O3), atmospheric particulate matter (PM2.5/PM10), lead (Pb), carbon monoxide (CO), sulfur oxides (SOx), and nitrogen oxides (NOx). These are typically emitted from many sources in industry, mining, transportation, electricity generation and agriculture. In many cases they are the products of the combustion of fossil fuels or industrial processes.
The National Emissions Standards for Hazardous Air Pollutants cover many other chemicals, and require the maximum achievable reduction that the EPA determines is feasible.
Background
The six criteria air pollutants were the first set of pollutants recognized by the United States Environmental Protection Agency as needing standards on a national level. The Clean Air Act requires the EPA to set US National Ambient Air Quality Standards (NAAQS) for the six CAPs. The NAAQS are health based and the EPA sets two types of standards: primary and secondary. The primary standards are designed to protect the health of 'sensitive' populations such as asthmatics, children, and the elderly. The secondary standards are concerned with protecting the environment. They are designed to address visibility, damage to crops, vegetation, buildings, and animals.
The EPA established the NAAQS according to Sections 108 and 109 of the U.S. Clean Air Act, which was last amended in 1990. These sections require the EPA "(1) to list widespread air pollutants that reasonably may be expected to endanger public health or welfare; (2) to issue air quality criteria for them that assess the latest available scientific information on nature and effects of ambient exposure to them; (3) to set primary NAAQS to protect human health with adequate margin of safety; (4) to set secondary NAAQS to protect against welfare effects (e.g., effects on vegetation, ecosystems, visibility, climate, manmade materials, etc); and (5) to periodically review and revise, as appropriate, the criteria and NAAQS for a given listed pollutant or class of pollutants."
Descriptions
Ground level ozone (O3): Ozone found on the surface-level, also known as tropospheric ozone is also regulated by the NAAQS under the Clean Air Act. Ozone was originally found to be damaging to grapes in the 1950s. The US EPA set "oxidants" standards in 1971, which included ozone. These standards were created to reduce agricultural impacts and other related damages. Like lead, ozone requires a reexamination of new findings of health and vegetation effects periodically. This aspect necessitated the creation of a US EPA criteria document. Further analysis done in 1979 and 1997 made it necessary to significantly modify the pollution standards.
Atmospheric particulate matter
PM10, coarse particles: 2.5 micrometers (μm) to 10 μm in size (although current implementation includes all particles 10 μm or less in the standard)
PM2.5, fine particles: 2.5 μm in size or less. Particulate Matter (PM) was listed in the 1996 Criteria document issued by the EPA. In April 2001, the EPA created a Second External Review Draft of the Air Quality Criteria for PM, which addressed updated studies done on particulate matter and the modified pollutant standards done since the First External Review Draft. In May 2002, a Third External Review Draft was made, and the EPA revised PM requirements again. After issuing a fourth version of the document, the EPA issued the final version in October 2004.
Lead (Pb): In the mid-1970s, lead was listed as a criteria air pollutant that required NAAQS regulation. In 1977, the EPA published a document which detailed the Air Quality Criteria for lead. This document was based on the scientific assessments of lead at the time. Based on this report (1977 Lead AQCD), the EPA established a "1.5 μg/m3 (maximum quarterly calendar average) Pb NAAQS in 1978." The Clean Air Act requires periodic review of NAAQS, and new scientific data published after 1977 made it necessary to revise the standards previously established in the 1977 Lead AQCD document. An Addendum to the document was published in 1986 and then again as a Supplement to the 1986 AQCD/Addendum in 1990. In 1990, a Lead Staff Paper was prepared by the EPA's Office of Air Quality Planning and Standards (OAQPS), which was based on information presented in the 1986 Lead/AQCD/Addendum and 1990 Supplement, in addition to other OAQPS sponsored lead exposure/risk analyses. In this paper, it was proposed that the Pb NAAQS be revised further and presented options for revision to the EPA. The EPA elected to not modify the Pb NAAQS further, but decided to instead focus on the 1991 U.S. EPA Strategy for Reducing Lead Exposure. The EPA concentrated on regulatory and remedial clean-up efforts to minimize Pb exposure from numerous non-air sources that caused more severe public health risks, and undertook actions to reduce air emissions.
Carbon monoxide (CO): The EPA set the first NAAQS for carbon monoxide in 1971. The primary standard was set at 9 ppm averaged over an 8-hour period and 35 ppm over a 1-hour period. The majority of CO emitted into the ambient air is from mobile sources. The EPA reviewed and assessed the scientific literature on CO in 1979, 1984, 1991, and 1994. After the 1984 review the EPA removed the secondary standard for CO due to a lack of significant evidence of adverse environmental impacts. On January 28, 2011, the EPA determined that the existing NAAQS for CO were sufficient and proposed to retain them as they stood. The EPA is strengthening monitoring requirements for CO by calling for CO monitors to be placed in strategic locations near large urban areas. Specifically, the EPA has called for monitors to be placed and operational in CBSAs (core-based statistical areas) with populations over 2.5 million by January 1, 2015, and in CBSAs with populations of 1 million or more by January 1, 2017. In addition, the EPA requires the collocation of CO monitors with NO2 monitors in urban areas having a population of 1 million or more. As of May 2011 there were approximately 328 operational CO monitors in place nationwide. The EPA has given its Regional Administrators some authority to oversee case-by-case requested exceptions and to determine the need for additional monitoring systems above the minimum required. The EPA reports that the national average concentration of CO has decreased by 82% since 1980. The last remaining nonattainment area was redesignated as in attainment on September 27, 2010. Currently all areas in the US are in attainment.
Sulfur oxides (SOx): SOx refers to the oxides of sulfur, a highly reactive group of gases. SO2 is of greatest interest and is used as the indicator for the entire SOx family. The EPA first set primary and secondary standards in 1971. Dual primary standards were set at 140 ppb averaged over a 24-hour period and at 30 ppb averaged annually. The secondary standard was set at 500 ppb averaged over a 3-hour period, not to be exceeded more than once a year. During a review in 1996 the EPA considered implementing a new NAAQS for 5-minute peaks of SO2 affecting sensitive populations such as asthmatics, but did not establish this new NAAQS and kept the existing standards. In 2010 the EPA replaced the dual primary standards with a new 1-hour standard set at 75 ppb. On March 20, 2012, the EPA "took final action" to maintain the existing NAAQS as they stood. Only three monitoring sites have exceeded the current NAAQS for SO2, all of which are located in Hawaii Volcanoes National Park. The violations occurred in 2007–2008, and the state of Hawaii suggested these should be exempt from regulatory actions due to an 'exceptional event' (volcanic activity). Since 1980 the national concentration of SO2 in the ambient air has decreased by 83%. Annual average concentrations hover between 1–6 ppb. Currently all AQCRs are in attainment for SO2.
Nitrogen oxides (NOx): The EPA first set primary and secondary standards for the oxides of nitrogen in 1971. Among these are nitric oxide (NO), nitrous oxide (N2O), and nitrogen dioxide (NO2), all of which are covered in the NAAQS. NO2 is the oxide measured and used as the indicator for the entire NOx family because it is of the most concern, owing to its rapid formation and its contribution to the formation of harmful ground-level ozone. In 1971 the primary and secondary NAAQS for NO2 were both set at an annual average of 0.053 ppm. The EPA reviewed this NAAQS in 1985 and 1996, and in both cases concluded that the existing standard was sufficient. The most recent review by the EPA occurred in 2010, resulting in a new 1-hour NO2 primary standard set at 100 ppb; the annual average of 0.053 ppm remained the same. Also considered was a new 1-hour secondary standard of 100 ppb. This was the first time the EPA reviewed the environmental impacts separately from the health impacts for this group of criteria air pollutants. Also in 2010, the EPA decided to ensure compliance by strengthening monitoring requirements, calling for increased numbers of monitoring systems near large urban areas and major roadways. On March 20, 2012, the EPA "took final action" to maintain the existing NAAQS as they stood. The national average of NOx concentrations has dropped by 52% since 1980. The annual concentration for NO2 is reported to average around 10–20 ppb, and is expected to decrease further with new mobile source regulations. Currently all areas of the US are classified as in attainment.
In April 2023, the EPA finalized its "Good Neighbor Plan", which phases in tighter standards for NOx, using a cap and trade system during the summer "ozone season". This is intended to reduce ground-level ozone in non-attainment areas downwind of industrial sources like power plants, incinerators, and industrial furnaces, often in other states.
Standards
Primary standards are designed to protect human health, with an adequate margin of safety, including sensitive populations such as children, the elderly, and individuals suffering from respiratory diseases. Secondary standards are designed to protect public welfare (including property, transportation, economic values, and personal comfort and well-being) from any known or anticipated adverse effects of a pollutant. A district meeting a given standard is known as an "attainment area" for that standard, and otherwise a "non-attainment area".
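To make the attainment concept concrete, here is a minimal, hypothetical Python sketch that compares a single design value against the primary-standard levels quoted earlier in this article for CO, SO2, and NO2. It is purely illustrative: actual designations rely on multi-year design values, pollutant-specific averaging rules, and exceedance counts, none of which are modeled here, and the example measurements are made up.

```python
# Illustrative sketch only: thresholds are the primary standards quoted in the
# text above (CO in ppm; SO2 and NO2 in ppb). Real attainment designations use
# multi-year design values and pollutant-specific exceedance rules.

PRIMARY_STANDARDS = {
    ("CO", "8-hour"): 9.0,      # ppm
    ("CO", "1-hour"): 35.0,     # ppm
    ("SO2", "1-hour"): 75.0,    # ppb
    ("NO2", "1-hour"): 100.0,   # ppb
    ("NO2", "annual"): 53.0,    # ppb
}

def designation(pollutant: str, averaging: str, design_value: float) -> str:
    """Return 'attainment' or 'nonattainment' for a hypothetical design value."""
    limit = PRIMARY_STANDARDS[(pollutant, averaging)]
    return "attainment" if design_value <= limit else "nonattainment"

print(designation("CO", "8-hour", 2.1))     # attainment
print(designation("NO2", "1-hour", 120.0))  # nonattainment
```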
Standards are required to "accurately reflect the latest scientific knowledge," and are reviewed every five years by the Clean Air Scientific Advisory Committee (CASAC), which consists of "seven members appointed by the EPA administrator."
The EPA has set NAAQS for the six principal pollutants described above; these are also known as the criteria air pollutants.
As of June 15, 2005, the 1-hour ozone standard no longer applies to areas designated with respect to the 8-hour ozone standard (which includes most of the United States, except for portions of 10 states).
Detection methods
The EPA National Exposure Research Laboratory can designate a measurement device using an established technological basis as a Federal Reference Method (FRM) to certify that the device has undergone a testing and analysis protocol, and can be used to monitor NAAQS compliance. Devices based on new technologies can be designated as a Federal Equivalent Method (FEM). FEMs are based on different sampling and/or analyzing technologies than FRMs, but are required to provide the same decision making quality when making NAAQS attainment determinations. Approved new methods are formally announced through publication in the Federal Register. A complete list of FRMs and FEMs is available.
Air quality control region
An air quality control region is an area, designated by the federal government, where communities share a common air pollution problem.
See also
Air pollution
Air quality index
Asthma
Atmospheric dispersion modeling
Contamination control
Clean Air Act (1990)
Portable emissions measurement system
Toxic Substances Control Act of 1976
References
External links
EPA summary of the National Ambient Air Quality Standards
US Environmental Protection Agency - Criteria Air Pollutants
EPA Green Book showing non-attainment, maintenance, and attainment areas
EPA Alumni Association Oral History Video "Early Implementation of the Clean Air Act of 1970 in California."
Air pollution in the United States
Air pollution organizations
Environmental law in the United States
Environmental science
Environmental chemistry
Natural resource management
Smog
United States Environmental Protection Agency | National Ambient Air Quality Standards | [
"Physics",
"Chemistry",
"Environmental_science"
] | 2,745 | [
"Visibility",
"Physical quantities",
"Smog",
"Environmental chemistry",
"nan"
] |
1,621,854 | https://en.wikipedia.org/wiki/Outflow%20boundary | An outflow boundary, also known as a gust front, is a storm-scale or mesoscale boundary separating thunderstorm-cooled air (outflow) from the surrounding air; similar in effect to a cold front, with passage marked by a wind shift and usually a drop in temperature and a related pressure jump. Outflow boundaries can persist for 24 hours or more after the thunderstorms that generated them dissipate, and can travel hundreds of kilometers from their area of origin. New thunderstorms often develop along outflow boundaries, especially near the point of intersection with another boundary (cold front, dry line, another outflow boundary, etc.). Outflow boundaries can be seen either as fine lines on weather radar imagery or else as arcs of low clouds on weather satellite imagery. From the ground, outflow boundaries can be co-located with the appearance of roll clouds and shelf clouds.
Outflow boundaries create low-level wind shear which can be hazardous during aircraft takeoffs and landings. If a thunderstorm runs into an outflow boundary, the low-level wind shear from the boundary can cause thunderstorms to exhibit rotation at the base of the storm, at times causing tornadic activity. Strong versions of these features, known as downbursts, can be generated in environments of vertical wind shear and mid-level dry air. Microbursts have a diameter of influence less than 4 km (2.5 mi), while macrobursts occur over a diameter greater than 4 km (2.5 mi). Wet microbursts occur in atmospheres where the low levels are saturated, while dry microbursts occur in drier atmospheres from high-based thunderstorms. When an outflow boundary moves into a more stable low-level environment, such as into a region of cooler air or over regions of cooler water temperatures out at sea, it can lead to the development of an undular bore.
Definition
An outflow boundary, also known as a gust front or arc cloud, is the leading edge of gusty, cooler surface winds from thunderstorm downdrafts; sometimes associated with a shelf cloud or roll cloud. A pressure jump is associated with its passage. Outflow boundaries can persist for over 24 hours and travel hundreds of kilometers (miles) from their area of origin. A wrapping gust front is a front that wraps around the mesocyclone, cutting off the inflow of warm moist air and resulting in occlusion. This is sometimes the case during the event of a collapsing storm, in which the wind literally "rips it apart".
Origin
A microburst is a very localized column of sinking air known as a downburst, producing damaging divergent and straight-line winds at the surface that are similar to, but distinguishable from, tornadoes, which generally produce convergent damage. The term was defined as affecting an area 4 km (2.5 mi) in diameter or less, distinguishing microbursts as a type of downburst distinct from common wind shear, which can encompass greater areas. They are normally associated with individual thunderstorms. Microburst soundings show the presence of mid-level dry air, which enhances evaporative cooling.
Organized areas of thunderstorm activity reinforce pre-existing frontal zones, and can outrun cold fronts. This outrunning occurs within the westerlies in a pattern where the upper-level jet splits into two streams. The resultant mesoscale convective system (MCS) forms at the point of the upper level split in the wind pattern in the area of best low level inflow. The convection then moves east and toward the equator into the warm sector, parallel to low-level thickness lines. When the convection is strong and linear or curved, the MCS is called a squall line, with the feature placed at the leading edge of the significant wind shift and pressure rise which is normally just ahead of its radar signature. This feature is commonly depicted in the warm season across the United States on surface analyses, as they lie within sharp surface troughs.
A macroburst, normally associated with squall lines, is a strong downburst affecting an area larger than 4 km (2.5 mi) in diameter. A wet microburst consists of precipitation and an atmosphere saturated in the low levels. A dry microburst emanates from high-based thunderstorms with virga falling from their base. All types are formed by precipitation-cooled air rushing to the surface. Downbursts can occur over large areas. In the extreme case, a derecho can produce a damage swath extending for hundreds of kilometers, lasting up to 12 hours or more, and is associated with some of the most intense straight-line winds, but the generative process is somewhat different from that of most downbursts.
Appearance
At ground level, shelf clouds and roll clouds can be seen at the leading edge of outflow boundaries. Through satellite imagery, an arc cloud is visible as an arc of low clouds spreading out from a thunderstorm. If the skies are cloudy behind the arc, or if the arc is moving quickly, high wind gusts are likely behind the gust front. Sometimes a gust front can be seen on weather radar, showing as a thin arc or line of weak radar echos pushing out from a collapsing storm. The thin line of weak radar echoes is known as a fine line. Occasionally, winds caused by the gust front are so high in velocity that they also show up on radar. This cool outdraft can then energize other storms which it hits by assisting in updrafts. Gust fronts colliding from two storms can even create new storms. Usually, however, no rain accompanies the shifting winds. An expansion of the rain shaft near ground level, in the general shape of a human foot, is a telltale sign of a downburst. Gustnadoes, short-lived vertical circulations near ground level, can be spawned by outflow boundaries.
Effects
Gust fronts create low-level wind shear which can be hazardous to planes when they takeoff or land. Flying insects are swept along by the prevailing winds. As such, fine line patterns within weather radar imagery, associated with converging winds, are dominated by insect returns. At the surface, clouds of dust can be raised by outflow boundaries. If squall lines form over arid regions, a duststorm known as a haboob can result from the high winds picking up dust in their wake from the desert floor. If outflow boundaries move into areas of the atmosphere which are stable in the low levels, such through the cold sector of extratropical cyclones or a nocturnal boundary layer, they can create a phenomenon known as an undular bore, which shows up on satellite and radar imagery as a series of transverse waves in the cloud field oriented perpendicular to the low-level winds.
See also
Density
Derecho
Gustnado
Haboob
Heat burst
Inflow (meteorology)
Lake-effect snow
Mathematical singularity
Sea breeze
Tropical cyclogenesis
Wake low
Weather front
Pseudo-cold front
References
External links
Outflow boundary over south Florida MPEG, 854KB
Atmospheric dynamics
Wind | Outflow boundary | [
"Chemistry"
] | 1,432 | [
"Atmospheric dynamics",
"Fluid dynamics"
] |
1,621,913 | https://en.wikipedia.org/wiki/Bulk%20Richardson%20number | The Bulk Richardson Number (BRN) is an approximation of the Gradient Richardson number. The BRN is a dimensionless ratio in meteorology related to the consumption of turbulence divided by the shear production (the generation of turbulence kinetic energy caused by wind shear) of turbulence. It is used to show dynamic stability and the formation of turbulence.
The BRN is used frequently in meteorology due to widely available radiosonde data and numerical weather forecasts that supply wind and temperature measurements at discrete points in space.
Formula
Below is the formula for the BRN:
BRN = (g / Tv) · (Δθv · Δz) / [(ΔU)² + (ΔV)²]
where g is gravitational acceleration, Tv is absolute virtual temperature, Δθv is the virtual potential temperature difference across a layer of depth Δz, and ΔU and ΔV are the changes in the horizontal wind components across that same layer.
Critical values and interpretation
High values indicate unstable and/or weakly-sheared environments; low values indicate weak instability and/or strong vertical shear. Generally, values in the range of around 10 to 50 suggest environmental conditions favorable for supercell development.
In the limit of layer thickness becoming small, the Bulk Richardson number approaches the Gradient Richardson number, for which a critical Richardson number is roughly Ric= 0.25. Numbers less than this critical value are dynamically unstable and likely to become or remain turbulent.
The critical value of 0.25 applies only to local gradients, not to finite differences across thick layers. The thicker the layer, the more likely it is that large gradients occurring in small sub-regions of the layer are averaged out. This introduces uncertainty into predictions of the occurrence of turbulence, and an artificially large value of the critical Richardson number must then be used to obtain reasonable results from the smoothed gradients. This means that the thinner the layer, the closer the value is to the theoretical one.
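As a worked illustration of the formula and the critical value discussed above, the following Python sketch computes the BRN for a single, made-up sounding layer; the input numbers are assumptions chosen only to show the arithmetic, not observed data.

```python
# Minimal sketch of the bulk Richardson number for one layer, following the
# formula BRN = (g / Tv) * (d_theta_v * dz) / (dU**2 + dV**2).

G = 9.81  # gravitational acceleration, m s^-2

def bulk_richardson(tv_mean, d_theta_v, dz, du, dv):
    """Return the BRN for a layer of depth dz (all quantities in SI units)."""
    shear_term = du ** 2 + dv ** 2
    if shear_term == 0:
        return float("inf")  # no shear across the layer
    return (G / tv_mean) * (d_theta_v * dz) / shear_term

# Example layer: 50 m deep, 0.5 K increase in virtual potential temperature,
# 3 m/s and 1 m/s changes in the horizontal wind components.
brn = bulk_richardson(tv_mean=290.0, d_theta_v=0.5, dz=50.0, du=3.0, dv=1.0)
print(round(brn, 2))  # ~0.08, below the 0.25 critical value, so turbulence is likely
```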
See also
Monin–Obukhov length
Richardson number
Atmospheric dynamics
Atmospheric thermodynamics
References
Further reading
Help - Bulk Richardson Number - NOAA Storm Prediction Center
Boundary layer meteorology
Turbulence
Severe weather and convection | Bulk Richardson number | [
"Chemistry"
] | 413 | [
"Turbulence",
"Fluid dynamics"
] |
1,624,084 | https://en.wikipedia.org/wiki/Pospiviroid | Pospiviroid is a genus of ssRNA viroids that infects plants, most commonly tubers. It belongs to the family Pospiviroidae. The first viroid discovered was a pospiviroid, the PSTVd species (potato spindle tuber viroid).
Taxonomy
The genus Pospiviroid contains 10 species.
References
External links
ICTV Report: Pospiviroidae
Viroids
Virus genera | Pospiviroid | [
"Biology"
] | 91 | [
"Virus stubs",
"Viruses"
] |
1,624,240 | https://en.wikipedia.org/wiki/CAR%20T%20cell | In biology, chimeric antigen receptors (CARs)—also known as chimeric immunoreceptors, chimeric T cell receptors or artificial T cell receptors—are receptor proteins that have been engineered to give T cells the new ability to target a specific antigen. The receptors are chimeric in that they combine both antigen-binding and T cell activating functions into a single receptor.
CAR T cell therapy uses T cells engineered with CARs to treat cancer. T cells are modified to recognize cancer cells and destroy them. The standard approach is to harvest T cells from patients, genetically alter them, then infuse the resulting CAR T cells into patients to attack their tumors.
CAR T cells can be derived either autologously from T cells in a patient's own blood or allogeneically from those of a donor. Once isolated, these T cells are genetically engineered to express a specific CAR, using a vector derived from an engineered lentivirus such as HIV (see Lentiviral vector in gene therapy). The CAR programs the T cells to target an antigen present on the tumor cell surface. For safety, CAR T cells are engineered to be specific to an antigen that is expressed on a tumor cell but not on healthy cells.
After the modified T cells are infused into a patient, they act as a "living drug" against cancer cells. When they come in contact with their targeted antigen on a cell's surface, T cells bind to it and become activated, then proceed to proliferate and become cytotoxic. CAR T cells destroy cells through several mechanisms, including extensive stimulated cell proliferation, increasing the degree to which they are toxic to other living cells (cytotoxicity), and by causing the increased secretion of factors that can affect other cells such as cytokines, interleukins and growth factors.
The surface of CAR T cells can bear either of two types of co-receptors, CD4 and CD8. These two cell types, called CD4+ and CD8+, respectively, have different and interacting cytotoxic effects. Therapies employing a 1-to-1 ratio of the cell types apparently provide synergistic antitumor effects.
History
The first chimeric receptors containing portions of an antibody and the T cell receptor were described in 1987 by Yoshihisa Kuwana et al. at Fujita Health University and Kyowa Hakko Kogyo, Co. Ltd. in Japan, and independently in 1989 by Gideon Gross and Zelig Eshhar at the Weizmann Institute in Israel. Originally termed "T-bodies", these early approaches combined an antibody's ability to specifically bind to diverse targets with the constant domains of the TCR-α or TCR-β proteins.
In 1991, chimeric receptors containing the intracellular signaling domain of CD3ζ were shown to activate T cell signaling by Arthur Weiss at the University of California, San Francisco. This work prompted CD3ζ intracellular domains to be added to chimeric receptors with antibody-like extracellular domains, commonly single-chain fraction variable (scFv) domains, as well as proteins such as CD4, subsequently termed first generation CARs.
A first generation CAR containing a CD4 extracellular domain and a CD3ζ intracellular domain was used in the first clinical trial of chimeric antigen receptor T cells by the biotechnology company Cell Genesys in the mid-1990s, allowing adoptively transferred T cells to target HIV-infected cells, although it failed to show any clinical improvement. Similar early clinical trials of CAR T cells in solid tumors in the 1990s using first generation CARs targeting solid tumor antigens such as MUC1 did not show long-term persistence of the transferred T cells or result in significant remissions.
In the early 2000s, co-stimulatory domains such as CD28 or 4-1BB were added to first generation CAR's CD3ζ intracellular domain. Termed second generation CARs, these constructs showed greater persistence and improved tumor clearance in pre-clinical models. Clinical trials in the early 2010s using second generation CARs targeting CD19, a protein expressed by normal B cells as well as B-cell leukemias and lymphomas, by investigators at the NCI, University of Pennsylvania, and Memorial Sloan Kettering Cancer Center demonstrated the clinical efficacy of CAR T cell therapies and resulted in complete remissions in many heavily pre-treated patients. These trials ultimately led in the US to the FDA's first two approvals of CAR T cells in 2017, those for tisagenlecleucel (Kymriah), marketed by Novartis originally for B-cell precursor acute lymphoblastic leukemia (B-ALL), and axicabtagene ciloleucel (Yescarta), marketed by Kite Pharma originally for diffuse large B-cell lymphoma (DLBCL). There are now six FDA-approved CAR T therapies.
Production
The first step in the production of CAR T-cells is the isolation of T cells from human blood. CAR T-cells may be manufactured either from the patient's own blood, known as an autologous treatment, or from the blood of a healthy donor, known as an allogeneic treatment. The manufacturing process is the same in both cases; only the choice of initial blood donor is different.
First, leukocytes are isolated using a blood cell separator in a process known as leukocyte apheresis. Peripheral blood mononuclear cells (PBMCs) are then separated and collected. The products of leukocyte apheresis are then transferred to a cell-processing center. In the cell processing center, specific T cells are stimulated so that they will actively proliferate and expand to large numbers. To drive their expansion, T cells are typically treated with the cytokine interleukin 2 (IL-2) and anti-CD3 antibodies. Anti-CD3/CD28 antibodies are also used in some protocols.
The expanded T cells are purified and then transduced with a gene encoding the engineered CAR via a retroviral vector, typically either an integrating gammaretrovirus (RV) or a lentiviral (LV) vector. These vectors are very safe in modern times due to a partial deletion of the U3 region. The new gene editing tool CRISPR/Cas9 has recently been used instead of retroviral vectors to integrate the CAR gene into specific sites in the genome.
The patient undergoes lymphodepletion chemotherapy prior to the introduction of the engineered CAR T-cells. The depletion of the number of circulating leukocytes in the patient upregulates the number of cytokines that are produced and reduces competition for resources, which helps to promote the expansion of the engineered CAR T-cells.
Clinical applications
As of March 2019, there were around 364 ongoing clinical trials globally involving CAR T cells. The majority of those trials target blood cancers: CAR T therapies account for more than half of all trials for hematological malignancies. CD19 continues to be the most popular antigen target, followed by BCMA (commonly expressed in multiple myeloma). In 2016, studies began to explore the viability of other antigens, such as CD20. Trials for solid tumors are less dominated by CAR T, with about half of cell therapy-based trials involving other platforms such as NK cells.
Cancer
T cells are genetically engineered to express chimeric antigen receptors specifically directed toward antigens on a patient's tumor cells, then infused into the patient where they attack and kill the cancer cells. Adoptive transfer of T cells expressing CARs is a promising anti-cancer therapeutic, because CAR-modified T cells can be engineered to target potentially any tumor associated antigen.
Early CAR T cell research has focused on blood cancers. The first approved treatments use CARs that target the antigen CD19, present in B-cell-derived cancers such as acute lymphoblastic leukemia (ALL) and diffuse large B-cell lymphoma (DLBCL). There are also efforts underway to engineer CARs targeting many other blood cancer antigens, including CD30 in refractory Hodgkin's lymphoma; CD33, CD123, and FLT3 in acute myeloid leukemia (AML); and BCMA in multiple myeloma. Aside from CD19, CARs targeting the multiple myeloma antigen B-cell maturation antigen (BCMA) have achieved the most clinical success so far. CARs targeting BCMA were initially reported by Robert Carpenter and James Kochenderfer et al. Anti-BCMA CAR T cells have now been tested in many clinical trials, and anti-BCMA CAR T-cell products have been approved by the U.S. Food and Drug Administration.
CAR T cells have also been found to be effective in treating glioblastoma. A single infusion is enough to show rapid tumor regression in a matter of days.
Solid tumors have presented a more difficult target. Identification of good antigens has been challenging: such antigens must be highly expressed on the majority of cancer cells, but largely absent on normal tissues. CAR T cells are also not trafficked efficiently into the center of solid tumor masses, and the hostile tumor microenvironment suppresses T cell activity.
Autoimmune disease
While most CAR T cell studies focus on creating a CAR T cell that can eradicate a certain cell population (for instance, CAR T cells that target lymphoma cells), there are other potential uses for this technology. T cells can also mediate tolerance to antigens. A regulatory T cell outfitted with a CAR could have the potential to confer tolerance to a specific antigen, something that could be utilized in organ transplantation or rheumatologic diseases like lupus.
Approved therapies
Safety
There are serious side effects that result from CAR T-cells being introduced into the body, including cytokine release syndrome and neurological toxicity. Because it is a relatively new treatment, there are few data about the long-term effects of CAR T-cell therapy. There are still concerns about long-term patient survival, as well as pregnancy complications in female patients treated with CAR T-cells. Anaphylaxis may be a side effect, as the CAR is made with a foreign monoclonal antibody, and as a result provokes an immune response.
On-target/off-tumor recognition occurs when the CAR T-cell recognizes the correct antigen, but the antigen is expressed on healthy, non-pathogenic tissue. This results in the CAR T-cells attacking non-tumor tissue, such as healthy B cells that express CD19 causing B-cell aplasia. The severity of this adverse effect can vary but the combination of prior immunosuppression, lymphodepleting chemotherapy and on-target effects causing hypogammaglobulinaemia and prolonged cytopenias places patients at increased risk of serious infections.
There is also the unlikely possibility that the engineered CAR T-cells will themselves become transformed into cancerous cells through insertional mutagenesis, due to the viral vector inserting the CAR gene into a tumor suppressor or oncogene in the host T cell's genome. Some retroviral (RV) vectors carry a lower risk than lentiviral (LV) vectors. However, both have the potential to be oncogenic. Genomic sequencing analysis of CAR insertion sites in T cells has been established for better understanding of CAR T-cell function and persistence in vivo.
Cytokine release syndrome
The most common issue after treatment with CAR T-cells is cytokine release syndrome (CRS), a condition in which the immune system is activated and releases an increased number of inflammatory cytokines. The clinical manifestation of this syndrome resembles sepsis with high fever, fatigue, myalgia, nausea, capillary leakage, tachycardia and other cardiac dysfunction, liver failure, and kidney impairment. CRS occurs in almost all patients treated with CAR T-cell therapy; in fact, the presence of CRS is a diagnostic marker that indicates the CAR T-cells are working as intended to kill the cancer cells. The severity of CRS correlates not with an increased response to the treatment but with higher disease burden. Severe cytokine release syndrome can be managed with immunosuppressants such as corticosteroids, and with tocilizumab, an anti-IL-6 monoclonal antibody. Early intervention using tocilizumab was shown to reduce the frequency of severe CRS in multiple studies without affecting the therapeutic effect of the treatment. A novel strategy aimed at ameliorating CRS is based on the simultaneous expression of an artificial non-signaling IL-6 receptor on the surface of CAR T-cells. This construct neutralizes macrophage-derived IL-6 through sequestration, thus decreasing the severity of CRS without interfering with the antitumor capability of the CAR T-cell itself.
Immune effector cell-associated neurotoxicity
Neurological toxicity is also often associated with CAR T-cell treatment. The underlying mechanism is poorly understood, and may or may not be related to CRS. Clinical manifestations include delirium, the partial loss of the ability to speak coherently while still having the ability to interpret language (expressive aphasia), lowered alertness (obtundation), and seizures. During some clinical trials, deaths caused by neurotoxicity have occurred. The main cause of death from neurotoxicity is cerebral edema. In a study carried out by Juno Therapeutics, Inc., five patients enrolled in the trial died as a result of cerebral edema. Two of the patients were treated with cyclophosphamide alone and the remaining three were treated with a combination of cyclophosphamide and fludarabine. In another clinical trial sponsored by the Fred Hutchinson Cancer Research Center, there was one reported case of irreversible and fatal neurological toxicity 122 days after the administration of CAR T-cells.
Hypokinetic movement disorder (parkinsonism, or movement and neurocognitive treatment emergent adverse events) has been observed with BCMA-chimeric antigen receptor (CAR) T-cell treatment for multiple myeloma.
Chimeric antigen receptor structure
Chimeric antigen receptors combine many facets of normal T cell activation into a single protein. They link an extracellular antigen recognition domain to an intracellular signalling domain, which activates the T cell when an antigen is bound. CARs are composed of four regions: an antigen recognition domain, an extracellular hinge region, a transmembrane domain, and an intracellular T cell signaling domain.
Antigen recognition domain
The antigen recognition domain is exposed to the outside of the cell, in the ectodomain portion of the receptor. It interacts with potential target molecules and is responsible for targeting the CAR T cell to any cell expressing a matching molecule.
The antigen recognition domain is typically derived from the variable regions of a monoclonal antibody linked together as a single-chain variable fragment (scFv). An scFv is a chimeric protein made up of the light (VL) and heavy (VH) chains of immunoglobulins, connected with a short linker peptide. These VL and VH regions are selected in advance for their binding ability to the target antigen (such as CD19). The linker between the two chains consists of hydrophilic residues with stretches of glycine and serine for flexibility, as well as stretches of glutamate and lysine for added solubility. Single domain antibodies (e.g. VH, VHH, VNAR) have been engineered and developed as antigen recognition domains in the CAR format due to their high transduction efficiency in T cells.
In addition to antibody fragments, non-antibody-based approaches have also been used to direct CAR specificity, usually taking advantage of ligand/receptor pairs that normally bind to each other. Cytokines, innate immune receptors, TNF receptors, growth factors, and structural proteins have all been successfully used as CAR antigen recognition domains.
Hinge region
The hinge, also called a spacer, is a small structural domain that sits between the antigen recognition region and the cell's outer membrane. An ideal hinge enhances the flexibility of the scFv receptor head, reducing the spatial constraints between the CAR and its target antigen. This promotes antigen binding and synapse formation between the CAR T cells and target cells. Hinge sequences are often based on membrane-proximal regions from other immune molecules including IgG, CD8, and CD28.
Transmembrane domain
The transmembrane domain is a structural component, consisting of a hydrophobic alpha helix that spans the cell membrane. It anchors the CAR to the plasma membrane, bridging the extracellular hinge and antigen recognition domains with the intracellular signaling region. This domain is essential for the stability of the receptor as a whole. Generally, the transmembrane domain from the most membrane-proximal component of the endodomain is used, but different transmembrane domains result in different receptor stability. The CD28 transmembrane domain is known to result in a highly expressed, stable receptor.
Using the CD3-zeta transmembrane domain is not recommended, as it can result in incorporation of the artificial TCR into the native TCR.
Intracellular T cell signaling domain
The intracellular T cell signaling domain lies in the receptor's endodomain, inside the cell. After an antigen is bound to the external antigen recognition domain, CAR receptors cluster together and transmit an activation signal. Then the internal cytoplasmic end of the receptor perpetuates signaling inside the T cell.
Normal T cell activation relies on the phosphorylation of immunoreceptor tyrosine-based activation motifs (ITAMs) present in the cytoplasmic domain of CD3-zeta. To mimic this process, CD3-zeta's cytoplasmic domain is commonly used as the main CAR endodomain component. Other ITAM-containing domains have also been tried, but are not as effective.
T cells also require co-stimulatory molecules in addition to CD3 signaling in order to persist after activation. For this reason, the endodomains of CAR receptors typically also include one or more chimeric domains from co-stimulatory proteins. Signaling domains from a wide variety of co-stimulatory molecules have been successfully tested, including CD28, CD27, CD134 (OX40), and CD137 (4-1BB).
The intracellular signaling domain used defines the generation of a CAR T cell. First generation CARs include only a CD3-zeta cytoplasmic domain. Second generation CARs add a co-stimulatory domain, like CD28 or 4-1BB. The involvement of these intracellular signaling domains improve T cell proliferation, cytokine secretion, resistance to apoptosis, and in vivo persistence. Third generation CARs combine multiple co-stimulatory domains, such as CD28-41BB or CD28-OX40, to augment T cell activity. Preclinical data show the third-generation CARs exhibit improved effector functions and better in vivo persistence as compared to second-generation CARs.
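As an informal illustration of how the generations differ only in their intracellular domain composition, the sketch below models a CAR construct as a simple Python data structure. The domain names are examples taken from the text; the class and its generation rule are purely didactic, not an established bioinformatics representation.

```python
# Toy representation of the CAR domain layout described above, to make the
# "generation" definitions concrete. Not an authoritative catalogue of domains.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CAR:
    antigen_recognition: str            # e.g. an anti-CD19 scFv
    hinge: str                          # e.g. CD8- or IgG-derived spacer
    transmembrane: str                  # e.g. CD28 transmembrane domain
    costimulatory_domains: List[str] = field(default_factory=list)  # e.g. CD28, 4-1BB
    signaling_domain: str = "CD3-zeta"  # main activating endodomain

    def generation(self) -> int:
        # 1st gen: CD3-zeta only; 2nd gen: one co-stimulatory domain; 3rd gen: two or more
        return 1 + min(len(self.costimulatory_domains), 2)

first_gen = CAR("anti-CD19 scFv", "CD8 hinge", "CD28 TM")
second_gen = CAR("anti-CD19 scFv", "CD8 hinge", "CD28 TM", ["4-1BB"])
third_gen = CAR("anti-CD19 scFv", "CD8 hinge", "CD28 TM", ["CD28", "OX40"])
print(first_gen.generation(), second_gen.generation(), third_gen.generation())  # 1 2 3
```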
Research directions
Antigen recognition
Although the initial clinical remission rates after CAR T cell therapy in patients with acute lymphoblastic leukemia (ALL) are as high as 90%, long-term survival rates are much lower. The cause is typically the emergence of leukemia cells that do not express CD19 and so evade recognition by the CD19–CAR T cells, a phenomenon known as antigen escape. Preclinical studies developing CAR T cells with dual targeting of CD19 plus CD22 or CD19 plus CD20 have demonstrated promise, and trials studying bispecific targeting to circumvent CD19 down-regulation are ongoing.
In 2018, a version of CAR was developed that is referred to as SUPRA CAR, or split, universal, and programmable. Multiple mechanisms can be deployed to finely regulate the activity of SUPRA CAR, which limits overactivation. In contrast to the traditional CAR design, SUPRA CAR allows targeting of multiple antigens without further genetic modification of a person's immune cells.
Treatment of antigenically heterogeneous tumors can be achieved by administration of a mixture of the desired antigen-specific adaptors.
CAR T function
Fourth generation CARs (also known as TRUCKs or armored CARs) further add factors that enhance T cell expansion, persistence, and anti-tumoral activity. These can include cytokines, such as IL-2, IL-5, and IL-12, and co-stimulatory ligands.
Control mechanisms
Adding a synthetic control mechanism to engineered T cells allows doctors to precisely control the persistence or activity of the T cells in the patient's body, with the goal of reducing toxic side effects. The major control techniques trigger T cell death or limit T cell activation, and often regulate the T cells via a separate drug that can be introduced or withheld as needed.
Suicide genes: Genetically modified T cells are engineered to include one or more genes that can induce apoptosis when activated by an extracellular molecule. Herpes simplex virus thymidine kinase (HSV-TK) and inducible caspase 9 (iCasp9) are two types of suicide genes that have been integrated into CAR T cells. In the iCasp9 system, the suicide gene complex has two elements: a mutated FK506-binding protein with high specificity to the small molecule rimiducid/AP1903, and a gene encoding a pro-domain-deleted human caspase 9. Dosing the patient with rimiducid activates the suicide system, leading to rapid apoptosis of the genetically modified T cells. Although both the HSV-TK and iCasp9 systems demonstrate a noticeable function as a safety switch in clinical trials, some defects limit their application. HSV-TK is virus-derived and may be immunogenic to humans. It is also currently unclear whether the suicide gene strategies will act quickly enough in all situations to halt dangerous off-tumor cytotoxicity.
Dual-antigen receptor: CAR T cells are engineered to express two tumor-associated antigen receptors at the same time, reducing the likelihood that the T cells will attack non-tumor cells. Dual-antigen receptor CAR T cells have been reported to have less intense side effects. An in vivo study in mice shows that dual-receptor CAR T cells effectively eradicated prostate cancer and achieved complete long-term survival.
ON-switch and OFF-switch: In this system, CAR T cells can only function in the presence of both tumor antigen and a benign exogenous molecule. To achieve this, the CAR T cell's engineered chimeric antigen receptor is split into two separate proteins that must come together in order to function. The first receptor protein typically contains the extracellular antigen binding domain, while the second protein contains the downstream signaling elements and co-stimulatory molecules (such as CD3ζ and 4-1BB). In the presence of an exogenous molecule (such as a rapamycin analog), the binding and signaling proteins dimerize together, allowing the CAR T cells to attack the tumor. Human EGFR truncated form (hEGFRt) has been used as an OFF-switch for CAR T cells using cetuximab.
Bispecific molecules as switches: Bispecific molecules target both a tumor-associated antigen and the CD3 molecule on the surface of T cells. This ensures that the T cells cannot become activated unless they are in close physical proximity to a tumor cell. The anti-CD20/CD3 bispecific molecule shows high specificity to both malignant B cells and cancer cells in mice. FITC is another bifunctional molecule used in this strategy. FITC can redirect and regulate the activity of the FITC-specific CAR T cells toward tumor cells with folate receptors.
Advances in CAR T cell manufacturing
Due to the high costs of CAR T cell therapy, a number of alternative efforts are being investigated to improve CAR T cell manufacturing and reduce costs. In vivo CAR T cell manufacturing strategies are being tested. In addition, bioinstructive materials have been developed for CAR T cell generation. Rapid CAR T cell generation is also possible through shortening or eliminating the activation and expansion steps.
In situ modification
Another approach is to modify T cells and/or B cells still in the body using viral vectors.
Alternative Activating Domains
Recent advancements in CAR T-cell therapy have focused on alternative activating domains to enhance efficacy and overcome resistance in solid tumors. For instance, Toll-like receptor 4 (TLR4) signaling components can be incorporated into CAR constructs to modulate cytokine production and boost T-cell activation and proliferation, leading to enhanced CAR T-cell expansion and persistence. Similarly, the FYN kinase, a member of the Src family kinases involved in T-cell receptor signaling, can be integrated to improve the signaling cascade within CAR T-cells, resulting in better targeting and elimination of cancer cells. Additionally, KIR-based CARs (KIR-CAR), which use the transmembrane and intracellular domains of the activating receptor KIR2DS2 combined with the DAP-12 signaling adaptor, have shown improved T-cell proliferation and antitumor activity. These strategies, including the use of nonconventional costimulatory molecules like MyD88/CD40, highlight the innovative approaches being taken to optimize CAR T-cell therapies for more effective cancer treatments.
Economics
The cost of CAR T cell therapies has been criticized, with the initial costs of tisagenlecleucel (Kymriah) and axicabtagene ciloleucel (Yescarta) being $375,000 and $475,000 respectively. The high cost of CAR T therapies is due to complex cellular manufacturing in specialized good manufacturing practice (GMP) facilities as well as the high level of hospital care necessary after CAR T cells are administered due to risks such as cytokine release syndrome. In the United States, CAR T cell therapies are covered by Medicare and by many but not all private insurers. Manufacturers of CAR T cells have developed alternative payment programs due to the high cost of CAR T therapy, such as by requiring payment only if the CAR T therapy induces a complete remission by a certain time point after treatment.
Additionally, CAR T cell therapies are not available worldwide yet. CAR T cell therapies have been approved in China, Australia, Singapore, the United Kingdom, and some European countries. In February 2022 Brazil approved tisagenlecleucel (Kymriah) treatment.
See also
Cell therapy
Checkpoint inhibitor
Glofitamab
Mosunetuzumab
Epcoritamab
Gene therapy
Immune checkpoint
References
External links
CAR T Cells: Engineering Patients' Immune Cells to Treat Their Cancers. National Cancer Institute, July 2019
Cancer immunotherapy
Gene therapy
Immune system
Leukemia
Lymphoma
T cells | CAR T cell | [
"Engineering",
"Biology"
] | 5,671 | [
"Immune system",
"Organ systems",
"Gene therapy",
"Genetic engineering"
] |
1,624,622 | https://en.wikipedia.org/wiki/AAI%20Aerosonde | The AAI Aerosonde is a small unmanned aerial vehicle (UAV) designed to collect weather data, including temperature, atmospheric pressure, humidity, and wind measurements over oceans and remote areas. The Aerosonde was developed by Insitu, and is now manufactured by Aerosonde Ltd, which is a strategic business of AAI Corporation. The Aerosonde is powered by a modified Enya R120 model aircraft engine, and carries on board a small computer, meteorological instruments, and a GPS receiver for navigation. It is also used by the United States Armed Forces for intelligence, surveillance and reconnaissance (ISR).
Design and development
On August 21, 1998, a Phase 1 Aerosonde nicknamed "Laima", after the ancient Latvian deity of good fortune, completed a 2,031 mile (3,270 km) flight across the Atlantic Ocean. This was the first crossing of the Atlantic Ocean by a UAV; at the time, it was also the smallest aircraft ever to cross the Atlantic (the smallest aircraft record was subsequently broken by the Spirit of Butts Farm UAV). Launched from a roof rack of a moving car due to its lack of undercarriage, Laima flew from Newfoundland, Canada to Benbecula, an island off the coast of Scotland in 26 hours 45 minutes in stormy weather, using approximately 1.5 U.S. gallons (1.25 imperial gallons or 5.7 litres) of gasoline (petrol). Other than for take-off and landing, the flight was autonomous, without external control, at an altitude of 5,500 ft (1,680 meters). Aerosondes have also been the first unmanned aircraft to penetrate tropical cyclones, with an initial mission in 2001 followed by eye penetrations in 2005.
Operational history
On 5 March 2012, the U.S. Special Operations Command (SOCOM) awarded AAI a contract to provide the Aerosonde-G for their Mid-Endurance UAS II program. The catapult-launched air vehicle has a takeoff weight that varies with engine type, with endurance of over 10 hours and an electro-optic/infrared and laser-pointer payload. The Aerosonde has been employed by SOCOM and U.S. Naval Air Systems Command (NAVAIR) under the designation MQ-19 under service provision contracts. A typical system comprises four air vehicles and two ground control stations that are accommodated in tents or tailored to fit in most vehicles. The system can also include remote video terminals for individual users to uplink new navigation waypoints and sensor commands to, and receive sensor imagery and video from, the vehicle from a ruggedized tablet device. Originally, the Aerosonde suffered from engine-reliability issues, but Textron says it has rectified those issues.
By November 2015, Textron Systems was performing Aerosonde operations in "eight or nine" countries for its users, including the U.S. Marine Corps, U.S. Air Force, and SOCOM, as well as for commercial users consisting of a customer in the oil and gas industry. Instead of buying hardware, customers pay for "sensor hours," and the company decides how many aircraft are produced to meet requirements. 4,000 fee-for-service hours were being performed monthly, and the Aerosonde had exceeded 110,000 flight hours in service.
Variants
Specifications (Aerosonde)
General characteristics
Crew: Remote-controlled
Length: 5 ft 8 in (1.7 m)
Wingspan: 9 ft 8 in (2.9 m)
Height: 2 ft 0 in (0.60 m)
Wing area: 6.1 ft2 (0.57 m2)
Empty: 22lb (10 kg)
Loaded: 28.9 lb (13.1 kg)
Maximum take-off: 28.9 lb (13.1 kg)
Powerplant: Modified Enya R120 model aircraft engine, 1.74 hp (1280 W)
Lycoming El-005 Multi Fuel power plant
Performance
Maximum speed: 69 mph (111 km/h)
Range: 100 miles (150 km)
Service ceiling: 15,000 ft (4,500 m)
Rate of climb: 2.5 m/sec (8.2 ft/sec)
Wing loading: 5 lb/ft2 (23 kg/m2)
Power/Mass: 0.06 hp/lb (98 W/kg)
References
Display information at Museum of Flight in Seattle, Washington.
G.J. Holland, T. McGeer and H.H. Youngren. Autonomous aerosondes for economical atmospheric soundings anywhere on the globe. Bulletin of the American Meteorological Society 73(12):1987-1999, December 1992.
Cyclone reconnaissance
P-H Lin & C-S Lee. Fly into typhoon Haiyan with UAV Aerosonde. American Meteorological Society conference paper 52113 (2002).
NASA Wallops Flight Facility press release: "Aerosonde UAV Completes First Operational Flights at NASA Wallops"
Laima flight
Tad McGeer. "Laima: The first Atlantic crossing by unmanned aircraft" (1998)
Aerosonde Pty Ltd. press release: "First UAV across the Atlantic"
University of Washington, Aeronautics and Astronautics Program, College of Engineering: (Aerosonde project web page)
External links
Aerosonde Pty Ltd. web site
Meteorological instrumentation and equipment
Single-engined pusher aircraft
1990s United States special-purpose aircraft
Unmanned aerial vehicles of Australia | AAI Aerosonde | [
"Technology",
"Engineering"
] | 1,108 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
8,933,657 | https://en.wikipedia.org/wiki/Aczel%27s%20anti-foundation%20axiom | In the foundations of mathematics, Aczel's anti-foundation axiom is an axiom set forth by , as an alternative to the axiom of foundation in Zermelo–Fraenkel set theory. It states that every accessible pointed directed graph corresponds to exactly one set. In particular, according to this axiom, the graph consisting of a single vertex with a loop corresponds to a set that contains only itself as element, i.e. a Quine atom. A set theory obeying this axiom is necessarily a non-well-founded set theory.
Accessible pointed graphs
An accessible pointed graph is a directed graph with a distinguished vertex (the "root") such that for any node in the graph there is at least one path in the directed graph from the root to that node.
The anti-foundation axiom postulates that each such directed graph corresponds to the membership structure of exactly one set. For example, the directed graph with only one node and an edge from that node to itself corresponds to a set of the form x = {x}.
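To illustrate the correspondence informally, the following Python sketch computes the "decoration" of a well-founded accessible pointed graph by ordinary recursion, assigning to each node the set of decorations of its children. The example graph and node labels are invented for illustration. The point of the anti-foundation axiom is that even graphs this naive recursion rejects, such as the single node with a self-loop, still correspond to exactly one set (in that case the Quine atom x = {x}), which cannot be built as a finite nested frozenset.

```python
# Decorating a well-founded accessible pointed graph with hereditarily finite sets.
# Plain recursion only terminates when no cycle is reachable from the root; the
# anti-foundation axiom asserts every apg, cyclic or not, has a unique decoration.

def decorate(graph, node, _path=None):
    """graph: dict mapping node -> iterable of child nodes; returns a frozenset."""
    _path = set() if _path is None else _path
    if node in _path:
        raise ValueError("graph is not well-founded; AFA still assigns it a unique set")
    return frozenset(decorate(graph, child, _path | {node}) for child in graph[node])

# The von Neumann ordinal 2 = {0, 1} = {{}, {{}}} as an apg rooted at 'a':
graph = {"a": ["b", "c"], "b": [], "c": ["d"], "d": []}
print(decorate(graph, "a"))  # frozenset({frozenset(), frozenset({frozenset()})})

# A single node with a self-loop (the Quine atom) is rejected by this naive code:
# decorate({"x": ["x"]}, "x")  -> ValueError
```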
See also
von Neumann universe
References
Axioms of set theory
Directed graphs
"Mathematics"
] | 252 | [
"Axioms of set theory",
"Mathematical axioms"
] |
8,940,073 | https://en.wikipedia.org/wiki/Jetstream%20furnace | Jetstream furnaces (later tempest wood-burning boilers), were an advanced design of wood-fired water heaters conceived by Dr. Richard Hill of the University of Maine in Orono, Maine, USA. The design heated a house to prove the theory, then, with government funding, became a commercial product.
Wood-burning water furnaces, boilers and melters
The furnace used a forced and induced draft fan to draw combustion air and exhaust gases through the combustion chamber at 1/3 of the speed of sound (100 m/s+). The wood was loaded into a vertical tube which passed through the water jacket into a refractory lined combustion chamber. In this chamber the burning took place and was limited to the ends of the logs. The water jacket prevented the upper parts of the logs from burning so they would gravity feed as the log was consumed.
The products of combustion left the chamber and passed through a narrow ceramic neck which reached temperatures of 2000 degrees F where the gases and tars released by the wood completed their burning. The products then passed through a refractory lined ash chamber which slowed the flow and let ash settle out. From here the hot gases travelled up through the boiler tubes which pass through the water jacket. Turbulators in the tubes improve heat transfer to the water jacket.
All this resulted in total efficiencies as high as 85%, but more commonly 75–80%, and allowed partly dry, unsplit wood to be burned just as effectively and cleanly. The particulate production was 100 times less than that of airtight stoves of the 1970s and 1980s, and was lower than that of representative oil-fired furnaces. The Jetstream produced approximately 0.1 grams per hour of soot, while EPA-certified woodstoves produce up to 7.2 grams per hour. The high combustion chamber velocities do result in fine particulate flyash being ejected from the stack.
The other aspect of Dr. Hill's design was the use of water storage. The furnace only operated at one setting, wide-open burn. A full load of hardwood, approximately 40 lbs, would be consumed in four hours, and the heat released was stored in water tanks for use throughout the day.
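A rough back-of-envelope check of these figures is sketched below. The 40 lb charge, four-hour burn, and 75–85% efficiency come from the text; the heating value of roughly 8,600 BTU per pound assumed for seasoned hardwood is not from the article, and actual values vary with species and moisture content.

```python
# Back-of-envelope estimate of heat banked per charge and average output during
# the burn. Heating value is an assumption; other figures are quoted in the text.

FUEL_LB = 40.0
HEATING_VALUE_BTU_PER_LB = 8600.0   # assumed; varies with species and moisture
EFFICIENCY = 0.80                   # mid-range of the 75-85% quoted above
BURN_HOURS = 4.0

heat_to_water_btu = FUEL_LB * HEATING_VALUE_BTU_PER_LB * EFFICIENCY
avg_output_btu_per_hr = heat_to_water_btu / BURN_HOURS
avg_output_kw = avg_output_btu_per_hr * 0.000293071  # 1 BTU/h = 0.000293071 kW

print(f"{heat_to_water_btu:,.0f} BTU banked per charge")                 # ~275,000 BTU
print(f"{avg_output_btu_per_hr:,.0f} BTU/h (~{avg_output_kw:.0f} kW) during the burn")
```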
The Hampton Industries model was designed to produce .
A Hampton Jetstream Mk II which was set to be the next model offered by Hampton Industries existed in prototype form. It was an upsized version of the unit offered for sale. The only component changed was the diameter of the burning chamber. This was enlarged within the standard casting. The prototype shares many of the design improvements seen in the Kerr Jetstream.
The Tempest was produced by Dumont Industries of Monmouth, ME, USA and is very similar to the Jetstream.
The patent for this device, termed a WoodFired Quick Recovery Water Heater, number 4583495, issued April 22, 1986, is assigned to the board of trustees of the University of Maine. There is no current production using the design of this patent. (January, 2008)
Production history
Hampton Industries of Hampton, PEI, Canada, pursued the design to fit into houses more easily.
Hampton Industries produced the Jetstream from January 1980 to June 1981 producing 500 units. At this point the company ceased operations with unfilled orders for hundreds more stoves and sales approximately 25% higher than projected. It was stated the advertising costs incurred before production depleted the principals in the business and a deal with a venture capitalist fell through at the last minute.
Within 4 weeks of entering receivership, Kerr Controls Ltd of Truro, Nova Scotia had purchased the manufacturing rights and resumed production of the slightly redesigned Jetstream in mid-September 1981 and produced 150 units just in the last quarter of 1981.
The Kerr Jetstream incorporated several updates including the available belt-driven fan replacing the Electrolux vacuum cleaner motor originally used. A removable refractory plug allowing access to the tunnel was added in the back of the unit. An updated control panel was adopted and the option of an electronic panel was added.
The design of the Hampton Industries furnaces and spare parts belong to Kerr Heating Products of Parrsboro, Nova Scotia. Some molds to replace parts still exist and are available through Kerr Controls or Kerr Heating.
Alternate designs
Current (2007) furnaces with similar designs:
Solo Series Wood Gasification Boilers by HS -Tarm
Alternate Heating Systems (AHS)
The Greenwood Hydronic Wood Furnace
Garn WHS
Kunzel Wood Gasification Boilers
Alternative Fuel Gasification
The EKO-LINE and KP-PYRO Boilers and Goliath Commercial Boiler from New Horizon Corporation Inc.
These companies use a process called gasification but the basics of forced draft, twin refractory lined combustion and ash chambers linked by a ceramic or refractory burner nozzle or tube and shell and tube heat exchanger remain common.
External links
(Hill's 1979 DOE Grant Report)
See also
Furnace
Hydronics
Gasification
Wood gas
Wood gas generator
References
Boilers
Plumbing
Heating, ventilation, and air conditioning | Jetstream furnace | [
"Chemistry",
"Engineering"
] | 1,008 | [
"Construction",
"Boilers",
"Plumbing",
"Pressure vessels"
] |
9,628,193 | https://en.wikipedia.org/wiki/Darwin%E2%80%93Radau%20equation | In astrophysics, the Darwin–Radau equation (named after Rodolphe Radau and Charles Galton Darwin) gives an approximate relation between the moment of inertia factor of a planetary body and its rotational speed and shape. The moment of inertia factor is directly related to the largest principal moment of inertia, C. It is assumed that the rotating body is in hydrostatic equilibrium and is an ellipsoid of revolution. The Darwin–Radau equation states
C / (M Re²) = (2/3) [1 − (2/5) √(1 + η)]
where M and Re represent the mass and mean equatorial radius of the body. Here λ is known as d'Alembert's parameter and the Radau parameter η is defined as
η = 5q / (2ε) − 2
where q is the geodynamical constant
q = ω² Re³ / (G M)
(with ω the rotational angular velocity and G the gravitational constant), and ε is the geometrical flattening
ε = (Re − Rp) / Re
where Rp is the mean polar radius and Re is the mean equatorial radius.
For Earth, q ≈ 0.00346 and ε ≈ 1/298.3 ≈ 0.00335, which yields a moment of inertia factor of about 0.331, a good approximation to the measured value of 0.3307.
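The same arithmetic can be checked with a short Python sketch; the q and ε values below are the approximate Earth figures quoted above.

```python
# Darwin-Radau estimate of Earth's moment of inertia factor from q and epsilon.
from math import sqrt

def moment_of_inertia_factor(q, eps):
    """C / (M * Re**2) from the Darwin-Radau relation, with eta = 5q/(2*eps) - 2."""
    eta = 5.0 * q / (2.0 * eps) - 2.0
    return (2.0 / 3.0) * (1.0 - (2.0 / 5.0) * sqrt(1.0 + eta))

q_earth = 3.46e-3           # geodynamical constant, omega^2 Re^3 / (G M)
eps_earth = 1.0 / 298.3     # geometrical flattening
print(round(moment_of_inertia_factor(q_earth, eps_earth), 4))  # ~0.331 vs. measured 0.3307
```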
References
Astrophysics
Planetary science
Equations of astronomy | Darwin–Radau equation | [
"Physics",
"Astronomy"
] | 205 | [
"Astronomical sub-disciplines",
"Concepts in astronomy",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Equations of astronomy",
"Planetary science",
"Planetary science stubs"
] |
9,629,714 | https://en.wikipedia.org/wiki/Adenine%20nucleotide%20translocator | Adenine nucleotide translocator (ANT), also known as the ADP/ATP translocase (ANT), ADP/ATP carrier protein (AAC) or mitochondrial ADP/ATP carrier, exchanges free ATP with free ADP across the inner mitochondrial membrane. ANT is the most abundant protein in the inner mitochondrial membrane and belongs to the mitochondrial carrier family.
Free ADP is transported from the cytoplasm to the mitochondrial matrix, while ATP produced from oxidative phosphorylation is transported from the mitochondrial matrix to the cytoplasm, thus providing the cell with its main energy currency. ADP/ATP translocases are exclusive to eukaryotes and are thought to have evolved during eukaryogenesis. Human cells express four ADP/ATP translocases: SLC25A4, SLC25A5, SLC25A6 and SLC25A31, which constitute more than 10% of the protein in the inner mitochondrial membrane. These proteins are classified under the mitochondrial carrier superfamily.
Types
In humans, there exist three paralogous ANT isoforms:
SLC25A4 – found primarily in heart and skeletal muscle
SLC25A5 – primarily expressed in fibroblasts
SLC25A6 – primarily expressed in liver
Structure
ANT has long been thought to function as a homodimer, but this concept was challenged by the projection structure of the yeast Aac3p solved by electron crystallography, which showed that the protein was three-fold symmetric and monomeric, with the translocation pathway for the substrate through the centre. The atomic structure of the bovine ANT confirmed this notion, and provided the first structural fold of a mitochondrial carrier. Further work has demonstrated that ANT is a monomer in detergents and functions as a monomer in mitochondrial membranes.
ADP/ATP translocase 1 is the major AAC in human cells and the archetypal protein of this family. It has a mass of approximately 30 kDa, consisting of 297 residues. It forms six transmembrane α-helices that form a barrel that results in a deep cone-shaped depression accessible from the outside where the substrate binds. The binding pocket, conserved throughout most isoforms, mostly consists of basic residues that allow for strong binding to ATP or ADP and has a maximal diameter of 20 Å and a depth of 30 Å. Indeed, arginine residues 96, 204, 252, 253, and 294, as well as lysine 38, have been shown to be essential for transporter activity.
Function
ADP/ATP translocase transports ATP synthesized from oxidative phosphorylation into the cytoplasm, where it can be used as the principal energy currency of the cell to power thermodynamically unfavorable reactions. After the consequent hydrolysis of ATP into ADP, ADP is transported back into the mitochondrial matrix, where it can be rephosphorylated to ATP. Because a human typically exchanges the equivalent of their own mass of ATP on a daily basis, ADP/ATP translocase is an important transporter protein with major metabolic implications.
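To give a sense of the scale implied by the statement that a person turns over roughly their own body mass in ATP each day, here is a rough back-of-envelope sketch in Python. The body mass, ATP molar mass and usable energy per mole are illustrative assumptions for this example, not values taken from the article.

```python
# Rough scale of daily ATP turnover (illustrative assumptions)
body_mass_kg   = 70.0    # assumed adult body mass
atp_molar_mass = 507.2   # g/mol, ATP (approximate)
energy_per_mol = 50.0    # kJ usable per mol ATP hydrolysed under cellular conditions (approx.)

moles_per_day = body_mass_kg * 1000.0 / atp_molar_mass
energy_kj     = moles_per_day * energy_per_mol

print(f"ATP recycled per day: ~{moles_per_day:.0f} mol")
print(f"Usable energy       : ~{energy_kj / 1000:.1f} MJ/day (~{energy_kj / 4.184:.0f} kcal/day)")
```

On these assumptions roughly 140 mol of ATP is hydrolysed and re-synthesised per day, and essentially every one of those molecules has to be carried across the inner mitochondrial membrane by the translocase.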
ANT transports the free, i.e. deprotonated, non-magnesium-bound and non-calcium-bound forms of ADP and ATP, in a 1:1 ratio. Transport is fully reversible, and its directionality is governed by the concentrations of its substrates (ADP and ATP inside and outside mitochondria), the chelators of the adenine nucleotides, and the mitochondrial membrane potential. The relationship of these parameters can be expressed by an equation solving for the "reversal potential of the ANT" (Erev_ANT), a value of the mitochondrial membrane potential at which no net transport of adenine nucleotides takes place by the ANT. The ANT and the F0-F1 ATP synthase are not necessarily in directional synchrony.
Apart from exchanging ADP and ATP across the inner mitochondrial membrane, the ANT also exhibits an intrinsic uncoupling activity.
ANT is an important modulatory and possibly structural component of the mitochondrial permeability transition pore, a channel involved in various pathologies whose function still remains elusive. Karch et al. propose a "multi-pore model" in which ANT is at least one of the molecular components of the pore.
Translocase mechanism
Under normal conditions, ATP and ADP cannot cross the inner mitochondrial membrane due to their high negative charges, but ADP/ATP translocase, an antiporter, couples the transport of the two molecules. The depression in ADP/ATP translocase alternatively faces the matrix and the cytoplasmic sides of the membrane. ADP in the intermembrane space, coming from the cytoplasm, binds the translocase and induces its eversion, resulting in the release of ADP into the matrix. Binding of ATP from the matrix induces eversion and results in the release of ATP into the intermembrane space, subsequently diffusing to the cytoplasm, and concomitantly brings the translocase back to its original conformation. ATP and ADP are the only natural nucleotides recognized by the translocase.
The net process is denoted by:
ADP³⁻ (cytoplasm) + ATP⁴⁻ (matrix) → ADP³⁻ (matrix) + ATP⁴⁻ (cytoplasm)
ADP/ATP exchange is energetically expensive: about 25% of the energy yielded from electron transfer by aerobic respiration, or one hydrogen ion, is consumed to regenerate the membrane potential that is tapped by ADP/ATP translocase.
The translocator cycles between two states, called the cytoplasmic and matrix state, opening up to these compartments in an alternating way. There are structures available that show the translocator locked in a cytoplasmic state by the inhibitor carboxyatractyloside, or in the matrix state by the inhibitor bongkrekic acid.
Alterations
Rare but severe diseases such as mitochondrial myopathies are associated with dysfunctional human ADP/ATP translocase. Mitochondrial myopathies (MM) refer to a group of clinically and biochemically heterogeneous disorders that share common features of major mitochondrial structural abnormalities in skeletal muscle. The major morphological hallmark of MM is ragged, red fibers containing peripheral and intermyofibrillar accumulations of abnormal mitochondria. In particular, autosomal dominant progressive external ophthalmoplegia (adPEO) is a common disorder associated with dysfunctional ADP/ATP translocase and can induce paralysis of muscles responsible for eye movements. General symptoms are not limited to the eyes and can include exercise intolerance, muscle weakness, hearing deficit, and more. adPEO shows Mendelian inheritance patterns but is characterized by large-scale mitochondrial DNA (mtDNA) deletions. mtDNA contains few introns, or non-coding regions of DNA, which increases the likelihood of deleterious mutations. Thus, any modification of ADP/ATP translocase mtDNA can lead to a dysfunctional transporter, particularly residues involved in the binding pocket which will compromise translocase efficacy. MM is commonly associated with dysfunctional ADP/ATP translocase, but MM can be induced through many different mitochondrial abnormalities.
Inhibition
ADP/ATP translocase is very specifically inhibited by two families of compounds. The first family, which includes atractyloside (ATR) and carboxyatractyloside (CATR), binds to the ADP/ATP translocase from the cytoplasmic side, locking it in a cytoplasmic side open conformation. In contrast, the second family, which includes bongkrekic acid (BA) and isobongkrekic acid (isoBA), binds the translocase from the matrix, locking it in a matrix side open conformation. The negatively charged groups of the inhibitors bind strongly to the positively charged residues deep within the binding pocket. The high affinity (Kd in the nanomolar range) makes each inhibitor a deadly poison by obstructing cellular respiration/energy transfer to the rest of the cell. There are structures available that show the translocator locked in a cytoplasmic state by the inhibitor carboxyatractyloside, or in the matrix state by the inhibitor bongkrekic acid.
History
In 1955, Siekevitz and Potter demonstrated that adenine nucleotides were distributed in cells in two pools located in the mitochondrial and cytosolic compartments. Shortly thereafter, Pressman hypothesized that the two pools could exchange nucleotides. However, the existence of an ADP/ATP transporter was not postulated until 1964 when Bruni et al. uncovered an inhibitory effect of atractyloside on the energy-transfer system (oxidative phosphorylation) and ADP binding sites of rat liver mitochondria.
Soon after, an overwhelming amount of research was done to prove the existence of the transporter and to elucidate the link between ADP/ATP translocase and energy transport. cDNA of ADP/ATP translocase was sequenced for bovine in 1982 and for the yeast species Saccharomyces cerevisiae in 1986, before Battini et al. finally sequenced a cDNA clone of the human transporter in 1989. The homology in the coding sequences between human and yeast ADP/ATP translocase was 47%, while the bovine and human sequences showed a remarkable agreement of 266 out of 297 residues, or 89.6%. In both cases, the most conserved residues lie in the ADP/ATP substrate binding pocket.
See also
Mitochondrial carrier
Cellular respiration
Oxidative phosphorylation
References
External links
Solute carrier family
Cellular respiration | Adenine nucleotide translocator | [
"Chemistry",
"Biology"
] | 2,057 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
9,629,917 | https://en.wikipedia.org/wiki/Building-integrated%20photovoltaics | Building-integrated photovoltaics (BIPV) are photovoltaic materials that are used to replace conventional building materials in parts of the building envelope such as the roof, skylights, or façades. They are increasingly being incorporated into the construction of new buildings as a principal or ancillary source of electrical power, although existing buildings may be retrofitted with similar technology. The advantage of integrated photovoltaics over more common non-integrated systems is that the initial cost can be offset by reducing the amount spent on building materials and labor that would normally be used to construct the part of the building that the BIPV modules replace. In addition, BIPV allows for more widespread solar adoption when the building's aesthetics matter and traditional rack-mounted solar panels would disrupt the intended look of the building.
The term building-applied photovoltaics (BAPV) is sometimes used to refer to photovoltaics that are retrofit – integrated into the building after construction is complete. Most building-integrated installations are actually BAPV. Some manufacturers and builders differentiate new construction BIPV from BAPV.
History
PV applications for buildings began appearing in the 1970s. Aluminum-framed photovoltaic modules were connected to, or mounted on, buildings that were usually in remote areas without access to an electric power grid. In the 1980s photovoltaic module add-ons to roofs began being demonstrated. These PV systems were usually installed on utility-grid-connected buildings in areas with centralized power stations. In the 1990s BIPV construction products specially designed to be integrated into a building envelope became commercially available. A 1998 doctoral thesis by Patrina Eiffert, entitled An Economic Assessment of BIPV, hypothesized that one day there would be an economic value for trading Renewable Energy Credits (RECs). A 2011 economic assessment and brief overview of the history of BIPV by the U.S. National Renewable Energy Laboratory suggests that there may be significant technical challenges to overcome before the installed cost of BIPV is competitive with photovoltaic panels. However, there is a growing consensus that through their widespread commercialization, BIPV systems will become the backbone of the zero energy building (ZEB) European target for 2020. Despite the technical promise, social barriers to widespread use have also been identified, such as the conservative culture of the building industry and integration with high-density urban design. These authors suggest that enabling long-term use likely depends on effective public policy decisions as much as on technological development.
Forms
The majority of BIPV products use one of two technologies: Crystalline Silicon Solar Cells (c-Si) or Thin-Film Solar Cells. C-Si technologies comprise wafers of crystalline silicon, which generally operate at a higher efficiency than Thin-Film cells but are more expensive to produce. The applications of these two technologies can be categorized by five main types of BIPV products:
Standard in-roof systems. These generally take the form of applicable strips of photovoltaic cells.
Semi-transparent systems. These products are typically used in greenhouse or cold-weather applications where solar energy must simultaneously be captured and allowed into the building.
Cladding systems. There are a broad range of these systems; their commonality being their vertical application on a building façade.
Solar Tiles and Shingles. These are the most common BIPV systems as they can easily be swapped out for conventional shingle roof finishes.
Flexible Laminates. Commonly procured in thin-sheet form, these products can be adhered to a variety of forms, primarily roof forms.
With the exception of flexible laminates, each of the above categories can utilize either c-Si or Thin-Film technologies, while flexible laminates are restricted to Thin-Film technologies – this renders Thin-Film BIPV products ideal for advanced design applications that have a kinetic aspect.
Between the five categories, BIPV products can be applied in a variety of scenarios: pitched roofs, flat roofs, curved roofs, semi-transparent façades, skylights, shading systems, external walls, and curtain walls, with flat roofs and pitched roofs being the most ideal for solar energy capture. The ranges of roofing and shading system BIPV products are most commonly used in residential applications whereas the wall and cladding systems are most commonly used in commercial settings. Overall, roofing BIPV systems currently have more of the market share and are generally more efficient than façade and cladding BIPV systems due to their orientation to the sun.
Building-integrated photovoltaic modules are available in several forms:
Flat roofs
The most widely installed to date is an amorphous thin-film solar cell integrated into a flexible polymer module, which is attached to the roofing membrane using an adhesive sheet between the solar module backsheet and the roofing membrane. Copper indium gallium selenide (CIGS) technology is now able to deliver cell efficiencies of 17%, as produced by a US-based company, and comparable building-integrated module efficiencies in TPO single-ply membranes, achieved by a UK-based company through the fusion of these cells into the membrane.
Pitched roofs
Solar roof tiles are (ceramic) roof tiles with integrated solar modules. The ceramic solar roof tile was developed and patented by a Dutch company in 2013.
Modules shaped like multiple roof tiles.
Solar shingles are modules designed to look and act like regular shingles, while incorporating a flexible thin film cell.
Such roof-integrated modules can extend normal roof life by protecting the insulation and membranes from ultraviolet rays and water degradation. They do this by eliminating condensation, because the dew point is kept above the roofing membrane.
Metal pitched roofs (both structural and architectural) are now being integrated with PV functionality either by bonding a free-standing flexible module or by heat and vacuum sealing of the CIGS cells directly onto the substrate
Façade
Façades can be installed on existing buildings, giving old buildings a whole new look. These modules are mounted on the façade of the building, over the existing structure, which can increase the appeal of the building and its resale value.
Glazing
Photovoltaic windows are (semi)transparent modules that can be used to replace a number of architectural elements commonly made with glass or similar materials, such as windows and skylights. In addition to producing electric energy, these can create further energy savings due to superior thermal insulation properties and solar radiation control.
Photovoltaic Stained Glass: The integration of energy harvesting technologies into homes and commercial buildings has opened up additional areas of research which place greater considerations on the end product's overall aesthetics. While the goal is still to maintain high levels of efficiency, new developments in photovoltaic windows also aim to offer consumers optimal levels of glass transparency and/or the opportunity to select from a range of colors. Different colored 'stained glass' solar panels can be optimally designed to absorb specific ranges of wavelengths from the broader spectrum. Colored photovoltaic glass has been successfully developed using semi transparent, perovskite, and dye sensitized solar cells.
Plasmonic solar cells that absorb and reflect colored light have been created with Fabry-Pérot etalon technology. These cells are composed of "two parallel reflecting metal films and a dielectric cavity film between them." The two electrodes are made from Ag and the cavity between them is Sb2O3 based. Modifying the thickness and refractive index of the dielectric cavity changes which wavelength will be most optimally absorbed. Matching the color of the absorption-layer glass to the specific portion of the spectrum that the cell's thickness and refractive index are best tuned to transmit both enhances the aesthetic of the cell by intensifying its color and helps to minimize photocurrent losses. Transmittances of 34.7% and 24.6% were achieved in red and blue light devices respectively. Blue devices can convert 13.3% of the light absorbed into power, making them the most efficient across all colored devices developed and tested.
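The color selection in such Fabry-Pérot-type cells follows from the usual resonance condition for a thin cavity between two mirrors, roughly 2·n·d = m·λ at normal incidence. The short Python sketch below evaluates that condition for a few cavity thicknesses; the refractive index and thicknesses are assumed, illustrative numbers rather than values from the cited work.

```python
# First-order Fabry-Perot resonance: 2 * n * d = m * wavelength (normal incidence)
n_cavity = 2.1                 # assumed refractive index of the Sb2O3 cavity layer
m        = 1                   # interference order

for d_nm in (107, 130, 155):   # illustrative cavity thicknesses in nm
    wavelength_nm = 2 * n_cavity * d_nm / m
    print(f"d = {d_nm} nm  ->  resonance near {wavelength_nm:.0f} nm")
```

With these assumed numbers the resonances land near 450, 545 and 650 nm, i.e. blue, green and red, which is the qualitative behaviour described above: thicker cavities shift the selected color toward the red.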
Perovskite solar cell technology can be tuned to red, green and blue by changing the metallic nanowire thickness to 8, 20 and 45 nm respectively. Maximum power efficiencies of 10.12%, 8.17% and 7.72% were achieved by matching glass reflectance to the wavelength that the specific cell is designed to most optimally transmit.
Dye-sensitized solar cells employ liquid electrolytes to capture light and convert it into usable energy; this is achieved in a similar way to how natural pigments facilitate photosynthesis in plants. While chlorophyll is the specific pigment responsible for producing the green color in leaves, other dyes found in nature, such as carotenoids and anthocyanins, produce variations of orange and purple dyes. Researchers from the University of Concepcion have proved the viability of dye-sensitized colored solar cells that both appear colored and selectively absorb specific wavelengths of light. This low-cost solution uses natural pigments extracted from maqui fruit, black myrtle and spinach as sensitizers. These natural sensitizers are then placed between two layers of transparent glass. While the efficiency levels of these particularly low-cost cells remain unclear, past research on organic dye cells has been able to achieve a "high power conversion efficiency of 9.8%."
Transparent and translucent photovoltaics
Transparent solar panels use a tin oxide coating on the inner surface of the glass panes to conduct current out of the cell. The cell contains titanium oxide that is coated with a photoelectric dye.
Most conventional solar cells use visible and infrared light to generate electricity. In contrast, the innovative new solar cell also uses ultraviolet radiation. Used to replace conventional window glass, or placed over the glass, the installation surface area could be large, leading to potential uses that take advantage of the combined functions of power generation, lighting and temperature control.
Another name for transparent photovoltaics is "translucent photovoltaics" (they transmit half the light that falls on them). Similar to inorganic photovoltaics, organic photovoltaics are also capable of being translucent.
Types of transparent and translucent photovoltaics
Non-wavelength-selective
Some non-wavelength-selective photovoltaics achieve semi-transparency by spatial segmentation of opaque solar cells. This method uses any type of opaque photovoltaic cell and spaces several small cells out on a transparent substrate. Spacing them out in this way reduces power conversion efficiencies dramatically while increasing transmission.
Another branch of non-wavelength-selective photovoltaics utilize visibly absorbing thin-film semi-conductors with small thicknesses or large enough band gaps that allow light to pass through. This results in semi-transparent photovoltaics with a similar direct trade off between efficiency and transmission as spatially segmented opaque solar cells.
Wavelength-selective
Wavelength-selective photovoltaics achieve transparency by utilizing materials that only absorb UV and/or NIR light and were first demonstrated in 2011. Despite their higher transmissions, lower power conversion efficiencies have resulted due to a variety of challenges. These include small exciton diffusion lengths, scaling of transparent electrodes without jeopardizing efficiency, and general lifetime due to the volatility of organic materials used in TPVs in general.
Innovations in transparent and translucent photovoltaics
Early attempts at developing non-wavelength-selective semi-transparent organic photovoltaics using very thin active layers that absorbed in the visible spectrum were only able to achieve efficiencies below 1%. However in 2011, transparent organic photovoltaics that utilized an organic chloroaluminum phthalocyanine (ClAlPc) donor and a fullerene acceptor exhibited absorption in the ultraviolet and near-infrared (NIR) spectrum with efficiencies around 1.3% and visible light transmission of over 65%. In 2017, MIT researchers developed a process to successfully deposit transparent graphene electrodes onto organic solar cells resulting in a 61% transmission of visible light and improved efficiencies ranging from 2.8%-4.1%.
Perovskite solar cells, popular due to their promise as next-generation photovoltaics with efficiencies over 25%, have also shown promise as translucent photovoltaics. In 2015, a semitransparent perovskite solar cell using a methylammonium lead triiodide perovskite and a silver nanowire mesh top electrode demonstrated 79% transmission at an 800 nm wavelength and efficiencies at around 12.7%.
Government subsidies
In some countries, additional incentives, or subsidies, are offered for building-integrated photovoltaics in addition to the existing feed-in tariffs for stand-alone solar systems. Since July 2006 France has offered the highest incentive for BIPV, equal to an extra premium of EUR 0.25/kWh paid in addition to the 30 euro cents per kWh for PV systems. These incentives are offered in the form of a rate paid for electricity fed to the grid.
European Union
France €0.25/kWh
Germany €0.05/kWh façade bonus expired in 2009
Italy €0.04–€0.09/kWh
United Kingdom 4.18 p/kWh
Spain, compared with a non- building installation that receives €0.28/kWh (RD 1578/2008):
≤20 kW: €0.34/kWh
>20 kW: €0.31/kWh
United States
United States – Varies by state. Check Database of State Incentives for Renewables & Efficiency for more details.
China
Further to the announcement of a subsidy program for BIPV projects in March 2009, offering RMB 20 per watt for BIPV systems and RMB 15 per watt for rooftop systems, the Chinese government recently unveiled a photovoltaic energy subsidy program, "the Golden Sun Demonstration Project". The subsidy program aims at supporting the development of photovoltaic electricity generation ventures and the commercialization of PV technology. The Ministry of Finance, the Ministry of Science and Technology and the National Energy Bureau jointly announced the details of the program in July 2009. Qualified on-grid photovoltaic electricity generation projects, including rooftop, BIPV, and ground-mounted systems, are entitled to receive a subsidy equal to 50% of the total investment of each project, including associated transmission infrastructure. Qualified off-grid independent projects in remote areas will be eligible for subsidies of up to 70% of the total investment. In mid-November, China's finance ministry selected 294 projects totaling 642 megawatts, which come to roughly RMB 20 billion ($3 billion) in costs, for its subsidy plan to dramatically boost the country's solar energy production.
Other integrated photovoltaics
Vehicle-integrated photovoltaics (ViPV) are similar for vehicles. Solar cells could be embedded into panels exposed to sunlight such as the hood, roof and possibly the trunk depending on a car's design.
Challenges
Performance
Because BIPV systems generate on-site power and are integrated into the building envelope, the system’s output power and thermal properties are the two primary performance indicators. Conventional BIPV systems have a lower heat dissipation capability than rack-mounted PV, which results in BIPV modules experiencing higher operating temperatures. Higher temperatures may degrade the module's semiconducting material, decreasing the output efficiency and precipitating early failure. In addition, the efficiency of BIPV systems is sensitive to weather conditions, and the use of inappropriate BIPV systems may also reduce their energy output efficiency. In terms of thermal performance, BIPV windows can reduce the cooling load compared to conventional clear glass windows, but may increase the heating load of the building.
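The sensitivity to operating temperature mentioned above is commonly captured with a linear power-temperature coefficient. The Python sketch below applies that standard derating model to compare a well-ventilated rack-mounted module with a hotter building-integrated one; the coefficient, nameplate power and cell temperatures are assumed, typical-of-crystalline-silicon values rather than figures from this article.

```python
def derated_power(p_stc_w: float, cell_temp_c: float, temp_coeff_per_c: float = -0.004) -> float:
    """Linear derating of PV output relative to STC (25 degC cell temperature)."""
    return p_stc_w * (1.0 + temp_coeff_per_c * (cell_temp_c - 25.0))

p_stc = 300.0  # nameplate module power in watts (assumed)
cases = (("rack-mounted, well ventilated", 45.0),
         ("building-integrated, poorly ventilated", 65.0))

for label, t_cell in cases:
    p = derated_power(p_stc, t_cell)
    print(f"{label:40s} {t_cell:4.0f} degC -> {p:6.1f} W ({100 * p / p_stc:.1f}% of STC)")
```

On these assumptions the poorly ventilated module gives up roughly a further 8% of its rated output, which is the kind of penalty the heat-dissipation remark above refers to.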
Cost
The high upfront investment in BIPV systems is one of the biggest barriers to implementation. In addition to the upfront cost of purchasing BIPV components, the highly integrated nature of BIPV systems increases the complexity of the building design, which in turn leads to increased design and construction costs. Also, a shortage of experienced practitioners leads to higher labour costs in the development of BIPV projects.
Policy and regulation
Although many countries have support policies for PV, most do not have additional benefits for BIPV systems. And typically, BIPV systems need to comply with both building and PV industry standards, which places higher demands on implementing BIPV systems. In addition, government policies that keep conventional energy prices low reduce the benefits of BIPV systems; this is particularly evident in countries where the price of conventional electricity is very low or subsidized by governments, such as in GCC countries.
Public understanding
Studies show that public awareness of BIPV is limited and the cost is generally considered too high. Deepening public understanding of BIPV through various public channels (e.g., policy, community engagement, and demonstration buildings) is likely to be beneficial to its long-term development.
See also
Distributed generation
List of pioneering solar buildings
Microgeneration
Nanoinverter
Passive solar building design
Perovskite solar cell
Solar panel
Rooftop solar power
Roof tile
Smart glass, a type of window blind capable of conserving energy for cooling
Solar cell
Solar power
Solar thermal
Zero-energy building
References
Further reading
External links
Building integrated photovoltaics an overview of the existing products and their fields of application
Canadian Solar Buildings Research Network
Building Integrated Photovoltaics
EURAC Research Building Integrated Photovoltaic on-line platform
PV UP-SCALE, a European-funded project (contract EIE/05/171/SI2.420208) related to the large-scale implementation of photovoltaics (PV) in European cities.
Applications of photovoltaics
Building materials | Building-integrated photovoltaics | [
"Physics",
"Engineering"
] | 3,619 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
9,630,312 | https://en.wikipedia.org/wiki/De%20Sitter%20effect | In astrophysics, the term de Sitter effect (named after the Dutch physicist Willem de Sitter) has been applied to two unrelated phenomena:
De Sitter double star experiment
De Sitter precession – also known as geodetic precession or the geodetic effect
Astrophysics | De Sitter effect | [
"Physics",
"Astronomy"
] | 61 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
9,632,150 | https://en.wikipedia.org/wiki/Einstein%E2%80%93de%20Haas%20effect | The Einstein–de Haas effect is a physical phenomenon in which a change in the magnetic moment of a free body causes this body to rotate. The effect is a consequence of the conservation of angular momentum. It is strong enough to be observable in ferromagnetic materials. The experimental observation and accurate measurement of the effect demonstrated that the phenomenon of magnetization is caused by the alignment (polarization) of the angular momenta of the electrons in the material along the axis of magnetization. These measurements also allow the separation of the two contributions to the magnetization: that which is associated with the spin and with the orbital motion of the electrons.
The effect also demonstrated the close relation between the notions of angular momentum
in classical and in quantum physics.
The effect was predicted by O. W. Richardson in 1908. It is named after Albert Einstein and Wander Johannes de Haas, who published two papers in 1915 claiming the first experimental observation of the effect.
Description
The orbital motion of an electron (or any charged particle) around a certain axis produces a magnetic dipole with the magnetic moment μ = (q/2m) L, where q and m are the charge and the mass of the particle, while L is the angular momentum of the motion (SI units are used). In contrast, the intrinsic magnetic
moment of the electron is related to its intrinsic angular momentum (spin) approximately as μ_S ≈ (q/m) S, i.e. it is about twice as large relative to the angular momentum (see Landé g-factor and anomalous magnetic dipole moment).
If a number of electrons in a unit volume of the material have a total orbital angular momentum of ⟨L⟩ with respect to a certain axis, their magnetic moments would produce the magnetization M_L = (q/2m) ⟨L⟩. For the spin contribution the relation would be M_S ≈ (q/m) ⟨S⟩. A change in magnetization, ΔM, implies a proportional change, Δ⟨L⟩ or Δ⟨S⟩, in the angular momentum of the electrons involved. Provided that there is no external torque along the magnetization axis applied to the body in the process, the rest of the body (practically all its mass) should acquire the opposite angular momentum change due to the law of conservation of angular momentum.
Experimental setup
The experiments involve a cylinder of a ferromagnetic material suspended with the aid of a thin string inside a cylindrical coil which is used to provide an axial magnetic field that magnetizes the cylinder along its axis. A change in the electric current in the coil changes the magnetic field the coil produces, which changes the magnetization of the ferromagnetic cylinder and, due to the effect described, its angular momentum. A change in the angular momentum causes a change in the rotational speed of the cylinder, monitored using optical devices. The external field interacting with a magnetic dipole cannot produce any torque (τ = μ × B) along the field direction. In these experiments the magnetization happens along the direction of the field produced by the magnetizing coil; therefore, in the absence of other external fields, the angular momentum along this axis must be conserved.
In spite of the simplicity of such a layout, the experiments are not easy. The magnetization can be measured accurately with the help of a pickup coil around the cylinder, but the associated change in the angular momentum is small. Furthermore, the ambient magnetic fields, such as the Earth's field, can provide a 10⁷–10⁸ times larger mechanical impact on the magnetized cylinder. The later accurate experiments were done in a specially constructed demagnetized environment with active compensation of the ambient fields. The measurement methods typically use the properties of the torsion pendulum, providing periodic current to the magnetization coil at frequencies close to the pendulum's resonance. The experiments measure directly the ratio λ = ΔJ/ΔM of the angular momentum transferred to the body to the change of its magnetic moment, and derive the dimensionless gyromagnetic factor of the material from the definition g′ ≡ (2m_e/e)·(ΔM/ΔJ). The quantity γ = ΔM/ΔJ = 1/λ is called the gyromagnetic ratio.
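To see why the mechanical signal is so small, one can estimate the angular velocity that a full reversal of the magnetization would impart to a small iron cylinder, using ΔJ = 2ΔM/(g′·e/m_e) and Δω = ΔJ/I. In the Python sketch below every input (sample dimensions, saturation magnetization, g′) is an assumed round number chosen only to illustrate the order of magnitude.

```python
import math

# Illustrative sample: a small iron cylinder (all values assumed)
radius   = 1.0e-3        # m
height   = 5.0e-2        # m
density  = 7874.0        # kg/m^3, iron
M_sat    = 1.7e6         # A/m, approximate saturation magnetization of iron
g_factor = 2.0           # assumed near-pure-spin value
e_over_m = 1.7588e11     # C/kg, electron charge-to-mass ratio

volume  = math.pi * radius**2 * height
mass    = density * volume
inertia = 0.5 * mass * radius**2                 # solid cylinder about its axis

delta_M = 2.0 * M_sat * volume                   # full reversal of the magnetic moment, A*m^2
delta_J = 2.0 * delta_M / (g_factor * e_over_m)  # angular momentum transferred, kg*m^2/s
omega   = delta_J / inertia                      # resulting angular velocity, rad/s

print(f"angular momentum transferred: {delta_J:.2e} kg m^2/s")
print(f"resulting angular velocity  : {omega:.2e} rad/s")
```

The result is of the order of 10⁻³ rad/s for a gram-sized sample, which is why resonant excitation of a torsion pendulum and careful shielding of ambient fields are needed to see the effect clearly.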
History
The expected effect and a possible experimental approach were first described by Owen Willans Richardson in a paper published in 1908. The electron spin was discovered in 1925; therefore only the orbital motion of electrons was considered before that. Richardson derived the expected relation of M/J = e/(2m_e) between the magnetic moment and the angular momentum. The paper mentioned the ongoing attempts to observe the effect at Princeton University.
In that historical context the idea of the orbital motion of electrons in atoms contradicted classical physics. This contradiction was addressed in the Bohr model in 1913, and later was removed with the development of quantum mechanics.
Samuel Jackson Barnett, motivated by the Richardson's paper realized that the opposite effect should also happen – a change in rotation should cause a magnetization (the Barnett effect). He published the idea in 1909, after which he pursued the experimental studies of the effect.
Einstein and de Haas published two papers in April 1915 containing a description of the expected effect and the experimental results. In the paper "Experimental proof of the existence of Ampère's molecular currents" they described in detail the experimental apparatus and the measurements performed. Their result for the ratio of the angular momentum of the sample to its magnetic moment (the authors called it λ) was very close (within 3%) to the expected orbital-only value of 2m_e/e. It was realized later that their result, with the quoted uncertainty of 10%, was not consistent with the correct value, which is close to m_e/e. Apparently, the authors underestimated the experimental uncertainties.
Barnett reported the results of his measurements at several scientific conferences in 1914. In October 1915 he published the first observation of the Barnett effect in a paper titled "Magnetization by Rotation". His result for this ratio was close to the right value of m_e/e, which was unexpected at that time.
In 1918 John Quincy Stewart published the results of his measurements confirming Barnett's result. In his paper he called the phenomenon the 'Richardson effect'.
The following experiments demonstrated that the gyromagnetic ratio for iron is indeed close to e/m_e rather than e/(2m_e). This phenomenon, dubbed the "gyromagnetic anomaly", was finally explained after the discovery of the spin and the introduction of the Dirac equation in 1928.
The experimental equipment was later donated by Geertruida de Haas-Lorentz, wife of de Haas and daughter of Lorentz, to the Ampère Museum in Lyon, France, in 1961. It was subsequently lost and was rediscovered in 2023.
Literature about the effect and its discovery
Detailed accounts of the historical context and the explanations of the effect can be found in the literature. Commenting on the papers by Einstein, Calaprice in The Einstein Almanac writes:
52. "Experimental Proof of Ampère's Molecular Currents" (Experimenteller Nachweis der Ampereschen Molekularströme) (with Wander J. de Hass). Deutsche Physikalische Gesellschaft, Verhandlungen 17 (1915): 152–170.
Considering [André-Marie] Ampère's hypothesis that magnetism is caused by the microscopic circular motions of electric charges, the authors proposed a design to test [Hendrik] Lorentz's theory that the rotating particles are electrons. The aim of the experiment was to measure the torque generated by a reversal of the magnetisation of an iron cylinder.
Calaprice further writes:
53. "Experimental Proof of the Existence of Ampère's Molecular Currents" (with Wander J. de Haas) (in English). Koninklijke Akademie van Wetenschappen te Amsterdam, Proceedings 18 (1915–16).
Einstein wrote three papers with Wander J. de Haas on experimental work they did together on Ampère's molecular currents, known as the Einstein–De Haas effect. He immediately wrote a correction to paper 52 (above) when Dutch physicist H. A. Lorentz pointed out an error. In addition to the two papers above [that is 52 and 53] Einstein and de Haas cowrote a "Comment" on paper 53 later in the year for the same journal. This topic was only indirectly related to Einstein's interest in physics, but, as he wrote to his friend Michele Besso, "In my old age I am developing a passion for experimentation."
The second paper by Einstein and de Haas was communicated to the "Proceedings of the Royal Netherlands Academy of Arts and Sciences" by Hendrik Lorentz who was the father-in-law of de Haas. According to Viktor Frenkel, Einstein wrote in a report to the German Physical Society: "In the past three months I have performed experiments jointly with de Haas–Lorentz in the Imperial Physicotechnical Institute that have firmly established the existence of Ampère molecular currents." Probably, he attributed the hyphenated name to de Haas, not meaning both de Haas and H. A. Lorentz.
Later measurements and applications
The effect was used to measure the properties of various ferromagnetic elements and alloys. The key to more accurate measurements was better magnetic shielding, while the methods were essentially similar to those of the first experiments. The experiments measure the value of the g-factor g′ (here we use the projections of the pseudovectors M and J onto the magnetization axis and omit the sign). The magnetization and the angular momentum consist of the contributions from the spin and the orbital angular momentum: M = M_S + M_L, J = J_S + J_L.
Using the known relations M_L = (e/2m_e) J_L and M_S = g_e (e/2m_e) J_S, where g_e ≈ 2.002 is the g-factor for the anomalous magnetic moment of the electron, one can derive the relative spin contribution to magnetization as: M_S/M = [g_e/(g_e − 1)]·[(g′ − 1)/g′].
For pure iron the measured value is , and
. Therefore, in pure iron 96% of the magnetization
is provided by the polarization of the electrons' spins,
while the remaining 4% is provided by the polarization of their orbital angular momenta.
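The relation above can be evaluated directly. In the Python sketch below the measured g′ for iron is an assumed illustrative value of about 1.92, chosen to be consistent with the roughly 96% spin share stated in the text, and g_e is the free-electron value.

```python
def spin_fraction(g_prime: float, g_e: float = 2.00232) -> float:
    """Relative spin contribution M_S / M derived from the measured gyromagnetic factor g'."""
    return (g_e / (g_e - 1.0)) * (g_prime - 1.0) / g_prime

g_prime_iron = 1.92  # assumed illustrative value for pure iron
frac = spin_fraction(g_prime_iron)
print(f"spin contribution   : {100 * frac:.1f} %")
print(f"orbital contribution: {100 * (1 - frac):.1f} %")
```

With these inputs the spin share comes out at about 96%, matching the split quoted above; a measured g′ closer to 2 would push the spin share correspondingly closer to 100%.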
See also
Barnett effect
References
External links
"Einsteins's only experiment" (links to a directory of the Home Page of Physikalisch-Technische Bundesanstalt (PTB), Germany ). Here is a replica to be seen of the original apparatus on which the Einstein–de Haas experiment was carried out.
Experimental physics
Magnetism
Quantum magnetism
Albert Einstein | Einstein–de Haas effect | [
"Physics",
"Materials_science"
] | 1,967 | [
"Condensed matter physics",
"Quantum magnetism",
"Experimental physics",
"Quantum mechanics"
] |
9,632,448 | https://en.wikipedia.org/wiki/Fermi%20coordinates | In the mathematical theory of Riemannian geometry, there are two uses of the term Fermi coordinates. In one use they are local coordinates that are adapted to a geodesic. In a second, more general one, they are local coordinates that are adapted to any world line, even not geodesical.
Take a future-directed timelike curve γ = γ(τ), τ being the proper time along γ in the spacetime M. Assume that p = γ(0) is the initial point of γ. Fermi coordinates adapted to γ are constructed this way. Consider an orthonormal basis of the tangent space at p with e₀ parallel to the four-velocity dγ/dτ. Transport the basis {e_a} (a = 0, 1, 2, 3) along γ(τ) making use of Fermi–Walker's transport. The basis {e_a(τ)} at each point γ(τ) is still orthonormal with e₀(τ) parallel to dγ/dτ and is non-rotated (in a precise sense related to the decomposition of Lorentz transformations into pure transformations and rotations) with respect to the initial basis; this is the physical meaning of Fermi–Walker's transport.
Finally construct a coordinate system in an open tube T, a neighbourhood of γ, by emitting all spacelike geodesics through γ(τ) with initial tangent vector v¹e₁(τ) + v²e₂(τ) + v³e₃(τ), for every τ. A point q ∈ T has coordinates (τ(q), v¹(q), v²(q), v³(q)), where v¹(q)e₁ + v²(q)e₂ + v³(q)e₃ is the only such vector whose associated geodesic reaches q for the value 1 of its affine parameter, and τ(q) is the only time along γ for which this geodesic reaching q exists.
If γ itself is a geodesic, then Fermi–Walker's transport becomes the standard parallel transport and Fermi's coordinates become standard Riemannian coordinates adapted to γ. In this case, using these coordinates in a neighbourhood of γ, all Christoffel symbols vanish exactly on γ. This property is not valid for Fermi's coordinates, however, when γ is not a geodesic. Such coordinates are called Fermi coordinates and are named after the Italian physicist Enrico Fermi. The above properties are only valid on the geodesic. The Fermi coordinates adapted to a null geodesic are provided by Mattias Blau, Denis Frank, and Sebastian Weiss. Notice that, if all Christoffel symbols vanish near γ, then the manifold is flat near γ.
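To illustrate what the vanishing of the Christoffel symbols on the geodesic means in practice, one can quote the standard second-order expansion of the metric in Fermi normal coordinates along a timelike geodesic (the Manasse–Misner form). This is a well-known textbook result stated here for orientation, not something derived in the article above; the curvature components are evaluated on the geodesic and the signature is (−,+,+,+).

```latex
% Metric in Fermi normal coordinates (x^0 = \tau, x^i) along a timelike geodesic
\begin{aligned}
g_{00} &= -1 - R_{0l0m}\,x^l x^m + O(|x|^3),\\
g_{0i} &= -\tfrac{2}{3}\,R_{0lim}\,x^l x^m + O(|x|^3),\\
g_{ij} &= \delta_{ij} - \tfrac{1}{3}\,R_{iljm}\,x^l x^m + O(|x|^3).
\end{aligned}
```

On the geodesic itself (x^i = 0) the metric is exactly Minkowskian and its first derivatives vanish, which is precisely the statement that the Christoffel symbols vanish there; the second-order terms carry the curvature and cannot be removed by any choice of coordinates.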
In the Riemannian case at least, Fermi coordinates can be generalized to an arbitrary submanifold.
See also
Proper reference frame (flat spacetime)#Proper coordinates or Fermi coordinates
Geodesic normal coordinates
Fermi–Walker transport
Christoffel symbols
Isothermal coordinates
References
Riemannian geometry
Coordinate systems in differential geometry | Fermi coordinates | [
"Mathematics"
] | 480 | [
"Coordinate systems in differential geometry",
"Coordinate systems"
] |
3,132,156 | https://en.wikipedia.org/wiki/Betti%20reaction | The Betti reaction is a chemical addition reaction of aldehydes, primary aromatic amines and phenols producing α-aminobenzylphenols.
The Betti reaction is a special case of the Mannich reaction.
History
The reaction is named after the Italian chemist Mario Betti (1857-1942). Betti worked at many universities in Italy, including Florence, Cagliari, Siena, Genoa and Bologna, where he was the successor of Giacomo Ciamician. Betti's main research was focused on stereochemistry: the resolution of racemic compounds, the relationship between molecular constitution and optical rotation, as well as asymmetric synthesis using chiral auxiliaries or in the presence of polarized light.
In 1939 Mario Betti was appointed the Senator of the Kingdom of Italy.
In 1900 Betti hypothesized that 2-naphthol would be a good carbon nucleophile to the imine produced from the reaction of benzaldehyde and aniline. This led to the Betti reaction.
Today, the name has grown to refer to any reaction of aldehydes, primary aromatic amines and phenols producing α-aminobenzylphenols.
Mechanism
The reaction mechanism begins with an imine condensation between the primary aromatic amine and the aldehyde.
Once the imine is produced, it reacts with phenol in the presence of water to yield an α-aminobenzylphenol.
First, the lone pair on the nitrogen of the imine deprotonates the phenol, pushing the bonding electrons onto the oxygen. The carbonyl is then re-formed, and a double bond in the benzene ring attacks the carbon atom of the protonated imine cation. Water then acts as a base and deprotonates the α-carbon, re-forming the aromatic ring and pushing electrons onto the oxygen. The oxygen, which now has a negative formal charge, then abstracts a hydrogen from the hydronium ion, resulting in an α-aminobenzylphenol, with water as the only byproduct.
Betti Base
The product of the Betti reaction is called the Betti base. The base was resolved into its two enantiomers by using tartaric acid.
Uses for the Betti base and its derivatives include:
Enantioselective addition of diethylzinc to aryl aldehydes.
Enantioselective alkenylation of aldehydes.
Preparation of stable boronate complexes, which can be alkylated to yield amino acid precursors.
Separation of enantiomers.
References
Further reading
Betti, M. Gazz. Chim. Ital. 1900, 30 II, 301.
Betti, M. Gazz. Chim. Ital. 1903, 33 II, 2.
Organic Syntheses, Coll. Vol. 1, p.381 (1941); Vol. 9, p.60 (1929). (Article)
Pirrone, F Gazz. Chim. Ital. 1936, 66, 518.
Pirrone, F Gazz. Chim. Ital. 1937, 67, 529.
Phillips, J. P. Chem. Rev. 1956, 56, 286.
Phillips, J. P.; Barrall, E. M. J. Org. Chem. 1956, 21, 692.
Kumar, A.; Kumar, M.; Gupta, M. K. Tetrahedron Lett. 2010, 12, 1582-1584.
Addition reactions
Multiple component reactions
Name reactions | Betti reaction | [
"Chemistry"
] | 743 | [
"Name reactions",
"Coupling reactions",
"Organic reactions"
] |
3,132,530 | https://en.wikipedia.org/wiki/Nakai%20conjecture | In mathematics, the Nakai conjecture is an unproven characterization of smooth algebraic varieties, conjectured by Japanese mathematician Yoshikazu Nakai in 1961.
It states that if V is a complex algebraic variety, such that its ring of differential operators is generated by the derivations it contains, then V is a smooth variety. The converse statement, that smooth algebraic varieties have rings of differential operators that are generated by their derivations, is a result of Alexander Grothendieck.
The Nakai conjecture is known to be true for algebraic curves and Stanley–Reisner rings. A proof of the conjecture would also establish the Zariski–Lipman conjecture, for a complex variety V with coordinate ring R. This conjecture states that if the derivations of R are a free module over R, then V is smooth.
References
Algebraic geometry
Singularity theory
Conjectures
Unsolved problems in geometry | Nakai conjecture | [
"Mathematics"
] | 182 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Unsolved problems in geometry",
"Fields of abstract algebra",
"Conjectures",
"Algebraic geometry",
"Mathematical problems"
] |
3,132,756 | https://en.wikipedia.org/wiki/Troxler%27s%20fading | Troxler's fading, also called Troxler fading or the Troxler effect, is an optical illusion affecting visual perception. When one fixates on a particular point for even a short period of time, an unchanging stimulus away from the fixation point will fade away and disappear. Research suggests that at least some portion of the perceptual phenomena associated with Troxler's fading occurs in the brain.
Discovery
Troxler's fading was first identified by Swiss physician Ignaz Paul Vital Troxler in 1804, who was practicing in Vienna at the time.
Process
Neural adaptation
Troxler's fading has been attributed to the adaptation of neurons vital for perceiving stimuli in the visual system. It is part of the general principle in sensory systems that unvarying stimuli soon disappear from our awareness. For example, if a small piece of paper is dropped on the inside of one's forearm, it is felt for a short period of time. Soon, however, the sensation fades away. This is because the tactile neurons have adapted and start to ignore the unimportant stimulus. But if one jiggles one's arm up and down, giving varying stimulation, one will continue to feel the paper.
Visual parallels
A similar 'sensory fading,' or filling-in, can be seen of a fixated stimulus when its retinal image is made stationary on the retina (a stabilized retinal image). Stabilization can be done in at least three ways.
First, one can mount a tiny projector on a contact lens. The projector shines an image into the eye. As the eye moves, the contact lens moves with it, so the image is always projected onto the same part of the retina;
Second, one can monitor eye movements and move the stimulus to cancel the eye movements;
Third, one can induce an afterimage, usually by an intense, brief flash, such as when one is photographed using a photographic flash (a form of stabilized retinal image that most people have experienced). This causes an image to be bleached onto the retina by the strong response of the rods and cones. In all these cases, the stimulus fades away after a short time and disappears.
The Troxler effect is enhanced if the stimulus is small, is of low contrast (or "equiluminant"), or is blurred. The effect is enhanced the further the stimulus is away from the fixation point.
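Because the effect is strongest for stimuli that are small, low in contrast, blurred and peripheral, it is easy to generate a demonstration image with exactly those properties. The Python sketch below uses numpy and matplotlib (assumed to be available) to draw a central fixation cross and a faint Gaussian blob off to one side; fixating the cross steadily for 10–20 seconds typically makes the blob fade from awareness.

```python
import numpy as np
import matplotlib.pyplot as plt

size = 600
y, x = np.mgrid[0:size, 0:size]

# Uniform mid-grey background
img = np.full((size, size), 0.5)

# Low-contrast, heavily blurred blob placed away from the centre (periphery)
cx, cy, sigma, contrast = 450, 150, 40.0, 0.12
img += contrast * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

plt.imshow(img, cmap="gray", vmin=0, vmax=1)
# Central fixation cross
plt.plot([size / 2 - 10, size / 2 + 10], [size / 2, size / 2], color="black", lw=2)
plt.plot([size / 2, size / 2], [size / 2 - 10, size / 2 + 10], color="black", lw=2)
plt.axis("off")
plt.show()
```

Increasing the blob's contrast, sharpening its edges, or moving it closer to the fixation cross all make the fading noticeably harder to obtain, in line with the factors listed above.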
Explanation of effect
Troxler's fading can occur without any extraordinary stabilization of the retinal image in peripheral vision because the neurons in the visual system beyond the rods and cones have large receptive fields. This means that the small, involuntary eye movements made when fixating on something fail to move the stimulus onto a new cell's receptive field, in effect giving unvarying stimulation. Further experimentation this century by Hsieh and Tse showed that at least some portion of the perceptual fading occurred in the brain, not in the eyes.
See also
Cognitive science
Lilac chaser – An illusion that involves Troxler fading
Bloody Mary – one of the best-known examples of this effect
References
External links
Troxler project: a research project on Troxler's fading
Optical illusions
Visual perception | Troxler's fading | [
"Physics"
] | 675 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
3,132,996 | https://en.wikipedia.org/wiki/Oxyhydrogen | Oxyhydrogen is a mixture of hydrogen (H2) and oxygen (O2) gases. This gaseous mixture is used for torches to process refractory materials and was the first
gaseous mixture used for welding. Theoretically, a ratio of 2:1 hydrogen:oxygen is enough to achieve maximum efficiency; in practice a ratio 4:1 or 5:1 is needed to avoid an oxidizing flame.
This mixture may also be referred to as knallgas (Scandinavian and German Knallgas, literally "bang gas"), although some authors define knallgas to be a generic term for the mixture of fuel with the precise amount of oxygen required for complete combustion, thus 2:1 oxyhydrogen would be called "hydrogen-knallgas".
"Brown's gas" and HHO are terms for oxyhydrogen originating in pseudoscience, although is preferred due to meaning .
Properties
Oxyhydrogen will combust when brought to its autoignition temperature. For the stoichiometric mixture in air, at normal atmospheric pressure, autoignition occurs at about 570 °C (1065 °F). The minimum energy required to ignite such a mixture, at lower temperatures, with a spark is about 20 microjoules. At standard temperature and pressure, oxyhydrogen can burn when it is between about 4% and 95% hydrogen by volume.
When ignited, the gas mixture converts to water vapor and releases energy, which sustains the reaction: 241.8 kJ of energy (LHV) for every mole of H2 burned. The amount of heat energy released is independent of the mode of combustion, but the temperature of the flame varies. The maximum temperature, of roughly 2,800 °C, is achieved with an exact stoichiometric mixture, several hundred degrees hotter than a hydrogen flame in air.
When either of the gases are mixed in excess of this ratio, or when mixed with an inert gas like nitrogen, the heat must spread throughout a greater quantity of matter, reducing flame temperature.
Oxyhydrogen is explosive and can detonate when ignited, releasing a large amount of energy. This is often demonstrated in classroom environments in which teachers fill a balloon with the gas, due to the easy access of hydrogen and oxygen.
Production by electrolysis
A pure stoichiometric mixture may be obtained by water electrolysis, which uses an electric current to dissociate the water molecules:
Electrolysis: 2 H2O → 2 H2 + O2
Combustion: 2 H2 + O2 → 2 H2O
William Nicholson was the first to decompose water in this manner in 1800. In theory, the input energy of a closed system always equals the output energy, as the first law of thermodynamics states. However, in practice no systems are perfectly closed, and the energy required to generate the oxyhydrogen always exceeds the energy released by combusting it, even at maximum practical efficiency, as the second law of thermodynamics implies (see Electrolysis of water#Efficiency).
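A simple energy balance shows why the round trip can never come out ahead. Per mole of H2, electrolysis must pass two faradays of charge, so the electrical input is set by the actual cell voltage; combustion then returns at most the 241.8 kJ lower heating value quoted above. In the Python sketch below the practical cell voltages are assumed, typical values rather than figures from the article.

```python
F = 96485.0      # C/mol, Faraday constant
LHV_H2 = 241.8   # kJ per mol H2 (lower heating value, from the text)

charge_per_mol = 2 * F   # two electrons must be passed per H2 molecule produced

for v_cell in (1.48, 1.8, 2.0):   # thermoneutral voltage and two assumed practical voltages
    electrical_in = charge_per_mol * v_cell / 1000.0   # kJ of electricity per mol H2
    recovered = 100.0 * LHV_H2 / electrical_in
    print(f"cell voltage {v_cell:4.2f} V: input {electrical_in:6.1f} kJ/mol, "
          f"heat recoverable on combustion ~ {recovered:5.1f} %")
```

Even at the thermoneutral voltage of about 1.48 V the heat recoverable as LHV is well under 100%, and at realistic operating voltages of 1.8–2.0 V the round trip returns only around 60–70% of the electrical input, before any further losses in the burner or engine.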
Applications
Lighting
Many forms of oxyhydrogen lamps have been described, such as the limelight, which used an oxyhydrogen flame to heat a piece of quicklime to white hot incandescence. Because of the explosiveness of the oxyhydrogen, limelights have been replaced by electric lighting.
Oxyhydrogen blowpipe
The foundations of the oxy-hydrogen blowpipe were laid down by Carl Wilhelm Scheele and Joseph Priestley around the last quarter of the eighteenth century. The oxy-hydrogen blowpipe itself was developed by the Frenchman Bochard-de-Saron, the English mineralogist Edward Daniel Clarke and the American chemist Robert Hare in the late 18th and early 19th centuries. It produced a flame hot enough to melt such refractory materials as platinum, porcelain, fire brick, and corundum, and was a valuable tool in several fields of science. It is used in the Verneuil process to produce synthetic corundum.
Oxyhydrogen torch
An oxyhydrogen torch (also known as hydrogen torch) is an oxy-gas torch that burns hydrogen (the fuel) with oxygen (the oxidizer). It is used for cutting and welding metals, glasses, and thermoplastics.
Due to competition from arc welding and other oxy-fuel torches such as the acetylene-fueled cutting torch, the oxyhydrogen torch is seldom used today, but it remains the preferred cutting tool in some niche applications.
Oxyhydrogen was once used in working platinum because, at the time, it was the only flame hot enough to melt the metal. These techniques have since been superseded by the electric arc furnace.
Pseudoscientific claims
Oxyhydrogen is associated with various exaggerated claims. It is often called "Brown's gas" or "HHO gas", a term popularized by fringe physicist Ruggero Santilli, who claimed that his HHO gas, produced by a special apparatus, is "a new form of water", with new properties, based on his fringe theory of "magnecules".
Many other pseudoscientific claims have been made about oxyhydrogen, like an ability to neutralize radioactive waste, help plants to germinate, and more.
Oxyhydrogen is often mentioned in conjunction with vehicles that claim to use water as a fuel. The most common and decisive counter-argument against producing this gas on board to use as a fuel or fuel additive is that more energy is always needed to split water molecules than is recouped by burning the resulting gas. Additionally, the volume of gas that can be produced for on-demand consumption through electrolysis is very small in comparison to the volume consumed by an internal combustion engine.
An article in Popular Mechanics in 2008 reported that oxyhydrogen does not increase the fuel economy in automobiles.
"Water-fueled" cars should not be confused with hydrogen-fueled cars, where the hydrogen is produced elsewhere and used as fuel or where it is used as fuel enhancement.
References
Fire
Chemical mixtures
Electrolysis
Oxygen
Hydrogen technologies
Hydrogen production
Fuels
Water fuel
Industrial gases | Oxyhydrogen | [
"Chemistry"
] | 1,244 | [
"Chemical energy sources",
"Electrochemistry",
"Combustion",
"Industrial gases",
"Chemical mixtures",
"Fuels",
"Electrolysis",
"Chemical process engineering",
"nan",
"Fire"
] |
3,133,405 | https://en.wikipedia.org/wiki/Flow%20battery | A flow battery, or redox flow battery (after reduction–oxidation), is a type of electrochemical cell where chemical energy is provided by two chemical components dissolved in liquids that are pumped through the system on separate sides of a membrane. Ion transfer inside the cell (accompanied by current flow through an external circuit) occurs across the membrane while the liquids circulate in their respective spaces.
Various flow batteries have been demonstrated, including inorganic and organic forms. Flow battery design can be further classified into full flow, semi-flow, and membraneless.
The fundamental difference between conventional and flow batteries is that energy is stored in the electrode material in conventional batteries, while in flow batteries it is stored in the electrolyte.
A flow battery may be used like a fuel cell (where new charged negolyte (a.k.a. reducer or fuel) and charged posolyte (a.k.a. oxidant) are added to the system) or like a rechargeable battery (where an electric power source drives regeneration of the reducer and oxidant).
Flow batteries have certain technical advantages over conventional rechargeable batteries with solid electroactive materials, such as independent scaling of power (determined by the size of the stack) and of energy (determined by the size of the tanks), long cycle and calendar life, and potentially lower total cost of ownership. However, flow batteries suffer from low cycle energy efficiency (50–80%). This drawback stems from the need to operate flow batteries at high (≥ 100 mA/cm²) current densities to reduce the effect of internal crossover (through the membrane/separator) and to reduce the cost of power (size of stacks). Also, most flow batteries (Zn-Cl2, Zn-Br2 and H2-LiBrO3 are exceptions) have lower specific energy (heavier weight) than lithium-ion batteries. The heavier weight results mostly from the need to use a solvent (usually water) to maintain the redox active species in the liquid phase.
Patent Classifications for flow batteries had not been fully developed as of 2021. Cooperative Patent Classification considers flow batteries as a subclass of regenerative fuel cell (H01M8/18), even though it is more appropriate to consider fuel cells as a subclass of flow batteries.
Cell voltage is chemically determined by the Nernst equation and ranges, in practical applications, from 1.0 to 2.43 volts. The energy capacity is a function of the electrolyte volume and the power is a function of the surface area of the electrodes.
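As an illustration of how the Nernst equation fixes the cell voltage and how the electrolyte volume fixes the capacity, the Python sketch below evaluates both for a generic single-electron redox couple. The standard cell potential, concentration and tank volume are assumed, round numbers chosen for illustration and do not describe any particular commercial chemistry.

```python
import math

R, T, F = 8.314, 298.15, 96485.0   # gas constant, temperature (K), Faraday constant
n = 1                              # electrons transferred per active ion (assumed)

# Nernst correction to an assumed 1.26 V standard cell potential at 90% state of charge
E_standard = 1.26                  # V (assumed)
soc = 0.9
E_cell = E_standard + (R * T / (n * F)) * math.log((soc / (1.0 - soc)) ** 2)

# Energy capacity scales with electrolyte concentration and tank volume
conc   = 1.6e3                     # mol/m^3 of active species (1.6 M, assumed)
volume = 1.0                       # m^3 of electrolyte per tank (assumed)
charge = n * F * conc * volume     # coulombs of charge storable per tank
energy_kwh = charge * E_cell / 3.6e6

print(f"cell voltage at {soc:.0%} state of charge: {E_cell:.2f} V")
print(f"ideal stored energy for this tank size : {energy_kwh:.0f} kWh")
```

Doubling the tank volume doubles the stored energy without touching the stack, while the power rating would be changed by resizing the electrode area instead; this is the independent energy/power scaling referred to elsewhere in the article.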
History
The zinc–bromine flow battery (Zn-Br2) was the original flow battery. John Doyle filed a patent for it on September 29, 1879. Zn-Br2 batteries have relatively high specific energy, and were demonstrated in electric cars in the 1970s.
Walther Kangro, an Estonian chemist working in Germany in the 1950s, was the first to demonstrate flow batteries based on dissolved transition metal ions: Ti–Fe and Cr–Fe. After initial experimentation with the Ti–Fe redox flow battery (RFB) chemistry, NASA and groups in Japan and elsewhere selected the Cr–Fe chemistry for further development. Mixed solutions (i.e. comprising both chromium and iron species in the negolyte and in the posolyte) were used in order to reduce the effect of time-varying concentration during cycling.
In the late 1980s, Sum, Rychcik and Skyllas-Kazacos at the University of New South Wales (UNSW) in Australia demonstrated vanadium RFB chemistry. UNSW filed several patents related to VRFBs, which were later licensed to Japanese, Thai and Canadian companies that tried to commercialize this technology with varying success.
Organic redox flow batteries emerged in 2009.
In 2022, Dalian, China began operating a 400 MWh, 100 MW vanadium flow battery, then the largest of its type.
Sumitomo Electric has built flow batteries for use in Taiwan, Belgium, Australia, Morocco and California. Hokkaido's flow battery farm was the largest in the world when it opened in April 2022, until China deployed one eight times larger that can match the output of a natural gas plant.
Design
A flow battery is a rechargeable fuel cell in which an electrolyte containing one or more dissolved electroactive elements flows through an electrochemical cell that reversibly converts chemical energy to electrical energy. Electroactive elements are "elements in solution that can take part in an electrode reaction or that can be adsorbed on the electrode."
Electrolyte is stored externally, generally in tanks, and is typically pumped through the cell (or cells) of the reactor. Flow batteries can be rapidly "recharged" by replacing discharged electrolyte liquid (analogous to refueling internal combustion engines) while recovering the spent material for recharging. They can also be recharged in situ. Many flow batteries use carbon felt electrodes because of their low cost and adequate electrical conductivity, despite the limited power density that results from their low inherent activity toward many redox couples. The amount of electricity that can be generated depends on the volume of electrolyte.
Flow batteries are governed by the design principles of electrochemical engineering.
Evaluation
Redox flow batteries, and to a lesser extent hybrid flow batteries, have the advantages of:
Independent scaling of energy (tanks) and power (stack), which allows for a cost/weight/etc. optimization for each application
Long cycle and calendar lives (because there are no solid-to-solid phase transitions, which degrade lithium-ion and related batteries)
Quick response times
No need for "equalisation" charging (the overcharging of a battery to ensure all cells have an equal charge)
No harmful emissions
Little/no self-discharge during idle periods
Recycling of electroactive materials
Some types offer easy state-of-charge determination (through voltage dependence on charge), low maintenance and tolerance to overcharge/overdischarge.
They are safe because they typically do not contain flammable electrolytes, and electrolytes can be stored away from the power stack.
The main disadvantages are:
Low energy density (large tanks are required to store useful amounts of energy)
Low charge and discharge rates. This implies large electrodes and membrane separators, increasing cost.
Lower energy efficiency, because they operate at higher current densities to minimize the effects of cross-over (internal self-discharge) and to reduce cost.
Flow batteries typically have a higher energy efficiency than fuel cells, but lower than lithium-ion batteries.
Traditional flow battery chemistries have both low specific energy (which makes them too heavy for fully electric vehicles) and low specific power (which makes them too expensive for stationary energy storage). However, a high power density of 1.4 W/cm2 was demonstrated for hydrogen–bromine flow batteries, and a high specific energy (530 Wh/kg at the tank level) was shown for hydrogen–bromate flow batteries.
Traditional flow batteries
The redox cell uses redox-active species in fluid (liquid or gas) media. Redox flow batteries are rechargeable (secondary) cells. Because they employ heterogeneous electron transfer rather than solid-state diffusion or intercalation, they are more similar to fuel cells than to conventional batteries. The main reason fuel cells are not considered to be batteries is that originally (in the 1800s) fuel cells emerged as a means to produce electricity directly from fuels (and air) via a non-combustion electrochemical process. Later, particularly in the 1960s and 1990s, rechargeable fuel cells (i.e. H2/O2 cells, such as the unitized regenerative fuel cells in NASA's Helios Prototype) were developed.
Cr–Fe chemistry has disadvantages, including hydrate isomerism (i.e. the equilibrium between electrochemically active Cr3+ chloro-complexes and the inactive hexa-aqua complex) and hydrogen evolution on the negative electrode. Hydrate isomerism can be alleviated by adding chelating amino-ligands, while hydrogen evolution can be mitigated by adding Pb salts to increase the H2 overvoltage and Au salts to catalyze the chromium electrode reaction.
Traditional redox flow battery chemistries include iron-chromium, vanadium, polysulfide–bromide (Regenesys), and uranium. Redox fuel cells are less common commercially although many have been proposed.
Vanadium
Vanadium redox flow batteries are the commercial leaders. They use vanadium at both electrodes, so they do not suffer from cross-contamination. The limited solubility of vanadium salts, however, offsets this advantage in practice. This chemistry's advantages include four oxidation states within the electrochemical voltage window of the graphite-aqueous acid interface, and thus the elimination of the mixing dilution that is detrimental in Cr–Fe RFBs. More important for commercial success is the near-perfect match of the voltage window of the carbon/aqueous acid interface with that of the vanadium redox couples. This extends the life of the low-cost carbon electrodes and reduces the impact of side reactions, such as H2 and O2 evolution, resulting in multi-year durability and long cycle lives (15,000–20,000 cycles), which in turn results in a record-low levelized cost of energy (LCOE: system cost divided by usable energy, cycle life, and round-trip efficiency). These long lifetimes allow for the amortization of their relatively high capital cost (driven by vanadium, carbon felts, bipolar plates, and membranes). The LCOE is on the order of a few tens of cents per kWh, much lower than that of solid-state batteries and near the 5-cent targets stated by US and EC government agencies. Major challenges include: low abundance and high cost of V2O5 (> $30/kg); parasitic reactions including hydrogen and oxygen evolution; and precipitation of V2O5 during cycling.
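Written out, the LCOE definition used above amounts to the simplified relation below; a full levelized-cost calculation would also include discounting, maintenance and replacement costs, which are omitted here.

```latex
\mathrm{LCOE} \;\approx\; \frac{C_{\text{system}}}{E_{\text{usable}} \times N_{\text{cycles}} \times \eta_{\text{round-trip}}}
```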
Hybrid
The hybrid flow battery (HFB) uses one or more electroactive components deposited as a solid layer. The major disadvantage is that this reduces the decoupling of energy and power. The cell contains one battery electrode and one fuel cell electrode. This type is limited in energy by the electrode surface area.
HFBs include zinc–bromine, zinc–cerium, soluble lead–acid, and all-iron flow batteries. Weng et al. reported a vanadium–metal hydride hybrid flow battery with an experimental OCV of 1.93 V and an operating voltage of 1.70 V, relatively high values. It consists of a graphite felt positive electrode operating in an acidic vanadium electrolyte, and a metal hydride negative electrode in aqueous KOH solution. The two electrolytes of different pH are separated by a bipolar membrane. The system demonstrated good reversibility and high coulombic (95%), energy (84%), and voltage (88%) efficiencies. They reported improvements with increased current density, inclusion of larger 100 cm2 electrodes, and series operation. Preliminary data using a fluctuating simulated power input tested the viability toward kWh-scale storage. In 2016, a high energy density Mn(VI)/Mn(VII)-Zn hybrid flow battery was proposed.
Zinc-polyiodide
A prototype zinc–polyiodide flow battery demonstrated an energy density of 167 Wh/L. Older zinc–bromide cells reach 70 Wh/L. For comparison, lithium iron phosphate batteries store 325 Wh/L. The zinc–polyiodide battery is claimed to be safer than other flow batteries given its absence of acidic electrolytes, its nonflammability, and a wide operating temperature range that does not require extensive cooling circuitry, which would add weight and occupy space. One unresolved issue is zinc buildup on the negative electrode that can permeate the membrane, reducing efficiency. Because of Zn dendrite formation, Zn-halide batteries cannot operate at high current density (> 20 mA/cm2) and thus have limited power density. Adding alcohol to the electrolyte of the ZnI battery can help. The drawbacks of the Zn/I RFB are the high cost of iodide salts (> $20/kg); the limited areal capacity of Zn deposition, which reduces the decoupling of energy and power; and Zn dendrite formation.
When the battery is fully discharged, both tanks hold the same electrolyte solution: a mixture of positively charged zinc ions (Zn2+) and negatively charged iodide ions (I−). When charged, one tank holds another negative ion, polyiodide (I3−). The battery produces power by pumping liquid across the stack, where the liquids mix. Inside the stack, zinc ions pass through a selective membrane and change into metallic zinc on the stack's negative side. To increase energy density, bromide ions (Br−) are used as the complexing agent to stabilize the free iodine, forming iodine–bromide ions (I2Br−) as a means to free up iodide ions for charge storage.
Proton flow
Proton flow batteries (PFB) integrate a metal hydride storage electrode into a reversible proton exchange membrane (PEM) fuel cell. During charging, PFB combines hydrogen ions produced from splitting water with electrons and metal particles in one electrode of a fuel cell. The energy is stored in the form of a metal hydride solid. Discharge produces electricity and water when the process is reversed and the protons are combined with ambient oxygen. Metals less expensive than lithium can be used and provide greater energy density than lithium cells.
Organic
Compared to inorganic redox flow batteries, such as vanadium and Zn-Br2 batteries, organic redox flow batteries have the advantage of the tunable redox properties of their active components. As of 2021, organic RFBs exhibited low durability (i.e. calendar or cycle life, or both) and had not been demonstrated at commercial scale.
Organic redox flow batteries can be further classified into aqueous (AORFBs) and non-aqueous (NAORFBs). AORFBs use water as solvent for electrolyte materials while NAORFBs employ organic solvents. AORFBs and NAORFBs can be further divided into total and hybrid systems. The former use only organic electrode materials, while the latter use inorganic materials for either anode or cathode. In larger-scale energy storage, lower solvent cost and higher conductivity give AORFBs greater commercial potential, as well as offering the safety advantages of water-based electrolytes. NAORFBs instead provide a much larger voltage window and occupy less space.
pH neutral AORFBs
pH neutral AORFBs are operated at pH 7 conditions, typically using NaCl as a supporting electrolyte. At pH neutral conditions, organic and organometallic molecules are more stable than at corrosive acidic and alkaline conditions. For example, K4[Fe(CN)6], a common catholyte used in AORFBs, is not stable in alkaline solutions but is stable at pH neutral conditions.
One AORFB used methyl viologen (MV) as the anolyte and 4-hydroxy-2,2,6,6-tetramethylpiperidin-1-oxyl (TEMPO) as the catholyte at pH neutral conditions, together with NaCl as the supporting electrolyte and a low-cost anion exchange membrane. This MV/TEMPO system has the highest cell voltage, 1.25 V, and possibly the lowest capital cost ($180/kWh) reported for AORFBs as of 2015. The aqueous liquid electrolytes were designed as a drop-in replacement without replacing existing infrastructure. A 600-milliwatt test battery was stable for 100 cycles with nearly 100 percent efficiency at current densities ranging from 20 to 100 mA/cm2, with optimal performance at 40–50 mA/cm2, at which about 70% of the battery's original voltage was retained. Neutral AORFBs can be more environmentally friendly than acid or alkaline alternatives, while showing electrochemical performance comparable to corrosive RFBs. The MV/TEMPO AORFB has an energy density of 8.4 Wh/L, limited by the TEMPO side. In 2019, an ultralight sulfonate–viologen/ferrocyanide AORFB was reported to be stable for 1,000 cycles at an energy density of 10 Wh/L, the most stable, energy-dense AORFB to that date.
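The volumetric energy densities quoted above can be sanity-checked with the standard estimate n·F·C·V divided by the total electrolyte volume. The sketch below is illustrative only: the 0.5 M limiting concentration, one electron per molecule, 1.25 V cell voltage and equal tank volumes are assumptions chosen to roughly reproduce the 8.4 Wh/L figure, not measured values.

```python
# Rough theoretical volumetric energy density of a flow battery (Wh per litre
# of total electrolyte). All numeric inputs below are illustrative assumptions.
F = 96485.0  # Faraday constant, C/mol

def energy_density_wh_per_l(n_electrons, conc_mol_per_l, cell_voltage_v,
                            tank_factor=2.0):
    """n*F*C*V gives joules per litre of the capacity-limiting electrolyte;
    divide by 3600 for Wh and by tank_factor to count both tanks."""
    return n_electrons * F * conc_mol_per_l * cell_voltage_v / 3600.0 / tank_factor

print(round(energy_density_wh_per_l(1, 0.5, 1.25), 1))  # ~8.4 Wh/L
```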
Acidic AORFBs
Quinones and their derivatives are the basis of many organic redox systems. In one study, 1,2-dihydrobenzoquinone-3,5-disulfonic acid (BQDS) and 1,4-dihydrobenzoquinone-2-sulfonic acid (BQS) were employed as cathodes, and conventional Pb/PbSO4 was the anolyte in a hybrid acid AORFB. Quinones accept two units of electrical charge, compared with one in conventional catholyte, implying twice as much energy in a given volume.
Another quinone, 9,10-anthraquinone-2,7-disulfonic acid (AQDS), was evaluated. AQDS undergoes rapid, reversible two-electron/two-proton reduction on a glassy carbon electrode in sulfuric acid. An aqueous flow battery with inexpensive carbon electrodes, combining the quinone/hydroquinone couple with the Br2/Br− redox couple, yields a peak galvanic power density exceeding 6,000 W/m2 at 13,000 A/m2. Cycling showed > 99% storage capacity retention per cycle. Volumetric energy density was over 20 Wh/L. Using anthraquinone-2-sulfonic acid and anthraquinone-2,6-disulfonic acid on the negative side and 1,2-dihydrobenzoquinone-3,5-disulfonic acid on the positive side avoids the use of hazardous Br2. The battery was claimed to last 1,000 cycles without degradation. It has a low cell voltage (ca. 0.55 V) and a low energy density (< 4 Wh/L).
Replacing hydrobromic acid with a less toxic alkaline solution (1 M KOH) and ferrocyanide was less corrosive, allowing the use of inexpensive polymer tanks. The increased electrical resistance in the membrane was compensated by an increased voltage of 1.2 V. Cell efficiency exceeded 99%, while round-trip efficiency measured 84%. The battery offered an expected lifetime of at least 1,000 cycles. Its theoretical energy density was 19 Wh/L. Ferrocyanide's chemical stability in high-pH KOH solution was not verified.
Integrating both anolyte and catholyte functions in the same molecule, i.e., using bifunctional or "combi" molecules, allows the same material to be used in both tanks. In one tank it is an electron donor, while in the other it is an electron recipient. This has advantages such as diminishing crossover effects. Thus, the quinone diaminoanthraquinone and indigo-based molecules, as well as TEMPO/phenazine combinations, are potential electrolytes for such symmetric redox-flow batteries (SRFBs).
Another approach adopted a Blatter radical as the donor/recipient. It endured 275 charge and discharge cycles in tests, although it was not water-soluble.
Alkaline
Quinone and fluorenone molecules can be reengineered to increase water solubility. In 2021, a reversible ketone (de)hydrogenation demonstration cell operated continuously for 120 days over 1,111 charging cycles at room temperature without a catalyst, retaining 97% of its capacity. The cell offered more than double the energy density of vanadium-based systems. The major challenge was the lack of a stable catholyte, which kept energy densities below 5 Wh/L. Alkaline AORFBs use excess potassium ferrocyanide catholyte because of the stability issue of ferrocyanide in alkaline solutions.
Metal-organic flow batteries use organic ligands to improve redox properties. The ligands can be chelates such as EDTA, and can enable the electrolyte to be in neutral or alkaline conditions under which metal aquo complexes would otherwise precipitate. By blocking the coordination of water to the metal, organic ligands can inhibit metal-catalyzed water-splitting reactions, resulting in higher voltage aqueous systems. For example, the use of chromium coordinated to 1,3-propanediaminetetraacetate (PDTA), gave cell potentials of 1.62 V vs. ferrocyanide and a record 2.13 V vs. bromine. Metal-organic flow batteries may be known as coordination chemistry flow batteries, such as Lockheed Martin's Gridstar Flow technology.
Oligomer
Oligomer redox-species were proposed to reduce crossover, while allowing low-cost membranes. Such redox-active oligomers are known as redoxymers. One system uses organic polymers and a saline solution with a cellulose membrane. A prototype underwent 10,000 charging cycles while retaining substantial capacity. The energy density was 10 Wh/L. Current density reached 0.1 amperes/cm2.
Another oligomer RFB employed viologen and TEMPO redoxymers in combination with low-cost dialysis membranes. Functionalized macromolecules (similar to acrylic glass or styrofoam) dissolved in water were the active electrode material. The size-selective nanoporous membrane works like a strainer and is produced much more easily and at lower cost than conventional ion-selective membranes. It blocks the big "spaghetti"-like polymer molecules, while allowing small counterions to pass. The concept may address the high cost of traditional Nafion membranes. RFBs with oligomer redox-species have not demonstrated competitive area-specific power. Low operating current density may be an intrinsic feature of large redox-molecules.
Other types
Other flow-type batteries include the zinc–cerium battery, the zinc–bromine battery, and the hydrogen–bromine battery.
Membraneless
A membraneless battery relies on laminar flow in which two liquids are pumped through a channel, where they undergo electrochemical reactions to store or release energy. The solutions pass in parallel, with little mixing. The flow naturally separates the liquids, without requiring a membrane.
Membranes are often the most costly and least reliable battery components, as they are subject to corrosion by repeated exposure to certain reactants. The absence of a membrane enables the use of a liquid bromine solution and hydrogen: this combination is problematic when membranes are used, because they form hydrobromic acid that can destroy the membrane. Both materials are available at low cost. The design uses a small channel between two electrodes. Liquid bromine flows through the channel over a graphite cathode and hydrobromic acid flows under a porous anode. At the same time, hydrogen gas flows across the anode. The chemical reaction can be reversed to recharge the battery – a first for a membraneless design. One such membraneless flow battery announced in August 2013 produced a maximum power density of 795 mW/cm2, three times more than other membraneless systems, and an order of magnitude higher than lithium-ion batteries.
In 2018, a macroscale membraneless RFB capable of recharging and recirculation of the electrolyte streams was demonstrated. The battery was based on immiscible organic catholyte and aqueous anolyte liquids, which exhibited high capacity retention and Coulombic efficiency during cycling.
Suspension-based
A lithium–sulfur system arranged in a network of nanoparticles eliminates the requirement that charge moves in and out of particles that are in direct contact with a conducting plate. Instead, the nanoparticle network allows electricity to flow throughout the liquid. This allows more energy to be extracted.
In a semi-solid flow battery, positive and negative electrode particles are suspended in a carrier liquid. The suspensions flow through a stack of reaction chambers, separated by a barrier such as a thin, porous membrane. The approach combines the basic structure of aqueous-flow batteries, which use electrode material suspended in a liquid electrolyte, with the chemistry of lithium-ion batteries, in both carbon-free suspensions and slurries with a conductive carbon network. The carbon-free semi-solid RFB is also referred to as a solid dispersion redox flow battery. Dissolving a material changes its chemical behavior significantly. However, suspending bits of solid material preserves the solid's characteristics. The result is a viscous suspension.
In 2022, Influit Energy announced a flow battery electrolyte consisting of a metal oxide suspended in an aqueous solution.
In flow batteries with redox-targeted solids (ROTS), also known as solid energy boosters (SEBs), either the posolyte or the negolyte or both (a.k.a. redox fluids) come in contact with one or more solid electroactive materials (SEMs). The fluids comprise one or more redox couples, with redox potentials flanking the redox potential of the SEM. Such SEB/RFBs combine the high specific energy advantage of conventional batteries (such as lithium-ion) with the decoupled energy-power advantage of flow batteries. SEB(ROTS) RFBs have advantages compared to semi-solid RFBs, such as no need to pump viscous slurries, no precipitation/clogging, higher area-specific power, longer durability, and a wider chemical design space. However, because of double energy losses (one in the stack and another in the tank between the SEB(ROTS) and a mediator), such batteries suffer from poor energy efficiency. On a system level, the practical specific energy of traditional lithium-ion batteries is larger than that of SEB(ROTS)-flow versions of lithium-ion batteries.
Comparison
Applications
Technical merits make redox flow batteries well-suited for large-scale energy storage. Flow batteries are normally considered for relatively large (1 kWh – 10 MWh) stationary applications with multi-hour charge-discharge cycles. Flow batteries are not cost-efficient for shorter charge/discharge times. Market niches include:
Grid storage - short and/or long-term energy storage for use by the grid
Load balancing – the battery is attached to the grid to store power during off-peak hours and release it during peak demand periods. The common problem limiting this use of most flow battery chemistries is their low areal power (operating current density) which translates into high cost.
Shifting energy from intermittent sources such as wind or solar for use during periods of peak demand.
Peak shaving, where demand spikes are met by the battery.
UPS, where the battery is used if the main power fails to provide an uninterrupted supply.
Power conversion – Because all cells share the same electrolyte(s), the electrolytes may be charged using a given number of cells and discharged with a different number. As battery voltage is proportional to the number of cells used, the battery can act as a powerful DC–DC converter. In addition, if the number of cells is continuously changed (on the input and/or output side) power conversion can also be AC/DC, AC/AC, or DC–AC with the frequency limited by that of the switching gear.
Electric vehicles – Because flow batteries can be rapidly "recharged" by replacing the electrolyte, they can be used for applications where the vehicle needs to take on energy as fast as a gas vehicle. A common problem with most RFB chemistries in EV applications is their low energy density, which translates into a short driving range. Zinc-chlorine batteries and batteries with highly soluble halates are notable exceptions.
Stand-alone power system – An example of this is in cellphone base stations where no grid power is available. The battery can be used alongside solar or wind power sources to compensate for their fluctuating power levels and alongside a generator to save fuel.
See also
Glossary of fuel cell terms
Hydrogen technologies
List of battery types
Redox electrode
Microtubular membrane
References
External links
Electropaedia on Flow Batteries
Research on the uranium redox flow battery
South Australian Flow Battery Project
Electrochemistry
Fuel cells
Battery types | Flow battery | [
"Chemistry"
] | 5,963 | [
"Electrochemistry"
] |
3,133,750 | https://en.wikipedia.org/wiki/Open%20Vulnerability%20and%20Assessment%20Language | Open Vulnerability and Assessment Language (OVAL) is an international information security community standard to promote open and publicly available security content, and to standardize the transfer of this information across the entire spectrum of security tools and services. OVAL includes a language used to encode system details, and an assortment of content repositories held throughout the community. The language standardizes the three main steps of the assessment process:
representing configuration information of systems for testing;
analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and
reporting the results of this assessment.
The repositories are collections of publicly available and open content that utilize the language.
The OVAL community has developed three schemas written in Extensible Markup Language (XML) to serve as the framework and vocabulary of the OVAL Language. These schemas correspond to the three steps of the assessment process: an OVAL System Characteristics schema for representing system information, an OVAL Definition schema for expressing a specific machine state, and an OVAL Results schema for reporting the results of an assessment.
Content written in the OVAL Language is located in one of the many repositories found within the community. One such repository, known as the OVAL Repository, is hosted by The MITRE Corporation. It is the central meeting place for the OVAL Community to discuss, analyze, store, and disseminate OVAL Definitions. Each definition in the OVAL Repository determines whether a specified software vulnerability, configuration issue, program, or patch is present on a system.
The information security community contributes to the development of OVAL by participating in the creation of the OVAL Language on the OVAL Developers Forum and by writing definitions for the OVAL Repository through the OVAL Community Forum. An OVAL Board consisting of representatives from a broad spectrum of industry, academia, and government organizations from around the world oversees and approves the OVAL Language and monitors the posting of the definitions hosted on the OVAL Web site. This means that OVAL, which is funded by US-CERT at the U.S. Department of Homeland Security for the benefit of the community, reflects the insights and combined expertise of the broadest possible collection of security and system administration professionals worldwide.
OVAL is used by the Security Content Automation Protocol (SCAP).
OVAL Language
The OVAL Language standardizes the three main steps of the assessment process: representing configuration information of systems for testing; analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and reporting the results of this assessment.
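As a purely illustrative sketch of those three steps, the fragment below models collecting system characteristics, evaluating a definition against them, and reporting the result. The class and field names are invented for this example and do not correspond to the actual OVAL XML schemas.

```python
# Illustrative only: a toy version of the represent / analyze / report flow.
from dataclasses import dataclass

@dataclass
class SystemCharacteristic:      # step 1: representing system configuration
    item: str                    # e.g. an installed package name
    value: str                   # e.g. its version string

@dataclass
class Definition:                # a specified machine state to test for
    item: str
    affected_values: set

def evaluate(defn, collected):   # step 2: analyzing the system
    return any(c.item == defn.item and c.value in defn.affected_values
               for c in collected)

def report(defn, result):        # step 3: reporting the result
    return {"definition": defn.item, "result": "true" if result else "false"}

chars = [SystemCharacteristic("openssl", "1.0.1f")]
defn = Definition("openssl", {"1.0.1f"})
print(report(defn, evaluate(defn, chars)))
```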
OVAL Interpreter
The OVAL Interpreter is a freely available reference implementation created to show how data can be collected from a computer for testing based on a set of OVAL Definitions and then evaluated to determine the results of each definition.
The OVAL Interpreter demonstrates the usability of OVAL Definitions, and can be used by definition writers to ensure correct syntax and adherence to the OVAL Language during the development of draft definitions. It is not a fully functional scanning tool and has a simplistic user interface, but running the OVAL Interpreter provides a list of result values for each evaluated definition.
OVAL Repository
The OVAL Repository is the central meeting place for the OVAL Community to discuss, analyze, store, and disseminate OVAL Definitions. Other repositories in the community also host OVAL content, which can include OVAL System Characteristics files and OVAL Results files as well as definitions. The OVAL Repository contains all community-developed OVAL Vulnerability, Compliance, Inventory, and Patch Definitions for supported operating systems. Definitions are free to use and implement in information security products and services.
The OVAL Repository Top Contributor Award Program grants awards on a quarterly basis to the top contributors to the OVAL Repository. The Repository is a community effort, and contributions of new content and modifications are instrumental in its success. The awards serve as public recognition of an organization’s support of the OVAL Repository and as an incentive to others to contribute.
Organizations receiving the award will also receive an OVAL Repository Top Contributor logo indicating the quarter of the award (e.g., 1st Quarter 2007) that may be used as they see fit. Awards are granted to organizations that have made a significant contribution of new or modified content each quarter.
OVAL Board
The OVAL Board is an advisory body, which provides valuable input on OVAL to the Moderator (currently MITRE). While it is important to have organizational support for OVAL, it is the individuals who sit on the OVAL Board and their input and activity that truly make a difference. The Board’s primary responsibilities are to work with the Moderator and the Community to define OVAL, to provide input into OVAL’s strategic direction, and to advocate OVAL in the Community.
See also
MITRE The MITRE Corporation
Common Vulnerability and Exposures (index of standardized names for vulnerabilities and other security issues)
XCCDF - eXtensible Configuration Checklist Description Format
Security Content Automation Protocol uses OVAL
External links
OVAL web site
Gideon Technologies (OVAL Board Member) Corporate Web Site
www.itsecdb.com Portal for OVAL definitions from several sources
oval.secpod.com SecPod OVAL Definitions Professional Feed
Computer security procedures
Mitre Corporation | Open Vulnerability and Assessment Language | [
"Engineering"
] | 1,034 | [
"Cybersecurity engineering",
"Computer security procedures"
] |
3,135,190 | https://en.wikipedia.org/wiki/Low-definition%20television | Low-definition television (LDTV) refers to TV systems that have a lower screen resolution than standard-definition television systems. The term is usually used in reference to digital television, in particular when broadcasting at the same (or similar) resolution as low-definition analog television systems. Mobile DTV systems usually transmit in low definition, as do all slow-scan television systems.
Sources
The Video CD format uses a progressive scan LDTV signal (352×240 or 352×288), which is half the vertical and horizontal resolution of full-bandwidth SDTV. However, most players will internally upscale VCD material to 480/576 lines for playback, as this is both more widely compatible and gives a better overall appearance. No motion information is lost due to this process, as VCD video is not high-motion and only plays back at 25 or 30 frames per second, and the resultant display is comparable to consumer-grade VHS video playback.
For the first few years of its existence, YouTube offered only one low-definition resolution of 256×144 (144p) at 30–50 fps or less, later extending first to widescreen 426×240 and then to gradually higher resolutions. Once the video service had become well established and had been acquired by Google, it had access to Google's radically improved storage space and transmission bandwidth, and could rely on a good proportion of its users having high-speed internet connections. The early low-definition service gave an overall effect reminiscent of early online video streaming attempts using RealVideo or similar services, where 160×120 at single-figure framerates was deemed acceptable to cater to those whose network connections could not sufficiently deliver 240p content.
Video games
Older video game consoles and home computers often generated a technically compliant analog 525-line NTSC or 625-line PAL signal, but only sent one field type rather than alternating between the two. This created a 262 or 312 line progressive scan signal (with half the vertical resolution), which in theory can be decoded on any receiver that can decode normal, interlaced signals.
Since the shadow mask and beam width of standard CRT televisions were designed for interlaced signals, these systems produced a distinctive fixed pattern of alternating bright and dark scan lines; many emulators for older systems offer video filters to recreate this effect. With the introduction of digital video formats these low-definition modes are usually referred to as 240p and 288p (with the standard definition modes being 480i and 576i).
With the introduction of 16-bit computers in the mid-1980s, such as the Atari ST and Amiga, followed by 16-bit consoles in the late 1980s and early 1990s, like the Sega Genesis and Super NES, outputting the standard interlaced resolutions was supported for the first time, but rarely used due to heavy demands on processing power and memory. Standard resolutions also had a tendency to produce noticeable flicker at horizontal edges unless employed quite carefully, such as by using anti-aliasing, which was either not available or computationally exorbitant. Thus, progressive output with half the vertical resolution remained the primary format for most games of the fourth and fifth console generations (including the Sega Saturn, the Sony PlayStation and the Nintendo 64).
With the advent of sixth generation consoles and the launch of the Dreamcast, standard interlaced resolution became more common, and progressive lower resolution usage declined.
More recent game systems tend to use only properly interlaced NTSC or PAL in addition to higher resolution modes, except when running games designed for older, compatible systems in their native modes. The PlayStation 2 generates 240p/288p if a PlayStation game calls for this mode, as do many Virtual Console emulated games on the Nintendo Wii. Nintendo's official software development kit documentation for the Wii refers to 240p as 'non-interlaced mode' or 'double-strike'.
Shortly after the launch of the Wii Virtual Console service, many users with component video cables experienced problems displaying some Virtual Console games due to certain TV models/manufacturers not supporting 240p over a component video connection. Nintendo's solution was to implement a video mode which forces the emulator to output 480i instead of 240p, however many games released prior were never updated.
Teleconferencing LDTV
Sources of LDTV using standard broadcasting techniques include mobile TV services powered by DVB-H, 1seg, DMB, or ATSC-M/H. However, this kind of LDTV transmission technology is based on existent LDTV teleconferencing standards that have been in place since the late 1990s.
Resolutions
See also
List of common resolutions
8640p, 4320p, 2160p, 1080p, 1080i, 720p, 576p, 576i, 480p, 480i
Digital television
Digital radio
DVB, ATSC, ISDB
SDTV, EDTV, HDTV
Narrow-bandwidth television
Moving Pictures Experts Group
Handheld television
References
Digital television
Broadcast engineering
Broadband | Low-definition television | [
"Engineering"
] | 1,016 | [
"Broadcast engineering",
"Electronic engineering"
] |
3,135,512 | https://en.wikipedia.org/wiki/Phosphorolysis | Phosphorolysis is the cleavage of a compound in which inorganic phosphate is the attacking group. It is analogous to hydrolysis.
An example of this is glycogen breakdown by glycogen phosphorylase, which catalyzes attack by inorganic phosphate on the terminal glycosyl residue at the nonreducing end of a glycogen molecule. If the glycogen chain has n glucose units, the products of a single phosphorolytic event are one molecule of glucose 1-phosphate and a glycogen chain of n-1 remaining glucose units.
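Written as a reaction, the glycogen phosphorolysis step described above is (Pi denotes inorganic phosphate):

```latex
\text{glycogen}_{(n\ \text{residues})} + \mathrm{P_i} \longrightarrow \text{glycogen}_{(n-1\ \text{residues})} + \text{glucose 1-phosphate}
```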
In addition, sometimes phosphorolysis is preferable to hydrolysis (like in the breakdown of glycogen or starch, as in the example above) because glucose 1-phosphate yields more ATP than does free glucose when subsequently catabolized to pyruvate.
Another example of phosphorolysis is seen in the conversion of glyceraldehyde 3-phosphate to 1,3-bisphosphoglycerate in glycolysis, in which inorganic phosphate attacks the enzyme-bound thioester intermediate.
See also
Phosphorylase
References
External links
Chemical processes | Phosphorolysis | [
"Chemistry"
] | 250 | [
"Chemical process engineering",
"Chemical process stubs",
"Chemical processes",
"nan"
] |
3,135,830 | https://en.wikipedia.org/wiki/Systems%20biomedicine | Systems biomedicine, also called systems biomedical science, is the application of systems biology to the understanding and modulation of developmental and pathological processes in humans, and in animal and cellular models. Whereas systems biology aims at modeling exhaustive networks of interactions (with the long-term goal of, for example, creating a comprehensive computational model of the cell), mainly at intra-cellular level, systems biomedicine emphasizes the multilevel, hierarchical nature of the models (molecule, organelle, cell, tissue, organ, individual/genotype, environmental factor, population, ecosystem) by discovering and selecting the key factors at each level and integrating them into models that reveal the global, emergent behavior of the biological process under consideration.
Such an approach will be favorable when the execution of all the experiments necessary to establish exhaustive models is limited by time and expense (e.g., in animal models) or basic ethics (e.g., human experimentation).
In 1992, a paper on systems biomedicine by T. Kamada was published (November–December issue), and an article on systems medicine and pharmacology by B.J. Zeng was published in April of the same year.
In 2009, the first collective book on systems biomedicine was edited by Edison T. Liu and Douglas A. Lauffenburger.
In October 2008, one of the first research groups uniquely devoted to systems biomedicine was established at the European Institute of Oncology. One of the first research centers specializing in systems biomedicine was founded by Rudi Balling. The Luxembourg Centre for Systems Biomedicine is an interdisciplinary centre of the University of Luxembourg. The first centre devoted to spatial issues in systems biomedicine was recently established at Oregon Health and Science University.
The first peer-reviewed journal on this topic, Systems Biomedicine, was recently established by Landes Bioscience.
See also
Systems biology
Systems medicine
References
Bioinformatics
Systems biology | Systems biomedicine | [
"Engineering",
"Biology"
] | 418 | [
"Bioinformatics",
"Biological engineering",
"Systems biology"
] |
3,136,164 | https://en.wikipedia.org/wiki/Principal%20ideal%20theorem | In mathematics, the principal ideal theorem of class field theory, a branch of algebraic number theory, says that extending ideals gives a mapping from the class group of an algebraic number field to the class group of its Hilbert class field which sends all ideal classes to the class of a principal ideal. The phenomenon has also been called principalization, or sometimes capitulation.
Formal statement
For any algebraic number field K and any ideal I of the ring of integers of K, if L is the Hilbert class field of K, then
IOL is a principal ideal αOL, for OL the ring of integers of L and some element α in it.
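In symbols, with Cl(K) and Cl(L) the ideal class groups of K and L, the theorem states that the extension-of-ideals map is trivial:

```latex
\operatorname{Cl}(K) \to \operatorname{Cl}(L), \qquad [I] \mapsto [I\mathcal{O}_L] = 1 .
```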
History
The principal ideal theorem was conjectured by David Hilbert, and was the last remaining aspect of his program on class fields to be completed, in 1929.
Emil Artin reduced the principal ideal theorem to a question about finite abelian groups: he showed that it would follow if the transfer from a finite group to its derived subgroup is trivial. This result was proved by Philipp Furtwängler (1929).
References
Ideals (ring theory)
Group theory
Homological algebra
Theorems in algebraic number theory | Principal ideal theorem | [
"Mathematics"
] | 222 | [
"Mathematical structures",
"Theorems in algebraic number theory",
"Group theory",
"Fields of abstract algebra",
"Theorems in number theory",
"Category theory",
"Homological algebra"
] |
5,694,932 | https://en.wikipedia.org/wiki/Tecplot | Tecplot is a family of visualization & analysis software tools developed by American company Tecplot, Inc., which is headquartered in Bellevue, Washington. The firm was formerly operated as Amtec Engineering. In 2016, the firm was acquired by Vela Software, an operating group of Constellation Software, Inc. (TSX:CSU).
Tecplot 360
Tecplot 360 is a Computational Fluid Dynamics (CFD) and numerical simulation software package used in post-processing simulation results. Tecplot 360 is also used in chemistry applications to visualize molecule structure by post-processing charge density data.
Common tasks associated with post-processing analysis of flow solver (e.g. Fluent, OpenFOAM) data include: calculating grid quantities (e.g. aspect ratios, skewness, orthogonality and stretch factors); normalizing data; deriving flow field functions like pressure coefficient or vorticity magnitude; verifying solution convergence; estimating the order of accuracy of solutions; and interactively exploring data through cut planes (a slice through a region), iso-surfaces (3-D maps of concentrations), and particle paths (dropping an object in the "fluid" and watching where it goes).
Tecplot 360 may be used to visualize output from programming languages such as Fortran. Tecplot's native data format is PLT or SZPLT. Many other formats are also supported, including:
CFD Formats:
VTK, CGNS, FLOW-3D (Flow Science, Inc.), ANSYS CFX, ANSYS FLUENT .cas and .dat format and polyhedra, OpenFOAM, PLOT3D (NASA), Tecplot and polyhedra, Ensight Gold, HDF5 (Hierarchical Data Format), CONVERGE CFD (Convergent Science), and Barracuda Virtual Reactor (CPFD Software).
Data Formats:
HDF, Microsoft Excel (Windows only), comma- or space-delimited ASCII.
FEA Formats:
Abaqus, ANSYS, FIDAP Neutral, LSTC/DYNA LS-DYNA, NASTRAN MSC Software, Patran MSC Software, PTC Mechanica, SDRC/IDEAS universal and 3D Systems STL.
ParaView supports Tecplot format through a VisIt importer.
Tecplot RS
Tecplot RS is a tool tailored towards visualizing the results of
reservoir simulations, which model the flow of fluids through porous media, as in oil and gas fields, and aquifers.
Tecplot Focus
Tecplot Focus is plotting software designed for measured field data, performance plotting of test data, mathematical analysis, and general engineering plotting.
Tecplot Chorus
Tecplot Chorus is a data management, design optimization, and aero database development framework used for comparing collections of CFD simulations.
References
External links
Official Site
User Community
File format definition
Graphics software
Computational fluid dynamics
Plotting software
Software that uses VTK | Tecplot | [
"Physics",
"Chemistry"
] | 617 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
5,697,044 | https://en.wikipedia.org/wiki/Negishi%20coupling | The Negishi coupling is a widely employed transition metal catalyzed cross-coupling reaction. The reaction couples organic halides or triflates with organozinc compounds, forming carbon-carbon bonds (C-C) in the process. A palladium (0) species is generally utilized as the catalyst, though nickel is sometimes used. A variety of nickel catalysts in either Ni0 or NiII oxidation state can be employed in Negishi cross couplings such as Ni(PPh3)4, Ni(acac)2, Ni(COD)2 etc.
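Schematically, the overall transformation can be written as below, with the groups R, X, R′ and X′ as described in the list that follows:

```latex
\mathrm{R{-}X} \;+\; \mathrm{R'{-}Zn{-}X'} \;\xrightarrow{\;\text{Pd(0) or Ni catalyst}\;}\; \mathrm{R{-}R'} \;+\; \mathrm{X{-}Zn{-}X'}
```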
The leaving group X is usually chloride, bromide, or iodide, but triflate and acetyloxy groups are feasible as well. X = Cl usually leads to slow reactions.
The organic residue R = alkenyl, aryl, allyl, alkynyl or propargyl.
The halide X' in the organozinc compound can be chloride, bromine or iodine and the organic residue R' is alkenyl, aryl, allyl, alkyl, benzyl, homoallyl, and homopropargyl.
The metal M in the catalyst is nickel or palladium
The ligand L in the catalyst can be triphenylphosphine, dppe, BINAP, chiraphos or XPhos.
Palladium catalysts in general have higher chemical yields and higher functional group tolerance.
The Negishi coupling finds common use in the field of total synthesis as a method for selectively forming C-C bonds between complex synthetic intermediates. The reaction allows for the coupling of sp3, sp2, and sp carbon atoms (see orbital hybridization), which makes it somewhat unusual among the palladium-catalyzed coupling reactions. Organozincs are moisture and air sensitive, so the Negishi coupling must be performed in an oxygen- and water-free environment, a fact that has hindered its use relative to other cross-coupling reactions that require less rigorously controlled conditions (e.g. the Suzuki reaction). However, organozincs are more reactive than both organostannanes and organoborates, which correlates with faster reaction times.
The reaction is named after Ei-ichi Negishi who was a co-recipient of the 2010 Nobel Prize in Chemistry for the discovery and development of this reaction.
Negishi and coworkers originally investigated the cross-coupling of organoaluminum reagents in 1976, initially employing Ni and Pd as the transition metal catalysts, but noted that Ni resulted in the decay of stereospecificity whereas Pd did not. Transitioning from organoaluminum species to organozinc compounds, Negishi and coworkers reported the use of Pd complexes in organozinc coupling reactions and carried out methods studies, eventually developing the reaction conditions into those commonly utilized today. Alongside Richard F. Heck and Akira Suzuki, Ei-ichi Negishi was a co-recipient of the Nobel Prize in Chemistry in 2010, for his work on "palladium-catalyzed cross couplings in organic synthesis".
Reaction mechanism
The reaction mechanism is thought to proceed via a standard Pd catalyzed cross-coupling pathway, starting with a Pd(0) species, which is oxidized to Pd(II) in an oxidative addition step involving the organohalide species. This step proceeds with aryl, vinyl, alkynyl, and acyl halides, acetates, or triflates, with substrates following standard oxidative addition relative rates (I>OTf>Br>>Cl).
The actual mechanism of oxidative addition is unresolved, though there are two likely pathways. One pathway is thought to proceed via an SN2-like mechanism resulting in inverted stereochemistry. The other pathway proceeds via concerted addition and retains stereochemistry.
Though the addition is cis, the Pd(II) complex rapidly isomerizes to the trans complex.
Next, the transmetalation step occurs, in which the organozinc reagent exchanges its organic substituent with the halide in the Pd(II) complex, generating the trans-Pd(II) complex and a zinc halide salt. The organozinc substrate can be aryl, vinyl, allyl, benzyl, homoallyl, or homopropargyl. Transmetalation is usually rate limiting, and a complete mechanistic understanding of this step has not yet been reached, though several studies have shed light on this process. Alkylzinc species form higher-order zincate species prior to transmetalation whereas arylzinc species do not. ZnXR and ZnR2 can both be used as reactive reagents, and Zn is known to prefer four-coordinate complexes, which means solvent-coordinated Zn complexes cannot be ruled out a priori. Studies indicate competing equilibria exist between cis- and trans- bis alkyl organopalladium complexes, but that the only productive intermediate is the cis complex.
The last step in the catalytic pathway of the Negishi coupling is reductive elimination, which is thought to proceed via a three coordinate transition state, yielding the coupled organic product and regenerating the Pd(0) catalyst. For this step to occur, the aforementioned cis- alkyl organopalladium complex must be formed.
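The three elementary steps discussed above can be summarized as follows (L = supporting ligand; charges and coordination geometry omitted):

```latex
\begin{aligned}
\text{oxidative addition:} &\quad \mathrm{Pd^0L_n + R{-}X \;\rightarrow\; R{-}Pd^{II}L_n{-}X}\\
\text{transmetalation:} &\quad \mathrm{R{-}Pd^{II}L_n{-}X + R'{-}ZnX' \;\rightarrow\; R{-}Pd^{II}L_n{-}R' + ZnXX'}\\
\text{reductive elimination:} &\quad \mathrm{R{-}Pd^{II}L_n{-}R' \;\rightarrow\; R{-}R' + Pd^0L_n}
\end{aligned}
```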
Both organozinc halides and diorganozinc compounds can be used as starting materials. In one model system it was found that in the transmetalation step the former gives the cis-adduct R-Pd-R', resulting in fast reductive elimination to product, while the latter gives the trans-adduct, which has to go through a slow trans-cis isomerization first.
A common side reaction is homocoupling. In one Negishi model system the formation of homocoupling was found to be the result of a second transmetalation reaction between the diarylmetal intermediate and arylmetal halide:
Ar–Pd–Ar' + Ar'–Zn–X → Ar'–Pd–Ar' + Ar–Zn–X
Ar'–Pd–Ar' → Ar'–Ar' + Pd(0) (homocoupling)
Ar–Zn–X + H2O → Ar–H + HO–Zn–X (reaction accompanied by dehalogenation)
Nickel catalyzed systems can operate under different mechanisms depending on the coupling partners. Unlike palladium systems which involve only Pd0 or PdII, nickel catalyzed systems can involve nickel of different oxidation states. Both systems are similar in that they involve similar elementary steps: oxidative addition, transmetalation, and reductive elimination. Both systems also have to address issues of β-hydride elimination and difficult oxidative addition of alkyl electrophiles.
For unactivated alkyl electrophiles, one possible mechanism is a transmetalation first mechanism. In this mechanism, the alkyl zinc species would first transmetalate with the nickel catalyst. Then the nickel would abstract the halide from the alkyl halide resulting in the alkyl radical and oxidation of nickel after addition of the radical.
One important factor when contemplating the mechanism of a nickel catalyzed cross coupling is that reductive elimination is facile from NiIII species, but very difficult from NiII species. Kochi and Morrell provided evidence for this by isolating NiII complex Ni(PEt3)2(Me)(o-tolyl), which did not undergo reductive elimination quickly enough to be involved in this elementary step.
Scope
The Negishi coupling has been applied the following illustrative syntheses:
unsymmetrical 2,2'-bipyridines from 2-bromopyridine with tetrakis(triphenylphosphine)palladium(0),
biphenyl from o-tolylzinc chloride and o-iodotoluene and tetrakis(triphenylphosphine)palladium(0),
5,7-hexadecadiene from 1-decyne and (Z)-1-hexenyl iodide.
Negishi coupling has been applied in the synthesis of hexaferrocenylbenzene:
with hexaiodidobenzene, diferrocenylzinc and tris(dibenzylideneacetone)dipalladium(0) in tetrahydrofuran. The yield is only 4% signifying substantial crowding around the aryl core.
In a novel modification palladium is first oxidized by the haloketone 2-chloro-2-phenylacetophenone 1 and the resulting palladium OPdCl complex then accepts both the organozinc compound 2 and the organotin compound 3 in a double transmetalation:
Examples of nickel catalyzed Negishi couplings include sp2-sp2, sp2-sp3, and sp3-sp3 systems. In the system first studied by Negishi, aryl-aryl cross coupling was catalyzed by Ni(PPh3)4 generated in situ through reduction of Ni(acac)2 with PPh3 and (i-Bu)2AlH.
Variations have also been developed to allow for the cross-coupling of aryl and alkenyl partners. In the variation developed by Knochel et al, aryl zinc bromides were reacted with vinyl triflates and vinyl halides.
Reactions between sp3-sp3 centers are often more difficult; however, adding an unsaturated ligand with an electron withdrawing group as a cocatalyst improved the yield in some systems. It is believed that added coordination from the unsaturated ligand favors reductive elimination over β-hydride elimination. This also works in some alkyl-aryl systems.
Several asymmetric variants exist and many utilize Pybox ligands.
Industrial applications
The Negishi coupling is not employed as frequently in industrial applications as its cousins the Suzuki reaction and Heck reaction, mostly as a result of the water and air sensitivity of the required aryl or alkyl zinc reagents. In 2003 Novartis employed a Negishi coupling in the manufacture of PDE472, a phosphodiesterase type 4D inhibitor, which was being investigated as a drug lead for the treatment of asthma. The Negishi coupling was used as an alternative to the Suzuki reaction providing improved yields, 73% on a 4.5 kg scale, of the desired benzodioxazole synthetic intermediate.
Applications in total synthesis
While the Negishi coupling is rarely used in industrial chemistry, a result of the aforementioned water and oxygen sensitivity, it finds wide use in the field of natural products total synthesis. The increased reactivity relative to other cross-coupling reactions makes the Negishi coupling ideal for joining complex intermediates in the synthesis of natural products. Additionally, Zn is more environmentally friendly than other metals such as Sn used in the Stille coupling. The Negishi coupling historically is not used as much as the Stille or Suzuki coupling. When it comes to fragment-coupling processes the Negishi coupling is particularly useful, especially when compared to the aforementioned Stille and Suzuki coupling reactions. The major drawback of the Negishi coupling, aside from its water and oxygen sensitivity, is its relative lack of functional group tolerance when compared to other cross-coupling reactions.
(−)-Stemoamide is a natural product found in the root extracts of Stemona tuberosa. These extracts have been used in Japanese and Chinese folk medicine to treat respiratory disorders, and (−)-stemoamide is also an anthelminthic. Somfai and coworkers employed a Negishi coupling in their synthesis of (−)-stemoamide. The reaction was implemented mid-synthesis, forming an sp3-sp2 C-C bond between a β,γ-unsaturated ester and an intermediate diene 4 with a 78% yield of product 5. Somfai completed the stereoselective total synthesis of (−)-stemoamide in 12 steps with a 20% overall yield.
Kibayashi and coworkers utilized the Negishi coupling in the total synthesis of Pumiliotoxin B. Pumiliotoxin B is one of the major toxic alkaloids isolated from Dendrobates pumilio, a Panamanian poison frog. These toxic alkaloids display modulatory effects on voltage-dependent sodium channels, resulting in cardiotonic and myotonic activity. Kibayashi employed the Negishi coupling at a late stage in the synthesis of Pumiliotoxin B, coupling a homoallylic sp3 carbon on the zinc alkylidene indolizidine 6 with the (E)-vinyl iodide 7 with a 51% yield. The natural product was then obtained after deprotection.
δ-trans-tocotrienoloic acid, isolated from the plant Chrysochlamys ulei, is a natural product shown to inhibit DNA polymerase β (pol β), which functions to repair DNA via base excision. Inhibition of pol β in conjunction with other chemotherapy drugs may increase the cytotoxicity of these chemotherapeutics, leading to lower effective dosages. The Negishi coupling was implemented in the synthesis of δ-trans-tocotrienoloic acid by Hecht and Maloney, coupling the sp3 homopropargyl zinc reagent 8 with sp2 vinyl iodide 9. The reaction proceeded with quantitative yield, coupling fragments mid-synthesis en route to the stereoselectively synthesized natural product δ-trans-tocotrienoloic acid.
Smith and Fu demonstrated that their method to couple secondary nucleophiles with secondary alkyl electrophiles could be applied to the formal synthesis of α-cembra-2,7,11-triene-4,6-diol, a target with antitumor activity. They achieved a 61% yield on a gram scale using their method to install an iso-propyl group. This method would be highly adaptable in this application for diversification and installing other alkyl groups to enable structure-activity relationship (SAR) studies. Kirschning and Schmidt applied nickel-catalyzed Negishi cross-coupling to the first total synthesis of carolacton. In this application, they achieved 82% yield and dr = 10:1.
Preparation of organozinc precursors
Alkylzinc reagents can be accessed from the corresponding alkyl bromides using iodine in dimethylacetamide (DMAC). The catalytic I2 serves to activate the zinc towards nucleophilic addition.
Aryl zincs can be synthesized using mild reaction conditions via a Grignard like intermediate.
Organozincs can also be generated in situ and used in a one pot procedure as demonstrated by Knochel et al.
Further reading
See also
CPhos
Heck reaction
Suzuki reaction
References
External links
The Negishi coupling at www.organic-chemistry.org
Carbon-carbon bond forming reactions
Condensation reactions
Name reactions | Negishi coupling | [
"Chemistry"
] | 3,209 | [
"Carbon-carbon bond forming reactions",
"Coupling reactions",
"Organic reactions",
"Name reactions",
"Condensation reactions"
] |
5,697,912 | https://en.wikipedia.org/wiki/Imaging%20spectrometer | An imaging spectrometer is an instrument used in hyperspectral imaging and imaging spectroscopy to acquire a spectrally-resolved image of an object or scene, usually to support analysis of the composition of the object being imaged. The spectral data produced for each pixel is often referred to collectively as a datacube due to the three-dimensional representation of the data. Two axes of the image correspond to vertical and horizontal distance and the third to wavelength. The principle of operation is the same as that of the simple spectrometer, but special care is taken to avoid optical aberrations for better image quality.
Example imaging spectrometer types include: filtered camera, whiskbroom scanner, pushbroom scanner, integral field spectrograph (or related dimensional reformatting techniques), wedge imaging spectrometer, Fourier transform imaging spectrometer, computed tomography imaging spectrometer (CTIS), image replicating imaging spectrometer (IRIS), coded aperture snapshot spectral imager (CASSI), and image mapping spectrometer (IMS).
Background
In 1704, Sir Isaac Newton demonstrated that white light could be split up into component colours. The subsequent history of spectroscopy led to precise measurements and provided the empirical foundations for atomic and molecular physics (Born & Wolf, 1999). Significant achievements in imaging spectroscopy are attributed to airborne instruments, particularly arising in the early 1980s and 1990s (Goetz et al., 1985; Vane et al., 1984). However, it was not until 1999 that the first imaging spectrometer was launched in space (the NASA Moderate-resolution Imaging Spectroradiometer, or MODIS).
Terminology and definitions evolve over time. At one time, more than 10 spectral bands sufficed to justify the term imaging spectrometer, but presently the term is seldom defined by a total minimum number of spectral bands; rather, it implies a contiguous (or redundant) set of spectral bands.
Principle
Imaging spectrometers are used specifically to measure the spectral content of light and other electromagnetic radiation. The spectral data gathered are used to give the operator insight into the sources of the radiation. Prism spectrometers use a classical method of dispersing radiation by means of a prism as the refracting element.
The imaging spectrometer works by imaging a radiation source onto a "slit" by means of a source imager. A collimator collimates the beam, which is dispersed by a refracting prism and re-imaged onto a detection system by a re-imager. Special care is taken to produce the best possible image of the source on the slit; the purpose of the collimator and re-imaging optics is then to form the best possible image of the slit itself. An area array of detector elements fills the detection system. Every point of the source image is re-imaged as a line spectrum on a column of the detector array, so the detector signals carry the spectral content of spatially resolved points within the source area: these points are imaged onto the slit and then re-imaged onto the detector array. The system thus provides spectral information simultaneously for a line of spatially resolved source points. The slit line is then scanned across the source to build up a database of information about the spectral content of the whole scene.
In imaging spectroscopy (also hyperspectral imaging or spectral imaging) each pixel of an image acquires many bands of light intensity data from the spectrum, instead of just the three bands of the RGB color model. More precisely, it is the simultaneous acquisition of spatially coregistered images in many spectrally contiguous bands.
Some spectral images contain only a few image planes of a spectral data cube, while others are better thought of as full spectra at every location in the image. For example, solar physicists use the spectroheliograph to make images of the Sun built up by scanning the slit of a spectrograph, to study the behavior of surface features on the Sun; such a spectroheliogram may have a spectral resolution (λ/Δλ) of over 100,000 and be used to measure local motion (via the Doppler shift) and even the magnetic field (via the Zeeman splitting or Hanle effect) at each location in the image plane. The multispectral images collected by the Opportunity rover, in contrast, have only four wavelength bands and hence are only a little more than 3-color images.
Unmixing
Hyperspectral data is often used to determine what materials are present in a scene. Materials of interest could include roadways, vegetation, and specific targets (e.g., pollutants or hazardous materials). Trivially, each pixel of a hyperspectral image could be compared to a material database to determine the type of material making up the pixel. However, many hyperspectral imaging platforms have low spatial resolution (>5 m per pixel), so each pixel is typically a mixture of several materials. The process of unmixing one of these 'mixed' pixels is called hyperspectral image unmixing or simply hyperspectral unmixing.
A solution to hyperspectral unmixing is to reverse the mixing process. Generally, two models of mixing are assumed: linear and nonlinear.
Linear mixing models the ground as being flat, with incident sunlight causing the materials to radiate some amount of the incident energy back to the sensor. Each pixel, then, is modeled as a linear sum of the radiated energy curves of the materials making up the pixel. Therefore, each material contributes to the sensor's observation in a positive linear fashion. Additionally, a conservation-of-energy constraint is often observed, forcing the weights of the linear mixture to sum to one in addition to being positive. The model can be described mathematically as follows:
p = M x

where p represents a pixel observed by the sensor, M is a matrix of material reflectance signatures (each signature is a column of the matrix), and x is the vector of proportions of the materials present in the observed pixel. This type of model is also referred to as a simplex.
with x satisfying the two constraints:
1. Abundance Nonnegativity Constraint (ANC) - each element of x is positive.
2. Abundance Sum-to-one Constraint (ASC) - the elements of x must sum to one.
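To illustrate the linear mixing model and its two constraints, the sketch below (a toy example with made-up endmember signatures; the matrix M, the abundances and the noise level are all assumptions) recovers abundances for a synthetic pixel by constrained least squares with SciPy, enforcing the ANC through bounds and the ASC through an equality constraint.

```python
import numpy as np
from scipy.optimize import minimize

# Toy endmember matrix M: 5 spectral bands, 3 materials (each column is a signature).
M = np.array([[0.10, 0.70, 0.30],
              [0.20, 0.65, 0.35],
              [0.40, 0.50, 0.30],
              [0.60, 0.30, 0.45],
              [0.80, 0.20, 0.55]])

true_x = np.array([0.5, 0.3, 0.2])             # satisfies ANC and ASC by construction
p = M @ true_x + 0.01 * np.random.randn(5)     # observed pixel with a little noise

# Fully constrained least squares: minimise ||M x - p||^2 with x >= 0 and sum(x) = 1.
result = minimize(lambda x: np.sum((M @ x - p) ** 2),
                  x0=np.full(3, 1.0 / 3.0),
                  bounds=[(0.0, None)] * 3,
                  constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}])

print(result.x)   # estimated abundances, close to true_x
```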
Non-linear mixing results from multiple scattering often due to non-flat surface such as buildings and vegetation.
There are many algorithms to unmix hyperspectral data, each with its own strengths and weaknesses. Many algorithms assume that pure pixels (pixels which contain only one material) are present in a scene.
Some algorithms to perform unmixing are listed below:
Pixel Purity Index - works by projecting each pixel onto one vector from a set of random vectors spanning the reflectance space. A pixel receives a score each time it represents an extremum of one of the projections. Pixels with the highest scores are deemed to be spectrally pure.
N-FINDR
Gift Wrapping Algorithm
Independent Component Analysis Endmember Extraction Algorithm - works by assuming that pure pixels occur independently of mixed pixels. Assumes pure pixels are present.
Vertex Component Analysis - works on the fact that the affine transformation of a simplex is another simplex which helps to find hidden (folded) vertices of the simplex. Assumes pure pixels are present.
Principal component analysis - could also be used to determine endmembers; projection onto the principal axes could permit endmember selection [Smith, Johnson and Adams (1985), Bateson and Curtiss (1996)]
Multi endmembers spatial mixture analysis based on the SMA algorithm
Spectral phasor analysis based on Fourier transformation of spectra and plotting them on a 2D plot.
Non-linear unmixing algorithms also exist: support vector machines or analytical neural network.
Probabilistic methods have also been attempted to unmix pixels through a Monte Carlo unmixing algorithm.
Once the fundamental materials of a scene are determined, it is often useful to construct an abundance map of each material, which displays the fractional amount of material present at each pixel. Often linear programming is used to enforce the ANC and ASC when constructing these maps.
Applications
Planetary observations
A practical application of imaging spectrometers is the observation of the planet Earth from orbiting satellites. The spectrometer records a spectrum at every point of the picture, and can be focused on specific parts of the Earth's surface to record data. Advantages of such spectral data include vegetation identification, analysis of its physical condition, mineral identification for the purpose of potential mining, and the assessment of polluted waters in oceans, coastal zones and inland waterways.
Prism spectrometers are well suited to Earth observation because they measure wide spectral ranges competently. Spectrometers can be set to cover a range from 400 nm to 2,500 nm, which interests scientists who observe Earth by means of aircraft and satellites. However, the spectral resolution of the prism spectrometer is too coarse for many scientific applications; its use is therefore specific to recording the spectral content of areas with large spatial variations.
Venus Express, orbiting Venus, had a number of imaging spectrometers covering the NIR–visible–UV range.
Geophysical imaging
One application is spectral geophysical imaging, which allows quantitative and qualitative characterization of the surface and of the atmosphere, using radiometric measurements. These measurements can then be used for unambiguous direct and indirect identification of surface materials and atmospheric trace gases, the measurement of their relative concentrations, subsequently the assignment of the proportional contribution of mixed pixel signals (e.g., the spectral unmixing problem), the derivation of their spatial distribution (mapping problem), and finally their study over time (multi-temporal analysis). The Moon Mineralogy Mapper on Chandrayaan-1 was a geophysical imaging spectrometer.
Disadvantages
The lenses of the prism spectrometer are used for both collimation and re-imaging; however, the imaging spectrometer is limited in its performance by the image quality provided by the collimators and re-imagers. The resolution of the slit image at each wavelength limits spatial resolution; likewise, the resolution of optics across the slit image at each wavelength limits spectral resolution. Moreover, distortion of the slit image at each wavelength can complicate the interpretation of the spectral data.
The refracting lenses used in the imaging spectrometer limit performance through the axial chromatic aberrations of the lenses. These chromatic aberrations are undesirable because they create differences in focus, which prevent good resolution; however, if the spectral range is restricted, good resolution is achievable. Furthermore, chromatic aberrations can be corrected by using two or more refracting materials over the full visible range. It is harder to correct chromatic aberrations over wider spectral ranges without further optical complexity.
Systems
Spectrometers intended for very wide spectral ranges are best if made with all-mirror systems. These particular systems have no chromatic aberrations, and that is why they are preferable. On the other hand, spectrometers with single point or linear array detection systems require simpler mirror systems. Spectrometers using area-array detectors need more complex mirror systems to provide good resolution. It is conceivable that a collimator could be made that would prevent all aberrations; however, this design is expensive because it requires the use of aspherical mirrors.
Smaller two-mirror systems can correct aberrations, but they are not suited for imaging spectrometers. Three-mirror systems are compact and correct aberrations as well, but they require at least two aspherical components. Systems with more than four mirrors tend to be large and a lot more complex. Catadioptric systems are used in imaging spectrometers and are compact, too; however, the collimator or imager will be made up of two curved mirrors and three refracting elements, and thus the system is very complex.
Optical complexity is unfavorable, however, because every optical surface scatters light and can produce stray reflections. Scattered radiation can reach the detector and cause errors in the recorded spectra; such stray radiation is referred to as stray light. Limiting the total number of surfaces that can contribute to scatter limits the amount of stray light introduced.
Imaging spectrometers are meant to produce well resolved images. In order for this to occur, imaging spectrometers need to be made with few optical surfaces and have no aspherical optical surfaces.
Sensors
Planned:
EnMAP
Current and Past:
AVIRIS — airborne
MODIS — on board EOS Terra and Aqua platforms
MERIS — on board Envisat
Hyperion — on board Earth Observing-1
Several commercial manufacturers for laboratory, ground-based, aerial, or industrial imaging spectrographs
Examples
Ralph (New Horizons), Visible and ultraviolet imaging spectrometer on New Horizons
Jovian Infrared Auroral Mapper, infrared imaging spectrometer on Juno Jupiter orbiter
Mapping Imaging Spectrometer for Europa (planned for the developmental Europa Clipper spacecraft)
Compact Reconnaissance Imaging Spectrometer for Mars (CRISM), imaging spectrometer in Mars orbit aboard Mars Reconnaissance Orbiter
Special Sensor Ultraviolet Limb Imager, to observe the earth's ionosphere and thermosphere
See also
Landsat
Remote sensing
Hyperspectral imaging
Full Spectral Imaging
List of Earth observation satellites
Chemical Imaging
Infrared Microscopy
Phasor approach to fluorescence lifetime and spectral imaging
Video spectroscopy
References
Further reading
Goetz, A.F.H., Vane, G., Solomon, J.E., & Rock, B.N. (1985) Imaging spectrometry for earth remote sensing. Science, 228, 1147.
Schaepman, M. (2005) Spectrodirectional Imaging: From Pixels to Processes. Inaugural address, Wageningen University, Wageningen (NL).
Vane, G., Chrisp, M., Emmark, H., Macenka, S., & Solomon, J. (1984) Airborne Visible Infrared Imaging Spec-trometer (AVIRIS): An Advanced Tool for Earth Remote Sensing. European Space Agency, (Special Publication) ESA SP, 2, 751.
External links
List of imaging spectrometer instruments
About imaging spectroscopy (USGS): http://speclab.cr.usgs.gov/aboutimsp.html
Link to resources (OKSI): http://www.techexpo.com/WWW/opto-knowledge/IS_resources.html
Special Interest Group Imaging Spectroscopy (EARSeL): https://web.archive.org/web/20051230225147/http://www.op.dlr.de/dais/SIG-IS/SIG-IS.html
Applications of Spectroscopic and Chemical Imaging in Research: http://www3.imperial.ac.uk/vibrationalspectroscopyandchemicalimaging/research
Analysis tool for spectral unmixing : http://www.spechron.com
Image sensors
Spectrometers | Imaging spectrometer | [
"Physics",
"Chemistry"
] | 3,098 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
5,699,217 | https://en.wikipedia.org/wiki/Diffusion%20creep | Diffusion creep refers to the deformation of crystalline solids by the diffusion of vacancies through their crystal lattice. Diffusion creep results in plastic deformation rather than brittle failure of the material.
Diffusion creep is more sensitive to temperature than other deformation mechanisms. It becomes especially relevant at high homologous temperatures (i.e. within about a tenth of its absolute melting temperature). Diffusion creep is caused by the migration of crystalline defects through the lattice of a crystal such that when a crystal is subjected to a greater degree of compression in one direction relative to another, defects migrate to the crystal faces along the direction of compression, causing a net mass transfer that shortens the crystal in the direction of maximum compression. The migration of defects is in part due to vacancies, whose migration is equal to a net mass transport in the opposite direction.
Principle
Crystalline materials are never perfect on a microscale. Some sites of atoms in the crystal lattice can be occupied by point defects, such as "alien" particles or vacancies. Vacancies can actually be thought of as chemical species themselves (or part of a compound species/component) that may then be treated using heterogeneous phase equilibria. The number of vacancies may also be influenced by the number of chemical impurities in the crystal lattice, if such impurities require the formation of vacancies to exist in the lattice.
A vacancy can move through the crystal structure when a neighbouring particle "jumps" into the vacancy, so that the vacancy in effect moves one site in the crystal lattice. Chemical bonds need to be broken and new bonds formed during the process, so a certain activation energy is needed. Moving a vacancy through a crystal therefore becomes easier when the temperature is higher.
The most stable state will be when all vacancies are evenly spread through the crystal. This principle follows from Fick's law:
Jx = −Dx ∂C/∂x

in which Jx stands for the flux ("flow") of vacancies in direction x; Dx is a diffusion constant for the material in that direction; and ∂C/∂x is the gradient in the concentration of vacancies in that direction. The law is valid for all principal directions in (x, y, z)-space, so the x in the formula can be exchanged for y or z. The result is that the vacancies become evenly distributed over the crystal, which results in the highest mixing entropy.
When a mechanical stress is applied to the crystal, new vacancies will be created at the sides perpendicular to the direction of the lowest principal stress. The vacancies will start moving in the direction of crystal planes perpendicular to the maximal stress. Current theory holds that the elastic strain in the neighborhood of a defect is smaller toward the axis of greatest differential compression, creating a defect chemical potential gradient (depending upon lattice strain) within the crystal that leads to net accumulation of defects at the faces of maximum compression by diffusion. A flow of vacancies is the same as a flow of particles in the opposite direction. This means a crystalline material can deform under a differential stress, by the flow of vacancies.
Highly mobile chemical components substituting for other species in the lattice can also cause a net differential mass transfer (i.e. segregation) of chemical species inside the crystal itself, often promoting shortening of the rheologically more difficult substance and enhancing deformation.
Types of diffusion creep
Diffusion of vacancies through a crystal can happen in a number of ways. When vacancies move through the crystal (in the material sciences often called a "grain"), this is called Nabarro–Herring creep. Another way in which vacancies can move is along the grain boundaries, a mechanism called Coble creep.
When a crystal deforms by diffusion creep to accommodate space problems from simultaneous grain boundary sliding (the movement of whole grains along grain boundaries) this is called granular or superplastic flow. Diffusion creep can also be simultaneous with pressure solution. Pressure solution is, like Coble creep, a mechanism in which material moves along grain boundaries. While in Coble creep the particles move by "dry" diffusion, in pressure solution they move in solution.
Flow laws
Plastic deformation of a material can be described by a flow law in which the strain rate (ε̇) depends on the differential stress (σ or σD), the grain size (d) and an activation term in the form of an Arrhenius equation:
ε̇ = A (σD^n / d^m) exp(−Q/RT)

In which A is the constant of diffusion, Q the activation energy of the mechanism, R the gas constant and T the absolute temperature (in kelvins). The exponents n and m are values for the sensitivity of the flow to stress and grain size respectively. The values of A, Q, n and m are different for each deformation mechanism. For diffusion creep, the value of n is usually around 1. The value for m can vary between 2 (Nabarro–Herring creep) and 3 (Coble creep). That means Coble creep is more sensitive to the grain size of a material: materials with larger grains deform less easily by Coble creep than materials with small grains.
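The grain-size sensitivity expressed by the exponent m can be illustrated numerically. In the sketch below the pre-exponential factor, activation energy, stress and temperature are assumed, illustrative values (not measured creep parameters); only the relative effect of changing d and m matters.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def strain_rate(sigma, d, A, Q, T, n=1, m=2):
    """Diffusion-creep flow law: A * sigma^n / d^m * exp(-Q/(R*T))."""
    return A * sigma**n / d**m * np.exp(-Q / (R * T))

sigma = 10e6          # differential stress, Pa (assumed)
T = 1400.0            # temperature, K (assumed)
A, Q = 1e-8, 3.0e5    # pre-exponential factor and activation energy (assumed)

for d in (1e-5, 1e-4, 1e-3):                          # grain sizes in metres
    nh = strain_rate(sigma, d, A, Q, T, m=2)          # Nabarro-Herring creep, m = 2
    coble = strain_rate(sigma, d, A, Q, T, m=3)       # Coble creep, m = 3
    print(f"d = {d:.0e} m:  NH ~ {nh:.2e} 1/s,  Coble ~ {coble:.2e} 1/s")
```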
Traces of diffusion creep
It is difficult to find clear microscale evidence for diffusion creep in a crystalline material, since few structures have been identified as definite proof. A material that was deformed by diffusion creep can have flattened grains (grains with a so-called shape-preferred orientation, or SPO). Equidimensional grains with no lattice-preferred orientation (LPO) can be an indication of superplastic flow. In materials that were deformed under very high temperatures, lobate grain boundaries may be taken as evidence for diffusion creep.
Diffusion creep is a mechanism by which the volume of the crystals can increase. Larger grain sizes can be a sign that diffusion creep was more effective in a crystalline material.
See also
Creep (deformation)
Deformation (engineering)
Diffusion
Dislocation creep
Material sciences
References
Literature
Gower, R.J.W. & Simpson, C.; 1992: Phase boundary mobility in naturally deformed, high-grade quartzofeldspatic rocks: evidence for diffusion creep, Journal of Structural Geology 14, p. 301-314.
Passchier, C.W. & Trouw, R.A.J., 1998: Microtectonics, Springer,
Twiss, R.J. & Moores, E.M., 2000 (6th edition): Structural Geology, W.H. Freeman & co,
Materials degradation | Diffusion creep | [
"Materials_science",
"Engineering"
] | 1,324 | [
"Materials degradation",
"Materials science"
] |
13,653,300 | https://en.wikipedia.org/wiki/Kinetic%20proofreading | Kinetic proofreading (or kinetic amplification) is a mechanism for error correction in biochemical reactions, proposed independently by John Hopfield (1974) and Jacques Ninio (1975). Kinetic proofreading allows enzymes to discriminate between two possible reaction pathways leading to correct or incorrect products with an accuracy higher than what one would predict based on the difference in the activation energy between these two pathways.
Increased specificity is obtained by introducing an irreversible step exiting the pathway, with reaction intermediates leading to incorrect products more likely to prematurely exit the pathway than reaction intermediates leading to the correct product. If the exit step is fast relative to the next step in the pathway, the specificity can be increased by a factor of up to the ratio between the two exit rate constants. (If the next step is fast relative to the exit step, specificity will not be increased because there will not be enough time for exit to occur.) This can be repeated more than once to increase specificity further.
As an analogy, if we have a medicine assembly line that sometimes produces empty boxes, and we are unable to upgrade the assembly line, then we can increase the ratio of full boxes to empty boxes (specificity) by placing a giant fan at the end of the line. Empty boxes are more likely to be blown off the line (a higher exit rate) than full boxes, even though the production rates of both are lowered. By lengthening the final section and adding more giant fans (multistep proofreading), the specificity can be increased arbitrarily, at the cost of a lower production rate.
Specificity paradox
In protein synthesis, the error rate is on the order of 10⁻⁴. This means that when a ribosome is matching anticodons of tRNA to the codons of mRNA, it matches complementary sequences correctly nearly all the time. Hopfield noted that because of how similar the substrates are (the difference between a wrong codon and a right codon can be as small as a difference in a single base), an error rate that small is unachievable with a one-step mechanism. Both wrong and right tRNA can bind to the ribosome, and if the ribosome can only discriminate between them by complementary matching of the anticodon, it must rely on the small free energy difference between binding three matched complementary bases or only two.
A one-shot machine which tests whether the codons match or not by examining whether the codon and anticodon are bound will not be able to tell the difference between a wrong and a right codon with an error rate less than about 10⁻⁴ unless the free energy difference is at least 9.2 kT, which is much larger than the free energy difference for single-codon binding. This is a thermodynamic bound, so it cannot be evaded by building a different machine. However, this can be overcome by kinetic proofreading, which introduces an irreversible step through the input of energy.
Another molecular recognition mechanism, which does not require expenditure of free energy, is that of conformational proofreading. The incorrect product may also be formed but hydrolyzed at a greater rate than the correct product, giving the possibility of theoretically unlimited specificity the longer the reaction is allowed to run, but at the cost of destroying large amounts of the correct product as well. (Thus there is a tradeoff between product production and its efficiency.) The hydrolytic activity may be on the same enzyme, as in DNA polymerases with editing functions, or on different enzymes.
Multistep ratchet
Hopfield suggested a simple way to achieve smaller error rates using a molecular ratchet which takes many irreversible steps, each testing to see if the sequences match. At each step, energy is expended and specificity (the ratio of correct substrate to incorrect substrate at that point in the pathway) increases.
The requirement for energy in each step of the ratchet is due to the need for the steps to be irreversible; for specificity to increase, entry of substrate and analogue must occur largely through the entry pathway, and exit largely through the exit pathway. If entry were an equilibrium, the earlier steps would form a pre-equilibrium and the specificity benefits of entry into the pathway (less likely for the substrate analogue) would be lost; if the exit step were an equilibrium, then the substrate analogue would be able to re-enter the pathway through the exit step, bypassing the specificity of earlier steps altogether.
Although one test will fail to discriminate between mismatched and matched sequences a fraction ε of the time, two tests will both fail only ε^2 of the time, and N tests will fail ε^N of the time. In terms of free energy, the discrimination power of N successive tests for two states with a free energy difference ΔF is the same as that of one test between two states with a free energy difference of NΔF.
To achieve an error rate of 10⁻⁴ requires several comparison steps. Hopfield predicted on the basis of this theory that there is a multistage ratchet in the ribosome which tests the match several times before incorporating the next amino acid into the protein.
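The scaling of the error rate with the number of irreversible tests can be sketched in a few lines. The free-energy gap per test used below is an arbitrary assumed value; the point is only that N tests with gap ΔF discriminate like a single test with gap NΔF.

```python
import numpy as np

kT = 1.0
dF = 2.0 * kT                        # assumed free-energy difference per test
eps = np.exp(-dF / kT)               # single-test failure probability ~ exp(-dF/kT)

for n_steps in range(1, 7):
    error_rate = eps ** n_steps      # N independent irreversible tests fail with prob eps^N
    print(f"{n_steps} step(s): error rate ~ {error_rate:.2e}")
```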
Experimental examples
Charging tRNAs with their respective amino-acids – the enzyme that charges the tRNA is called aminoacyl tRNA synthetase. This enzyme utilizes a high energy intermediate state to increase the fidelity of binding the right pair of tRNA and amino-acid. In this case, energy is used to make the high-energy intermediate (making the entry pathway irreversible), and the exit pathway is irreversible by virtue of the high energy difference in dissociation.
Homologous recombination – Homologous recombination facilitates the exchange between homologous or almost homologous DNA strands. During this process, the RecA protein polymerizes along a DNA and this DNA-protein filament searches for a homologous DNA sequence. Both processes of RecA polymerization and homology search utilize the kinetic proofreading mechanism.
DNA damage recognition and repair – a certain DNA repair mechanism utilizes kinetic proofreading to discriminate damaged DNA. Some DNA polymerases can also detect when they have added an incorrect base and are able to hydrolyze it immediately; in this case, the irreversible (energy-requiring) step is addition of the base.
Antigen discrimination by T cell receptors – T cells respond to foreign antigens at low concentrations, while ignoring any self-antigens present at much higher concentration. This ability is known as antigen discrimination. T-cell receptors use kinetic proofreading to discriminate between high and low affinity antigens presented on an MHC molecule. The intermediate steps of kinetic proofreading are realized by multiple rounds of phosphorylation of the receptor and its adaptor proteins.
Theoretical considerations
Universal first passage time
Biochemical processes that use kinetic proofreading to improve specificity implement the delay-inducing multistep ratchet by a variety of distinct biochemical networks. Nonetheless, many such networks result in the times to completion of the molecular assembly and the proofreading steps (also known as the first passage time) that approach a near-universal, exponential shape for high proofreading rates and large network sizes. Since exponential completion times are characteristic of a two-state Markov process, this observation makes kinetic proofreading one of only a few examples of biochemical processes where structural complexity results in a much simpler large-scale, phenomenological dynamics.
Topology
The increase in specificity, or the overall amplification factor of a kinetic proofreading network that may include multiple pathways and especially loops is intimately related to the topology of the network: the specificity grows exponentially with the number of loops in the network. An example is homologous recombination in which the number of loops scales like the square of DNA length. The universal completion time emerges precisely in this regime of large number of loops and high amplification.
References
Further reading
Biological processes
DNA replication | Kinetic proofreading | [
"Mathematics",
"Biology"
] | 1,631 | [
"Genetics techniques",
"Mathematical and theoretical biology",
"Applied mathematics",
"DNA replication",
"Molecular genetics",
"nan"
] |
13,657,747 | https://en.wikipedia.org/wiki/Dirac%20bracket | The Dirac bracket is a generalization of the Poisson bracket developed by Paul Dirac to treat classical systems with second class constraints in Hamiltonian mechanics, and to thus allow them to undergo canonical quantization. It is an important part of Dirac's development of Hamiltonian mechanics to elegantly handle more general Lagrangians; specifically, when constraints are at hand, so that the number of apparent variables exceeds that of dynamical ones. More abstractly, the two-form implied from the Dirac bracket is the restriction of the symplectic form to the constraint surface in phase space.
This article assumes familiarity with the standard Lagrangian and Hamiltonian formalisms, and their connection to canonical quantization. Details of Dirac's modified Hamiltonian formalism are also summarized to put the Dirac bracket in context.
Inadequacy of the standard Hamiltonian procedure
The standard development of Hamiltonian mechanics is inadequate in several specific situations:
When the Lagrangian is at most linear in the velocity of at least one coordinate; in which case, the definition of the canonical momentum leads to a constraint. This is the most frequent reason to resort to Dirac brackets. For instance, the Lagrangian (density) for any fermion is of this form.
When there are gauge (or other unphysical) degrees of freedom which need to be fixed.
When there are any other constraints that one wishes to impose in phase space.
Example of a Lagrangian linear in velocity
An example in classical mechanics is a particle with charge q and mass m confined to the x–y plane, subject to a strong constant, homogeneous magnetic field perpendicular to the plane, pointing in the z-direction with strength B.
The Lagrangian for this system with an appropriate choice of parameters is
where A is the vector potential for the magnetic field, B = ∇ × A; c is the speed of light in vacuum; and V is an arbitrary external scalar potential; one could easily take it to be quadratic in x and y, without loss of generality. We use

A = (B/2)(x ŷ − y x̂)
as our vector potential; this corresponds to a uniform and constant magnetic field B in the z direction. Here, the hats indicate unit vectors. Later in the article, however, they are used to distinguish quantum mechanical operators from their classical analogs. The usage should be clear from the context.
Explicitly, the Lagrangian amounts to just
which leads to the equations of motion
For a harmonic potential, the gradient of V amounts to just the coordinates, (x, y).
Now, in the limit of a very large magnetic field, . One may then drop the kinetic term to produce a simple approximate Lagrangian,
with first-order equations of motion
Note that this approximate Lagrangian is linear in the velocities, which is one of the conditions under which the standard Hamiltonian procedure breaks down. While this example has been motivated as an approximation, the Lagrangian under consideration is legitimate and leads to consistent equations of motion in the Lagrangian formalism.
Following the Hamiltonian procedure, however, the canonical momenta associated with the coordinates are now
which are unusual in that they are not invertible to the velocities; instead, they are constrained to be functions of the coordinates: the four phase-space variables are linearly dependent, so the variable basis is overcomplete.
A Legendre transformation then produces the Hamiltonian
Note that this "naive" Hamiltonian has no dependence on the momenta, which means that equations of motion (Hamilton's equations) are inconsistent.
The Hamiltonian procedure has broken down. One might try to fix the problem by eliminating two of the components of the 4-dimensional phase space, say one coordinate and its conjugate momentum, down to a reduced phase space of 2 dimensions, that is, sometimes expressing the coordinates as momenta and sometimes as coordinates. However, this is neither a general nor a rigorous solution. This gets to the heart of the matter: that the definition of the canonical momenta implies a constraint on phase space (between momenta and coordinates) that was never taken into account.
Generalized Hamiltonian procedure
In Lagrangian mechanics, if the system has holonomic constraints, then one generally adds Lagrange multipliers to the Lagrangian to account for them. The extra terms vanish when the constraints are satisfied, thereby forcing the path of stationary action to be on the constraint surface. In this case, going to the Hamiltonian formalism introduces a constraint on phase space in Hamiltonian mechanics, but the solution is similar.
Before proceeding, it is useful to understand the notions of weak equality and strong equality. Two functions on phase space, and , are weakly equal if they are equal when the constraints are satisfied, but not throughout the phase space, denoted . If and are equal independently of the constraints being satisfied, they are called strongly equal, written . It is important to note that, in order to get the right answer, no weak equations may be used before evaluating derivatives or Poisson brackets.
The new procedure works as follows, start with a Lagrangian and define the canonical momenta in the usual way. Some of those definitions may not be invertible and instead give a constraint in phase space (as above). Constraints derived in this way or imposed from the beginning of the problem are called primary constraints. The constraints, labeled , must weakly vanish, .
Next, one finds the naive Hamiltonian, , in the usual way via a Legendre transformation, exactly as in the above example. Note that the Hamiltonian can always be written as a function of s and s only, even if the velocities cannot be inverted into functions of the momenta.
Generalizing the Hamiltonian
Dirac argues that we should generalize the Hamiltonian (somewhat analogously to the method of Lagrange multipliers) to
where the are not constants but functions of the coordinates and momenta. Since this new Hamiltonian is the most general function of coordinates and momenta weakly equal to the naive Hamiltonian, is the broadest generalization of the Hamiltonian possible
so that when .
To further illuminate the , consider how one gets the equations of motion from the naive Hamiltonian in the standard procedure. One expands the variation of the Hamiltonian out in two ways and sets them equal (using a somewhat abbreviated notation with suppressed indices and sums):
where the second equality holds after simplifying with the Euler-Lagrange equations of motion and the definition of canonical momentum. From this equality, one deduces the equations of motion in the Hamiltonian formalism from
where the weak equality symbol is no longer displayed explicitly, since by definition the equations of motion only hold weakly. In the present context, one cannot simply set the coefficients of and separately to zero, since the variations are somewhat restricted by the constraints. In particular, the variations must be tangent to the constraint surface.
One can demonstrate that the solution to
for the variations and restricted by the constraints (assuming the constraints satisfy some regularity conditions) is generally
where the are arbitrary functions.
Using this result, the equations of motion become
where the are functions of coordinates and velocities that can be determined, in principle, from the second equation of motion above.
The Legendre transform between the Lagrangian formalism and the Hamiltonian formalism has been saved at the cost of adding new variables.
Consistency conditions
The equations of motion become more compact when using the Poisson bracket, since if is some function of the coordinates and momenta then
if one assumes that the Poisson bracket with the (functions of the velocity) exist; this causes no problems since the contribution weakly vanishes. Now, there are some consistency conditions which must be satisfied in order for this formalism to make sense. If the constraints are going to be satisfied, then their equations of motion must weakly vanish, that is, we require
There are four different types of conditions that can result from the above:
An equation that is inherently false, such as 1 = 0.
An equation that is identically true, possibly after using one of our primary constraints.
An equation that places new constraints on our coordinates and momenta, but is independent of the .
An equation that serves to specify the .
The first case indicates that the starting Lagrangian gives inconsistent equations of motion, such as . The second case does not contribute anything new.
The third case gives new constraints in phase space. A constraint derived in this manner is called a secondary constraint. Upon finding the secondary constraint one should add it to the extended Hamiltonian and check the new consistency conditions, which may result in still more constraints. Iterate this process until there are no more constraints. The distinction between primary and secondary constraints is largely an artificial one (i.e. a constraint for the same system can be primary or secondary depending on the Lagrangian), so this article does not distinguish between them from here on. Assuming the consistency condition has been iterated until all of the constraints have been found, then will index all of them. Note this article uses secondary constraint to mean any constraint that was not initially in the problem or derived from the definition of canonical momenta; some authors distinguish between secondary constraints, tertiary constraints, et cetera.
Finally, the last case helps fix the . If, at the end of this process, the are not completely determined, then that means there are unphysical (gauge) degrees of freedom in the system. Once all of the constraints (primary and secondary) are added to the naive Hamiltonian and the solutions to the consistency conditions for the are plugged in, the result is called the total Hamiltonian.
Determination of the
The uk must solve a set of inhomogeneous linear equations of the form
The above equation must possess at least one solution, since otherwise the initial Lagrangian is inconsistent; however, in systems with gauge degrees of freedom, the solution will not be unique. The most general solution is of the form
where is a particular solution and is the most general solution to the homogeneous equation
The most general solution will be a linear combination of linearly independent solutions to the above homogeneous equation. The number of linearly independent solutions equals the number of (which is the same as the number of constraints) minus the number of consistency conditions of the fourth type (in previous subsection). This is the number of unphysical degrees of freedom in the system. Labeling the linear independent solutions where the index runs from to the number of unphysical degrees of freedom, the general solution to the consistency conditions is of the form
where the are completely arbitrary functions of time. A different choice of the corresponds to a gauge transformation, and should leave the physical state of the system unchanged.
The total Hamiltonian
At this point, it is natural to introduce the total Hamiltonian
and what is denoted
The time evolution of a function on the phase space, , is governed by
Later, the extended Hamiltonian is introduced. For gauge-invariant (physically measurable quantities) quantities, all of the Hamiltonians should give the same time evolution, since they are all weakly equivalent. It is only for non gauge-invariant quantities that the distinction becomes important.
The Dirac bracket
Above is everything needed to find the equations of motion in Dirac's modified Hamiltonian procedure. Having the equations of motion, however, is not the endpoint for theoretical considerations. If one wants to canonically quantize a general system, then one needs the Dirac brackets. Before defining Dirac brackets, first-class and second-class constraints need to be introduced.
We call a function of coordinates and momenta first class if its Poisson bracket with all of the constraints weakly vanishes, that is,
for all . Note that the only quantities that weakly vanish are the constraints , and therefore anything that weakly vanishes must be strongly equal to a linear combination of the constraints. One can demonstrate that the Poisson bracket of two first-class quantities must also be first class. The first-class constraints are intimately connected with the unphysical degrees of freedom mentioned earlier. Namely, the number of independent first-class constraints is equal to the number of unphysical degrees of freedom, and furthermore, the primary first-class constraints generate gauge transformations. Dirac further postulated that all secondary first-class constraints are generators of gauge transformations, which turns out to be false; however, typically one operates under the assumption that all first-class constraints generate gauge transformations when using this treatment.
When the first-class secondary constraints are added into the Hamiltonian with arbitrary as the first-class primary constraints are added to arrive at the total Hamiltonian, then one obtains the extended Hamiltonian. The extended Hamiltonian gives the most general possible time evolution for any gauge-dependent quantities, and may actually generalize the equations of motion from those of the Lagrangian formalism.
For the purposes of introducing the Dirac bracket, of more immediate interest are the second class constraints. Second class constraints are constraints that have a nonvanishing Poisson bracket with at least one other constraint.
For instance, consider second-class constraints and whose Poisson bracket is simply a constant, ,
Now, suppose one wishes to employ canonical quantization, then the phase-space coordinates become operators whose commutators become times their classical Poisson bracket. Assuming there are no ordering issues that give rise to new quantum corrections, this implies that
where the hats emphasize the fact that the constraints are on operators.
On one hand, canonical quantization gives the above commutation relation, but on the other hand the constraints must vanish on physical states, whereas the right-hand side cannot vanish. This example illustrates the need for some generalization of the Poisson bracket which respects the system's constraints, and which leads to a consistent quantization procedure. This new bracket should be bilinear, antisymmetric, satisfy the Jacobi identity as does the Poisson bracket, reduce to the Poisson bracket for unconstrained systems, and, additionally, the bracket of any second-class constraint with any other quantity must vanish.
At this point, the second class constraints will be labeled . Define a matrix with entries
In this case, the Dirac bracket of two functions on phase space, and , is defined as
where the coefficients in the sum are the entries of the inverse of the matrix defined above. Dirac proved that this matrix is always invertible.
It is straightforward to check that the above definition of the Dirac bracket satisfies all of the desired properties, and especially the last one, of vanishing for an argument which is a second-class constraint.
When applying canonical quantization on a constrained Hamiltonian system, the commutator of the operators is supplanted by times their classical Dirac bracket. Since the Dirac bracket respects the constraints, one need not be careful about evaluating all brackets before using any weak equations, as is the case with the Poisson bracket.
Note that while the Poisson bracket of bosonic (Grassmann even) variables with itself must vanish, the Poisson bracket of fermions represented as a Grassmann variables with itself need not vanish. This means that in the fermionic case it is possible for there to be an odd number of second class constraints.
Illustration on the example provided
Returning to the above example, the naive Hamiltonian and the two primary constraints are
Therefore, the extended Hamiltonian can be written
The next step is to apply the consistency conditions , which in this case become
These are not secondary constraints, but conditions that fix and . Therefore, there are no secondary constraints and the arbitrary coefficients are completely determined, indicating that there are no unphysical degrees of freedom.
If one plugs in with the values of and , then one can see that the equations of motion are
which are self-consistent and coincide with the Lagrangian equations of motion.
A simple calculation confirms that and are second class constraints since
hence the matrix looks like
which is easily inverted to
where is the Levi-Civita symbol. Thus, the Dirac brackets are defined to be
If one always uses the Dirac bracket instead of the Poisson bracket, then there is no issue about the order of applying constraints and evaluating expressions, since the Dirac bracket of anything weakly zero is strongly equal to zero. This means that one can just use the naive Hamiltonian with Dirac brackets, instead, to thus get the correct equations of motion, which one can easily confirm on the above ones.
To quantize the system, the Dirac brackets between all of the phase space variables are needed. The nonvanishing Dirac brackets for this system are
while the cross-terms vanish, and
Therefore, the correct implementation of canonical quantization dictates the commutation relations,
with the cross terms vanishing, and
This example has a nonvanishing commutator between and , which means this structure specifies a noncommutative geometry. (Since the two coordinates do not commute, there will be an uncertainty principle for the and positions.)
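The Dirac-bracket algebra of this example can be verified mechanically with a computer algebra system. The sketch below is an assumed reconstruction rather than a transcription of the formulas above: it takes the symmetric gauge and lumps the physical constants into a single abbreviation k (standing in for qB/2c), so the specific factors of 2 are tied to that choice.

```python
import sympy as sp

x, y, px, py, k = sp.symbols('x y p_x p_y k', real=True)
coords, momenta = [x, y], [px, py]

def poisson(f, g):
    """Canonical Poisson bracket {f, g} in the variables (x, y, p_x, p_y)."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(coords, momenta))

# Primary constraints of the large-B Lagrangian (symmetric gauge, k = qB/2c assumed).
phi = [px + k * y, py - k * x]

# Matrix of Poisson brackets of the second-class constraints, and its inverse.
M = sp.Matrix(2, 2, lambda a, b: poisson(phi[a], phi[b]))
Minv = M.inv()

def dirac(f, g):
    """Dirac bracket {f, g}_D = {f, g} - {f, phi_a} (M^-1)_ab {phi_b, g}."""
    return sp.simplify(poisson(f, g)
                       - sum(poisson(f, phi[a]) * Minv[a, b] * poisson(phi[b], g)
                             for a in range(2) for b in range(2)))

print(M)                 # Matrix([[0, 2*k], [-2*k, 0]])
print(dirac(x, y))       # -1/(2*k): the two coordinates no longer commute
print(dirac(x, px))      # 1/2
print(dirac(phi[0], x))  # 0: any second-class constraint has vanishing Dirac bracket
```

With this choice, the nonvanishing bracket between x and y is the classical counterpart of the noncommuting position operators mentioned above.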
Further Illustration for a hypersphere
Similarly, for free motion on a hypersphere Sⁿ, the n + 1 coordinates are constrained, x · x = 1. From a plain kinetic Lagrangian, it is evident that their momenta are perpendicular to them, x · p = 0. Thus the corresponding Dirac brackets are likewise simple to work out,
The constrained phase-space variables obey much simpler Dirac brackets than the plain Poisson brackets that the unconstrained variables would obey, had one eliminated one of the coordinates and one of the momenta through the two constraints ab initio. The Dirac brackets add simplicity and elegance, at the cost of excessive (constrained) phase-space variables.
For example, for free motion on a circle, , for and eliminating from the circle constraint yields the unconstrained
with equations of motion
an oscillation; whereas the equivalent constrained system with yields
whence, instantly, virtually by inspection, oscillation for both variables,
See also
Canonical quantization
Hamiltonian mechanics
Poisson bracket
Moyal bracket
First class constraint
Second class constraints
Lagrangian
Symplectic structure
Overcompleteness
References
Mathematical quantization
Symplectic geometry
Hamiltonian mechanics | Dirac bracket | [
"Physics",
"Mathematics"
] | 3,662 | [
"Theoretical physics",
"Classical mechanics",
"Quantum mechanics",
"Hamiltonian mechanics",
"Mathematical quantization",
"Dynamical systems"
] |
13,662,027 | https://en.wikipedia.org/wiki/Colloid%20vibration%20current | Colloid vibration current is an electroacoustic phenomenon that arises when ultrasound propagates through a fluid that contains ions and either solid particles or emulsion droplets.
The pressure gradient in an ultrasonic wave moves particles relative to the fluid. This motion disturbs the double layer that exists at the particle-fluid interface. The picture illustrates the mechanism of this distortion. Practically all particles in fluids carry a surface charge. This surface charge is screened with an equally charged diffuse layer; this structure is called the double layer. Ions of the diffuse layer are located in the fluid and can move with the fluid. Fluid motion relative to the particle drags these diffuse ions in the direction of one or the other of the particle's poles. The picture shows ions dragged towards the left hand pole. As a result of this drag, there is an excess of negative ions in the vicinity of the left hand pole and an excess of positive surface charge at the right hand pole. As a result of this charge excess, particles gain a dipole moment. These dipole moments generate an electric field that in turn generates measurable electric current. This phenomenon is widely used for measuring zeta potential in concentrated colloids.
See also
Electric sonic amplitude
Electroacoustic phenomena
Interface and colloid science
Zeta potential
References
Chemical mixtures
Colloidal chemistry
Soft matter | Colloid vibration current | [
"Physics",
"Chemistry",
"Materials_science"
] | 268 | [
"Materials science stubs",
"Colloidal chemistry",
"Soft matter",
"Colloids",
"Surface science",
"Chemical mixtures",
"Condensed matter physics",
"nan",
"Condensed matter stubs"
] |
13,662,732 | https://en.wikipedia.org/wiki/Rydberg%20matter | Rydberg matter is an exotic phase of matter formed by Rydberg atoms; it was predicted around 1980 by É. A. Manykin, M. I. Ozhovan and P. P. Poluéktov. It has been formed from various elements like caesium, potassium, hydrogen and nitrogen; studies have been conducted on theoretical possibilities like sodium, beryllium, magnesium and calcium. It has been suggested to be a material that diffuse interstellar bands may arise from. Circular Rydberg states, where the outermost electron is found in a planar circular orbit, are the most long-lived, with lifetimes of up to several hours, and are the most common.
Physical
Rydberg matter consists of usually hexagonal planar clusters; these cannot be very big because of the retardation effect caused by the finite velocity of the speed of light. Hence, they are not gases or plasmas; nor are they solids or liquids; they are most similar to dusty plasmas with small clusters in a gas. Though Rydberg matter can be studied in the laboratory by laser probing, the largest cluster reported consists of only 91 atoms, but it has been shown to be behind extended clouds in space and the upper atmosphere of planets. Bonding in Rydberg matter is caused by delocalisation of the high-energy electrons to form an overall lower energy state. The way in which the electrons delocalise is to form standing waves on loops surrounding nuclei, creating quantised angular momentum and the defining characteristics of Rydberg matter. It is a generalised metal by way of the quantum numbers influencing loop size but restricted by the bonding requirement for strong electron correlation; it shows exchange-correlation properties similar to covalent bonding. Electronic excitation and vibrational motion of these bonds can be studied by Raman spectroscopy.
Lifetime
Due to reasons still debated by the physics community because of the lack of methods to observe clusters, Rydberg matter is highly stable against disintegration by emission of radiation; the characteristic lifetime of a cluster at n = 12 is 25 seconds. Reasons given include the lack of overlap between excited and ground states, the forbidding of transitions between them and exchange-correlation effects hindering emission through necessitating tunnelling that causes a long delay in excitation decay. Excitation plays a role in determining lifetimes, with a higher excitation giving a longer lifetime; n = 80 gives a lifetime comparable to the age of the Universe.
Excitations
In ordinary metals, interatomic distances are nearly constant through a wide range of temperatures and pressures; this is not the case with Rydberg matter, whose distances and thus properties vary greatly with excitations. A key variable in determining these properties is the principal quantum number n that can be any integer greater than 1; the highest values reported for it are around 100. Bond distance d in Rydberg matter is given by
d ≈ 2.9 n² a0,

where a0 is the Bohr radius. The approximate factor 2.9 was first experimentally determined, then measured with rotational spectroscopy in different clusters. Examples of d calculated this way, along with selected values of the density D, are given in the adjacent table.
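A quick numerical check of the scaling d ≈ 2.9 n² a0 is shown below; the excitation levels chosen are merely illustrative, and the densities from the original table are not reproduced.

```python
a0 = 5.29177e-11  # Bohr radius in metres

for n in (4, 12, 40, 80):
    d = 2.9 * n**2 * a0
    print(f"n = {n:3d}:  d ~ {d * 1e9:10.3f} nm")
```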
Condensation
Like bosons that can be condensed to form Bose–Einstein condensates, Rydberg matter can be condensed, but not in the same way as bosons. The reason for this is that Rydberg matter behaves similarly to a gas, meaning that it cannot be condensed without removing the condensation energy; ionisation occurs if this is not done. All solutions to this problem so far involve using an adjacent surface in some way, the best being evaporating the atoms of which the Rydberg matter is to be formed from and leaving the condensation energy on the surface. Using caesium atoms, graphite-covered surfaces and thermionic converters as containment, the work function of the surface has been measured to be 0.5 eV, indicating that the cluster is between the ninth and fourteenth excitation levels.
See also
The overview provides information on Rydberg matter and possible applications in developing clean energy, catalysts, researching space phenomena, and usage in sensors.
State of matter
Disputed
The research claiming to create ultradense hydrogen Rydberg matter (with interatomic spacing of ~2.3 pm: many orders of magnitude less than in most solid matter) is disputed:
″The paper of Holmlid and Zeiner-Gundersen makes claims that would be truly revolutionary if they were true. We have shown that they violate some fundamental and very well established laws in a rather direct manner. We believe we share this scepticism with most of the scientific community. The response to the theories of Holmlid is perhaps most clearly reflected in the reference list of their article. Out of 114 references, 36 are not coauthored by Holmlid. And of these 36, none address the claims made by him and his co-authors. This is so much more remarkable because the claims, if correct, would revolutionize quantum science, add at least two new forms of hydrogen, of which one is supposedly the ground state of the element, discover an extremely dense form of matter, discover processes that violate baryon number conservation, in addition to solving humanity’s need for energy practically in perpetuity.″
References
Condensed matter physics | Rydberg matter | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,080 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
16,381,455 | https://en.wikipedia.org/wiki/Lieb%27s%20square%20ice%20constant | Lieb's square ice constant is a mathematical constant used in the field of combinatorics to quantify the number of Eulerian orientations of grid graphs. It was introduced by Elliott H. Lieb in 1967.
Definition
An n × n grid graph (with periodic boundary conditions and n ≥ 2) has n2 vertices and 2n2 edges; it is 4-regular, meaning that each vertex has exactly four neighbors. An orientation of this graph is an assignment of a direction to each edge; it is an Eulerian orientation if it gives each vertex exactly two incoming edges and exactly two outgoing edges.
Denote the number of Eulerian orientations of this graph by f(n). Then
lim(n→∞) f(n)^(1/n²) = (4/3)^(3/2) = 8√3/9 ≈ 1.5396007…

is Lieb's square ice constant. Lieb used a transfer-matrix method to compute this exactly.
The function f(n) also counts the number of 3-colorings of grid graphs, the number of nowhere-zero 3-flows in 4-regular graphs, and the number of local flat foldings of the Miura fold. Some historical and physical background can be found in the article Ice-type model.
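For very small grids the definition can be checked directly by brute force. The sketch below (exponential in the number of edges, so only usable for n = 2 or 3) enumerates all orientations of the toroidal grid and counts those in which every vertex has exactly two outgoing edges; the quantity f(n)^(1/n²) then creeps toward the constant as n grows.

```python
from itertools import product

def eulerian_orientations(n):
    """Count Eulerian orientations of the n x n grid graph with periodic boundaries."""
    verts = [(i, j) for i in range(n) for j in range(n)]
    edges = []
    for i, j in verts:
        edges.append(((i, j), (i, (j + 1) % n)))   # edge to the right neighbour
        edges.append(((i, j), ((i + 1) % n, j)))   # edge to the lower neighbour
    count = 0
    for orientation in product((0, 1), repeat=len(edges)):
        outdeg = {v: 0 for v in verts}
        for (u, v), flip in zip(edges, orientation):
            outdeg[v if flip else u] += 1
        if all(d == 2 for d in outdeg.values()):   # two out (hence two in) at every vertex
            count += 1
    return count

for n in (2, 3):
    f = eulerian_orientations(n)
    print(n, f, f ** (1.0 / n**2))   # last column approaches (4/3)^(3/2) ≈ 1.5396
```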
See also
Spin ice
Ice-type model
References
Mathematical constants
Quadratic irrational numbers | Lieb's square ice constant | [
"Physics",
"Materials_science",
"Mathematics"
] | 248 | [
"Mathematical objects",
"Enumerative combinatorics",
"Lattice models",
"Combinatorics",
"Computational physics",
"Condensed matter physics",
"nan",
"Statistical mechanics",
"Mathematical constants",
"Numbers"
] |
16,384,086 | https://en.wikipedia.org/wiki/Job-shop%20scheduling | Job-shop scheduling, the job-shop problem (JSP) or job-shop scheduling problem (JSSP) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. In a general job scheduling problem, we are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m machines with varying processing power, while trying to minimize the makespan – the total length of the schedule (that is, when all the jobs have finished processing). In the specific variant known as job-shop scheduling, each job consists of a set of operations O1, O2, ..., On which need to be processed in a specific order (known as precedence constraints). Each operation has a specific machine that it needs to be processed on and only one operation in a job can be processed at a given time. A common relaxation is the flexible job shop, where each operation can be processed on any machine of a given set (the machines in each set are identical).
The name originally came from the scheduling of jobs in a job shop, but the theme has wide applications beyond that type of instance. This problem is one of the best known combinatorial optimization problems, and was the first problem for which competitive analysis was presented, by Graham in 1966. The best problem instances for a basic model with a makespan objective are due to Taillard.
In the standard three-field notation for optimal job scheduling problems, the job-shop variant is denoted by J in the first field. For example, the problem denoted by "J3 | p_ij = 1 | C_max" is a 3-machines job-shop problem with unit processing times, where the goal is to minimize the maximum completion time.
Problem variations
Many variations of the problem exist, including the following:
Machines can have duplicates (flexible job shop with duplicate machines) or belong to groups of identical machines (flexible job shop).
Machines can require a certain gap between jobs or no idle-time.
Machines can have sequence-dependent setups.
The objective function can be to minimize the makespan, the Lp norm, tardiness, maximum lateness, etc. It can also be a multi-objective optimization problem.
Jobs may have constraints, for example a job i needs to finish before job j can be started (see workflow). Also, the objective function can be multi-criteria.
Set of jobs can relate to different set of machines.
Deterministic (fixed) processing times or probabilistic processing times.
NP-hardness
Since the traveling salesman problem is NP-hard, the job-shop problem with sequence-dependent setup is clearly also NP-hard since the TSP is a special case of the JSP with a single job (the cities are the machines and the salesman is the job).
Problem representation
The disjunctive graph is one of the popular models used for describing the job-shop scheduling problem instances.
A mathematical statement of the problem can be made as follows:
Let M = {M_1, M_2, …, M_m} and J = {J_1, J_2, …, J_n} be two finite sets. On account of the industrial origins of the problem, the M_i are called machines and the J_j are called jobs.
Let X denote the set of all sequential assignments of jobs to machines, such that every job is done by every machine exactly once; elements x of X may be written as n × m matrices, in which column i lists the jobs that machine M_i will do, in order. For example, the matrix
x = ( J_1 J_2
J_2 J_3
J_3 J_1 )
means that machine M_1 will do the three jobs J_1, J_2, J_3 in the order J_1, J_2, J_3, while machine M_2 will do the jobs in the order J_2, J_3, J_1.
Suppose also that there is some cost function C : X → [0, +∞]. The cost function may be interpreted as a "total processing time", and may have some expression in terms of times C_ij, the cost/time for machine M_i to do job J_j.
The job-shop problem is to find an assignment of jobs x ∈ X such that C(x) is a minimum, that is, there is no y ∈ X such that C(x) > C(y).
Scheduling efficiency
Scheduling efficiency can be defined for a schedule through the ratio of total machine idle time to the total processing time as below:
Here is the idle time of machine , is the makespan and is the number of machines. Notice that with the above definition, scheduling efficiency is simply the makespan normalized to the number of machines and the total processing time. This makes it possible to compare the usage of resources across JSP instances of different size.
The problem of infinite cost
One of the first problems that must be dealt with in the JSP is that many proposed solutions have infinite cost: i.e., there exists x ∈ X such that C(x) = +∞. In fact, it is quite simple to concoct examples of such x by ensuring that two machines will deadlock, so that each waits for the output of the other's next step.
Major results
Graham had already provided the List scheduling algorithm in 1966, which is (2 - 1/m)-competitive, where m is the number of machines. Also, it was proved that List scheduling is an optimum online algorithm for 2 and 3 machines. The Coffman–Graham algorithm (1972) for uniform-length jobs is also optimum for two machines, and is (2 - 2/m)-competitive. In 1992, Bartal, Fiat, Karloff and Vohra presented an algorithm that is 1.986-competitive. A 1.945-competitive algorithm was presented by Karger, Philips and Torng in 1994. In 1992, Albers provided a different algorithm that is 1.923-competitive. Currently, the best known result is an algorithm given by Fleischer and Wahl, which achieves a competitive ratio of 1.9201.
A lower bound of 1.852 was presented by Albers.
Taillard instances play an important role in developing job-shop scheduling with a makespan objective.
In 1976 Garey provided a proof that this problem is NP-complete for m>2, that is, no optimal solution can be computed in deterministic polynomial time for three or more machines (unless P=NP).
In 2011 Xin Chen et al. provided optimal algorithms for online scheduling on two related machines improving previous results.
Offline makespan minimization
Atomic jobs
The simplest form of the offline makespan minimisation problem deals with atomic jobs, that is, jobs that are not subdivided into multiple operations. It is equivalent to packing a number of items of various different sizes into a fixed number of bins, such that the maximum bin size needed is as small as possible. (If instead the number of bins is to be minimised, and the bin size is fixed, the problem becomes a different problem, known as the bin packing problem.)
Dorit S. Hochbaum and David Shmoys presented a polynomial-time approximation scheme in 1987 that finds an approximate solution to the offline makespan minimisation problem with atomic jobs to any desired degree of accuracy.
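Graham's list scheduling rule mentioned under Major results applies directly to this atomic-job setting: assign each job, in the order given, to the machine that is currently least loaded. The Python sketch below (the job data are hypothetical) illustrates that greedy rule rather than the Hochbaum–Shmoys scheme; sorting the jobs by decreasing processing time before scheduling gives the longest-processing-time (LPT) variant.
import heapq

def list_schedule(processing_times, m):
    # Greedy list scheduling: each job, in the given order, goes to the
    # machine that is currently least loaded.
    loads = [(0, machine) for machine in range(m)]
    heapq.heapify(loads)
    assignment = []
    for job, p in enumerate(processing_times):
        load, machine = heapq.heappop(loads)       # least-loaded machine
        assignment.append((job, machine))
        heapq.heappush(loads, (load + p, machine))
    makespan = max(load for load, _ in loads)
    return makespan, assignment

jobs = [5, 8, 9, 3, 4, 2, 6]
print(list_schedule(jobs, m=3))                         # arbitrary job order
print(list_schedule(sorted(jobs, reverse=True), m=3))   # LPT variant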
Jobs consisting of multiple operations
The basic form of the problem of scheduling jobs with multiple (M) operations, over M machines, such that all of the first operations must be done on the first machine, all of the second operations on the second, etc., and a single job cannot be performed in parallel, is known as the flow-shop scheduling problem. Various algorithms exist, including genetic algorithms.
Johnson's algorithm
A heuristic algorithm by S. M. Johnson can be used to solve the case of a 2 machine N job problem when all jobs are to be processed in the same order. The steps of algorithm are as follows:
Job Pi has two operations, of duration Pi1, Pi2, to be done on Machine M1, M2 in that sequence.
Step 1. List A = { 1, 2, …, N }, List L1 = {}, List L2 = {}.
Step 2. From all available operation durations, pick the minimum.
If the minimum belongs to Pk1,
Remove K from list A; Add K to end of List L1.
If minimum belongs to Pk2,
Remove K from list A; Add K to beginning of List L2.
Step 3. Repeat Step 2 until List A is empty.
Step 4. Join List L1, List L2. This is the optimum sequence.
Johnson's method only works optimally for two machines. However, since it is optimal and easy to compute, some researchers have tried to adapt it for M machines (M > 2).
The idea is as follows: Imagine that each job requires m operations in sequence, on M1, M2 … Mm. We combine the first m/2 machines into an (imaginary) Machining center, MC1, and the remaining Machines into a Machining Center MC2. Then the total processing time for a Job P on MC1 = sum( operation times on first m/2 machines), and processing time for Job P on MC2 = sum(operation times on last m/2 machines).
By doing so, we have reduced the m-Machine problem into a Two Machining center scheduling problem. We can solve this using Johnson's method.
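A direct Python transcription of the two-machine steps above might look as follows; the job durations are hypothetical, and for m > 2 machines the operation times would first be collapsed onto the two imaginary machining centers as just described.
def johnson_two_machine(jobs):
    # jobs: dict mapping job name -> (time on M1, time on M2).
    # Returns a sequence of job names minimising the makespan on two machines.
    remaining = dict(jobs)
    front, back = [], []            # List L1 and List L2 from the steps above
    while remaining:
        # pick the smallest remaining operation time over both machines
        job, (p1, p2) = min(remaining.items(), key=lambda kv: min(kv[1]))
        if p1 <= p2:
            front.append(job)       # minimum on M1: schedule as early as possible
        else:
            back.insert(0, job)     # minimum on M2: schedule as late as possible
        del remaining[job]
    return front + back

example = {"A": (3, 2), "B": (5, 1), "C": (1, 4), "D": (6, 6)}
print(johnson_two_machine(example))   # gives ['C', 'D', 'A', 'B']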
Makespan prediction
Machine learning has been recently used to predict the optimal makespan of a JSP instance without actually producing the optimal schedule. Preliminary results show an accuracy of around 80% when supervised machine learning methods were applied to classify small randomly generated JSP instances based on their optimal scheduling efficiency compared to the average.
Example
Here is an example of a job-shop scheduling problem formulated in AMPL as a mixed-integer programming problem with indicator constraints:
# problem dimensions (values are supplied in the data section below)
param N_JOBS;
param N_MACHINES;
set JOBS ordered = 1..N_JOBS;
set MACHINES ordered = 1..N_MACHINES;
# ProcessingTime[i,j]: time required by job i on machine j (machines are visited in index order)
param ProcessingTime{JOBS, MACHINES} > 0;
# CumulativeTime[i,j]: total time job i needs on machines 1 through j
param CumulativeTime{i in JOBS, j in MACHINES} =
sum {jj in MACHINES: ord(jj) <= ord(j)} ProcessingTime[i,jj];
# TimeOffset[i1,i2]: minimum gap between the start times of jobs i1 and i2 when i1 precedes i2
param TimeOffset{i1 in JOBS, i2 in JOBS: i1 <> i2} =
max {j in MACHINES}
(CumulativeTime[i1,j] - CumulativeTime[i2,j] + ProcessingTime[i2,j]);
var end >= 0;   # makespan
var start{JOBS} >= 0;   # start time of each job
var precedes{i1 in JOBS, i2 in JOBS: ord(i1) < ord(i2)} binary;   # 1 if job i1 precedes job i2
minimize makespan: end;
# the makespan is at least the completion time of every job
subj to makespan_def{i in JOBS}:
end >= start[i] + sum{j in MACHINES} ProcessingTime[i,j];
# indicator constraints: for every pair of jobs, one must precede the other by at least the offset
subj to no12_conflict{i1 in JOBS, i2 in JOBS: ord(i1) < ord(i2)}:
precedes[i1,i2] ==> start[i2] >= start[i1] + TimeOffset[i1,i2];
subj to no21_conflict{i1 in JOBS, i2 in JOBS: ord(i1) < ord(i2)}:
!precedes[i1,i2] ==> start[i1] >= start[i2] + TimeOffset[i2,i1];
data;
param N_JOBS := 4;
param N_MACHINES := 4;
param ProcessingTime:
1 2 3 4 :=
1 5 4 2 1
2 8 3 6 2
3 9 7 2 3
4 3 1 5 8;
Related problems
Flow-shop scheduling is a similar problem but without the constraint that each operation must be done on a specific machine (only the order constraint is kept).
Open-shop scheduling is a similar problem but also without the order constraint.
See also
Disjunctive graph
Dynamic programming
Genetic algorithm scheduling
List of NP-complete problems
Optimal control
Scheduling (production processes)
References
External links
University of Vienna Directory of methodologies, systems and software for dynamic optimization.
Taillard instances
Brucker P. Scheduling Algorithms. Heidelberg, Springer. Fifth ed.
Optimal scheduling
NP-complete problems | Job-shop scheduling | [
"Mathematics",
"Engineering"
] | 2,461 | [
"Optimal scheduling",
"Industrial engineering",
"Computational problems",
"Mathematical problems",
"NP-complete problems"
] |
16,384,773 | https://en.wikipedia.org/wiki/Nonequilibrium%20partition%20identity | The nonequilibrium partition identity (NPI) is a remarkably simple and elegant consequence of the fluctuation theorem previously known as the Kawasaki identity:
⟨exp(−Ω_t)⟩ = 1 for all times t,
where Ω_t is the dissipation function integrated over a trajectory of duration t and the angle brackets denote the ensemble average (Carberry et al. 2004). Thus, in spite of the second law inequality, which might lead one to expect that the average would decay exponentially with time, the exponential probability ratio given by the FT exactly cancels the negative exponential in the average above, leading to an average which is unity for all time.
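A purely illustrative numerical check, not tied to any particular dynamics, can be made by assuming that the integrated dissipation has a Gaussian distribution obeying the fluctuation-theorem symmetry p(Ω)/p(−Ω) = exp(Ω), which for a Gaussian forces the variance to equal twice the mean. The short Python sketch below then recovers the identity to within sampling error, even though the mean dissipation is strictly positive.
import numpy as np

rng = np.random.default_rng(0)
for mean in (0.5, 1.0, 2.0):                 # hypothetical mean dissipation values
    variance = 2.0 * mean                    # FT symmetry for a Gaussian distribution
    omega = rng.normal(mean, np.sqrt(variance), size=2_000_000)
    print(mean, np.exp(-omega).mean())       # each average stays close to 1
As the mean dissipation grows, the average becomes dominated by increasingly rare trajectories with negative dissipation, which is why the identity is difficult to verify numerically or experimentally for strongly dissipative systems.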
The first derivation of the nonequilibrium partition identity for Hamiltonian systems was by Yamada and Kawasaki in 1967. For thermostatted deterministic systems the first derivation was by Morriss and Evans in 1985.
Bibliography
See also
Fluctuation theorem – Provides an equality that quantifies fluctuations in time averaged entropy production in a wide variety of nonequilibrium systems
Crooks fluctuation theorem – Provides a fluctuation theorem between two equilibrium states; implies the Jarzynski equality
External links
Statistical mechanics
Non-equilibrium thermodynamics
Equations | Nonequilibrium partition identity | [
"Physics",
"Chemistry",
"Mathematics"
] | 208 | [
"Thermodynamics stubs",
"Statistical mechanics stubs",
"Non-equilibrium thermodynamics",
"Mathematical objects",
"Equations",
"Thermodynamics",
"Statistical mechanics",
"Physical chemistry stubs",
"Dynamical systems"
] |
16,393,338 | https://en.wikipedia.org/wiki/Goodman%20relation | Within the branch of materials science known as material failure theory, the Goodman relation (also called a Goodman diagram, a Goodman-Haigh diagram, a Haigh diagram or a Haigh-Soderberg diagram) is an equation used to quantify the interaction of mean and alternating stresses on the fatigue life of a material. The equation is typically presented as a linear curve of mean stress vs. alternating stress that provides the maximum number of alternating stress cycles a material will withstand before failing from fatigue.
A scatterplot of experimental data shown on an amplitude versus mean stress plot can often be approximated by a parabola known as the Gerber line, which can in turn be (conservatively) approximated by a straight line called the Goodman line.
Mathematical description
The relations can be represented mathematically as:
(n σ_a)/S_e + ((n σ_m)/S_ut)² = 1 , Gerber Line (parabola)
σ_a/S_e + σ_m/S_ut = 1/n , Goodman Line
σ_a/S_e + σ_m/S_y = 1/n , Soderberg Line
where σ_a is the stress amplitude, σ_m is the mean stress, S_e is the fatigue limit for completely reversed loading, S_ut is the ultimate tensile strength of the material and n is the factor of safety.
The Gerber parabola indicates the region just beneath the failure points observed during experiments.
The Goodman line connects S_ut on the abscissa and S_e on the ordinate. The Goodman line is a more conservative criterion than the Gerber parabola because it lies entirely inside the Gerber parabola and excludes some of the area that is near the failure region.
The Soderberg line connects S_y on the abscissa and S_e on the ordinate, which is an even more conservative, and therefore safer, criterion. S_y is the yield strength of the material.
The general trend given by the Goodman relation is one of decreasing fatigue life with increasing mean stress for a given level of alternating stress. The relation can be plotted to determine the safe cyclic loading of a part; if the coordinate given by the mean stress and the alternating stress lies under the curve given by the relation, then the part will survive. If the coordinate is above the curve, then the part will fail for the given stress parameters.
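As an illustration of the last point, the Goodman relation is easy to apply programmatically. The Python sketch below uses hypothetical material values and returns the factor of safety n implied by the Goodman line as written above; n > 1 means the point (σ_m, σ_a) lies under the line and the part is predicted to survive.
def goodman_safety_factor(sigma_a, sigma_m, S_e, S_ut):
    # Factor of safety n from sigma_a/S_e + sigma_m/S_ut = 1/n
    return 1.0 / (sigma_a / S_e + sigma_m / S_ut)

# Hypothetical steel: endurance limit 240 MPa, ultimate tensile strength 600 MPa
S_e, S_ut = 240.0, 600.0
for sigma_m, sigma_a in [(100.0, 120.0), (300.0, 150.0)]:
    n = goodman_safety_factor(sigma_a, sigma_m, S_e, S_ut)
    print(sigma_m, sigma_a, round(n, 2), "safe" if n > 1.0 else "fails")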
References
Bibliography
Goodman, J., Mechanics Applied to Engineering, Longman, Green & Company, London, 1899.
Hertzberg, Richard W., Deformation and Fracture Mechanics and Engineering Materials. John Wiley and Sons, Hoboken, NJ: 1996.
Mars, W.V., Computed dependence of rubber's fatigue behavior on strain crystallization. Rubber Chemistry and Technology, 82(1), 51–61. 2009.
Further reading
External links
Fatpack. Fatigue analysis in python with Goodman mean stress correction implementation.
Materials science
Fracture mechanics
Rubber properties | Goodman relation | [
"Physics",
"Materials_science",
"Engineering"
] | 520 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Fracture mechanics",
"Materials science",
"nan",
"Materials degradation"
] |
16,400,133 | https://en.wikipedia.org/wiki/S%20Arae | S Arae (S Ara) is an RR Lyrae-type pulsating variable star in the constellation of Ara. It has an apparent visual magnitude which varies between 9.92 and 11.24 during its 10.85-hour pulsation period, and it exhibits the Blazhko effect.
In 1896 David Gill and Jacobus Kapteyn announced that the variability of the as yet unnamed star was "all but proved" by the Cape Carte du Ciel photographic plates. In 1900, Robert T. A. Innes confirmed that the star, by then named CPD-49 10361, is a variable. It was listed with its modern variable star designation, S Arae, in Annie Jump Cannon's 1907 Second Catalogue of Variable Stars.
It was originally thought that S Arae was a binary whose brightness changes were caused by eclipses. In 1918, Harlow Shapley included it within the Cepheid variable star class. By 1939 it had been classified as an RR Lyrae variable.
S Arae's large negative declination makes it a circumpolar star in Antarctica. Such a star can be monitored continuously for much of the southern hemisphere's winter, allowing a long period of observation without gaps due to daylight. It was the first star to be monitored that way at Dome C. RRab type stars, like S Arae, are fundamental mode pulsating stars that have asymmetric light curves which rise to maximum brightness rapidly then fade more slowly. The Blazhko effect modulation period for this star is 47.264 days (about 105 times longer than the main pulsation period), and three other periodicities have been detected in the light curve.
References
Ara (constellation)
RR Lyrae variables
A-type bright giants
088064
Arae, S
Durchmusterung objects | S Arae | [
"Astronomy"
] | 384 | [
"Constellations",
"Ara (constellation)"
] |
11,112,693 | https://en.wikipedia.org/wiki/Virtual%20temperature | In atmospheric thermodynamics, the virtual temperature () of a moist air parcel is the temperature at which a theoretical dry air parcel would have a total pressure and density equal to the moist parcel of air.
The virtual temperature of unsaturated moist air is always greater than the absolute air temperature; the presence of suspended cloud droplets, however, reduces the virtual (density) temperature.
The virtual temperature effect is also known as the vapor buoyancy effect. It has been described to increase Earth's thermal emission by warming the tropical atmosphere.
Introduction
Description
In atmospheric thermodynamic processes, it is often useful to assume air parcels behave approximately adiabatically, and approximately ideally. The specific gas constant for the standardized mass of one kilogram of a particular gas is variable, and described mathematically as
R_x = R* / M_x,
where R* is the molar gas constant, and M_x is the apparent molar mass of gas x in kilograms per mole. The apparent molar mass of a theoretical moist parcel in Earth's atmosphere can be defined in components of water vapor and dry air as
M_air = (e M_v + p_d M_d) / p,
with e being the partial pressure of water, p_d the dry air pressure, and M_v and M_d representing the molar masses of water vapor and dry air respectively. The total pressure p is described by Dalton's law of partial pressures:
p = p_d + e.
Purpose
Rather than carry out these calculations, it is convenient to scale another quantity within the ideal gas law to equate the pressure and density of a dry parcel to a moist parcel. The only variable quantity of the ideal gas law independent of density and pressure is temperature. This scaled quantity is known as virtual temperature, and it allows for the use of the dry-air equation of state for moist air. Temperature has an inverse proportionality to density. Thus, analytically, a higher vapor pressure would yield a lower density, which should yield a higher virtual temperature in turn.
Derivation
Consider a moist air parcel containing masses m_d and m_v of dry air and water vapor in a given volume V. The density is given by
ρ = (m_d + m_v) / V = ρ_d + ρ_v,
where ρ_d and ρ_v are the densities the dry air and water vapor would respectively have when occupying the volume of the air parcel. Rearranging the standard ideal gas equation with these variables gives
e = ρ_v R_v T
and
p_d = ρ_d R_d T.
Solving for the densities in each equation and combining with the law of partial pressures yields
ρ = (p − e) / (R_d T) + e / (R_v T).
Then, solving for p and using ε = R_d / R_v = M_v / M_d, which is approximately 0.622 in Earth's atmosphere:
p = ρ R_d T_v,
where the virtual temperature is
T_v = T / (1 − (e/p)(1 − ε)).
We now have a non-linear scalar for temperature dependent purely on the unitless value e/p, allowing for varying amounts of water vapor in an air parcel. This virtual temperature in units of kelvin can be used seamlessly in any thermodynamic equation necessitating it.
Variations
Often the more easily accessible atmospheric parameter is the mixing ratio r. Through expansion upon the definition of vapor pressure in the law of partial pressures as presented above and the definition of mixing ratio:
e/p = r / (r + ε),
which allows
T_v = T (r + ε) / (ε (1 + r)) = T (1 + r/ε) / (1 + r).
Algebraic expansion of that equation, ignoring higher orders of r due to its typically small magnitude in Earth's atmosphere, and substituting ε with its constant value yields the linear approximation
T_v ≈ T (1 + 0.608 r),
with the mixing ratio r expressed in g/g.
An approximate conversion using T in degrees Celsius and mixing ratio r in g/kg is
T_v ≈ T + r/6.
Knowing that specific humidity is given in terms of mixing ratio as q = r / (1 + r), then we can write mixing ratio in terms of the specific humidity as r = q / (1 − q).
We can now write the virtual temperature in terms of specific humidity as
T_v = T (1 + q / (ε (1 − q))) (1 − q).
Simplifying the above will reduce to
T_v = T (1 + q (1 − ε)/ε),
and using the value of ε ≈ 0.622, then we can write
T_v ≈ T (1 + 0.608 q).
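For practical work the relations above are straightforward to code. The following Python sketch (the variable names and sample values are illustrative only) evaluates both the exact expression in terms of the mixing ratio and the common linear approximation, assuming ε = 0.622.
EPSILON = 0.622  # ratio of gas constants R_d / R_v (equivalently the molar mass ratio M_v / M_d)

def virtual_temperature(T_kelvin, r):
    # Exact virtual temperature from absolute temperature and mixing ratio r (kg/kg)
    return T_kelvin * (1.0 + r / EPSILON) / (1.0 + r)

def virtual_temperature_approx(T_kelvin, r):
    # Linear approximation T_v ≈ T (1 + 0.608 r)
    return T_kelvin * (1.0 + 0.608 * r)

T = 300.0   # K
r = 0.012   # 12 g/kg of water vapour, a typical warm, humid value
print(virtual_temperature(T, r), virtual_temperature_approx(T, r))
# both give roughly 302.2 K, about 2 K warmer than the actual temperature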
Virtual potential temperature
Virtual potential temperature is similar to potential temperature in that it removes the temperature variation caused by changes in pressure. Virtual potential temperature is useful as a surrogate for density in buoyancy calculations and in turbulence transport which includes vertical air movement.
Density temperature
A moist air parcel may also contain liquid droplets and ice crystals in addition to water vapor. A net mixing ratio r_T can be defined as the sum of the mixing ratios of water vapor r_v, liquid r_l, and ice r_i present in the parcel. Assuming that r_l and r_i are typically much smaller than r_v, a density temperature T_ρ of a parcel can be defined, representing the temperature at which a theoretical dry air parcel would have a pressure and density equal to a moist parcel of air while accounting for condensates:
T_ρ = T (1 + r_v/ε) / (1 + r_v + r_l + r_i) ≈ T (1 + 0.608 r_v − r_l − r_i).
Uses
Virtual temperature is used in adjusting CAPE soundings for assessing available convective potential energy from skew-T log-P diagrams. The errors associated with ignoring virtual temperature correction for smaller CAPE values can be quite significant. Thus, in the early stages of convective storm formation, a virtual temperature correction is significant in identifying the potential intensity in tropical cyclogenesis.
Further reading
References
Atmospheric thermodynamics
Meteorological quantities
Atmospheric temperature
Atmospheric pressure
Humidity and hygrometry | Virtual temperature | [
"Physics",
"Mathematics"
] | 927 | [
"Quantity",
"Physical quantities",
"Meteorological quantities",
"Atmospheric pressure"
] |
11,118,768 | https://en.wikipedia.org/wiki/Parabolic%20induction | In mathematics, parabolic induction is a method of constructing representations of a reductive group from representations of its parabolic subgroups.
If G is a reductive algebraic group and P = MAN is the Langlands decomposition of a parabolic subgroup P, then parabolic induction consists of taking a representation of MA, extending it to P by letting N act trivially, and inducing the result from P to G.
There are some generalizations of parabolic induction using cohomology, such as cohomological parabolic induction and Deligne–Lusztig theory.
Philosophy of cusp forms
The philosophy of cusp forms was a slogan of Harish-Chandra, expressing his idea of a kind of reverse engineering of automorphic form theory, from the point of view of representation theory. The discrete group Γ fundamental to the classical theory disappears, superficially. What remains is the basic idea that representations in general are to be constructed by parabolic induction of cuspidal representations. A similar philosophy was enunciated by Israel Gelfand, and the philosophy is a precursor of the Langlands program. A consequence for thinking about representation theory is that cuspidal representations are the fundamental class of objects, from which other representations may be constructed by procedures of induction.
According to Nolan Wallach
Put in the simplest terms the "philosophy of cusp forms" says that for each Γ-conjugacy classes of Q-rational parabolic subgroups one should construct automorphic functions (from objects from spaces of lower dimensions) whose constant terms are zero for other conjugacy classes and the constant terms for [an] element of the given class give all constant terms for this parabolic subgroup. This is almost possible and leads to a description of all automorphic forms in terms of these constructs and cusp forms. The construction that does this is the Eisenstein series.
Notes
References
A. W. Knapp, Representation Theory of Semisimple Groups: An Overview Based on Examples, Princeton Landmarks in Mathematics, Princeton University Press, 2001. .
Representation theory | Parabolic induction | [
"Mathematics"
] | 416 | [
"Representation theory",
"Fields of abstract algebra"
] |
11,120,026 | https://en.wikipedia.org/wiki/Sum-free%20set | In additive combinatorics and number theory, a subset A of an abelian group G is said to be sum-free if the sumset A + A is disjoint from A. In other words, A is sum-free if the equation has no solution with .
For example, the set of odd numbers is a sum-free subset of the integers, and the set {N + 1, ..., 2N } forms a large sum-free subset of the set {1, ..., 2N }. Fermat's Last Theorem is the statement that, for a given integer n > 2, the set of all nonzero nth powers of the integers is a sum-free set.
Some basic questions that have been asked about sum-free sets are:
How many sum-free subsets of {1, ..., N } are there, for an integer N? Ben Green has shown that the answer is O(2^(N/2)), as predicted by the Cameron–Erdős conjecture.
How many sum-free sets does an abelian group G contain?
What is the size of the largest sum-free set that an abelian group G contains?
A sum-free set is said to be maximal if it is not a proper subset of another sum-free set.
Let f(n) be defined as the largest number k such that any set of n nonzero integers has a sum-free subset of size k. The function f is subadditive, and by the Fekete subadditivity lemma, the limit σ = lim f(n)/n exists. Erdős proved that σ ≥ 1/3, and conjectured that equality holds. This was proved by Eberhard, Green, and Manners.
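For very small N the quantities discussed above can be computed by exhaustive search. The Python sketch below, intended purely as an illustration since the search space doubles with each increment of N, counts the sum-free subsets of {1, …, N} and compares the count with 2^(N/2).
from itertools import combinations

def is_sum_free(subset):
    s = set(subset)
    # sum-free: no a, b, c in the set (a = b allowed) with a + b = c
    return all(a + b not in s for a in s for b in s)

def count_sum_free_subsets(N):
    universe = range(1, N + 1)
    return sum(1 for k in range(N + 1)
                 for subset in combinations(universe, k)
                 if is_sum_free(subset))

for N in range(1, 13):
    c = count_sum_free_subsets(N)
    print(N, c, round(c / 2 ** (N / 2), 3))   # ratio of the count to 2**(N/2)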
See also
Erdős–Szemerédi theorem
Sum-free sequence
References
Sumsets
Additive combinatorics | Sum-free set | [
"Mathematics"
] | 375 | [
"Additive combinatorics",
"Sumsets",
"Combinatorics"
] |
2,276,394 | https://en.wikipedia.org/wiki/Phenmetrazine | Phenmetrazine, sold under the brand name Preludin among others, is a stimulant drug first synthesized in 1952 and originally used as an appetite suppressant, but withdrawn from the market in the 1980s due to widespread misuse. It was initially replaced by its analogue phendimetrazine (under the brand name Prelu-2) which functions as a prodrug to phenmetrazine, but now it is rarely prescribed, due to concerns of misuse and addiction. Chemically, phenmetrazine is a substituted amphetamine containing a morpholine ring or a substituted phenylmorpholine.
Medical uses
Phenmetrazine has been used as an appetite suppressant for purposes of weight loss. It was used therapeutically for this indication at a dosage of 25mg two or three times per day (or 50–75mg/day total) in adults. Phenmetrazine has been found to produce similar weight loss to dextroamphetamine in people with obesity.
In addition to its appetite suppressant effects, phenmetrazine produces psychostimulant and sympathomimetic effects. Phenmetrazine has been shown to produce very similar subjective psychostimulant effects to those of amphetamine and methamphetamine in clinical studies. Although it is able to produce comparable effects, phenmetrazine has only about one-fifth to one-third of the potency of dextroamphetamine by weight.
Pharmacology
Pharmacodynamics
Phenmetrazine acts as a norepinephrine and dopamine releasing agent (NDRA), with EC50 values for induction of norepinephrine and dopamine release of 29–50nM and 70–131nM, respectively. It has very weak activity as a releaser of serotonin, with an EC50 value of 7,765 to >10,000nM. The drug is several times less potent than dextroamphetamine and dextromethamphetamine as an NDRA in vitro. This is in accordance with the higher doses required clinically.
In contrast to many other monoamine releasing agents (MRAs), phenmetrazine is inactive in terms of vesicular monoamine transporter 2 (VMAT2) actions. A few other MRAs have also been found to be inactive at VMAT2, such as phentermine and benzylpiperazine (BZP). These findings indicate that VMAT2 activity is non-essential for robust MRA actions.
Phenmetrazine does not appear to have been assessed at the trace amine-associated receptor 1 (TAAR1).
Phenmetrazine has been found to dose-dependently elevate brain dopamine levels in rodents in vivo. A 10mg/kg i.v. dose of phenmetrazine increased nucleus accumbens dopamine levels by around 1,400% in rats. For comparison, dextroamphetamine 3mg/kg i.p. increased striatal dopamine levels by about 5,000% in rats. On the other hand, the maximal increases in brain dopamine levels with phenmetrazine are similar to those with the proposed dopamine transporter (DAT) "inverse agonists" methylphenidate and cocaine (e.g., ~1,500%). Dopamine-releasing drugs that lack VMAT2 activity are theorized to produce much smaller maximal impacts on dopamine levels under experimental conditions than those which also act on VMAT2 like amphetamine. However, the pharmacological significance of these VMAT2 interactions in humans is unclear.
In trials performed on rats, it has been found that after subcutaneous administration of phenmetrazine, both optical isomers are equally effective in reducing food intake, but in oral administration the levo isomer is more effective. In terms of central stimulation however, the dextro isomer is about four times as effective in both methods of administration.
Pharmacokinetics
After an oral dose, about 70% of the drug is excreted from the body within 24 hours. About 19% of that is excreted as the unmetabolised drug and the rest as various metabolites.
The salt which has been used for immediate-release formulations is phenmetrazine hydrochloride (Preludin). Sustained-release formulations were available as resin-bound, rather than soluble, salts. Both of these dosage forms share a similar bioavailability as well as time to peak onset, however, sustained-release formulations offer improved pharmacokinetics with a steady release of active ingredient which results in a lower peak concentration in blood plasma.
Chemistry
Phenmetrazine, also known as (2RS,3RS)-2-phenyl-3-methylmorpholine or as (2RS,3RS)-3-methyl-2-phenyltetrahydro-2H-1,4-oxazine, is a substituted phenylmorpholine. It is the (2RS,3RS)- or (±)-trans- enantiomer of 2-phenyl-3-methylmorpholine.
Phenmetrazine's chemical structure incorporates the backbone of amphetamine, the prototypical psychostimulant which, like phenmetrazine, is a releasing agent of dopamine and norepinephrine. The molecule also loosely resembles ethcathinone, the active metabolite of popular anorectic amfepramone (diethylpropion). Unlike phenmetrazine, ethcathinone (and therefore amfepramone as well) are mostly selective as norepinephrine releasing agents.
A variety of phenmetrazine analogues and derivatives have been encountered as designer drugs. In addition, the activities of various phenmetrazine analogues and derivatives as monoamine releasing agent (MRA) have been described.
Synthesis
Phenmetrazine can be synthesized in three steps from 2-bromopropiophenone and ethanolamine. The intermediate alcohol 3-methyl-2-phenylmorpholin-2-ol (1) is converted to a fumarate salt (2) with fumaric acid, then reduced with sodium borohydride to give phenmetrazine free base (3). The free base can be converted to the fumarate salt (4) by reaction with fumaric acid.
History
Phenmetrazine was first patented in Germany in 1952 by Boehringer-Ingelheim, with some pharmacological data published in 1954. It was the result of a search by Thomä and Wick for an anorectic drug without the side effects of amphetamine. Phenmetrazine was introduced into clinical use in 1954 in Europe.
Society and culture
Names
Phenmetrazine is the generic name of the drug and its , , and . It is also known by the brand name Preludin.
Availability
In 2004, phenmetrazine remained marketed only in Israel.
Legal status
Phenmetrazine is a Schedule II controlled substance in the United States.
Recreational use
Phenmetrazine has been used recreationally in many countries, including Sweden. When stimulant use first became prevalent in Sweden in the 1950s, phenmetrazine was preferred to amphetamine and methamphetamine by users. In the autobiographical novel Rush by Kim Wozencraft, intravenous phenmetrazine is described as the most euphoric and pro-sexual of the stimulants the author used.
Phenmetrazine was classified as a narcotic in Sweden in 1959, and was taken completely off the market in 1965. Formerly the illegal demand was satisfied by smuggling from Germany, and later Spain and Italy. At first, Preludin tablets were smuggled, but soon the smugglers started bringing in raw phenmetrazine powder. Eventually amphetamine became the dominant stimulant of abuse because of its greater availability.
Phenmetrazine was taken by the Beatles early in their career. Paul McCartney was one known user. McCartney's introduction to drugs started in Hamburg, Germany. The Beatles had to play for hours, and they were often given the drug (referred to as "prellies") by the maid who cleaned their housing arrangements, German customers, or by Astrid Kirchherr (whose mother bought them). McCartney would usually take one, but John Lennon would often take four or five. Hunter Davies asserted, in his 1968 biography of the band, that their use of such stimulants then was in response to their need to stay awake and keep working, rather than a simple desire for kicks.
Jack Ruby said he was on phenmetrazine at the time he killed Lee Harvey Oswald.
Preludin was also used recreationally in the US throughout the 1960s and 1970s. It could be crushed up in water, heated and injected. The street name for the drug in Washington, DC was "Bam". Phenmetrazine continues to be used and abused around the world, in countries including South Korea.
References
Anorectics
Beta-Hydroxyamphetamines
Euphoriants
Norepinephrine-dopamine releasing agents
Phenylmorpholines
Stimulants
Withdrawn drugs | Phenmetrazine | [
"Chemistry"
] | 2,000 | [
"Drug safety",
"Withdrawn drugs"
] |
2,276,409 | https://en.wikipedia.org/wiki/Steel%20frame | Steel frame is a building technique with a "skeleton frame" of vertical steel columns and horizontal I-beams, constructed in a rectangular grid to support the floors, roof and walls of a building which are all attached to the frame. The development of this technique made the construction of the skyscraper possible. Steel frame has displaced its predecessor, the iron frame, in the early 20th century.
Concept
The rolled steel "profile" or cross section of steel columns takes the shape of the letter "". The two wide flanges of a column are thicker and wider than the flanges on a beam, to better withstand compressive stress in the structure. Square and round tubular sections of steel can also be used, often filled with concrete. Steel beams are connected to the columns with bolts and threaded fasteners, and historically connected by rivets. The central "web" of the steel I-beam is often wider than a column web to resist the higher bending moments that occur in beams.
Wide sheets of steel deck can be used to cover the top of the steel frame as a "form" or corrugated mold, below a thick layer of concrete and steel reinforcing bars. Another popular alternative is a floor of precast concrete flooring units with some form of concrete topping. Often in office buildings, the final floor surface is provided by some form of raised flooring system with the void between the walking surface and the structural floor being used for cables and air handling ducts.
The frame needs to be protected from fire because steel softens at high temperature and this can cause the building to partially collapse. In the case of the columns this is usually done by encasing it in some form of fire resistant structure such as masonry, concrete or plasterboard. The beams may be cased in concrete, plasterboard or sprayed with a coating to insulate it from the heat of the fire or it can be protected by a fire-resistant ceiling construction. Asbestos was a popular material for fireproofing steel structures up until the early 1970s, before the health risks of asbestos fibres were fully understood.
The exterior "skin" of the building is anchored to the frame using a variety of construction techniques and following a huge variety of architectural styles. Bricks, stone, reinforced concrete, architectural glass, sheet metal and simply paint have been used to cover the frame to protect the steel from the weather.
Cold-formed steel frames
Cold-formed steel frames are also known as lightweight steel framing (LSF).
Thin sheets of galvanized steel can be cold formed into steel studs for use as a structural or non-structural building material for both external and partition walls in both residential, commercial and industrial construction projects (pictured). The dimension of the room is established with a horizontal track that is anchored to the floor and ceiling to outline each room. The vertical studs are arranged in the tracks, usually spaced apart, and fastened at the top and bottom.
The typical profiles used in residential construction are the C-shape stud and the U-shaped track, and a variety of other profiles. Framing members are generally produced in a thickness of 12 to 25 gauge. Heavy gauges, such as 12 and 14 gauge, are commonly used when axial loads (parallel to the length of the member) are high, such as in load-bearing construction. Medium-heavy gauges, such as 16 and 18 gauge, are commonly used when there are no axial loads but heavy lateral loads (perpendicular to the member) such as exterior wall studs that need to resist hurricane-force wind loads along coasts. Light gauges, such as 25 gauge, are commonly used where there are no axial loads and very light lateral loads such as in interior construction where the members serve as framing for demising walls between rooms. The wall finish is anchored to the two flange sides of the stud, which varies from thick, and the width of web ranges from . Rectangular sections are removed from the web to provide access for electrical wiring.
Steel mills produce galvanized sheet steel, the base material for the manufacture of cold-formed steel profiles. Sheet steel is then roll-formed into the final profiles used for framing. The sheets are zinc coated (galvanized) to increase protection against oxidation and corrosion. Steel framing provides excellent design flexibility due to the high strength-to-weight ratio of steel, which allows it to span over long distances, and also resist wind and earthquake loads.
Steel-framed walls can be designed to offer excellent thermal and acoustic properties – one of the specific considerations when building using cold-formed steel is that thermal bridging can occur across the wall system between the outside environment and interior conditioned space. Thermal bridging can be protected against by installing a layer of externally fixed insulation along the steel framing – typically referred to as a 'thermal break'.
The spacing between studs is typically 16 inches on center for home exterior and interior walls depending on designed loading requirements. In office suites the spacing is on center for all walls except for elevator and staircase wells.
Hot-formed steel frames
Hot-formed frames, also known as hot-rolled steel frames, are engineered from steel that undergoes a complex manufacturing process known as hot rolling. During this procedure, steel members are heated to temperatures above the steel’s recrystallization temperature (1700˚F). This process serves to refine the grain structure of the steel and align its crystalline lattice. It is then passed through precision rollers to achieve the desired frame profiles.
The distinctive feature of hot formed frames is their substantial beam thickness and larger dimensions, making them more robust compared to their cold rolled counterparts. This inherent strength makes them particularly well-suited for application in larger structures, as they show minimal deformation when subjected to substantial loads.
While it is true that hot rolled steel members often have a higher initial cost per component when compared to cold rolled steel, their cost-efficiency becomes increasingly evident when used in the construction of larger structures. This is because hot rolled steel frames require fewer components to span equivalent distances, leading to economic advantages in bigger projects.
History
The use of steel instead of iron for structural purposes was initially slow. The first iron-framed building, Ditherington Flax Mill, had been built in 1797, but it was not until the development of the Bessemer process in 1855 that steel production was made efficient enough for steel to be a widely used material. Cheap steels, which had high tensile and compressive strengths and good ductility, were available from about 1870, but wrought and cast iron continued to satisfy most of the demand for iron-based building products, due mainly to problems of producing steel from alkaline ores. These problems, caused principally by the presence of phosphorus, were solved by Sidney Gilchrist Thomas in 1879.
It was not until 1880 that an era of construction based on reliable mild steel began. By that date the quality of steels being produced had become reasonably consistent.
The Home Insurance Building, completed in 1885, was the first to use skeleton frame construction, completely removing the load bearing function of its masonry cladding. In this case the iron columns are merely embedded in the walls, and their load carrying capacity appears to be secondary to the capacity of the masonry, particularly for wind loads. In the United States, the first steel framed building was the Rand McNally Building in Chicago, erected in 1890.
The Royal Insurance Building in Liverpool designed by James Francis Doyle in 1895 (erected 1896–1903) was the first to use a steel frame in the United Kingdom.
See also
Buckling-restrained braced frame (BRBF)
Curtain wall (architecture)
Prefabricated buildings
Steel building
Structural steel
Structural robustness
Tension fabric structure
References
Sources
External links
Historical Development of Iron and Steel in Buildings
"Its Here – All Steel Buildings." Popular Science Monthly, November 1928, p. 33.
Steel Framing Industry Association web site
Steel Framing Alliance web site
British Constructional Steelwork Association / SCI information
Construction
Structural steel | Steel frame | [
"Engineering"
] | 1,606 | [
"Construction",
"Structural steel",
"Structural engineering"
] |
2,277,097 | https://en.wikipedia.org/wiki/TeraGrid | TeraGrid was an e-Science grid computing infrastructure combining resources at eleven partner sites. The project started in 2001 and operated from 2004 through 2011.
The TeraGrid integrated high-performance computers, data resources and tools, and experimental facilities. Resources included more than a petaflops of computing capability and more than 30 petabytes of online and archival data storage, with rapid access and retrieval over high-performance computer network connections. Researchers could also access more than 100 discipline-specific databases.
TeraGrid was coordinated through the Grid Infrastructure Group (GIG) at the University of Chicago, working in partnership with the resource provider sites in the United States.
History
The US National Science Foundation (NSF) issued a solicitation asking for a "distributed terascale facility" under program director Richard L. Hilderbrandt.
The TeraGrid project was launched in August 2001 with $53 million in funding to four sites: the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, the University of Chicago Argonne National Laboratory, and the Center for Advanced Computing Research (CACR) at the California Institute of Technology in Pasadena, California.
The design was meant to be an extensible distributed open system from the start.
In October 2002, the Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon University and the University of Pittsburgh joined the TeraGrid as major new partners when NSF announced $35 million in supplementary funding. The TeraGrid network was transformed through the ETF project from a 4-site mesh to a dual-hub backbone network with connection points in Los Angeles and at the Starlight facilities in Chicago.
In October 2003, NSF awarded $10 million to add four sites to TeraGrid as well as to establish a third network hub, in Atlanta. These new sites were Oak Ridge National Laboratory (ORNL), Purdue University, Indiana University, and the Texas Advanced Computing Center (TACC) at The University of Texas at Austin.
TeraGrid construction was also made possible through corporate partnerships with Sun Microsystems, IBM, Intel Corporation, Qwest Communications, Juniper Networks, Myricom, Hewlett-Packard Company, and Oracle Corporation.
TeraGrid construction was completed in October 2004, at which time the TeraGrid facility began full production.
Operation
In August 2005, NSF's newly created office of cyberinfrastructure extended support for another five years with a $150 million set of awards. It included $48 million for coordination and user support to the Grid Infrastructure Group at the University of Chicago led by Charlie Catlett.
Using high-performance network connections, the TeraGrid featured high-performance computers, data resources and tools, and high-end experimental facilities around the USA. The work supported by the project is sometimes called e-Science.
In 2006, the University of Michigan's School of Information began a study of TeraGrid.
In May 2007, TeraGrid integrated resources included more than 250 teraflops of computing capability and more than 30 petabytes (quadrillions of bytes) of online and archival data storage with rapid access and retrieval over high-performance networks. Researchers could access more than 100 discipline-specific databases. In late 2009, The TeraGrid resources had grown to 2 petaflops of computing capability and more than 60 petabytes storage. In mid 2009, NSF extended the operation of TeraGrid to 2011.
Transition to XSEDE
A follow-on project was approved in May 2011.
In July 2011, a partnership of 17 institutions announced the Extreme Science and Engineering Discovery Environment (XSEDE). NSF announced funding the XSEDE project for five years, at $121 million.
XSEDE is led by John Towns at the University of Illinois's National Center for Supercomputing Applications.
Architecture
TeraGrid resources are integrated through a service-oriented architecture in that each resource provides a "service" that is defined in terms of interface and operation. Computational resources run a set of software packages called "Coordinated TeraGrid Software and Services" (CTSS). CTSS provides a familiar user environment on all TeraGrid systems, allowing scientists to more easily port code from one system to another. CTSS also provides integrative functions such as single-signon, remote job submission, workflow support, data movement tools, etc. CTSS includes the Globus Toolkit, Condor, distributed accounting and account management software, verification and validation software, and a set of compilers, programming tools, and environment variables.
TeraGrid uses a 10 Gigabits per second dedicated fiber-optical backbone network, with hubs in Chicago, Denver, and Los Angeles. All resource provider sites connect to a backbone node at 10 Gigabits per second. Users accessed the facility through national research networks such as the Internet2 Abilene backbone and National LambdaRail.
Usage
TeraGrid users primarily came from U.S. universities. There are roughly 4,000 users at over 200 universities. Academic researchers in the United States can obtain exploratory, or development allocations (roughly, in "CPU hours") based on an abstract describing the work to be done. More extensive allocations involve a proposal that is reviewed during a quarterly peer-review process. All allocation proposals are handled through the TeraGrid website. Proposers select a scientific discipline that most closely describes their work, and this enables reporting on the allocation of, and use of, TeraGrid by scientific discipline. As of July 2006 the scientific profile of TeraGrid allocations and usage was:
Each of these discipline categories correspond to a specific program area of the National Science Foundation.
Starting in 2006, TeraGrid provided application-specific services to Science Gateway partners, who serve (generally via a web portal) discipline-specific scientific and education communities. Through the Science Gateways program TeraGrid aims to broaden access by at least an order of magnitude in terms of the number of scientists, students, and educators who are able to use TeraGrid.
Resource providers
Argonne National Laboratory (ANL) operated by the University of Chicago and the Department of Energy
Indiana University - Big Red - IBM BladeCenter JS21 Cluster
Louisiana Optical Network Initiative (LONI)
National Center for Atmospheric Research (NCAR)
National Center for Supercomputing Applications (NCSA)
National Institute for Computational Sciences (NICS) operated by University of Tennessee at Oak Ridge National Laboratory.
Oak Ridge National Laboratory (ORNL)
Pittsburgh Supercomputing Center (PSC) operated by University of Pittsburgh and Carnegie Mellon University.
Purdue University
San Diego Supercomputer Center (SDSC)
Texas Advanced Computing Center (TACC)
Similar projects
Distributed European Infrastructure for Supercomputing Applications (DEISA), integrating eleven European supercomputing centers
Enabling Grids for E-sciencE (EGEE)
National Research Grid Initiative (NAREGI) involving several supercomputer centers in Japan from 2003
Open Science Grid - a distributed computing infrastructure for scientific research
Extreme Science and Engineering Discovery Environment (XSEDE) - the TeraGrid successor
References
External links
TeraGrid website
Grid computing
National Science Foundation
Supercomputing | TeraGrid | [
"Technology"
] | 1,502 | [
"Supercomputing"
] |
2,277,192 | https://en.wikipedia.org/wiki/Geotextile | Geotextiles are versatile permeable fabrics that, when used in conjunction with soil, can effectively perform multiple functions, including separation, filtration, reinforcement, protection, and drainage. Typically crafted from polypropylene or polyester, geotextile fabrics are available in two primary forms: woven, which resembles traditional mail bag sacking, and nonwoven, which resembles felt.
Geotextile composites have been introduced and products such as geogrids and meshes have been developed. Geotextiles are durable and are able to soften a fall. Overall, these materials are referred to as geosynthetics and each configuration—geonets, geosynthetic clay liners, geogrids, geotextile tubes, and others—can yield benefits in geotechnical and environmental engineering design.
History
Geotextiles were originally intended to be a substitute for granular soil filters. Geotextiles can also be referred to as filter fabrics. In the 1950s, R.J. Barrett began using geotextiles behind precast concrete seawalls, under precast concrete erosion control blocks, beneath large stone riprap, and in other erosion control situations. He used different styles of woven monofilament fabrics, all characterized by a relatively high percentage open area (varying from 6 to 30%). He discussed the need for both adequate permeability and soil retention, along with adequate fabric strength and proper elongation, and in doing so set the tone for geotextile use in filtration situations.
Applications
Geotextiles and related products have many applications and currently support many civil engineering applications including roads, airfields, railroads, embankments, retaining structures, reservoirs, canals, dams, bank protection, coastal engineering and construction site silt fences or to form a geotextile tube. Geotextiles can also serve as components of other geosynthetics such as the reinforcing material in a bituminous geomembrane. Usually geotextiles are placed at the tension surface to strengthen the soil. Geotextiles are also used for sand dune armoring to protect upland coastal property from storm surge, wave action and flooding. A large sand-filled container (SFC) within the dune system prevents storm erosion from proceeding beyond the SFC. Using a sloped unit rather than a single tube eliminates damaging scour.
Erosion control manuals comment on the effectiveness of sloped, stepped shapes in mitigating shoreline erosion damage from storms. Geotextile sand-filled units provide a "soft" armoring solution for upland property protection. Geotextiles are used as matting to stabilize flow in stream channels and swales.
Geotextiles can improve soil strength at a lower cost than conventional soil nailing. In addition, geotextiles allow planting on steep slopes, further securing the slope.
Geotextiles have been used to protect the fossil hominid footprints of Laetoli in Tanzania from erosion, rain, and tree roots.
In building demolition, geotextile fabrics in combination with steel wire fencing can contain explosive debris.
Coir (coconut fiber) geotextiles are popular for erosion control, slope stabilization and bioengineering, due to the fabric's substantial mechanical strength. Coir geotextiles last approximately 3 to 5 years depending on the fabric weight. The product degrades into humus, enriching the soil.
Global warming
Glacial retreat
Geotextiles with reflective properties are often used to protect melting glaciers. In northern Italy, geotextiles are used to cover glaciers to shield them from the Sun: the reflective surface redirects sunlight away from the melting glacier in order to slow the process. However, this approach has proven to be more expensive than effective.
Design methods
While many possible design methods or combinations of methods are available to the geotextile designer, the ultimate decision for a particular application usually takes one of three directions: design by cost and availability, design by specification, or design by function. Extensive literature on design methods for geotextiles has been published in the peer reviewed journal Geotextiles and Geomembranes.
Requirements
Geotextiles, like other engineered materials, must satisfy specific requirements. Among these, the fabric must consist of polymers composed of a minimum of 85% by weight of polypropylene, polyesters, polyamides, polyolefins, or polyethylene.
See also
Geomembrane
Hard landscape materials
Polypropylene raffia
Sediment control
References
Further reading
John, N. W. M. (1987). Geotextiles. Glasgow: Blackie Publishing Ltd.
Koerner, R. M. (2012). Designing with Geosynthetics, 6th Edition. Xlibris Publishing Co.
Koerner, R. M., ed. (2016). Geotextiles: From Design to Applications. Amsterdam: Woodhead Publishing Co.
External links
Building materials
Geosynthetics
Landscape architecture
Plastics applications
Textiles | Geotextile | [
"Physics",
"Engineering"
] | 1,036 | [
"Building engineering",
"Landscape architecture",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
2,277,564 | https://en.wikipedia.org/wiki/Resist%20%28semiconductor%20fabrication%29 | In semiconductor fabrication, a resist is a thin layer used to transfer a circuit pattern to the semiconductor substrate which it is deposited upon. A resist can be patterned via lithography to form a (sub)micrometer-scale, temporary mask that protects selected areas of the underlying substrate during subsequent processing steps. The material used to prepare said thin layer is typically a viscous solution. Resists are generally proprietary mixtures of a polymer or its precursor and other small molecules (e.g. photoacid generators) that have been specially formulated for a given lithography technology. Resists used during photolithography are called photoresists.
Background
Semiconductor devices (as of 2005) are built by depositing and patterning many thin layers. The patterning steps, or lithography, define the function of the device and the density of its components.
For example, in the interconnect layers of a modern microprocessor, a conductive material (copper or aluminum) is inlaid in an electrically insulating matrix (typically fluorinated silicon dioxide or another low-k dielectric). The metal patterns define multiple electrical circuits that are used to connect the microchip's transistors to one another and ultimately to external devices via the chip's pins.
The most common patterning method used by the semiconductor device industry is photolithography -- patterning using light. In this process, the substrate of interest is coated with photosensitive resist and irradiated with short-wavelength light projected through a photomask, which is a specially prepared stencil formed of opaque and transparent regions - usually a quartz substrate with a patterned chromium layer. The shadow of opaque regions in the photomask forms a submicrometer-scale pattern of dark and illuminated regions in the resist layer -- the areal image. Chemical and physical changes occur in the exposed areas of the resist layer. For example, chemical bonds may be formed or destroyed, inducing a change in solubility. This latent image is then developed for example by rinsing with an appropriate solvent. Selected regions of the resist remain, which after a post-exposure bake step form a stable polymeric pattern on the substrate. This pattern can be used as a stencil in the next process step. For example, areas of the underlying substrate that are not protected by the resist pattern may be etched or doped. Material may be selectively deposited on the substrate. After processing, the remaining resist may be stripped. Sometimes (esp. during Microelectromechanical systems fabrication), the patterned resist layer may be incorporated in the final product. Many photolithography and processing cycles may be performed to create complex devices.
Resists may also be formulated to be sensitive to charged particles, such as the electron beams produced in scanning electron microscopes. This is the basis of electron-beam direct-write lithography.
A resist is not always necessary. Several materials may be deposited or patterned directly using techniques like soft lithography, Dip-Pen Nanolithography, evaporation through a shadow mask or stencil.
Typical process
Resist Deposition: The precursor solution is spin-coated on a clean (semiconductor) substrate, such as a silicon wafer, to form a very thin, uniform layer.
Soft Bake: The layer is baked at a low temperature to evaporate residual solvent.
Exposure: A latent image is formed in the resist e.g. (a) via exposure to ultraviolet light through a photomask with opaque and transparent regions or (b) by direct writing using a laser beam or electron beam.
Post-Exposure Bake
Development: Areas of the resist that have (or have not) been exposed are removed by rinsing with an appropriate solvent.
Processing through the resist pattern: wet or dry etching, lift-off, doping...
Resist Stripping
See also
Electron beam lithography
Nanolithography
Photolithography
External links
MicroChem
Shipley (now Rohm and Haas Electronic Materials)
Clariant
micro resist technology
Semiconductor device fabrication | Resist (semiconductor fabrication) | [
"Materials_science"
] | 841 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
2,277,570 | https://en.wikipedia.org/wiki/Pertechnetate | The pertechnetate ion () is an oxyanion with the chemical formula . It is often used as a convenient water-soluble source of isotopes of the radioactive element technetium (Tc). In particular it is used to carry the 99mTc isotope (half-life 6 hours) which is commonly used in nuclear medicine in several nuclear scanning procedures.
Pertechnetate is poorly hydrated as [TcO4(H2O)n]− and [TcO4(H2O)n-m]−[H3O]+m (n = 1–50, m = 1–4) clusters that have been demonstrated by simulation with DFT. First hydration shell of TcO4− is asymmetric and contains no more than 7 water molecules. Only three of the four oxygen atoms of TcO4− form hydrogen bonds with water molecules.
A technetate(VII) salt is a compound containing this ion. Pertechnetate compounds are salts of technetic(VII) acid. Pertechnetate is analogous to permanganate but it has little oxidizing power. Pertechnetate has higher oxidation power than perrhenate.
Understanding pertechnetate is important in understanding technetium contamination in the environment and in nuclear waste management.
Chemistry
TcO4− is the starting material for most of the chemistry of technetium. Pertechnetate salts are usually colorless. TcO4− is produced by oxidizing technetium with nitric acid or with hydrogen peroxide. The pertechnetate anion is similar to the permanganate anion but is a weaker oxidizing agent. It is tetrahedral and diamagnetic. The standard electrode potential for TcO4−/TcO2 is only +0.738 V in acidic solution, as compared to +1.695 V for MnO4−/MnO2. Because of its diminished oxidizing power, TcO4− is stable in alkaline solution. TcO4− is more similar to ReO4−. Depending on the reducing agent, TcO4− can be converted to derivatives containing Tc(VI), Tc(V), and Tc(IV). In the absence of strong complexing ligands, TcO4− is reduced to a +4 oxidation state via the formation of TcO2 hydrate.
Some metals like actinides, barium, scandium, yttrium or zirconium may form complex salts with pertechnetate, thus strongly affecting its liquid-liquid extraction behavior.
Preparation of 99mTcO4−
99mTcO4− is conveniently available in high radionuclidic purity from molybdenum-99, which decays with 87% probability to 99mTc. The subsequent decay of 99mTc leads to 99Tc (and ultimately to stable 99Ru). 99Mo can be produced in a nuclear reactor via irradiation of either molybdenum-98 or naturally occurring molybdenum with thermal neutrons, but this is not the method currently in use today. Currently, 99Mo is recovered as a product of the nuclear fission reaction of 235U, separated from other fission products via a multistep process and loaded onto a column of alumina that forms the core of a 99Mo/99mTc radioisotope generator.
As the 99Mo continuously decays to 99mTc, the 99mTcO4− can be removed periodically (usually daily) by flushing a saline solution (0.15 M NaCl in water) through the alumina column: the more highly charged molybdate is retained on the column, where it continues to undergo radioactive decay, while the medically useful 99mTcO4− is eluted in the saline. The eluate from the column must be sterile and pyrogen free, so that the Tc drug can be used directly, usually within 12 hours of elution. In a few cases, sublimation or solvent extraction may be used.
Examples
A complex that can penetrate the blood–brain barrier is generated by reduction of 99mTcO4− with tin(II) in the presence of the ligand hexamethylpropylene amine oxime (HMPAO) to form TcO-D,L-HMPAO.
A complex used for imaging the lungs, Tc-MAA, is generated by reduction of 99mTcO4− with tin(II) in the presence of human serum albumin.
[99mTc(H2O)3(CO)3]+, which is both water and air stable, is generated by reduction of 99mTcO4− with carbon monoxide. This compound is a precursor to complexes that can be used in cancer diagnosis and therapy involving DNA-DNA pretargeting.
Compounds
Reactions
Radiolysis of TcO4− in nitrate solutions proceeds through an initial reduction to Tc(VI), which induces complex disproportionation processes.
Pertechnetate can be reduced by H2S to give Tc2S7.
Pertechnetate is also reduced to Tc(IV/V) compounds in alkaline solutions in nuclear waste tanks without adding catalytic metals, reducing agents, or external radiation. Reactions of mono- and disaccharides with 99mTcO4− yield Tc(IV) compounds that are water-soluble.
Uses
Pharmaceutical use
The half-life of 99mTc is long enough that labelling synthesis of the radiopharmaceutical and scintigraphic measurements can be performed without significant loss of radioactivity. The energy emitted from 99mTc is 140 keV, which allows for the study of deep body organs. Radiopharmaceuticals have no intended pharmacologic effect and are used in very low concentrations. Radiopharmaceuticals containing 99mTc are currently being applied in determining the morphology of organs, testing of organ function, and scintigraphic and emission tomographic imaging. The gamma radiation emitted by the radionuclide allows organs to be imaged in vivo tomographically. Currently, over 80% of radiopharmaceuticals used clinically are labelled with 99mTc. A majority of radiopharmaceuticals labelled with 99mTc are synthesized by the reduction of the pertechnetate ion in the presence of ligands chosen to confer organ specificity of the drug. The resulting compound is then injected into the body and a "gamma camera" is focused on sections or planes in order to image the spatial distribution of the 99mTc.
Specific imaging applications
99mTcO4− is used primarily in the study of the thyroid gland - its morphology, vascularity, and function. Pertechnetate and iodide, due to their comparable charge/radius ratio, are similarly incorporated into the thyroid gland. The pertechnetate ion is not incorporated into the thyroglobulin. It is also used in the study of blood perfusion, regional accumulation, and cerebral lesions in the brain, as it accumulates primarily in the choroid plexus.
Pertechnetate salts, such as sodium pertechnetate, cannot pass through the blood–brain barrier. In addition to the salivary and thyroid glands, pertechnetate localizes in the stomach. It is renally eliminated for the first three days after being injected. After a scan is performed, it is recommended that a patient drink large amounts of water in order to expedite elimination of the radionuclide. Other methods of administration include intraperitoneal, intramuscular, and subcutaneous injection, as well as oral administration. The behavior of the ion is essentially the same, with small differences due to the difference in rate of absorption, regardless of the method of administration.
Synthesis of 99mTcO4− radiopharmaceuticals
99mTcO4− is advantageous for the synthesis of a variety of radiopharmaceuticals because Tc can adopt a number of oxidation states. The oxidation state and coligands dictate the specificity of the radiopharmaceutical. The starting material 99mTcO4−, made available after elution from the generator column, as mentioned above, can be reduced in the presence of complexing ligands. Many different reducing agents can be used, but transition metal reductants are avoided because they compete with technetium for ligands. Oxalates, formates, hydroxylamine, and hydrazine are also avoided because they form complexes with the technetium. Electrochemical reduction is impractical.
Ideally, the synthesis of the desired radiopharmaceutical from 99mTcO4−, a reducing agent, and the desired ligands should occur in one container after elution, and the reaction must be performed in a solvent that can be injected intravenously, such as a saline solution. Kits are available that contain the reducing agent, usually tin(II), and ligands. These kits are sterile, pyrogen-free, easily purchased, and can be stored for long periods of time. The reaction with 99mTcO4− takes place directly after elution from the generator column and shortly before its intended use. A high organ specificity is important because the injected activity should accumulate in the organ under investigation, as there should be a high activity ratio of the target organ to nontarget organs. If there is a high activity in organs adjacent to the one under investigation, the image of the target organ can be obscured. Also, high organ specificity allows for the reduction of the injected activity, and thus the exposure to radiation, in the patient. The radiopharmaceutical must be kinetically inert, in that it must not change chemically in vivo en route to the target organ.
As a 99mTc carrier
A technetium-99m generator provides the pertechnetate containing the short-lived isotope 99mTc for medical uses. This compound is generated directly from molybdate held on alumina within the generator (see this topic for detail).
In nuclear medicine
Pertechnetate has a wide variety of uses in diagnostic nuclear medicine. Since technetate(VII) can substitute for iodine in the Na/I symporter (NIS) channel in follicular cells of the thyroid gland, inhibiting uptake of iodine into the follicular cells, 99mTc-pertechnetate can be used as an alternative to 123I in imaging of the thyroid, although it specifically measures uptake and not organification. It has also been used historically to evaluate for testicular torsion, although ultrasound is more commonly used in current practice, as it does not deliver a radiation dose to the testes. It is also used in labeling of autologous red blood cells for MUGA scans to evaluate left ventricular cardiac function, localization of gastrointestinal bleeding prior to embolization or surgical management, and in damaged red blood cells to detect ectopic splenic tissue.
It is actively accumulated and secreted by the mucoid cells of the gastric mucosa, and therefore, technetate(VII) radiolabeled with technetium-99m is injected into the body when looking for ectopic gastric tissue as is found in a Meckel's diverticulum with Meckel's scans.
Non-radioactive uses
All technetium salts are mildly radioactive, but some applications exploit the element purely for its chemical properties. In these uses, its radioactivity is incidental, and generally the least radioactive (longest-lived) isotopes of Tc are used. In particular, 99Tc (half-life 211,000 years) is used in corrosion research, because it is the decay product of the easily obtained commercial 99mTc isotope. Solutions of technetate(VII) react with the surface of iron to form technetium dioxide; in this way it is able to act as an anodic corrosion inhibitor.
See also
Permanganate
Perrhenate
Sodium pertechnetate
References
Transition metal oxyanions
Radiopharmaceuticals
Medical physics
Corrosion inhibitors | Pertechnetate | [
"Physics",
"Chemistry"
] | 2,336 | [
"Applied and interdisciplinary physics",
"Medicinal radiochemistry",
"Radiopharmaceuticals",
"Medical physics",
"Corrosion inhibitors",
"Chemicals in medicine",
"Process chemicals"
] |
2,277,871 | https://en.wikipedia.org/wiki/Metastability%20%28electronics%29 | In electronics, metastability is the ability of a digital electronic system to persist for an unbounded time in an unstable equilibrium or metastable state.
In digital logic circuits, a digital signal is required to be within certain voltage or current limits to represent a '0' or '1' logic level for correct circuit operation; if the signal is within a forbidden intermediate range it may cause faulty behavior in logic gates the signal is applied to. In metastable states, the circuit may be unable to settle into a stable '0' or '1' logic level within the time required for proper circuit operation. As a result, the circuit can act in unpredictable ways, and may lead to a system failure, sometimes referred to as a "glitch". Metastability is an instance of the Buridan's ass paradox.
Metastable states are inherent features of asynchronous digital systems, and of systems with more than one independent clock domain. In self-timed asynchronous systems, arbiters are designed to allow the system to proceed only after the metastability has resolved, so the metastability is a normal condition, not an error condition.
In synchronous systems with asynchronous inputs, synchronizers are designed to make the probability of a synchronization failure acceptably small.
Metastable states are avoidable in fully synchronous systems when the input setup and hold time requirements on flip-flops are satisfied.
Example
A simple example of metastability can be found in an SR NOR latch, when Set and Reset inputs are true (R=1 and S=1) and then both transition to false (R=0 and S=0) at about the same time. Both outputs Q and Q̄ are initially held at 0 by the simultaneous Set and Reset inputs. After both Set and Reset inputs change to false, the flip-flop will (eventually) end up in one of two stable states, with one of Q and Q̄ true and the other false. The final state will depend on which of R or S returns to zero first, chronologically, but if both transition at about the same time, the resulting metastability, with intermediate or oscillatory output levels, can take arbitrarily long to resolve to a stable state.
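To make the oscillatory case concrete, the following minimal sketch (not part of the original article; all names are invented) simulates two cross-coupled NOR gates with identical unit gate delays. With perfectly symmetric gates and a simultaneous release of S and R, the idealized model never settles; real circuits eventually resolve because of noise and asymmetry, but the settling time is unbounded.

```python
# Minimal discrete-time model of an SR NOR latch with identical unit gate delays.
# With perfectly symmetric gates and simultaneous release of S and R, this
# idealized model oscillates forever, mirroring the metastable case described above.

def nor(a: int, b: int) -> int:
    """Two-input NOR gate."""
    return 0 if (a or b) else 1

def simulate(steps: int = 8):
    s = r = 1          # both inputs asserted: Q and Q-bar are forced to 0
    q, q_bar = 0, 0
    s = r = 0          # both inputs released at exactly the same instant
    history = []
    for _ in range(steps):
        # both gates evaluate simultaneously (equal propagation delays)
        q, q_bar = nor(r, q_bar), nor(s, q)
        history.append((q, q_bar))
    return history

if __name__ == "__main__":
    for t, (q, q_bar) in enumerate(simulate()):
        print(f"t={t}: Q={q}, Q_bar={q_bar}")   # alternates 1,1 / 0,0 / 1,1 ...
```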
Arbiters
In electronics, an arbiter is a circuit designed to determine which of several signals arrive first. Arbiters are used in asynchronous circuits to order computational activities for shared resources to prevent concurrent incorrect operations. Arbiters are used on the inputs of fully synchronous systems, and also between clock domains, as synchronizers for input signals. Although they can minimize the occurrence of metastability to very low probabilities, all arbiters nevertheless have metastable states, which are unavoidable at the boundaries of regions of the input state space resulting in different outputs.
Synchronous circuits
Synchronous circuit design techniques make digital circuits that are resistant to the failure modes that can be caused by metastability. A clock domain is defined as a group of flip-flops with a common clock. Such architectures can form a circuit guaranteed free of metastability (below a certain maximum clock frequency, above which first metastability, then outright failure occur), assuming a low-skew common clock. However, even then, if the system has a dependence on any continuous inputs then these are likely to be vulnerable to metastable states.
Synchronizer circuits are used to reduce the likelihood of metastability when receiving an asynchronous input or when transferring signals between different clock domains. Synchronizers may take the form of a cascade of D flip-flops (e.g. the shift register in Figure 3). Although each flip-flop stage adds an additional clock cycle of latency to the input data stream, each stage provides an opportunity to resolve metastability. Such synchronizers can be engineered to reduce metastability to a tolerable rate.
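The "tolerable rate" is commonly quantified with a mean time between failures (MTBF) estimate of the form MTBF = e^(t_r/τ) / (T0 · f_clk · f_data). The sketch below is illustrative only and is not from the article; the device constants are invented, and real values must come from the flip-flop vendor's characterization data.

```python
import math

def synchronizer_mtbf(t_r: float, tau: float, t0: float,
                      f_clk: float, f_data: float) -> float:
    """
    Estimate the mean time between synchronizer failures (seconds) using the
    common exponential model: MTBF = exp(t_r / tau) / (T0 * f_clk * f_data).
      t_r    - resolution time allowed before the next stage samples (s)
      tau    - metastability resolution time constant of the flip-flop (s)
      t0     - metastability window parameter of the flip-flop (s)
      f_clk  - sampling clock frequency (Hz)
      f_data - rate of asynchronous input transitions (Hz)
    """
    return math.exp(t_r / tau) / (t0 * f_clk * f_data)

# Hypothetical device constants, for illustration only.
one_stage = synchronizer_mtbf(t_r=2e-9, tau=50e-12, t0=100e-12,
                              f_clk=100e6, f_data=1e6)
two_stage = synchronizer_mtbf(t_r=12e-9, tau=50e-12, t0=100e-12,
                              f_clk=100e6, f_data=1e6)  # one extra 10 ns clock period
print(f"one-stage MTBF ~ {one_stage:.3e} s, two-stage MTBF ~ {two_stage:.3e} s")
```

The extra flip-flop stage enters the model as additional resolution time, which is why each added stage improves the MTBF exponentially.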
Schmitt triggers can also be used to reduce the likelihood of metastability, but as the researcher Chaney demonstrated in 1979, even Schmitt triggers may become metastable. He further argued that it is not possible to entirely remove the possibility of metastability from unsynchronized inputs within finite time and that "there is a great deal of theoretical and experimental evidence that a region of anomalous behavior exists for every device that has two stable states." In the face of this inevitability, hardware can only reduce the probability of metastability, and systems can try to gracefully handle the occasional metastable event.
Failure modes
Although metastability is well understood and architectural techniques to control it are known, it persists as a failure mode in equipment.
Serious computer and digital hardware bugs caused by metastability have a fascinating social history. Many engineers have refused to believe that a bistable device can enter into a state that is neither true nor false and has a positive probability of remaining unresolved for any given period of time, albeit with exponentially decreasing probability over time. However, metastability is an inevitable result of any attempt to map a continuous domain to a discrete one. At the boundaries in the continuous domain between regions which map to different discrete outputs, points arbitrarily close together in the continuous domain map to different outputs, making a decision as to which output to select a difficult and potentially lengthy process. If the inputs to an arbiter or flip-flop arrive almost simultaneously, the circuit most likely will traverse a point of metastability. Metastability remains poorly understood in some circles, and various engineers have proposed their own circuits said to solve or filter out the metastability; typically these circuits simply shift the occurrence of metastability from one place to another. Chips using multiple clock sources are often tested with tester clocks that have fixed phase relationships, not the independent clocks drifting past each other that will be experienced during operation. This usually explicitly prevents the metastable failure mode that will occur in the field from being seen or reported. Proper testing for metastability frequently employs clocks of slightly different frequencies while verifying correct circuit operation.
See also
Analog-to-digital converter
Buridan's ass
Asynchronous CPU
Ground bounce
Tri-state logic
References
External links
Metastability Performance of Clocked FIFOs
The 'Asynchronous' Bibliography
Asynchronous Logic
Efficient Self-Timed Interfaces for Crossing Clock Domains
Dr. Howard Johnson: Deliberately inducing the metastable state
Detailed explanations and Synchronizer designs
Metastability Bibliography
Clock Domain Crossing: Closing the Loop on Clock Domain Functional Implementation Problems, Cadence Design Systems
Stephenson, Jennifer. Understanding Metastability in FPGAs. Altera Corporation white paper. July 2009.
Bahukhandi, Ashirwad. Metastability. Lecture Notes for Advanced Logic Design and Switching Theory. January 2002.
Cummings, Clifford E. Synthesis and Scripting Techniques for Designing Multi-Asynchronous Clock Designs. SNUG 2001.
Haseloff, Eilhard. Metastable Response in 5-V Logic Circuits. Texas Instruments Report. February 1997.
Nystrom, Mika, and Alain J. Martin. Crossing the Synchronous Asynchronous Divide. WCED 2002.
Patil, Girish, IFV Division, Cadence Design Systems. Clock Synchronization Issues and Static Verification Techniques. Cadence Technical Conference 2004.
Smith, Michael John Sebastian. Application-Specific Integrated Circuits. Addison Wesley Longman, 1997, Chapter 6.4.1.
Stein, Mike. Crossing the abyss: asynchronous signals in a synchronous world EDN design feature. July 24, 2003.
Cox, Jerome R. and Engel, George L., Blendics, Inc. White Paper "Metastability and Fatal System Errors"] Nov. 2010
Adam Taylor, "Wrapping One's Brain Around Metastability", EE Times, 2013-11-20
Electrical engineering
Digital electronics | Metastability (electronics) | [
"Engineering"
] | 1,665 | [
"Electrical engineering",
"Electronic engineering",
"Digital electronics"
] |
2,278,116 | https://en.wikipedia.org/wiki/Special%20right%20triangle | A special right triangle is a right triangle with some regular feature that makes calculations on the triangle easier, or for which simple formulas exist. For example, a right triangle may have angles that form simple relationships, such as 45°–45°–90°. This is called an "angle-based" right triangle. A "side-based" right triangle is one in which the lengths of the sides form ratios of whole numbers, such as 3 : 4 : 5, or of other special numbers such as the golden ratio. Knowing the relationships of the angles or ratios of sides of these special right triangles allows one to quickly calculate various lengths in geometric problems without resorting to more advanced methods.
Angle-based
Angle-based special right triangles are specified by the relationships of the angles of which the triangle is composed. The angles of these triangles are such that the larger (right) angle, which is 90 degrees or π/2 radians, is equal to the sum of the other two angles.
The side lengths are generally deduced from the basis of the unit circle or other geometric methods. This approach may be used to rapidly reproduce the values of trigonometric functions for the angles 30°, 45°, and 60°.
Special triangles are used to aid in calculating common trigonometric functions of these angles.
The 45°–45°–90° triangle, the 30°–60°–90° triangle, and the equilateral/equiangular (60°–60°–60°) triangle are the three Möbius triangles in the plane, meaning that they tessellate the plane via reflections in their sides; see Triangle group.
45° - 45° - 90° triangle
In plane geometry, dividing a square along its diagonal results in two isosceles right triangles, each with one right angle (90°, or π/2 radians) and two other congruent angles each measuring half of a right angle (45°, or π/4 radians). The sides in this triangle are in the ratio 1 : 1 : √2, which follows immediately from the Pythagorean theorem.
Of all right triangles, such 45° - 45° - 90° triangles have the smallest ratio of the hypotenuse to the sum of the legs, namely √2/2, and the greatest ratio of the altitude from the hypotenuse to the sum of the legs, namely √2/4.
Triangles with these angles are the only possible right triangles that are also isosceles triangles in Euclidean geometry. However, in spherical geometry and hyperbolic geometry, there are infinitely many different shapes of right isosceles triangles.
30° - 60° - 90° triangle
This is a triangle whose three angles are in the ratio 1 : 2 : 3 and respectively measure 30° (π/6), 60° (π/3), and 90° (π/2). The sides are in the ratio 1 : √3 : 2.
The proof of this fact is clear using trigonometry. The geometric proof is:
Draw an equilateral triangle ABC with side length 2 and with point D as the midpoint of segment BC. Draw an altitude line from A to D. Then ABD is a 30°–60°–90° triangle with hypotenuse of length 2, and base BD of length 1.
The fact that the remaining leg AD has length follows immediately from the Pythagorean theorem.
The 30°–60°–90° triangle is the only right triangle whose angles are in an arithmetic progression. The proof of this fact is simple and follows from the fact that if α, α + δ, α + 2δ are the angles in the progression then the sum of the angles 3α + 3δ = 180°. After dividing by 3, the middle angle α + δ must be 60°. The right angle is 90°, leaving the remaining angle to be 30°.
Side-based
Right triangles whose sides are of integer lengths, with the sides collectively known as Pythagorean triples, possess angles that cannot all be rational numbers of degrees. (This follows from Niven's theorem.) They are most useful in that they may be easily remembered and any multiple of the sides produces the same relationship. Using Euclid's formula for generating Pythagorean triples, the sides must be in the ratio
m² − n² : 2mn : m² + n², where m and n are any positive integers such that m > n.
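As an illustrative sketch (not part of the article), Euclid's formula can be applied directly to enumerate these ratios; the helper below lists the primitive triples for small m and n.

```python
from math import gcd

def euclid_triple(m: int, n: int) -> tuple[int, int, int]:
    """Return the Pythagorean triple (m^2 - n^2, 2*m*n, m^2 + n^2) for m > n > 0."""
    if not (m > n > 0):
        raise ValueError("require m > n > 0")
    return m * m - n * n, 2 * m * n, m * m + n * n

def primitive_triples(limit: int):
    """Yield primitive triples (legs sorted, hypotenuse last) with m up to `limit`."""
    for m in range(2, limit + 1):
        for n in range(1, m):
            # the triple is primitive iff m, n are coprime and not both odd
            if gcd(m, n) == 1 and (m - n) % 2 == 1:
                a, b, c = euclid_triple(m, n)
                yield tuple(sorted((a, b))) + (c,)

print(sorted(primitive_triples(5)))
# includes (3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17), (9, 40, 41), (20, 21, 29), ...
```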
Common Pythagorean triples
There are several Pythagorean triples which are well-known, including those with sides in the ratios:
{| border="0" cellpadding="1" cellspacing="0"
|align="right"|3 :||align="right"| 4 :||align="right"| 5
|-
|align="right"|5 :||align="right"|12 :||align="right"|13
|-
|align="right"|8 :||align="right"|15 :||align="right"|17
|-
|align="right"|7 :||align="right"|24 :||align="right"|25
|-
|align="right"|9 :||align="right"|40 :||align="right"|41
|}
The 3 : 4 : 5 triangles are the only right triangles with edges in arithmetic progression. Triangles based on Pythagorean triples are Heronian, meaning they have integer area as well as integer sides.
The possible use of the 3 : 4 : 5 triangle in Ancient Egypt, with the supposed use of a knotted rope to lay out such a triangle, and the question whether Pythagoras' theorem was known at that time, have been much debated. It was first conjectured by the historian Moritz Cantor in 1882. It is known that right angles were laid out accurately in Ancient Egypt; that their surveyors did use ropes for measurement; that Plutarch recorded in Isis and Osiris (around 100 AD) that the Egyptians admired the 3 : 4 : 5 triangle; and that the Berlin Papyrus 6619 from the Middle Kingdom of Egypt (before 1700 BC) stated that "the area of a square of 100 is equal to that of two smaller squares. The side of one is ½ + ¼ the side of the other." The historian of mathematics Roger L. Cooke observes that "It is hard to imagine anyone being interested in such conditions without knowing the Pythagorean theorem." Against this, Cooke notes that no Egyptian text before 300 BC actually mentions the use of the theorem to find the length of a triangle's sides, and that there are simpler ways to construct a right angle. Cooke concludes that Cantor's conjecture remains uncertain: he guesses that the Ancient Egyptians probably did know the Pythagorean theorem, but that "there is no evidence that they used it to construct right angles".
The following are all the Pythagorean triple ratios expressed in lowest form (beyond the five smallest ones in lowest form in the list above) with both non-hypotenuse sides less than 256:
{| border="0" cellpadding="1" cellspacing="0" align="left" style="margin-right: 2em"
|align="right"|11 :||align="right"| 60 :||align="right"| 61
|-
|align="right"|12 :||align="right"| 35 :||align="right"| 37
|-
|align="right"|13 :||align="right"| 84 :||align="right"| 85
|-
|align="right"|15 :||align="right"|112 :||align="right"|113
|-
|align="right"|16 :||align="right"| 63 :||align="right"| 65
|-
|align="right"|17 :||align="right"|144 :||align="right"|145
|-
|align="right"|19 :||align="right"|180 :||align="right"|181
|-
|align="right"|20 :||align="right"| 21 :||align="right"| 29
|-
|align="right"|20 :||align="right"| 99 :||align="right"|101
|-
|align="right"|21 :||align="right"|220 :||align="right"|:221
|}
Almost-isosceles Pythagorean triples
Isosceles right-angled triangles cannot have sides with integer values, because the ratio of the hypotenuse to either other side is √2, which cannot be expressed as a ratio of two integers. However, infinitely many almost-isosceles right triangles do exist. These are right-angled triangles with integer sides for which the lengths of the non-hypotenuse edges differ by one. Such almost-isosceles right-angled triangles can be obtained recursively,
a0 = 1, b0 = 2
an = 2bn−1 + an−1
bn = 2an + bn−1
an is the length of the hypotenuse, n = 1, 2, 3, .... Equivalently, the sides are ((x − 1)/2, (x + 1)/2, y),
where {x, y} are solutions to the Pell equation x² − 2y² = −1, with the hypotenuse y being the odd terms of the Pell numbers 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, .... The smallest Pythagorean triples resulting are:
{| border="0" cellpadding="1" cellspacing="0" align="left" style="margin-right: 3em"
|align="right"| 3 :||align="right"| 4 :||align="right"| 5
|-
|align="right"| 20 :||align="right"| 21 :||align="right"| 29
|-
|align="right"| 119 :||align="right"| 120 :||align="right"| 169
|-
|align="right"| 696 :||align="right"| 697 :||align="right"| 985
|}
Alternatively, the same triangles can be derived from the square triangular numbers.
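A small illustrative sketch (not from the article) of the recursion above; it reproduces the triples listed in the table, with the two legs recovered from each hypotenuse aₙ.

```python
import math

def almost_isosceles(count: int):
    """
    Generate almost-isosceles Pythagorean triples from the recursion above:
    a_n = 2*b_(n-1) + a_(n-1), b_n = 2*a_n + b_(n-1), with a_0 = 1, b_0 = 2.
    a_n is the hypotenuse; the two legs are consecutive integers.
    """
    a, b = 1, 2
    triples = []
    for _ in range(count):
        a, b_prev = 2 * b + a, b      # a_n, while remembering b_(n-1)
        b = 2 * a + b_prev            # b_n
        # legs x, x+1 satisfy x^2 + (x+1)^2 = a^2  =>  x = (isqrt(2*a^2 - 1) - 1) / 2
        x = (math.isqrt(2 * a * a - 1) - 1) // 2
        triples.append((x, x + 1, a))
    return triples

print(almost_isosceles(4))
# [(3, 4, 5), (20, 21, 29), (119, 120, 169), (696, 697, 985)]
```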
Arithmetic and geometric progressions
The Kepler triangle is a right triangle whose sides are in geometric progression. If the sides are formed from the geometric progression a, ar, ar2 then its common ratio r is given by r = √φ, where φ is the golden ratio. Its sides are therefore in the ratio 1 : √φ : φ. Thus, the shape of the Kepler triangle is uniquely determined (up to a scale factor) by the requirement that its sides be in geometric progression.
The 3–4–5 triangle is the unique right triangle (up to scaling) whose sides are in arithmetic progression.
Sides of regular polygons
Let a = 2 sin(π/10) = 1/φ be the side length of a regular decagon inscribed in the unit circle, where φ is the golden ratio. Let b = 2 sin(π/6) = 1 be the side length of a regular hexagon in the unit circle, and let c = 2 sin(π/5) be the side length of a regular pentagon in the unit circle. Then a² + b² = c², so these three lengths form the sides of a right triangle. The same triangle forms half of a golden rectangle. It may also be found within a regular icosahedron of side length c: the shortest line segment from any vertex to the plane of its five neighbors has length a, and the endpoints of this line segment together with any of the neighbors of the vertex form the vertices of a right triangle with sides a, b, and c.
See also
Ailles rectangle, combining several special right triangles
Integer triangle
Spiral of Theodorus
References
External links
3 : 4 : 5 triangle
30–60–90 triangle
45–45–90 triangle with interactive animations
Euclidean plane geometry
Types of triangles | Special right triangle | [
"Mathematics"
] | 2,499 | [
"Planes (geometry)",
"Euclidean plane geometry"
] |
2,279,144 | https://en.wikipedia.org/wiki/Microdialysis | Microdialysis is a minimally-invasive sampling technique that is used for continuous measurement of free, unbound analyte concentrations in the extracellular fluid of virtually any tissue. Analytes may include endogenous molecules (e.g. neurotransmitter, hormones, glucose, etc.) to assess their biochemical functions in the body, or exogenous compounds (e.g. pharmaceuticals) to determine their distribution within the body. The microdialysis technique requires the insertion of a small microdialysis catheter (also referred to as microdialysis probe) into the tissue of interest. The microdialysis probe is designed to mimic a blood capillary and consists of a shaft with a semipermeable hollow fiber membrane at its tip, which is connected to inlet and outlet tubing. The probe is continuously perfused with an aqueous solution (perfusate) that closely resembles the (ionic) composition of the surrounding tissue fluid at a low flow rate of approximately 0.1-5μL/min. Once inserted into the tissue or (body)fluid of interest, small solutes can cross the semipermeable membrane by passive diffusion. The direction of the analyte flow is determined by the respective concentration gradient and allows the usage of microdialysis probes as sampling as well as delivery tools. The solution leaving the probe (dialysate) is collected at certain time intervals for analysis.
History
The microdialysis principle was first employed in the early 1960s, when push-pull canulas and dialysis sacs were implanted into animal tissues, especially into rodent brains, to directly study the tissues' biochemistry. While these techniques had a number of experimental drawbacks, such as the number of samples per animal or no/limited time resolution, the invention of continuously perfused dialytrodes in 1972 helped to overcome some of these limitations. Further improvement of the dialytrode concept resulted in the invention of the "hollow fiber", a tubular semipermeable membrane with a diameter of ~200-300μm, in 1974. Today's most prevalent shape, the needle probe, consists of a shaft with a hollow fiber at its tip and can be inserted by means of a guide cannula into the brain and other tissues. An alternative method, open flow micro-perfusion (OFM), replaces the membrane with macroscopic openings which facilitates sampling of lipophilic and hydrophilic compounds, protein bound and unbound drugs, neurotransmitters, peptides and proteins, antibodies, nanoparticles and nanocarriers, enzymes and vesicles.
Microdialysis probes
There are a variety of probes with different membrane and shaft length combinations available. The molecular weight cutoff of commercially available microdialysis probes covers a wide range of approximately 6-100 kDa, but cutoffs as high as 1 MDa are also available. While water-soluble compounds generally diffuse freely across the microdialysis membrane, the situation is not as clear for highly lipophilic analytes, where both successful (e.g. corticosteroids) and unsuccessful microdialysis experiments (e.g. estradiol, fusidic acid) have been reported. However, the recovery of water-soluble compounds usually decreases rapidly if the molecular weight of the analyte exceeds 25% of the membrane's molecular weight cutoff.
Recovery and calibration methods
Due to the constant perfusion of the microdialysis probe with fresh perfusate, a total equilibrium cannot be established. This results in dialysate concentrations that are lower than those measured at the distant sampling site. In order to correlate concentrations measured in the dialysate with those present at the distant sampling site, a calibration factor (recovery) is needed. The recovery can be determined at steady-state using the constant rate of analyte exchange across the microdialysis membrane. The rate at which an analyte is exchanged across the semipermeable membrane is generally expressed as the analyte’s extraction efficiency. The extraction efficiency is defined as the ratio between the loss/gain of analyte during its passage through the probe (Cin−Cout) and the difference in concentration between perfusate and distant sampling site (Cin−Csample).
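As a minimal illustrative sketch (not part of the article; the numbers are invented), the extraction efficiency defined above can be used to back-calculate the concentration at the sampling site from a measured dialysate concentration.

```python
def extraction_efficiency(c_in: float, c_out: float, c_sample: float) -> float:
    """Relative recovery: (C_in - C_out) / (C_in - C_sample)."""
    return (c_in - c_out) / (c_in - c_sample)

def tissue_concentration(c_out: float, recovery: float, c_in: float = 0.0) -> float:
    """
    Back-calculate the free concentration at the sampling site from a dialysate sample.
    For a blank perfusate (C_in = 0) this reduces to C_sample = C_out / recovery.
    """
    return c_in - (c_in - c_out) / recovery

# Invented numbers: blank perfusate, 25% recovery, 1.5 ng/mL measured in the dialysate.
print(tissue_concentration(c_out=1.5, recovery=0.25))   # -> 6.0 ng/mL at the sampling site
```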
In theory, the extraction efficiency of a microdialysis probe can be determined by: 1) changing the drug concentrations while keeping the flow rate constant or 2) changing the flow rate while keeping the respective drug concentrations constant. At steady-state, the same extraction efficiency value is obtained, no matter if the analyte is enriched or depleted in the perfusate. Microdialysis probes can consequently be calibrated by either measuring the loss of analyte using drug-containing perfusate or the gain of analyte using drug-containing sample solutions. To date, the most frequently used calibration methods are the low-flow-rate method, the no-net-flux method, the dynamic (extended) no-net-flux method, and the retrodialysis method. The proper selection of an appropriate calibration method is critically important for the success of a microdialysis experiment. Supportive in vitro experiments prior to the use in animals or humans are therefore recommended. In addition, the recovery determined in vitro may differ from the recovery in humans. Its actual value therefore needs to be determined in every in vivo experiment.
Low-flow-rate method
The low-flow-rate method is based on the fact that the extraction efficiency is dependent on the flow-rate. At high flow-rates, the amount of drug diffusing from the sampling site into the dialysate per unit time is smaller (low extraction efficiency) than at lower flow-rates (high extraction efficiency). At a flow-rate of zero, a total equilibrium between these two sites is established (Cout = Csample). This concept is applied for the (low-)flow-rate method, where the probe is perfused with blank perfusate at different flow-rates. Concentration at the sampling site can be determined by plotting the extraction ratios against the corresponding flow-rates and extrapolating to zero-flow. The low-flow-rate method is limited by the fact that calibration times may be rather long before a sufficient sample volume has been collected.
No-net-flux-method
During calibration with the no-net-flux-method, the microdialysis probe is perfused with at least four different concentrations of the analyte of interest (Cin) and steady-state concentrations of the analyte leaving the probe are measured in the dialysate (Cout). The recovery for this method can be determined by plotting Cout−Cin over Cin and computing the slope of the regression line. If analyte concentrations in the perfusate are equal to concentrations at the sampling site, no-net flux occurs. Respective concentrations at the no-net-flux point are represented by the x-intercept of the regression line. The strength of this method is that, at steady-state, no assumptions about the behaviour of the compound in the vicinity of the probe have to be made, since equilibrium exists at a specific time and place. However, under transient conditions (e.g. after drug challenge), the probe recovery may be altered resulting in biased estimates of the concentrations at the sampling site. To overcome this limitation, several approaches have been developed that are also applicable under non-steady-state conditions. One of these approaches is the dynamic no-net-flux method.
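A minimal sketch (not from the article; the calibration data are invented) of the no-net-flux regression: Cout − Cin is regressed on Cin, the slope gives the negative of the recovery, and the x-intercept estimates the concentration at the sampling site.

```python
import numpy as np

# Invented steady-state calibration data: perfusate concentrations (C_in) and the
# corresponding dialysate concentrations (C_out), in the same arbitrary units.
c_in  = np.array([0.0, 2.0, 4.0, 8.0])
c_out = np.array([1.5, 2.0, 2.5, 3.5])

# Regress (C_out - C_in) on C_in: slope = -recovery, x-intercept = no-net-flux point.
slope, intercept = np.polyfit(c_in, c_out - c_in, deg=1)
recovery = -slope
c_nnf = -intercept / slope            # concentration where C_out - C_in = 0

print(f"recovery ~ {recovery:.2f}, concentration at sampling site ~ {c_nnf:.2f}")
# with these invented data: recovery ~ 0.75, concentration at sampling site ~ 2.00
```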
Dynamic no-net-flux method
While a single subject/animal is perfused with multiple concentrations during the no-net-flux method, multiple subjects are perfused with a single concentration during the dynamic no-net-flux (DNNF) method. Data from the different subjects/animals is then combined at each time point for regression analysis allowing determination of the recovery over time. The design of the DNNF calibration method has proven very useful for studies that evaluate the response of endogenous compounds, such as neurotransmitters, to drug challenge.
Retrodialysis
During retrodialysis, the microdialysis probe is perfused with an analyte-containing solution and the disappearance of drug from the probe is monitored. The recovery for this method can be computed as the ratio of drug lost during passage (Cin−Cout) and drug entering the microdialysis probe (Cin). In principle, retrodialysis can be performed using either the analyte itself (retrodialysis by drug) or a reference compound (retrodialysis by calibrator) that closely resembles both the physiochemical and the biological properties of the analyte. Despite the fact that retrodialysis by drug cannot be used for endogenous compounds as it requires absence of analyte from the sampling site, this calibration method is most commonly used for exogenous compounds in clinical settings.
Applications
The microdialysis technique has undergone much development since its first use in 1972, when it was first employed to monitor concentrations of endogenous biomolecules in the brain. Today's area of application has expanded to monitoring free concentrations of endogenous as well as exogenous compounds in virtually any tissue. Although microdialysis is still primarily used in preclinical animal studies (e.g. laboratory rodents, dogs, sheep, pigs), it is now increasingly employed in humans to monitor free, unbound drug tissue concentrations as well as interstitial concentrations of regulatory cytokines and metabolites in response to homeostatic perturbations such as feeding and/or exercise.
When employed in brain research, microdialysis is commonly used to measure neurotransmitters (e.g. dopamine, serotonin, norepinephrine, acetylcholine, glutamate, GABA) and their metabolites, as well as small neuromodulators (e.g. cAMP, cGMP, NO), amino acids (e.g. glycine, cysteine, tyrosine), and energy substrates (e.g. glucose, lactate, pyruvate). Exogenous drugs to be analyzed by microdialysis include new antidepressants, antipsychotics, as well as antibiotics and many other drugs that have their pharmacological effect site in the brain. The first non-metabolite to be analyzed by microdialysis in vivo in the human brain was rifampicin.
Applications in other organs include the skin (assessment of bioavailability and bioequivalence of topically applied dermatological drug products), and monitoring of glucose concentrations in patients with diabetes (intravascular or subcutaneous probe placement). The latter may even be incorporated into an artificial pancreas system for automated insulin administration.
Microdialysis has also found increasing application in environmental research, sampling a diversity of compounds from waste-water and soil solution, including saccharides, metal ions, micronutrients, organic acids, and low molecular weight nitrogen. Given the destructive nature of conventional soil sampling methods, microdialysis has potential to estimate fluxes of soil ions that better reflect an undisturbed soil environment.
Critical analysis
Advantages
To date, microdialysis is the only in vivo sampling technique that can continuously monitor drug or metabolite concentrations in the extracellular fluid of virtually any tissue. Depending on the exact application, analyte concentrations can be monitored over several hours, days, or even weeks. Free, unbound extracellular tissue concentrations are in many cases of particular interest as they resemble pharmacologically active concentrations at or close to the site of action. Combination of microdialysis with modern imaging techniques, such as positron emission tomography, further allows for determination of intracellular concentrations.
Insertion of the probe in a precise location of the selected tissue further allows for evaluation of extracellular concentration gradients due to transporter activity or other factors, such as perfusion differences. It has, therefore, been suggested as the most appropriate technique to be used for tissue distribution studies.
Exchange of analyte across the semipermeable membrane and constant replacement of the sampling fluid with fresh perfusate prevents drainage of fluid from the sampling site, which allows sampling without fluid loss. Microdialysis can consequently be used without disturbing the tissue conditions by local fluid loss or pressure artifacts, which can occur when using other techniques, such as microinjection or push-pull perfusion.
The semipermeable membrane prevents cells, cellular debris, and proteins from entering into the dialysate. Due to the lack of protein in the dialysate, a sample clean-up prior to analysis is not needed and enzymatic degradation is not a concern.
Limitations
Despite scientific advances in making microdialysis probes smaller and more efficient, the invasive nature of this technique still poses some practical and ethical limitations. For example, it has been shown that implantation of a microdialysis probe can alter tissue morphology resulting in disturbed microcirculation, rate of metabolism or integrity of physiological barriers, such as the blood–brain barrier. While acute reactions to probe insertion, such as implantation traumas, require sufficient recovery time, additional factors, such as necrosis, inflammatory responses, or wound healing processes have to be taken into consideration for long-term sampling as they may influence the experimental outcome. From a practical perspective, it has been suggested to perform microdialysis experiments within an optimal time window, usually 24–48 hours after probe insertion.
Microdialysis has a relatively low temporal and spatial resolution compared to, for example, electrochemical biosensors. While the temporal resolution is determined by the length of the sampling intervals (usually a few minutes), the spatial resolution is determined by the dimensions of the probe. The probe size can vary between different areas of application and covers a range of a few millimeters (intracerebral application) up to a few centimeters (subcutaneous application) in length and a few hundred micrometers in diameter.
Application of the microdialysis technique is often limited by the determination of the probe’s recovery, especially for in vivo experiments. Determination of the recovery may be time-consuming and may require additional subjects or pilot experiments. The recovery is largely dependent on the flow rate: the lower the flow rate, the higher the recovery. However, in practice the flow rate cannot be decreased too much since either the sample volume obtained for analysis will be insufficient or the temporal resolution of the experiment will be lost. It is therefore important to optimize the relationship between flow rate and the sensitivity of the analytical assay. The situation may be more complex for lipophilic compounds as they can stick to the tubing or other probe components, resulting in a low or no analyte recovery.
References
Biochemistry methods
Cell biology
Membrane technology | Microdialysis | [
"Chemistry",
"Biology"
] | 3,075 | [
"Biochemistry methods",
"Cell biology",
"Separation processes",
"Membrane technology",
"Biochemistry"
] |
2,279,750 | https://en.wikipedia.org/wiki/Propidium%20iodide | Propidium iodide (or PI) is a fluorescent intercalating agent that can be used to stain cells and nucleic acids. PI binds to DNA by intercalating between the bases with little or no sequence preference. When in an aqueous solution, PI has a fluorescent excitation maximum of 493 nm (blue-green), and an emission maximum of 636 nm (red). After binding DNA, the quantum yield of PI is enhanced 20-30 fold, and the excitation/emission maximum of PI is shifted to 535 nm (green) / 617 nm (orange-red). Propidium iodide is used as a DNA stain in flow cytometry to evaluate cell viability or DNA content in cell cycle analysis, or in microscopy to visualize the nucleus and other DNA-containing organelles. Propidium Iodide is not membrane-permeable, making it useful to differentiate necrotic, apoptotic and healthy cells based on membrane integrity. PI also binds to RNA, necessitating treatment with nucleases to distinguish between RNA and DNA staining. PI is widely used in fluorescence staining and visualization of the plant cell wall.
See also
Viability assay
Vital stain
SYBR Green I
Ethidium bromide
References
Flow cytometry
DNA-binding substances
Iodides
Phenanthridine dyes
Staining dyes | Propidium iodide | [
"Chemistry",
"Biology"
] | 294 | [
"Genetics techniques",
"DNA-binding substances",
"Flow cytometry"
] |
357,881 | https://en.wikipedia.org/wiki/Test-driven%20development | Test-driven development (TDD) is a way of writing code that involves writing an automated unit-level test case that fails, then writing just enough code to make the test pass, then refactoring both the test code and the production code, then repeating with another new test case.
Alternative approaches to writing automated tests are to write all of the production code before starting on the test code, or to write all of the test code before starting on the production code. With TDD, both are written together, thereby shortening debugging time.
TDD is related to the test-first programming concepts of extreme programming, begun in 1999, but more recently has created more general interest in its own right.
Programmers also apply the concept to improving and debugging legacy code developed with older techniques.
History
Software engineer Kent Beck, who is credited with having developed or "rediscovered" the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.
Coding cycle
The TDD steps vary somewhat by author in count and description, but are generally as follows. These are based on the book Test-Driven Development by Example, and Kent Beck's Canon TDD article.
1. List scenarios for the new feature
List the expected variants in the new behavior. “There’s the basic case & then what-if this service times out & what-if the key isn’t in the database yet &…” The developer can discover these specifications by asking about use cases and user stories. A key benefit of TDD is that it makes the developer focus on requirements before writing code. This is in contrast with the usual practice, where unit tests are only written after code.
2. Write a test for an item on the list
Write an automated test that would pass if the variant in the new behavior is met.
3. Run all tests. The new test should fail for expected reasons
This shows that new code is actually needed for the desired feature. It validates that the test harness is working correctly. It rules out the possibility that the new test is flawed and will always pass.
4. Write the simplest code that passes the new test
Inelegant code and hard coding is acceptable. The code will be honed in Step 6. No code should be added beyond the tested functionality.
5. All tests should now pass
If any fail, fix failing tests with minimal changes until all pass.
6. Refactor as needed while ensuring all tests continue to pass
Code is refactored for readability and maintainability. In particular, hard-coded test data should be removed from the production code. Running the test suite after each refactor ensures that no existing functionality is broken. Examples of refactoring:
moving code to where it most logically belongs
removing duplicate code
making names self-documenting
splitting methods into smaller pieces
re-arranging inheritance hierarchies
Repeat
Repeat the process, starting at step 2, with each test on the list until all tests are implemented and passing.
Each test should be small, and commits should be made often. If new code fails some tests, the programmer can undo or revert rather than debug excessively.
When using external libraries, it is important not to write tests that are so small as to effectively test merely the library itself, unless there is some reason to believe that the library is buggy or not feature-rich enough to serve all the needs of the software under development.
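As a hedged sketch of one pass through these steps (not part of the original article; the example and all names are invented), using Python's built-in unittest module: the tests are written first and fail, then just enough code is added to make them pass, after which the code can be refactored under the protection of the passing tests.

```python
import unittest

# Step 2/3: the tests are written first; they fail (red) because leap_year()
# does not exist yet, or returns the wrong answer.
class LeapYearTest(unittest.TestCase):
    def test_century_not_divisible_by_400_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_year_divisible_by_400_is_leap(self):
        self.assertTrue(leap_year(2000))

    def test_ordinary_leap_year(self):
        self.assertTrue(leap_year(2024))

# Step 4/5: the simplest implementation that turns the tests green.
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```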
Test-driven work
TDD has been adopted outside of software development, in both product and service teams, as test-driven work. For testing to be successful, it needs to be practiced at the micro and macro levels. Every method in a class, every input data value, log message, and error code, amongst other data points, need to be tested. Similar to TDD, non-software teams develop quality control (QC) checks (usually manual tests rather than automated tests) for each aspect of the work prior to commencing. These QC checks are then used to inform the design and validate the associated outcomes. The six steps of the TDD sequence are applied with minor semantic changes:
"Add a check" replaces "Add a test"
"Run all checks" replaces "Run all tests"
"Do the work" replaces "Write some code"
"Run all checks" replaces "Run tests"
"Clean up the work" replaces "Refactor code"
"Repeat"
Development style
There are various aspects to using test-driven development, for example the principles of "keep it simple, stupid" (KISS) and "You aren't gonna need it" (YAGNI). By focusing on writing only the code necessary to pass tests, designs can often be cleaner and clearer than is achieved by other methods. In Test-Driven Development by Example, Kent Beck also suggests the principle "Fake it till you make it".
To achieve some advanced design concept such as a design pattern, tests are written that generate that design. The code may remain simpler than the target pattern, but still pass all required tests. This can be unsettling at first but it allows the developer to focus only on what is important.
Writing the tests first: The tests should be written before the functionality that is to be tested. This has been claimed to have many benefits. It helps ensure that the application is written for testability, as the developers must consider how to test the application from the outset rather than adding it later. It also ensures that tests for every feature gets written. Additionally, writing the tests first leads to a deeper and earlier understanding of the product requirements, ensures the effectiveness of the test code, and maintains a continual focus on software quality. When writing feature-first code, there is a tendency by developers and organizations to push the developer on to the next feature, even neglecting testing entirely. The first TDD test might not even compile at first, because the classes and methods it requires may not yet exist. Nevertheless, that first test functions as the beginning of an executable specification.
Each test case fails initially: This ensures that the test really works and can catch an error. Once this is shown, the underlying functionality can be implemented. This has led to the "test-driven development mantra", which is "red/green/refactor", where red means fail and green means pass. Test-driven development constantly repeats the steps of adding test cases that fail, passing them, and refactoring. Receiving the expected test results at each stage reinforces the developer's mental model of the code, boosts confidence and increases productivity.
Code visibility
Test code needs access to the code it is testing, but testing should not compromise normal design goals such as information hiding, encapsulation and the separation of concerns. Therefore, unit test code is usually located in the same project or module as the code being tested.
In object oriented design this still does not provide access to private data and methods. Therefore, extra work may be necessary for unit tests. In Java and other languages, a developer can use reflection to access private fields and methods. Alternatively, an inner class can be used to hold the unit tests so they have visibility of the enclosing class's members and attributes. In the .NET Framework and some other programming languages, partial classes may be used to expose private methods and data for the tests to access.
It is important that such testing hacks do not remain in the production code. In C and other languages, compiler directives such as #if DEBUG ... #endif can be placed around such additional classes and indeed all other test-related code to prevent them being compiled into the released code. This means the released code is not exactly the same as what was unit tested. The regular running of fewer but more comprehensive, end-to-end, integration tests on the final release build can ensure (among other things) that no production code exists that subtly relies on aspects of the test harness.
There is some debate among practitioners of TDD, documented in their blogs and other writings, as to whether it is wise to test private methods and data anyway. Some argue that private members are a mere implementation detail that may change, and should be allowed to do so without breaking large numbers of tests. Thus it should be sufficient to test any class through its public interface or through its subclass interface, which some languages call the "protected" interface. Others say that crucial aspects of functionality may be implemented in private methods and testing them directly offers the advantage of smaller and more direct unit tests.
Fakes, mocks and integration tests
Unit tests are so named because they each test one unit of code. A complex module may have a thousand unit tests and a simple module may have only ten. The unit tests used for TDD should never cross process boundaries in a program, let alone network connections. Doing so introduces delays that make tests run slowly and discourage developers from running the whole suite. Introducing dependencies on external modules or data also turns unit tests into integration tests. If one module misbehaves in a chain of interrelated modules, it is not so immediately clear where to look for the cause of the failure.
When code under development relies on a database, a web service, or any other external process or service, enforcing a unit-testable separation is also an opportunity and a driving force to design more modular, more testable and more reusable code. Two steps are necessary:
Whenever external access is needed in the final design, an interface should be defined that describes the access available. See the dependency inversion principle for a discussion of the benefits of doing this regardless of TDD.
The interface should be implemented in two ways, one of which really accesses the external process, and the other of which is a fake or mock. Fake objects need do little more than add a message such as "Person object saved" to a trace log, against which a test assertion can be run to verify correct behaviour. Mock objects differ in that they themselves contain test assertions that can make the test fail, for example, if the person's name and other data are not as expected.
Fake and mock object methods that return data, ostensibly from a data store or user, can help the test process by always returning the same, realistic data that tests can rely upon. They can also be set into predefined fault modes so that error-handling routines can be developed and reliably tested. In a fault mode, a method may return an invalid, incomplete or null response, or may throw an exception. Fake services other than data stores may also be useful in TDD: A fake encryption service may not, in fact, encrypt the data passed; a fake random number service may always return 1. Fake or mock implementations are examples of dependency injection.
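The following hedged sketch (invented names, not from the article) contrasts the two approaches using Python's unittest.mock: a hand-written fake that only records a trace message for a later assertion, and a mock whose expectation about the call lives in the mock itself. The repository is injected into the code under test rather than created inside it.

```python
from unittest import mock

class PersonRepository:
    """Interface-like base class for the external data store; the real
    implementation would talk to a database, which tests never touch."""
    def save(self, person: dict) -> None:
        raise NotImplementedError

class FakePersonRepository(PersonRepository):
    """Fake: does little more than record a trace that a test can assert on."""
    def __init__(self):
        self.log = []
    def save(self, person: dict) -> None:
        self.log.append(f"Person object saved: {person['name']}")

def register_person(repo: PersonRepository, name: str) -> None:
    """Code under test; the repository is injected (dependency injection)."""
    repo.save({"name": name.strip().title()})

# Using the fake: the assertion is made against the trace it recorded.
fake = FakePersonRepository()
register_person(fake, "  ada lovelace ")
assert fake.log == ["Person object saved: Ada Lovelace"]

# Using a mock: the expectation about the call is checked on the mock itself.
mocked = mock.create_autospec(PersonRepository, instance=True)
register_person(mocked, "grace hopper")
mocked.save.assert_called_once_with({"name": "Grace Hopper"})
```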
A test double is a test-specific capability that substitutes for a system capability, typically a class or function, that the UUT depends on. There are two times at which test doubles can be introduced into a system: link and execution. Link time substitution is when the test double is compiled into the load module, which is executed to validate testing. This approach is typically used when running in an environment other than the target environment that requires doubles for the hardware level code for compilation. The alternative to linker substitution is run-time substitution in which the real functionality is replaced during the execution of a test case. This substitution is typically done through the reassignment of known function pointers or object replacement.
Test doubles are of a number of different types and varying complexities; a brief sketch contrasting a stub and a spy follows the list below:
Dummy – A dummy is the simplest form of a test double. It facilitates linker time substitution by providing a default return value where required.
Stub – A stub adds simplistic logic to a dummy, providing different outputs.
Spy – A spy captures and makes available parameter and state information, publishing accessors to test code for private information allowing for more advanced state validation.
Mock – A mock is specified by an individual test case to validate test-specific behavior, checking parameter values and call sequencing.
Simulator – A simulator is a comprehensive component providing a higher-fidelity approximation of the target capability (the thing being doubled). A simulator typically requires significant additional development effort.
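The following short Python sketch contrasts a stub with a spy; the StubClock, SpyLogger and log_timestamp names are hypothetical and chosen only to illustrate the distinction.
    class StubClock:                              # stub: canned, predictable output
        def now(self):
            return 1000

    class SpyLogger:                              # spy: records how it was called
        def __init__(self):
            self.messages = []

        def write(self, message):
            self.messages.append(message)

    def log_timestamp(clock, logger):             # unit under test
        logger.write("tick at %d" % clock.now())

    spy = SpyLogger()
    log_timestamp(StubClock(), spy)
    assert spy.messages == ["tick at 1000"]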
A corollary of such dependency injection is that the actual database or other external-access code is never tested by the TDD process itself. To avoid errors that may arise from this, other tests are needed that instantiate the test-driven code with the "real" implementations of the interfaces discussed above. These are integration tests and are quite separate from the TDD unit tests. There are fewer of them, and they must be run less often than the unit tests. They can nonetheless be implemented using the same testing framework.
Integration tests that alter any persistent store or database should always be designed carefully with consideration of the initial and final state of the files or database, even if any test fails. This is often achieved using some combination of the following techniques (a sketch of the clean-state approach follows the list):
The TearDown method, which is integral to many test frameworks.
try...catch...finally exception handling structures where available.
Database transactions where a transaction atomically includes perhaps a write, a read and a matching delete operation.
Taking a "snapshot" of the database before running any tests and rolling back to the snapshot after each test run. This may be automated using a framework such as Ant or NAnt or a continuous integration system such as CruiseControl.
Initialising the database to a clean state before tests, rather than cleaning up after them. This may be relevant where cleaning up may make it difficult to diagnose test failures by deleting the final state of the database before detailed diagnosis can be performed.
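A minimal Python sketch of the last technique, initialising a clean database state before each test, is shown below; it uses an in-memory SQLite database from the standard library, and the orders table and its values are hypothetical.
    import sqlite3
    import unittest

    class OrderTableTest(unittest.TestCase):
        def setUp(self):
            # Initialise a clean, in-memory database before every test.
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE orders (id INTEGER, total REAL)")

        def tearDown(self):
            self.db.close()                       # the in-memory database is discarded

        def test_insert_and_read_back(self):
            self.db.execute("INSERT INTO orders VALUES (1, 9.99)")
            row = self.db.execute("SELECT total FROM orders WHERE id = 1").fetchone()
            self.assertEqual(row[0], 9.99)

    if __name__ == "__main__":
        unittest.main()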
Keep the unit small
For TDD, a unit is most commonly defined as a class, or a group of related functions often called a module. Keeping units relatively small is claimed to provide critical benefits, including:
Reduced debugging effort – When test failures are detected, having smaller units aids in tracking down errors.
Self-documenting tests – Small test cases are easier to read and to understand.
Advanced practices of test-driven development can lead to acceptance test–driven development (ATDD) and specification by example where the criteria specified by the customer are automated into acceptance tests, which then drive the traditional unit test-driven development (UTDD) process. This process ensures the customer has an automated mechanism to decide whether the software meets their requirements. With ATDD, the development team now has a specific target to satisfy – the acceptance tests – which keeps them continuously focused on what the customer really wants from each user story.
Best practices
Test structure
Effective layout of a test case ensures all required actions are completed, improves the readability of the test case, and smooths the flow of execution. Consistent structure helps in building a self-documenting test case. A commonly applied structure for test cases has (1) setup, (2) execution, (3) validation, and (4) cleanup, as illustrated in the sketch after the list below.
Setup: Put the Unit Under Test (UUT) or the overall test system in the state needed to run the test.
Execution: Trigger/drive the UUT to perform the target behavior and capture all output, such as return values and output parameters. This step is usually very simple.
Validation: Ensure the results of the test are correct. These results may include explicit outputs captured during execution or state changes in the UUT.
Cleanup: Restore the UUT or the overall test system to the pre-test state. This restoration permits another test to execute immediately after this one. In some cases, in order to preserve the information for possible test failure analysis, the cleanup can instead be performed at the start of a test, just before that test's setup runs.
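As a minimal illustration, the Python unittest sketch below marks the four phases inside a single test; the Stack class is a hypothetical unit under test.
    import unittest

    class Stack:                                   # hypothetical unit under test
        def __init__(self):
            self._items = []
        def push(self, item):
            self._items.append(item)
        def pop(self):
            return self._items.pop()

    class StackTest(unittest.TestCase):
        def test_push_then_pop_returns_last_item(self):
            # (1) Setup: put the UUT in the state needed for the test.
            stack = Stack()
            stack.push("a")
            # (2) Execution: trigger the target behaviour and capture the output.
            result = stack.pop()
            # (3) Validation: ensure the result is correct.
            self.assertEqual(result, "a")
            # (4) Cleanup: nothing persistent to restore here; frameworks
            #     provide tearDown for shared resources.

    if __name__ == "__main__":
        unittest.main()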
Individual best practices
Some best practices that an individual could follow would be to separate common set-up and tear-down logic into test support services utilized by the appropriate test cases, to keep each test oracle focused on only the results necessary to validate its test, and to design time-related tests to allow tolerance for execution in non-real time operating systems. The common practice of allowing a 5-10 percent margin for late execution reduces the potential number of false negatives in test execution. It is also suggested to treat test code with the same respect as production code. Test code must work correctly for both positive and negative cases, last a long time, and be readable and maintainable. Teams can get together and review tests and test practices to share effective techniques and catch bad habits.
Practices to avoid, or "anti-patterns"
Having test cases depend on system state manipulated from previously executed test cases (i.e., you should always start a unit test from a known and pre-configured state).
Dependencies between test cases. A test suite where test cases are dependent upon each other is brittle and complex. Execution order should not be presumed. Basic refactoring of the initial test cases or structure of the UUT causes a spiral of increasingly pervasive impacts in associated tests.
Interdependent tests. Interdependent tests can cause cascading false negatives. A failure in an early test case breaks a later test case even if no actual fault exists in the UUT, increasing defect analysis and debug efforts.
Testing precise execution, behavior, timing or performance.
Building "all-knowing oracles". An oracle that inspects more than necessary is more expensive and brittle over time. This very common error is dangerous because it causes a subtle but pervasive time sink across the complex project.
Testing implementation details.
Slow running tests.
Comparison and demarcation
TDD and ATDD
Test-driven development is related to, but different from acceptance test–driven development (ATDD). TDD is primarily a developer's tool to help create well-written unit of code (function, class, or module) that correctly performs a set of operations. ATDD is a communication tool between the customer, developer, and tester to ensure that the requirements are well-defined. TDD requires test automation. ATDD does not, although automation helps with regression testing. Tests used in TDD can often be derived from ATDD tests, since the code units implement some portion of a requirement. ATDD tests should be readable by the customer. TDD tests do not need to be.
TDD and BDD
BDD (behavior-driven development) combines practices from TDD and from ATDD.
It includes the practice of writing tests first, but focuses on tests which describe behavior, rather than tests which test a unit of implementation. Tools such as JBehave, Cucumber, Mspec and Specflow provide syntaxes which allow product owners, developers and test engineers to define together the behaviors which can then be translated into automated tests.
Software for TDD
There are many testing frameworks and tools that are useful in TDD.
xUnit frameworks
Developers may use computer-assisted testing frameworks, commonly collectively named xUnit (which are derived from SUnit, created in 1998), to create and automatically run the test cases. xUnit frameworks provide assertion-style test validation capabilities and result reporting. These capabilities are critical for automation as they move the burden of execution validation from an independent post-processing activity to one that is included in the test execution. The execution framework provided by these test frameworks allows for the automatic execution of all system test cases or various subsets along with other features.
TAP results
Testing frameworks may accept unit test output in the language-agnostic Test Anything Protocol created in 1987.
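For illustration, a small TAP report might look like the following; the test names are hypothetical.
    1..3
    ok 1 - parses an empty string
    ok 2 - parses a single token
    not ok 3 - handles unterminated quotes
    # Failed test 'handles unterminated quotes'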
TDD for complex systems
Exercising TDD on large, challenging systems requires a modular architecture, well-defined components with published interfaces, and disciplined system layering with maximization of platform independence. These proven practices yield increased testability and facilitate the application of build and test automation.
Designing for testability
Complex systems require an architecture that meets a range of requirements. A key subset of these requirements includes support for the complete and effective testing of the system. Effective modular design yields components that share traits essential for effective TDD.
High Cohesion ensures each unit provides a set of related capabilities and makes the tests of those capabilities easier to maintain.
Low Coupling allows each unit to be effectively tested in isolation.
Published Interfaces restrict Component access and serve as contact points for tests, facilitating test creation and ensuring the highest fidelity between test and production unit configuration.
A key technique for building effective modular architecture is Scenario Modeling where a set of sequence charts is constructed, each one focusing on a single system-level execution scenario. The Scenario Model provides an excellent vehicle for creating the strategy of interactions between components in response to a specific stimulus. Each of these Scenario Models serves as a rich set of requirements for the services or functions that a component must provide, and it also dictates the order in which these components and services interact together. Scenario modeling can greatly facilitate the construction of TDD tests for a complex system.
Managing tests for large teams
In a larger system, the impact of poor component quality is magnified by the complexity of interactions. This magnification makes the benefits of TDD accrue even faster in the context of larger projects. However, the complexity of the total population of tests can become a problem in itself, eroding potential gains. It sounds simple, but a key initial step is to recognize that test code is also important software and should be produced and maintained with the same rigor as the production code.
Creating and managing the architecture of test software within a complex system is just as important as the core product architecture. Test drivers interact with the UUT, test doubles and the unit test framework.
Advantages and disadvantages of test-driven development
Advantages
Test Driven Development (TDD) is a software development approach where tests are written before the actual code. It offers several advantages:
Comprehensive Test Coverage: TDD ensures that all new code is covered by at least one test, leading to more robust software.
Enhanced Confidence in Code: Developers gain greater confidence in the code's reliability and functionality.
Enhanced Confidence in Tests: Because each test is first seen to fail before the corresponding implementation exists, there is evidence that the tests actually exercise the implementation.
Well-Documented Code: The process naturally results in well-documented code, as each test clarifies the purpose of the code it tests.
Requirement Clarity: TDD encourages a clear understanding of requirements before coding begins.
Facilitates Continuous Integration: It integrates well with continuous integration processes, allowing for frequent code updates and testing.
Boosts Productivity: Many developers find that TDD increases their productivity.
Reinforces Code Mental Model: TDD helps in building a strong mental model of the code's structure and behavior.
Emphasis on Design and Functionality: It encourages a focus on the design, interface, and overall functionality of the program.
Reduces Need for Debugging: By catching issues early in the development process, TDD reduces the need for extensive debugging later.
System Stability: Applications developed with TDD tend to be more stable and less prone to bugs.
Disadvantages
However, TDD is not without its drawbacks:
Increased Code Volume: Implementing TDD can result in a larger codebase as tests add to the total amount of code written.
False Security from Tests: A large number of passing tests can sometimes give a misleading sense of security regarding the code's robustness.
Maintenance Overheads: Maintaining a large suite of tests can add overhead to the development process.
Time-Consuming Test Processes: Writing and maintaining tests can be time-consuming.
Testing Environment Set-Up: TDD requires setting up and maintaining a suitable testing environment.
Learning Curve: It takes time and effort to become proficient in TDD practices.
Overcomplication: An overemphasis on TDD can lead to code that is more complex than necessary.
Neglect of Overall Design: Focusing too narrowly on passing tests can sometimes lead to neglect of the bigger picture in software design.
Increased Costs: The additional time and resources required for TDD can result in higher development costs.
Benefits
A 2005 study found that using TDD meant writing more tests and, in turn, programmers who wrote more tests tended to be more productive. Hypotheses relating to code quality and a more direct correlation between TDD and productivity were inconclusive.
Programmers using pure TDD on new ("greenfield") projects reported they only rarely felt the need to invoke a debugger. Used in conjunction with a version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests may often be more productive than debugging.
Test-driven development offers more than just simple validation of correctness, but can also drive the design of a program. By focusing on the test cases first, one must imagine how the functionality is used by clients (in the first case, the test cases). So, the programmer is concerned with the interface before the implementation. This benefit is complementary to design by contract as it approaches code through test cases rather than through mathematical assertions or preconceptions.
Test-driven development offers the ability to take small steps when required. It allows a programmer to focus on the task at hand as the first goal is to make the test pass. Exceptional cases and error handling are not considered initially, and tests to create these extraneous circumstances are implemented separately. Test-driven development ensures in this way that all written code is covered by at least one test. This gives the programming team, and subsequent users, a greater level of confidence in the code.
While it is true that more code is required with TDD than without TDD because of the unit test code, the total code implementation time could be shorter based on a model by Müller and Padberg. Large numbers of tests help to limit the number of defects in the code. The early and frequent nature of the testing helps to catch defects early in the development cycle, preventing them from becoming endemic and expensive problems. Eliminating defects early in the process usually avoids lengthy and tedious debugging later in the project.
TDD can lead to more modularized, flexible, and extensible code. This effect often comes about because the methodology requires that the developers think of the software in terms of small units that can be written and tested independently and integrated together later. This leads to smaller, more focused classes, looser coupling, and cleaner interfaces. The use of the mock object design pattern also contributes to the overall modularization of the code because this pattern requires that the code be written so that modules can be switched easily between mock versions for unit testing and "real" versions for deployment.
Because no more code is written than necessary to pass a failing test case, automated tests tend to cover every code path. For example, for a TDD developer to add an else branch to an existing if statement, the developer would first have to write a failing test case that motivates the branch. As a result, the automated tests resulting from TDD tend to be very thorough: they detect any unexpected changes in the code's behaviour. This detects problems that can arise where a change later in the development cycle unexpectedly alters other functionality.
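A hypothetical Python example of this progression is sketched below: the second test would be written, and seen to fail, before the else branch of classify existed.
    import unittest

    def classify(n):                               # hypothetical code under test
        if n >= 0:
            return "non-negative"
        else:                                      # branch motivated by the failing test below
            return "negative"

    class ClassifyTest(unittest.TestCase):
        def test_non_negative_path(self):
            self.assertEqual(classify(3), "non-negative")

        def test_negative_path_motivates_the_else_branch(self):
            self.assertEqual(classify(-2), "negative")

    if __name__ == "__main__":
        unittest.main()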
Madeyski provided empirical evidence (via a series of laboratory experiments with over 200 developers) regarding the superiority of the TDD practice over the traditional Test-Last approach or testing for correctness approach, with respect to the lower coupling between objects (CBO). The mean effect size represents a medium (but close to large) effect on the basis of meta-analysis of the performed experiments which is a substantial finding. It suggests a better modularization (i.e., a more modular design), easier reuse and testing of the developed software products due to the TDD programming practice. Madeyski also measured the effect of the TDD practice on unit tests using branch coverage (BC) and mutation score indicator (MSI), which are indicators of the thoroughness and the fault detection effectiveness of unit tests, respectively. The effect size of TDD on branch coverage was medium in size and therefore is considered substantive effect. These findings have been subsequently confirmed by further, smaller experimental evaluations of TDD.
Psychological benefits to programmer
Increased Confidence: TDD allows programmers to make changes or add new features with confidence. Knowing that the code is constantly tested reduces the fear of breaking existing functionality. This safety net can encourage more innovative and creative approaches to problem-solving.
Reduced Fear of Change, Reduced Stress: In traditional development, changing existing code can be daunting due to the risk of introducing bugs. TDD, with its comprehensive test suite, reduces this fear, as tests will immediately reveal any problems caused by changes. Knowing that the codebase has a safety net of tests can reduce stress and anxiety associated with programming. Developers might feel more relaxed and open to experimenting and refactoring.
Improved Focus: Writing tests first helps programmers concentrate on requirements and design before writing the code. This focus can lead to clearer, more purposeful coding, as the developer is always aware of the goal they are trying to achieve.
Sense of Achievement and Job Satisfaction: Passing tests can provide a quick, regular sense of accomplishment, boosting morale. This can be particularly motivating in long-term projects where the end goal might seem distant. The combination of all these factors can lead to increased job satisfaction. When developers feel confident, focused, and part of a collaborative team, their overall job satisfaction can significantly improve.
Limitations
Test-driven development does not perform sufficient testing in situations where full functional tests are required to determine success or failure, due to extensive use of unit tests. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the outside world.
Management support is essential. Without the entire organization believing that test-driven development is going to improve the product, management may feel that time spent writing tests is wasted.
Unit tests created in a test-driven development environment are typically created by the developer who is writing the code being tested. Therefore, the tests may share blind spots with the code: if, for example, a developer does not realize that certain input parameters must be checked, most likely neither the test nor the code will verify those parameters. Another example: if the developer misinterprets the requirements for the module they are developing, the code and the unit tests they write will both be wrong in the same way. Therefore, the tests will pass, giving a false sense of correctness.
A high number of passing unit tests may bring a false sense of security, resulting in fewer additional software testing activities, such as integration testing and compliance testing.
Tests become part of the maintenance overhead of a project. Badly written tests, for example ones that include hard-coded error strings, are themselves prone to failure, and they are expensive to maintain. This is especially the case with fragile tests. There is a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs, it may not be detected. It is possible to write tests for low and easy maintenance, for example by the reuse of error strings, and this should be a goal during the code refactoring phase described above.
Writing and maintaining an excessive number of tests costs time. Also, more-flexible modules (with limited tests) might accept new requirements without the need for changing the tests. For those reasons, testing for only extreme conditions, or a small sample of data, can be easier to adjust than a set of highly detailed tests.
The level of coverage and testing detail achieved during repeated TDD cycles cannot easily be re-created at a later date. Therefore, these original, or early, tests become increasingly precious as time goes by. The tactic is to fix it early. Also, if a poor architecture, a poor design, or a poor testing strategy leads to a late change that makes dozens of existing tests fail, then it is important that they are individually fixed. Merely deleting, disabling or rashly altering them can lead to undetectable holes in the test coverage.
Conference
The first TDD Conference was held in July 2021. Conference sessions were recorded on YouTube.
See also
References
External links
TestDrivenDevelopment on WikiWikiWeb
Microsoft Visual Studio Team Test from a TDD approach
Write Maintainable Unit Tests That Will Save You Time And Tears
Improving Application Quality Using Test-Driven Development (TDD)
Test Driven Development Conference
Extreme programming
Software development philosophies
Software development process
Software testing | Test-driven development | [
"Engineering"
] | 6,642 | [
"Software engineering",
"Software testing"
] |
359,135 | https://en.wikipedia.org/wiki/Chemical%20kinetics | Chemical kinetics, also known as reaction kinetics, is the branch of physical chemistry that is concerned with understanding the rates of chemical reactions. It is different from chemical thermodynamics, which deals with the direction in which a reaction occurs but in itself tells nothing about its rate. Chemical kinetics includes investigations of how experimental conditions influence the speed of a chemical reaction and yield information about the reaction's mechanism and transition states, as well as the construction of mathematical models that also can describe the characteristics of a chemical reaction.
History
The pioneering work of chemical kinetics was done by German chemist Ludwig Wilhelmy in 1850. He experimentally studied the rate of inversion of sucrose and he used integrated rate law for the determination of the reaction kinetics of this reaction. His work was noticed 34 years later by Wilhelm Ostwald. In 1864, Peter Waage and Cato Guldberg published the law of mass action, which states that the speed of a chemical reaction is proportional to the quantity of the reacting substances.
Van 't Hoff studied chemical dynamics and in 1884 published his famous "Études de dynamique chimique". In 1901 he was awarded the first Nobel Prize in Chemistry "in recognition of the extraordinary services he has rendered by the discovery of the laws of chemical dynamics and osmotic pressure in solutions". After van 't Hoff, chemical kinetics dealt with the experimental determination of reaction rates from which rate laws and rate constants are derived. Relatively simple rate laws exist for zero order reactions (for which reaction rates are independent of concentration), first order reactions, and second order reactions, and can be derived for others. Elementary reactions follow the law of mass action, but the rate law of stepwise reactions has to be derived by combining the rate laws of the various elementary steps, and can become rather complex. In consecutive reactions, the rate-determining step often determines the kinetics. In consecutive first order reactions, a steady state approximation can simplify the rate law. The activation energy for a reaction is experimentally determined through the Arrhenius equation and the Eyring equation. The main factors that influence the reaction rate include: the physical state of the reactants, the concentrations of the reactants, the temperature at which the reaction occurs, and whether or not any catalysts are present in the reaction.
Gorban and Yablonsky have suggested that the history of chemical dynamics can be divided into three eras. The first is the van 't Hoff wave searching for the general laws of chemical reactions and relating kinetics to thermodynamics. The second may be called the Semenov-Hinshelwood wave with emphasis on reaction mechanisms, especially for chain reactions. The third is associated with Aris and the detailed mathematical description of chemical reaction networks.
Factors affecting reaction rate
Nature of the reactants
The reaction rate varies depending upon what substances are reacting. Acid/base reactions, the formation of salts, and ion exchange are usually fast reactions. When covalent bond formation takes place between the molecules and when large molecules are formed, the reactions tend to be slower.
The nature and strength of bonds in reactant molecules greatly influence the rate of their transformation into products.
Physical state
The physical state (solid, liquid, or gas) of a reactant is also an important factor in the rate of change. When reactants are in the same phase, as in aqueous solution, thermal motion brings them into contact. However, when they are in separate phases, the reaction is limited to the interface between the reactants. Reaction can occur only at their area of contact; in the case of a liquid and a gas, at the surface of the liquid. Vigorous shaking and stirring may be needed to bring the reaction to completion. This means that the more finely divided a solid or liquid reactant, the greater its surface area per unit volume and the more contact it has with the other reactant, thus the faster the reaction. To make an analogy, when one starts a fire, one uses wood chips and small branches rather than starting with large logs right away. In organic chemistry, on-water reactions are the exception to the rule that homogeneous reactions take place faster than heterogeneous reactions (those in which solute and solvent are not mixed properly).
Surface area of solid state
In a solid, only those particles that are at the surface can be involved in a reaction. Crushing a solid into smaller parts means that more particles are present at the surface, and the frequency of collisions between these and reactant particles increases, and so reaction occurs more rapidly. For example, Sherbet (powder) is a mixture of very fine powder of malic acid (a weak organic acid) and sodium hydrogen carbonate. On contact with the saliva in the mouth, these chemicals quickly dissolve and react, releasing carbon dioxide and providing for the fizzy sensation. Also, fireworks manufacturers modify the surface area of solid reactants to control the rate at which the fuels in fireworks are oxidised, using this to create diverse effects. For example, finely divided aluminium confined in a shell explodes violently. If larger pieces of aluminium are used, the reaction is slower and sparks are seen as pieces of burning metal are ejected.
Concentration
The reactions are due to collisions of reactant species. The frequency with which the molecules or ions collide depends upon their concentrations. The more crowded the molecules are, the more likely they are to collide and react with one another. Thus, an increase in the concentrations of the reactants will usually result in the corresponding increase in the reaction rate, while a decrease in the concentrations will usually have a reverse effect. For example, combustion will occur more rapidly in pure oxygen than in air (21% oxygen).
The rate equation shows the detailed dependence of the reaction rate on the concentrations of reactants and other species present. The mathematical forms depend on the reaction mechanism. The actual rate equation for a given reaction is determined experimentally and provides information about the reaction mechanism. The mathematical expression of the rate equation is often given by
r = k [A1]^n1 [A2]^n2 ··· [Ai]^ni
Here k is the reaction rate constant, [Ai] is the molar concentration of reactant i and ni is the partial order of reaction for this reactant. The partial order for a reactant can only be determined experimentally and is often not indicated by its stoichiometric coefficient.
Temperature
Temperature usually has a major effect on the rate of a chemical reaction. Molecules at a higher temperature have more thermal energy. Although collision frequency is greater at higher temperatures, this alone contributes only a very small proportion to the increase in rate of reaction. Much more important is the fact that the proportion of reactant molecules with sufficient energy to react (energy greater than activation energy: E > Ea) is significantly higher and is explained in detail by the Maxwell–Boltzmann distribution of molecular energies.
The effect of temperature on the reaction rate constant usually obeys the Arrhenius equation k = A exp(−Ea/(RT)), where A is the pre-exponential factor or A-factor, Ea is the activation energy, R is the molar gas constant and T is the absolute temperature.
At a given temperature, the chemical rate of a reaction depends on the value of the A-factor, the magnitude of the activation energy, and the concentrations of the reactants. Usually, rapid reactions require relatively small activation energies.
The 'rule of thumb' that the rate of chemical reactions doubles for every 10 °C temperature rise is a common misconception. This may have been generalized from the special case of biological systems, where the α (temperature coefficient) is often between 1.5 and 2.5.
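As an illustration only, the short Python calculation below evaluates the Arrhenius expression at two temperatures; the activation energy and pre-exponential factor are assumed values, and for these particular values the rate constant roughly doubles over a 10 K rise near room temperature, although, as noted above, this is not a general rule.
    import math

    R = 8.314           # molar gas constant, J/(mol*K)
    A = 1.0e13          # assumed pre-exponential factor, s^-1
    Ea = 50_000.0       # assumed activation energy, J/mol

    def rate_constant(T):
        return A * math.exp(-Ea / (R * T))

    k_298 = rate_constant(298.15)
    k_308 = rate_constant(308.15)
    print(k_298, k_308, k_308 / k_298)   # the ratio is roughly 1.9 for these values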
The kinetics of rapid reactions can be studied with the temperature jump method. This involves using a sharp rise in temperature and observing the relaxation time of the return to equilibrium. A particularly useful form of temperature jump apparatus is a shock tube, which can rapidly increase a gas's temperature by more than 1000 degrees.
Catalysts
A catalyst is a substance that alters the rate of a chemical reaction but remains chemically unchanged afterwards. The catalyst increases the rate of the reaction by providing an alternative reaction mechanism with a lower activation energy. In autocatalysis a reaction product is itself a catalyst for that reaction, leading to positive feedback. Proteins that act as catalysts in biochemical reactions are called enzymes. Michaelis–Menten kinetics describe the rate of enzyme-mediated reactions. A catalyst does not affect the position of the equilibrium, as the catalyst speeds up the backward and forward reactions equally.
In certain organic molecules, specific substituents can have an influence on reaction rate in neighbouring group participation.
Pressure
Increasing the pressure in a gaseous reaction will increase the number of collisions between reactants, increasing the rate of reaction. This is because the activity of a gas is directly proportional to the partial pressure of the gas. This is similar to the effect of increasing the concentration of a solution.
In addition to this straightforward mass-action effect, the rate coefficients themselves can change due to pressure. The rate coefficients and products of many high-temperature gas-phase reactions change if an inert gas is added to the mixture; variations on this effect are called fall-off and chemical activation. These phenomena are due to exothermic or endothermic reactions occurring faster than heat transfer, causing the reacting molecules to have non-thermal energy distributions (non-Boltzmann distribution). Increasing the pressure increases the heat transfer rate between the reacting molecules and the rest of the system, reducing this effect.
Condensed-phase rate coefficients can also be affected by pressure, although rather high pressures are required for a measurable effect because ions and molecules are not very compressible. This effect is often studied using diamond anvils.
A reaction's kinetics can also be studied with a pressure jump approach. This involves making fast changes in pressure and observing the relaxation time of the return to equilibrium.
Absorption of light
The activation energy for a chemical reaction can be provided when one reactant molecule absorbs light of suitable wavelength and is promoted to an excited state. The study of reactions initiated by light is photochemistry, one prominent example being photosynthesis.
Experimental methods
The experimental determination of reaction rates involves measuring how the concentrations of reactants or products change over time. For example, the concentration of a reactant can be measured by spectrophotometry at a wavelength where no other reactant or product in the system absorbs light.
For reactions which take at least several minutes, it is possible to start the observations after the reactants have been mixed at the temperature of interest.
Fast reactions
For faster reactions, the time required to mix the reactants and bring them to a specified temperature may be comparable or longer than the half-life of the reaction. Special methods to start fast reactions without slow mixing step include
Stopped-flow methods, which can reduce the mixing time to the order of a millisecond. Stopped-flow methods have limitations: the time needed to mix the gases or solutions must still be considered, and they are not suitable if the half-life is less than about a hundredth of a second.
Chemical relaxation methods such as temperature jump and pressure jump, in which a pre-mixed system initially at equilibrium is perturbed by rapid heating or depressurization so that it is no longer at equilibrium, and the relaxation back to equilibrium is observed. For example, this method has been used to study the neutralization H3O+ + OH− with a half-life of 1 μs or less under ordinary conditions.
Flash photolysis, in which a laser pulse produces highly excited species such as free radicals, whose reactions are then studied.
Equilibrium
While chemical kinetics is concerned with the rate of a chemical reaction, thermodynamics determines the extent to which reactions occur. In a reversible reaction, chemical equilibrium is reached when the rates of the forward and reverse reactions are equal (the principle of dynamic equilibrium) and the concentrations of the reactants and products no longer change. This is demonstrated by, for example, the Haber–Bosch process for combining nitrogen and hydrogen to produce ammonia. Chemical clock reactions such as the Belousov–Zhabotinsky reaction demonstrate that component concentrations can oscillate for a long time before finally attaining the equilibrium.
Free energy
In general terms, the free energy change (ΔG) of a reaction determines whether a chemical change will take place, but kinetics describes how fast the reaction is. A reaction can be very exothermic and have a very positive entropy change but will not happen in practice if the reaction is too slow. If a reactant can produce two products, the thermodynamically most stable one will form in general, except in special circumstances when the reaction is said to be under kinetic reaction control. The Curtin–Hammett principle applies when determining the product ratio for two reactants interconverting rapidly, each going to a distinct product. It is possible to make predictions about reaction rate constants for a reaction from free-energy relationships.
The kinetic isotope effect is the difference in the rate of a chemical reaction when an atom in one of the reactants is replaced by one of its isotopes.
Chemical kinetics provides information on residence time and heat transfer in a chemical reactor in chemical engineering and on the molar mass distribution in polymer chemistry. It also provides information in corrosion engineering.
Applications and models
The mathematical models that describe chemical reaction kinetics provide chemists and chemical engineers with tools to better understand and describe chemical processes such as food decomposition, microorganism growth, stratospheric ozone decomposition, and the chemistry of biological systems. These models can also be used in the design or modification of chemical reactors to optimize product yield, more efficiently separate products, and eliminate environmentally harmful by-products. When performing catalytic cracking of heavy hydrocarbons into gasoline and light gas, for example, kinetic models can be used to find the temperature and pressure at which the highest yield of heavy hydrocarbons into gasoline will occur.
Chemical kinetics is frequently validated and explored through modeling in specialized packages using ordinary differential equation (ODE) solving and curve fitting.
Numerical methods
In some cases, equations are unsolvable analytically, but can be solved using numerical methods if data values are given. There are two different ways to do this, by either using software programmes or mathematical methods such as the Euler method. Examples of software for chemical kinetics are i) Tenua, a Java app which simulates chemical reactions numerically and allows comparison of the simulation to real data, ii) Python coding for calculations and estimates and iii) the Kintecus software compiler to model, regress, fit and optimize reactions.
Numerical integration: for a first-order reaction A → B
The differential equation of the reactant A is:
d[A]/dt = -k[A]
It can also be expressed as
d[A]/dt = f(t, [A])
which is the same as
y' = f(t, y)
To solve the differential equations with the Euler and Runge–Kutta methods we need to have the initial values.
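A minimal Python sketch of the Euler approach for this first-order case is given below; the rate constant, initial concentration and step size are assumed values, and the result is compared with the exact exponential solution.
    import math

    k = 0.5             # assumed rate constant, s^-1
    dt = 0.01           # time step, s
    a = 1.0             # assumed initial concentration [A]0, mol/L
    n_steps = 200       # integrate to t = 2 s

    for _ in range(n_steps):
        a += dt * (-k * a)               # Euler step: [A](t+dt) = [A](t) + dt * f(t, [A])

    t = n_steps * dt
    exact = 1.0 * math.exp(-k * t)
    print(a, exact)                      # the two values agree to about 1e-3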
See also
Autocatalytic reactions and order creation
Corrosion engineering
Detonation
Electrochemical kinetics
Flame speed
Heterogenous catalysis
Intrinsic low-dimensional manifold
MLAB chemical kinetics modeling package
Nonthermal surface reaction
PottersWheel Matlab toolbox to fit chemical rate constants to experimental data
Reaction progress kinetic analysis
References
External links
Chemistry applets
University of Waterloo
Chemical Kinetics of Gas Phase Reactions
Kinpy: Python code generator for solving kinetic equations
Reaction rate law and reaction profile - a question of temperature, concentration, solvent and catalyst - how fast will a reaction proceed (Video by SciFox on TIB AV-Portal)
Jacobus Henricus van 't Hoff | Chemical kinetics | [
"Chemistry"
] | 3,182 | [
"Chemical reaction engineering",
"Chemical kinetics"
] |
359,238 | https://en.wikipedia.org/wiki/Prescription%20drug | A prescription drug (also prescription medication, prescription medicine or prescription-only medication) is a pharmaceutical drug that is permitted to be dispensed only to those with a medical prescription. In contrast, over-the-counter drugs can be obtained without a prescription. The reason for this difference in substance control is the potential scope of misuse, from drug abuse to practicing medicine without a license and without sufficient education. Different jurisdictions have different definitions of what constitutes a prescription drug.
In North America, , usually printed as "Rx", is used as an abbreviation of the word "prescription". It is a contraction of the Latin word "recipe" (an imperative form of "recipere") meaning "take". Prescription drugs are often dispensed together with a monograph (in Europe, a Patient Information Leaflet or PIL) that gives detailed information about the drug.
The use of prescription drugs has been increasing since the 1960s.
Regulation
Australia
In Australia, the Standard for the Uniform Scheduling of Medicines and Poisons (SUSMP) governs the manufacture and supply of drugs with several categories:
Schedule 1 – Defunct Drug.
Schedule 2 – Pharmacy Medicine
Schedule 3 – Pharmacist-Only Medicine
Schedule 4 – Prescription-Only Medicine/Prescription Animal Remedy
Schedule 5 – Caution/Poison.
Schedule 6 – Poison
Schedule 7 – Dangerous Poison
Schedule 8 – Controlled Drug (Possession without authority illegal)
Schedule 9 – Prohibited Substance (possession illegal without a license; legal only for research purposes)
Schedule 10 – Controlled Poison.
Unscheduled Substances.
As in other developed countries, the person requiring a prescription drug attends the clinic of a qualified health practitioner, such as a physician, who may write the prescription for the required drug.
Many prescriptions issued by health practitioners in Australia are covered by the Pharmaceutical Benefits Scheme, a scheme that provides subsidised prescription drugs to residents of Australia to ensure that all Australians have affordable and reliable access to a wide range of necessary medicines. When purchasing a drug under the PBS, the consumer pays no more than the patient co-payment contribution, which, as of January 1, 2022, is A$42.50 for general patients. Those covered by government entitlements (low-income earners, welfare recipients, Health Care Card holders, etc.) and or under the Repatriation Pharmaceutical Benefits Scheme (RPBS) have a reduced co-payment, which is A$6.80 in 2022. The co-payments are compulsory and can be discounted by pharmacies up to a maximum of A$1.00 at cost to the pharmacy.
United Kingdom
In the United Kingdom, the Medicines Act 1968 and the Prescription Only Medicines (Human Use) Order 1997 contain regulations that cover the supply of sale, use, prescribing and production of medicines. There are three categories of medicine:
Prescription-only medicines (POM), which may be dispensed (sold in the case of a private prescription) by a pharmacist only to those to whom they have been prescribed
Pharmacy medicines (P), which may be sold by a pharmacist without a prescription
General sales list (GSL) medicines, which may be sold without a prescription in any shop
The simple possession of a prescription-only medicine without a prescription is legal unless it is covered by the Misuse of Drugs Act 1971.
A patient visits a medical practitioner or dentist, who may prescribe drugs and certain other medical items, such as blood glucose-testing equipment for diabetics. Also, qualified and experienced nurses, paramedics and pharmacists may be independent prescribers. These independent prescribers may prescribe all POMs (including controlled drugs), but may not prescribe Schedule 1 controlled drugs or the three controlled drugs listed for the treatment of addiction; this is similar to doctors, who require a special licence from the Home Office to prescribe Schedule 1 drugs. Schedule 1 drugs have little or no medical benefit, hence the limitations on prescribing them. District nurses and health visitors have had limited prescribing rights since the mid-1990s; until then, prescriptions for dressings and simple medicines had to be signed by a doctor. Once issued, a prescription is taken by the patient to a pharmacy, which dispenses the medicine.
Most prescriptions are NHS prescriptions, subject to a standard charge that is unrelated to what is dispensed. The NHS prescription fee was increased to £9.90 for each item in England in May 2024; prescriptions are free of charge if prescribed and dispensed in Scotland, Wales and Northern Ireland, and for some patients in England, such as inpatients, children, those over 60, those with certain medical conditions, and claimants of certain benefits. The pharmacy charges the NHS the actual cost of the medicine, which may vary from a few pence to hundreds of pounds. A patient can consolidate prescription charges by using a prescription payment certificate (informally a "season ticket"), effectively capping costs at £31.25 a quarter or £111.60 for a year.
Outside the NHS, private prescriptions are issued by private medical practitioner and sometimes under the NHS for medicines that are not covered by the NHS. A patient pays the pharmacy the normal price for medicine prescribed outside the NHS.
Survey results published by Ipsos MORI in 2008 found that around 800,000 people in England were not collecting prescriptions or getting them dispensed because of the cost, the same as in 2001.
United States
In the United States, the Federal Food, Drug, and Cosmetic Act defines what substances, known as legend drugs, require a prescription for them to be dispensed by a pharmacy. The federal government authorizes physicians (of any specialty), physician assistants, nurse practitioners and other advanced practice nurses, veterinarians, dentists, and optometrists to prescribe any controlled substance. They are issued unique DEA numbers. Many other mental and physical health technicians, including basic-level registered nurses, medical assistants, emergency medical technicians, most psychologists, and social workers, are not authorized to prescribe legend drugs.
The federal Controlled Substances Act (CSA) was enacted in 1970. It regulates manufacture, importation, possession, use, and distribution of controlled substances, which are drugs with potential for abuse or addiction. The legislation classifies these drugs into five schedules, with varying qualifications for each schedule. The schedules are designated schedule I, schedule II, schedule III, schedule IV, and schedule V. Many drugs other than controlled substances require a prescription.
The safety and the effectiveness of prescription drugs in the US are regulated by the 1987 Prescription Drug Marketing Act (PDMA). The Food and Drug Administration (FDA) is charged with implementing the law.
As a general rule, over-the-counter (OTC) drugs are used to treat conditions that do not need care from a healthcare professional, provided they have been proven to meet the higher safety standards required for self-medication by patients. Often, a lower strength of a drug will be approved for OTC use, while higher strengths require a prescription; a notable case is ibuprofen, which has been widely available as an OTC pain killer since the mid-1980s but is available by prescription in doses up to four times the OTC dose for severe pain that is not adequately controlled by the OTC strength.
Herbal preparations, amino acids, vitamins, minerals, and other food supplements are regulated by the FDA as dietary supplements. Because specific health claims cannot be made, the consumer must make informed decisions when purchasing such products.
By law, American pharmacies operated by "membership clubs" such as Costco and Sam's Club must allow non-members to use their pharmacy services and may not charge more for these services than they charge their members.
Physicians may legally prescribe drugs for uses other than those specified in the FDA approval, known as off-label use. Drug companies, however, are prohibited from marketing their drugs for off-label uses.
Some prescription drugs are commonly abused, particularly those marketed as analgesics, including fentanyl (Duragesic), hydrocodone (Vicodin), oxycodone (OxyContin), oxymorphone (Opana), propoxyphene (Darvon), hydromorphone (Dilaudid), meperidine (Demerol), and diphenoxylate (Lomotil).
Some prescription painkillers have been found to be addictive, and unintentional poisoning deaths in the United States have skyrocketed since the 1990s according to the National Safety Council. Prescriber education guidelines as well as patient education, prescription drug monitoring programs and regulation of pain clinics are regulatory tactics which have been used to curtail opioid use and misuse.
Expiration date
The expiration date, required in several countries, specifies the date up to which the manufacturer guarantees the full potency and safety of a drug. In the United States, expiration dates are determined by regulations established by the FDA. The FDA advises consumers not to use products after their expiration dates.
A study conducted by the U.S. Food and Drug Administration covered over 100 drugs, prescription and over-the-counter. The results showed that about 90% of them were safe and effective far past their original expiration date. At least one drug worked 15 years after its expiration date. Joel Davis, a former FDA expiration-date compliance chief, said that with a handful of exceptions—notably nitroglycerin, insulin, and some liquid antibiotics (outdated tetracyclines can cause Fanconi syndrome)—most expired drugs are probably effective.
The American Medical Association issued a report and statement on Pharmaceutical Expiration Dates. The Harvard Medical School Family Health Guide notes that, with rare exceptions, "it's true the effectiveness of a drug may decrease over time, but much of the original potency still remains even a decade after the expiration date".
The expiration date is the final day that the manufacturer guarantees the full potency and safety of a medication. Drug expiration dates exist on most medication labels, including prescription, over-the-counter and dietary supplements. U.S. pharmaceutical manufacturers are required by law to place expiration dates on prescription products prior to marketing. For legal and liability reasons, manufacturers will not make recommendations about the stability of drugs past the original expiration date.
Cost
Prices of prescription drugs vary widely around the world. Prescription costs for biosimilar and generic drugs are usually less than brand names, but the cost is different from one pharmacy to another.
To lower prescription drug costs, some U.S. states have sought federal approval to buy drugs in Canada, as of 2022.
Generics undergo strict scrutiny to meet the equal efficacy, safety, dosage, strength, stability, and quality of brand name drugs. Generics are developed after the brand name has already been established, and so generic drug approval in many aspects has a shortened approval process because it replicates the brand name drug.
Brand name drugs cost more due to time, money, and resources that drug companies invest in them to conduct development, including clinical trials that the FDA requires for the drug to be marketed. Because drug companies have to invest more in research costs to do this, brand name drug prices are much higher when sold to consumers.
When the patent expires for a brand name drug, generic versions of that drug are produced by other companies and are sold for lower price. By switching to generic prescription drugs, patients can save significant amounts of money: e.g. one study by the FDA showed an example with more than 52% savings of a consumer's overall costs of their prescription drugs.
Strategies to limit drug prices in the United States
In the United States there are many resources available to patients to lower the costs of medication. These include copayments, coinsurance, and deductibles. The Medicaid Drug Rebate Program is another example.
Generic drug programs lower the amount of money patients have to pay when picking up their prescription at the pharmacy. As their name implies, they only cover generic drugs.
Co-pay assistance programs are programs that help patients lower the costs of specialty medications; i.e., medications that are on restricted formularies, have limited distribution, and/or have no generic version available. These medications can include drugs for HIV, hepatitis C, and multiple sclerosis. Patient Assistance Program Center (RxAssist) has a list of foundations that provide co-pay assistance programs. Co-pay assistance programs are for under-insured patients. Patients without insurance are not eligible for this resource; however, they may be eligible for patient assistance programs.
Patient assistance programs are funded by the manufacturer of the medication. Patients can often apply to these programs through the manufacturer's website. This type of assistance program is one of the few options available to uninsured patients.
The out-of-pocket cost for patients enrolled in co-pay assistance or patient assistance programs is $0. These programs are a major resource to help lower costs of medications; however, many providers and patients are not aware of them.
Environment
Traces of prescription drugs, including antibiotics, anti-convulsants, mood stabilizers and sex hormones, have been detected in drinking water. Pharmaceutically active compounds (PhACs) discarded from human therapy and their metabolites may not be eliminated entirely by sewage treatment plants and have been detected at low concentrations in surface waters downstream from those plants. The continuous discarding of incompletely treated water may interact with other environmental chemicals and lead to uncertain ecological effects. Because most pharmaceuticals are highly soluble, fish and other aquatic organisms are susceptible to their effects. The long-term presence of pharmaceuticals in the environment may affect the survival and reproduction of such organisms. Levels of medical drug waste in the water are, however, low enough that they are not a direct concern to human health, although processes such as biomagnification are potential human health concerns.
On the other hand, there is clear evidence of harm to aquatic animals and fauna. Recent advancements in technology have allowed scientists to detect smaller, trace quantities of pharmaceuticals in the ng/ml range. Despite being found at low concentrations, female hormonal contraceptives may cause feminizing effects on male vertebrate species, such as fish, frogs and crocodiles.
The FDA established guidelines in 2007 to inform consumers how to dispose of prescription drugs. When medications do not include specific disposal instructions, patients should not flush medications down the toilet, but should instead use medication take-back programs to reduce the amount of pharmaceutical waste in sewage and landfills. If no take-back programs are available, prescription drugs can be discarded in household trash after they are crushed or dissolved and then mixed in a separate container or sealable bag with undesirable substances like cat litter or other unappealing material (to discourage consumption).
See also
U.S. Controlled Substances Act
Co-pay card
Classification of Pharmaco-Therapeutic Referrals
Drug policy – policy regulating drugs considered dangerous, rather than only medicinal
Inverse benefit law
List of pharmaceutical companies
Package insert
Pharmacy (shop)
Pharmacy automation
Pill splitting
Prescription drug prices in the United States
Regulation of therapeutic goods
References
Pharmaceuticals policy
Prescription of drugs
Pharmacy | Prescription drug | [
"Chemistry"
] | 3,139 | [
"Pharmacology",
"Pharmacy"
] |
360,501 | https://en.wikipedia.org/wiki/Aliquot%20stringing | Aliquot stringing is the use of extra, un-struck strings in a piano for the purpose of enriching the tone. Aliquot systems use an additional (hence fourth) string in each note of the top three piano octaves. This string is positioned slightly above the other three strings so that it is not struck by the hammer. Whenever the hammer strikes the three conventional strings, the aliquot string vibrates sympathetically. Aliquot stringing broadens the vibrational energy throughout the instrument, and creates an unusually complex and colorful tone.
Etymology
The word aliquot ultimately comes from a Latin word meaning 'some, several'. In mathematics, aliquot means 'an exact part or divisor', reflecting the fact that the length of an aliquot string forms an exact division of the length of longer strings with which it vibrates sympathetically.
History
Julius Blüthner invented the aliquot stringing system in 1873. The Blüthner aliquot system uses an additional (hence fourth) string in each note of the top three piano octaves. This string is positioned slightly above the other three strings so that it is not struck by the hammer. Whenever the hammer strikes the three conventional strings, the aliquot string vibrates sympathetically. This string resonance also occurs when other notes are played that are harmonically related to the pitch of an aliquot string, though only when the related notes' dampers are raised. Many piano-makers enrich the tone of the piano through sympathetic vibration, but use a different method known as duplex scaling (see piano). Confusingly, the portions of the strings used in duplex scaling are sometimes called "aliquot strings", and the contact points used in duplex scales are called aliquots. Aliquot stringing and the duplex scale, even if they use "aliquots", are not equivalent.
Because they are tuned an octave above their constituent pitch, true aliquot strings transmit strong vibrations to the soundboard. Duplex scaling, which typically is tuned a double octave or more above the speaking length, does not. And because aliquot strings are so active, they require dampers or they would sustain uncontrollably and muddy the sound. Aliquot stringing broadens the vibrational energy throughout the instrument, and creates an unusually complex and colorful tone. This results from hammers striking their respective three strings, followed by an immediate transfer of energy into their sympathetic strings. The noted piano authority Larry Fine observes that the Blüthner tone is "refined" and "delicate", particularly "at a low level of volume". The Blüthner company, however, claims that the effect of aliquot stringing is equally apparent in loud playing.
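As a rough numerical illustration (assuming ideal, perfectly harmonic strings, which real piano strings only approximate), the partials of an aliquot string tuned an octave above a struck note coincide with the even partials of that note, which is why the aliquot responds sympathetically. The Python sketch below is illustrative only; the pitches used are arbitrary examples.

```python
# Minimal sketch: why an aliquot string tuned an octave above a struck note
# vibrates sympathetically. Assumes ideal harmonic strings (real piano strings
# are slightly inharmonic, so the overlap is only approximate).

def partials(fundamental_hz, count=8):
    """Return the first `count` harmonic partials of an ideal string."""
    return [fundamental_hz * n for n in range(1, count + 1)]

struck = partials(440.0)    # struck strings, e.g. A4 = 440 Hz (illustrative)
aliquot = partials(880.0)   # aliquot string, tuned one octave higher

shared = sorted(set(struck) & set(aliquot))
print("struck partials: ", struck)
print("aliquot partials:", aliquot)
print("shared partials: ", shared)  # every aliquot partial matches an even struck partial
```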
Tunable aliquots
Theodore Steinway of Steinway & Sons patented tunable aliquots in 1872. Short lengths of non-speaking wire were bridged by an aliquot throughout much of the upper range of the piano, always in locations that caused them to vibrate in conformity with their respective overtones—typically in doubled octaves and twelfths. This enhanced the power and sustain of the instrument's treble. Because it was time-consuming to correctly position each aliquot, Steinway abandoned individual aliquots for continuous cast-metal bars, each comprising an entire section of duplex bridge points. The company trusted that with an accurately templated bridge and carefully located duplex bar, the same result would be achieved with less fuss.
Mason & Hamlin, established in Boston in 1854, continued to use individual aliquots. They felt that the tuning of these short lengths of string was more accurate with an aliquot than what could be attained with a duplex bar. With the fixed points of a duplex bar, small variations in casting or bridge-pin positioning are liable to produce imperfections in the duplex string lengths. Furthermore, since variations in humidity can cause duplex scales to move in pitch more rapidly than the speaking scale, readjustments of aliquot positioning is more feasible than duplex bar re-positioning.
A modern piano manufacture, Fazioli (Sacile, Italy), has blended Steinway's original ideas by creating a stainless-steel track, fixed to the cast-iron plate, on which individual aliquots slide.
Other musical instruments
Makers of other string instruments sometimes use aliquot parts of the scale length to enhance the timbre. Examples of such instruments include the viola d'amore, and the sitar.
Notes
External links
A figure from the Blüthner company showing how their Patented Aliquot System is arranged
Blüthner—Photos and Aliquot-patent
Acoustics
Keyboard instruments
String instruments | Aliquot stringing | [
"Physics"
] | 972 | [
"Classical mechanics",
"Acoustics"
] |
360,835 | https://en.wikipedia.org/wiki/Coercivity | Coercivity, also called the magnetic coercivity, coercive field or coercive force, is a measure of the ability of a ferromagnetic material to withstand an external magnetic field without becoming demagnetized. Coercivity is usually measured in oersted or ampere/meter units and is denoted HC.
An analogous property in electrical engineering and materials science, electric coercivity, is the ability of a ferroelectric material to withstand an external electric field without becoming depolarized.
Ferromagnetic materials with high coercivity are called magnetically hard, and are used to make permanent magnets. Materials with low coercivity are said to be magnetically soft. The latter are used in transformer and inductor cores, recording heads, microwave devices, and magnetic shielding.
Definitions
Coercivity in a ferromagnetic material is the intensity of the applied magnetic field (H field) required to demagnetize that material, after the magnetization of the sample has been driven to saturation by a strong field. This demagnetizing field is applied opposite to the original saturating field. There are, however, different definitions of coercivity, depending on what counts as 'demagnetized', so the bare term "coercivity" may be ambiguous:
The normal coercivity, HCn, is the H field required to reduce the magnetic flux (average B field inside the material) to zero.
The intrinsic coercivity, HCi, is the H field required to reduce the magnetization (average M field inside the material) to zero.
The remanence coercivity, HCr, is the H field required to reduce the remanence to zero, meaning that when the H field is finally returned to zero, then both B and M also fall to zero (the material reaches the origin in the hysteresis curve).
The distinction between the normal and intrinsic coercivity is negligible in soft magnetic materials; however, it can be significant in hard magnetic materials. The strongest rare-earth magnets lose almost none of their magnetization at the normal coercivity HCn.
Experimental determination
Typically the coercivity of a magnetic material is determined by measurement of the magnetic hysteresis loop, also called the magnetization curve. The apparatus used to acquire the data is typically a vibrating-sample or alternating-gradient magnetometer. The applied field at which the magnetization (or flux) curve crosses zero is the coercivity. If an antiferromagnet is present in the sample, the coercivities measured in increasing and decreasing fields may be unequal as a result of the exchange bias effect.
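As a sketch of how a coercivity value is read off such a measurement, the snippet below linearly interpolates the zero crossing of the descending branch of an M(H) loop. The data arrays are made-up placeholders rather than real magnetometer output, and a practical analysis would also handle noise and both branches of the loop.

```python
import numpy as np

# Minimal sketch: estimating coercivity from the descending branch of a
# measured M(H) loop by linear interpolation at the zero crossing.
# The arrays below are illustrative placeholders, not real measurements.

H = np.array([ 50.0, 30.0, 10.0,  0.0,  -5.0, -12.0, -20.0,  -40.0])  # applied field (kA/m)
M = np.array([120.0, 118.0, 110.0, 95.0, 60.0,  10.0, -45.0, -110.0])  # magnetization (kA/m)

# Find the interval where M changes sign, then interpolate H at M = 0.
i = np.where(np.diff(np.sign(M)) != 0)[0][0]
H_c = H[i] + (0.0 - M[i]) * (H[i + 1] - H[i]) / (M[i + 1] - M[i])
print(f"estimated coercivity: {abs(H_c):.1f} kA/m")
```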
The coercivity of a material depends on the time scale over which a magnetization curve is measured. The magnetization of a material measured at an applied reversed field which is nominally smaller than the coercivity may, over a long time scale, slowly relax to zero. Relaxation occurs when reversal of magnetization by domain wall motion is thermally activated and is dominated by magnetic viscosity. The increasing value of coercivity at high frequencies is a serious obstacle to the increase of data rates in high-bandwidth magnetic recording, compounded by the fact that increased storage density typically requires a higher coercivity in the media.
Theory
At the coercive field, the vector component of the magnetization of a ferromagnet measured along the applied field direction is zero. There are two primary modes of magnetization reversal: single-domain rotation and domain wall motion. When the magnetization of a material reverses by rotation, the magnetization component along the applied field is zero because the vector points in a direction orthogonal to the applied field. When the magnetization reverses by domain wall motion, the net magnetization is small in every vector direction because the moments of all the individual domains sum to zero. Magnetization curves dominated by rotation and magnetocrystalline anisotropy are found in relatively perfect magnetic materials used in fundamental research. Domain wall motion is a more important reversal mechanism in real engineering materials since defects like grain boundaries and impurities serve as nucleation sites for reversed-magnetization domains. The role of domain walls in determining coercivity is complicated since defects may pin domain walls in addition to nucleating them. The dynamics of domain walls in ferromagnets is similar to that of grain boundaries and plasticity in metallurgy since both domain walls and grain boundaries are planar defects.
Significance
As with any hysteretic process, the area inside the magnetization curve during one cycle represents the work that is performed on the material by the external field in reversing the magnetization, and is dissipated as heat. Common dissipative processes in magnetic materials include magnetostriction and domain wall motion. The coercivity is a measure of the degree of magnetic hysteresis and therefore characterizes the lossiness of soft magnetic materials for their common applications.
The saturation remanence and coercivity are figures of merit for hard magnets, although maximum energy product is also commonly quoted. The 1980s saw the development of rare-earth magnets with high energy products but undesirably low Curie temperatures. Since the 1990s new exchange spring hard magnets with high coercivities have been developed.
See also
Magnetic susceptibility
Remanence
References
External links
Magnetization reversal applet (coherent rotation)
For a table of coercivities of various magnetic recording media, see "Degaussing Data Storage Tape Magnetic Media" (PDF), at fujifilmusa.com.
Physical quantities
Magnetic hysteresis | Coercivity | [
"Physics",
"Materials_science",
"Mathematics"
] | 1,149 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Physical properties",
"Hysteresis",
"Magnetic hysteresis"
] |
361,028 | https://en.wikipedia.org/wiki/Nitrification | Nitrification is the biological oxidation of ammonia to nitrate via the intermediary nitrite. Nitrification is an important step in the nitrogen cycle in soil. The process of complete nitrification may occur through separate organisms or entirely within one organism, as in comammox bacteria. The transformation of ammonia to nitrite is usually the rate limiting step of nitrification. Nitrification is an aerobic process performed by small groups of autotrophic bacteria and archaea.
Microbiology
Ammonia oxidation
The process of nitrification begins with the first stage of ammonia oxidation, where ammonia (NH3) or ammonium (NH4+) get converted into nitrite (NO2−). This first stage is sometimes known as nitritation. It is performed by two groups of organisms, ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA).
Ammonia-Oxidizing Bacteria
Ammonia-Oxidizing Bacteria (AOB) are typically Gram-negative bacteria belonging to the Betaproteobacteria and Gammaproteobacteria, including the commonly studied genera Nitrosomonas and Nitrosococcus. They are known for their ability to utilize ammonia as an energy source and are prevalent in a wide range of environments, such as soils, aquatic systems, and wastewater treatment plants.
AOB possess enzymes called ammonia monooxygenases (AMOs), which are responsible for catalyzing the conversion of ammonia to hydroxylamine (NH2OH), a crucial intermediate in the process of nitrification. This enzymatic activity is sensitive to environmental factors, such as pH, temperature, and oxygen availability.
AOB play a vital role in soil nitrification, making them key players in nutrient cycling. They contribute to the transformation of ammonia derived from organic matter decomposition or fertilizers into nitrite, which subsequently serves as a substrate for nitrite-oxidizing bacteria (NOB).
Ammonia-Oxidizing Archaea
Prior to the discovery of archaea capable of ammonia oxidation, ammonia-oxidizing bacteria (AOB) were considered the only organisms capable of ammonia oxidation. Since their discovery in 2005, two isolates of AOAs have been cultivated: Nitrosopumilus maritimus and Nitrososphaera viennensis. When comparing AOB and AOA, AOA dominate in both soils and marine environments, suggesting that Nitrososphaerota (formerly Thaumarchaeota) may be greater contributors to ammonia oxidation in these environments.
Crenarchaeol, which is generally thought to be produced exclusively by AOA (specifically Nitrososphaerota), has been proposed as a biomarker for AOA and ammonia oxidation. Crenarchaeol abundance has been found to track with seasonal blooms of AOA, suggesting that it may be appropriate to use crenarchaeol abundances as a proxy for AOA populations and thus ammonia oxidation more broadly. However the discovery of Nitrososphaerota that are not obligate ammonia-oxidizers complicates this conclusion, as does one study that suggests that crenarchaeol may be produced by Marine Group II Euryarchaeota.
Nitrite oxidation
The second step of nitrification is the oxidation of nitrite into nitrate. This process is sometimes known as nitratation. Nitrite oxidation is conducted by nitrite-oxidizing bacteria (NOB) from the taxa Nitrospirota, Nitrospinota, Pseudomonadota and Chloroflexota. NOB are typically present in soil, geothermal springs, freshwater and marine ecosystems.
Complete ammonia oxidation
Ammonia oxidation to nitrate in a single step within one organism was predicted in 2006 and discovered in 2015 in the species Nitrospira inopinata. A pure culture of the organism was obtained in 2017, representing a revolution in our understanding of the nitrification process.
History
The idea that the oxidation of ammonia to nitrate is in fact a biological process was first proposed by Louis Pasteur in 1862. Later, in 1875, Alexander Müller, while conducting a quality assessment of water from wells in Berlin, noted that ammonium was stable in sterilized solutions but nitrified in natural waters. Müller proposed that nitrification is therefore performed by microorganisms. In 1877, Jean-Jacques Schloesing and Achille Müntz, two French agricultural chemists working in Paris, proved that nitrification is indeed a microbially mediated process through experiments with liquid sewage and an artificial soil matrix (sterilized sand with powdered chalk). Their findings were soon confirmed (in 1878) by Robert Warington, who was investigating the nitrification ability of garden soil at the Rothamsted experimental station in Harpenden, England. Warington also made the first observation, in 1879, that nitrification is a two-step process, which was confirmed by John Munro in 1886. At that time, however, it was believed that the two steps were carried out by distinct life phases or character traits of a single microorganism.
The first pure culture of a nitrifier (an ammonia oxidizer) was most probably isolated in 1890 by Percy Frankland and Grace Frankland, two English scientists working in Scotland. Before that, Warington, Sergei Winogradsky and the Franklands had only been able to enrich cultures of nitrifiers. Frankland and Frankland succeeded with a system of serial dilutions using very low inocula and cultivation times lasting years. Sergei Winogradsky claimed a pure culture isolation in the same year (1890), but his culture was still a co-culture of ammonia-oxidizing and nitrite-oxidizing bacteria. Winogradsky succeeded just one year later, in 1891.
In fact, during the serial dilutions the ammonia oxidizers and nitrite oxidizers were unknowingly separated, resulting in a pure culture with ammonia-oxidation ability only. Thus Frankland and Frankland observed that these pure cultures lost the ability to perform both steps; the loss of nitrite-oxidation ability had already been observed by Warington. Cultivation of a pure nitrite oxidizer came later, during the 20th century; however, it is not possible to be certain which cultures were free of contaminants, as all theoretically pure strains share the same trait (nitrite consumption, nitrate production).
Ecology
Both steps produce energy that is coupled to ATP synthesis. Nitrifying organisms are chemoautotrophs, and use carbon dioxide as their carbon source for growth. Some AOB possess the enzyme urease, which catalyzes the conversion of the urea molecule to two ammonia molecules and one carbon dioxide molecule. Nitrosomonas europaea, as well as populations of soil-dwelling AOB, have been shown to assimilate the carbon dioxide released by the reaction to make biomass via the Calvin Cycle, and to harvest energy by oxidizing ammonia (the other product of urease) to nitrite. This feature may explain the enhanced growth of AOB in the presence of urea in acidic environments.
In most environments, organisms are present that will complete both steps of the process, yielding nitrate as the final product. However, it is possible to design systems in which nitrite is formed (the Sharon process).
Nitrification is important in agricultural systems, where fertilizer is often applied as ammonia. Conversion of this ammonia to nitrate increases nitrogen leaching because nitrate is more water-soluble than ammonia.
Nitrification also plays an important role in the removal of nitrogen from municipal wastewater. The conventional removal is nitrification, followed by denitrification. The cost of this process resides mainly in aeration (bringing oxygen in the reactor) and the addition of an external carbon source (e.g., methanol) for the denitrification.
Nitrification can also occur in drinking water. In distribution systems where chloramines are used as the secondary disinfectant, the presence of free ammonia can act as a substrate for ammonia-oxidizing microorganisms. The associated reactions can lead to the depletion of the disinfectant residual in the system. The addition of chlorite ion to chloramine-treated water has been shown to control nitrification.
Together with ammonification, nitrification forms a mineralization process that refers to the complete decomposition of organic material, with the release of available nitrogen compounds. This replenishes the nitrogen cycle.
Nitrification in the marine environment
In the marine environment, nitrogen is often the limiting nutrient, so the nitrogen cycle in the ocean is of particular interest. The nitrification step of the cycle is of particular interest in the ocean because it creates nitrate, the primary form of nitrogen responsible for "new" production. Furthermore, as the ocean becomes enriched in anthropogenic CO2, the resulting decrease in pH could lead to decreasing rates of nitrification. Nitrification could potentially become a "bottleneck" in the nitrogen cycle.
Nitrification, as stated above, is formally a two-step process; in the first step ammonia is oxidized to nitrite, and in the second step nitrite is oxidized to nitrate. Diverse microbes are responsible for each step in the marine environment. Several groups of ammonia-oxidizing bacteria (AOB) are known in the marine environment, including Nitrosomonas, Nitrospira, and Nitrosococcus. All contain the functional gene ammonia monooxygenase (AMO) which, as its name implies, is responsible for the oxidation of ammonia. Subsequent metagenomic studies and cultivation approaches have revealed that some Thermoproteota (formerly Crenarchaeota) possess AMO. Thermoproteota are abundant in the ocean and some species have a 200 times greater affinity for ammonia than AOB, contrasting with the previous belief that AOB are primarily responsible for nitrification in the ocean. Furthermore, though nitrification is classically thought to be vertically separated from primary production because the oxidation of nitrate by bacteria is inhibited by light, nitrification by AOA does not appear to be light inhibited, meaning that nitrification is occurring throughout the water column, challenging the classical definitions of "new" and "recycled" production.
In the second step, nitrite is oxidized to nitrate. In the oceans, this step is not as well understood as the first, but the bacteria Nitrospina and Nitrobacter are known to carry out this step in the ocean.
Chemistry and enzymology
Nitrification is a process of nitrogen compound oxidation (effectively, loss of electrons from the nitrogen atom to the oxygen atoms), and is catalyzed step-wise by a series of enzymes.
2NH4+ + 3O2 -> 2NO2- + 4H+ + 2H2O (Nitrosomonas, Comammox)
2NO2- + O2 -> 2NO3- (Nitrobacter, Nitrospira, Comammox)
OR
NH3 + O2 -> NO2- + 3H+ + 2e-
NO2- + H2O -> NO3- + 2H+ + 2e-
In Nitrosomonas europaea, the first step of oxidation (ammonia to hydroxylamine) is carried out by the enzyme ammonia monooxygenase (AMO).
NH3 + O2 + 2H+ + 2e- -> NH2OH + H2O
The second step (hydroxylamine to nitrite) is catalyzed by two enzymes. Hydroxylamine oxidoreductase (HAO), converts hydroxylamine to nitric oxide.
NH2OH -> NO + 3H+ + 3e-
Another currently unknown enzyme converts nitric oxide to nitrite.
The third step (nitrite to nitrate) is completed by a distinct organism.
nitrite + acceptor <=> nitrate + reduced acceptor
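As a simple illustration of how the two oxidation steps interact, they are often approximated as consecutive first-order reactions; the sketch below integrates such a model. The rate constants and starting concentration are arbitrary assumptions, chosen only to show that when ammonia oxidation is the slower (rate-limiting) step, nitrite stays low while nitrate accumulates.

```python
# Minimal sketch: the two nitrification steps approximated as consecutive
# first-order reactions, NH4+ --k1--> NO2- --k2--> NO3-.  Rate constants and
# the initial ammonium concentration are illustrative assumptions only.

k1, k2 = 0.10, 0.50              # per day; k1 < k2 makes ammonia oxidation rate-limiting
nh4, no2, no3 = 10.0, 0.0, 0.0   # mg-N per litre
dt, days = 0.01, 30

for _ in range(int(days / dt)):
    r1 = k1 * nh4                # ammonia oxidation (AOB/AOA, comammox)
    r2 = k2 * no2                # nitrite oxidation (NOB, comammox)
    nh4 -= r1 * dt
    no2 += (r1 - r2) * dt
    no3 += r2 * dt

print(f"after {days} d: NH4+ {nh4:.2f}, NO2- {no2:.2f}, NO3- {no3:.2f} mg-N/L")
```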
Factors Affecting Nitrification Rates
Soil conditions
Due to its inherent microbial nature, nitrification in soils is greatly susceptible to soil conditions. In general, soil nitrification will proceed at optimal rates if the conditions for the microbial communities foster healthy microbial growth and activity. Soil conditions that have an effect on nitrification rates include:
Substrate availability (presence of NH4+)
Aeration (availability of O2)
Soil moisture content (availability of H2O)
pH (near neutral)
Temperature
Inhibitors of nitrification
Nitrification inhibitors are chemical compounds that slow the nitrification of ammonia-, ammonium-, or urea-containing fertilizers applied to soil. These inhibitors can help reduce losses of nitrogen in soil that would otherwise be used by crops. Nitrification inhibitors are used widely, being added to approximately 50% of the fall-applied anhydrous ammonia in U.S. states such as Illinois. They are usually effective in increasing recovery of nitrogen fertilizer in row crops, but the level of effectiveness depends on external conditions, and their benefits are most likely to be seen at less than optimal nitrogen rates.
The environmental concerns of nitrification also contribute to interest in the use of nitrification inhibitors: the primary product, nitrate, leaches into groundwater, producing toxicity in both humans and some species of wildlife and contributing to the eutrophication of standing water. Some inhibitors of nitrification also inhibit the production of methane, a greenhouse gas.
The inhibition of the nitrification process is primarily facilitated by the selection and inhibition/destruction of the bacteria that oxidize ammonia compounds. A multitude of compounds inhibit nitrification, which can be divided into the following areas: the active site of ammonia monooxygenase (AMO), mechanistic inhibitors, and the process of N-heterocyclic compounds. The process for the latter of the three is not yet widely understood, but is prominent. The presence of AMO has been confirmed on many substrates that are nitrogen inhibitors such as dicyandiamide, ammonium thiosulfate, and nitrapyrin.
The conversion of ammonia to hydroxylamine is the first step in nitrification, where AH2 represents a range of potential electron donors.
NH3 + O2 + AH2 → NH2OH + A + H2O
This reaction is catalyzed by AMO. Inhibitors of this reaction bind to the active site on AMO and prevent or delay the process. The process of oxidation of ammonia by AMO is regarded with importance due to the fact that other processes require the co-oxidation of NH3 for a supply of reducing equivalents. This is usually supplied by the compound hydroxylamine oxidoreductase (HAO) which catalyzes the reaction:
NH2OH + H2O → NO2− + 5 H+ + 4 e−
The mechanism of inhibition is complicated by this requirement. Kinetic analysis of the inhibition of NH3 oxidation has shown that the substrates of AMO have shown kinetics ranging from competitive to noncompetitive. The binding and oxidation can occur on two sites on AMO: in competitive substrates, binding and oxidation occurs at the NH3 site, while in noncompetitive substrates it occurs at another site.
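For illustration, the competitive and noncompetitive cases mentioned above can be written in standard Michaelis–Menten form; the sketch below compares the two rate laws. All constants and concentrations are hypothetical placeholders, not measured AMO parameters. At high substrate concentration the competitive inhibition is largely overcome while the noncompetitive inhibition is not, which is the usual kinetic signature used to distinguish the two.

```python
# Minimal sketch of the two inhibition modes, written in Michaelis-Menten form.
# Vmax, Km, Ki and the concentrations are illustrative assumptions only.

def competitive(S, I, Vmax=1.0, Km=40.0, Ki=10.0):
    # inhibitor competes with the substrate at the same site: apparent Km rises
    return Vmax * S / (Km * (1 + I / Ki) + S)

def noncompetitive(S, I, Vmax=1.0, Km=40.0, Ki=10.0):
    # inhibitor binds a second site: apparent Vmax falls
    return Vmax * S / ((Km + S) * (1 + I / Ki))

for S in (10.0, 100.0, 1000.0):   # substrate concentration, arbitrary units
    print(S, round(competitive(S, I=10.0), 3), round(noncompetitive(S, I=10.0), 3))
```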
Mechanism based inhibitors can be defined as compounds that interrupt the normal reaction catalyzed by an enzyme. This method occurs by the inactivation of the enzyme via covalent modification of the product, which ultimately inhibits nitrification. Through the process, AMO is deactivated and one or more proteins is covalently bonded to the final product. This is found to be most prominent in a broad range of sulfur or acetylenic compounds.
Sulfur-containing compounds, including ammonium thiosulfate (a popular inhibitor) are found to operate by producing volatile compounds with strong inhibitory effects such as carbon disulfide and thiourea.
In particular, thiophosphoryl triamide has been a notable addition, serving the dual purpose of inhibiting both urease activity and nitrification. In a study of the inhibitory effects on oxidation by the bacterium Nitrosomonas europaea, the use of thioethers resulted in the oxidation of these compounds to sulfoxides, with the S atom being the primary site of oxidation by AMO. This is most strongly correlated with competitive inhibition.
N-heterocyclic compounds are also highly effective nitrification inhibitors and are often classified by their ring structure. The mode of action for these compounds is not well understood: while nitrapyrin, a widely used inhibitor and substrate of AMO, is a weak mechanism-based inhibitor of said enzyme, the effects of said mechanism are unable to correlate directly with the compound's ability to inhibit nitrification. It is suggested that nitrapyrin acts against the monooxygenase enzyme within the bacteria, preventing growth and CH4/NH4 oxidation. Compounds containing two or three adjacent ring N atoms (pyridazine, pyrazole, indazole) tend to have a significantly higher inhibition effect than compounds containing non-adjacent N atoms or singular ring N atoms (pyridine, pyrrole). This suggests that the presence of ring N atoms is directly correlated with the inhibition effect of this class of compounds.
Methane oxidation inhibition
Some enzymatic nitrification inhibitors, such as nitrapyrin, can also inhibit the oxidation of methane in methanotrophic bacteria. AMO shows similar kinetic turnover rates to methane monooxygenase (MMO) found in methanotrophs, indicating that MMO is a similar catalyst to AMO for the purpose of methane oxidation. Furthermore, methanotrophic bacteria share many similarities to oxidizers such as Nitrosomonas. The inhibitor profile of particulate forms of MMO (pMMO) shows similarity to the profile of AMO, leading to similarity in properties between MMO in methanotrophs and AMO in autotrophs.
Environmental concerns
Nitrification inhibitors are also of interest from an environmental standpoint because of the production of nitrates and nitrous oxide from the nitrification process. Nitrous oxide (N2O), although its atmospheric concentration is much lower than that of CO2, has a global warming potential of about 300 times greater than carbon dioxide and contributes 6% of planetary warming due to greenhouse gases. This compound is also notable for catalyzing the breakup of ozone in the stratosphere. Nitrates, a toxic compound for wildlife and livestock and a product of nitrification, are also of concern.
Soil, consisting of polyanionic clays and silicates, generally has a net anionic charge. Consequently, ammonium (NH4+) binds tightly to the soil, but nitrate ions (NO3−) do not. Because nitrate is more mobile, it leaches into groundwater supplies through agricultural runoff. Nitrates in groundwater can affect surface water concentrations through direct groundwater-surface water interactions (e.g., gaining stream reaches, springs) or from when it is extracted for surface use. For example, much of the drinking water in the United States comes from groundwater, but most wastewater treatment plants discharge to surface water.
Among wildlife, amphibians (tadpoles) and freshwater fish eggs are most sensitive to elevated nitrate levels and experience growth and developmental damage at levels commonly found in U.S. freshwater bodies (<20 mg/l). In contrast, freshwater invertebrates are more tolerant (~90+mg/l), and adult freshwater fish can tolerate very high levels (800 mg+/l). Nitrate levels also contribute to eutrophication, a process in which large algal blooms reduce oxygen levels in bodies of water and lead to death in oxygen-consuming creatures due to anoxia. Nitrification is also thought to contribute to the formation of photochemical smog, ground-level ozone, acid rain, changes in species diversity, and other undesirable processes. In addition, nitrification inhibitors have also been shown to suppress the oxidation of methane (CH4), a potent greenhouse gas, to CO2. Both nitrapyrin and acetylene are shown to be potent suppressors of both processes, although the modes of action distinguishing them are unclear.
See also
f-ratio
Haber process
Nitrifying bacteria
Nitrogen fixation
Simultaneous nitrification-denitrification
Comammox
References
External links
Nitrification at the heart of filtration at fishdoc.co.uk
Nitrification at University of Aberdeen · King's College
Nitrification Basics for Aerated Lagoon Operators at lagoonsonline.com
Biochemical reactions
Nitrogen cycle
Soil biology | Nitrification | [
"Chemistry",
"Biology"
] | 4,309 | [
"Biochemical reactions",
"Nitrogen cycle",
"Soil biology",
"Biochemistry",
"Metabolism"
] |
361,157 | https://en.wikipedia.org/wiki/Bio-inspired%20computing | Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.
History
Early Ideas
The ideas behind biological computing trace back to 1936 and the first description of an abstract computer, which is now known as a Turing machine. Turing first described the abstract construct in terms of a biological analogy. He imagined a mathematician with three important attributes: he always has a pencil with an eraser, an unlimited supply of paper, and a working set of eyes. The eyes allow the mathematician to see and perceive any symbols written on the paper, while the pencil allows him to write and erase any symbols that he wants. Lastly, the unlimited paper allows him to store anything he wants in memory. Using these ideas Turing was able to describe an abstraction of the modern digital computer. However, Turing noted that anything able to perform these functions can be considered such a machine, and he even argued that electricity should not be required to describe digital computation and machine thinking in general.
Neural Networks
First described in 1943 by Warren McCulloch and Walter Pitts, neural networks are a prevalent example of biological systems inspiring the creation of computer algorithms. McCulloch and Pitts showed mathematically that a system of simple neurons can produce basic logical operations such as conjunction, disjunction and negation. They further showed that a system of neural networks can be used to carry out any calculation that requires finite memory. Around 1970 research on neural networks slowed down, and many consider the 1969 book by Marvin Minsky and Seymour Papert the main cause. Their book showed that the neural network models then studied could only model systems based on threshold functions, Boolean functions that become true only above a certain threshold value, and that a large number of systems cannot be represented in this way, meaning they cannot be modeled by such networks. A later book by David Rumelhart and James McClelland in 1986 brought neural networks back into the spotlight by demonstrating the back-propagation algorithm, which allowed the development of multi-layered neural networks that did not adhere to those limits.
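A threshold unit of the kind McCulloch and Pitts described can be written in a few lines. The sketch below is a minimal illustration with hand-chosen weights and thresholds that realize conjunction, disjunction and negation; it is not tied to any particular library or to their original notation.

```python
# Minimal sketch of a McCulloch-Pitts style threshold unit. Weights and
# thresholds below are chosen by hand for illustration.

def threshold_unit(inputs, weights, threshold):
    """Fire (1) if the weighted sum of the inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

AND = lambda a, b: threshold_unit((a, b), (1, 1), 2)
OR  = lambda a, b: threshold_unit((a, b), (1, 1), 1)
NOT = lambda a:    threshold_unit((a,),   (-1,),  0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
```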
Ant Colonies
Douglas Hofstadter in 1979 described the idea of a biological system capable of performing intelligent computations even though the individuals comprising it might not be intelligent. More specifically, he gave the example of an ant colony that can carry out intelligent tasks together although each individual ant cannot, something called "emergent behavior." Azimi et al. in 2009 showed that what they described as the "ant colony" algorithm, a clustering algorithm, is able to determine the number of clusters and produce highly competitive final clusters comparable to those of traditional algorithms. Lastly, Hölder and Wilson in 2009 concluded, using historical data, that ants have evolved to function as a single "superorganism" colony. This was a very important result, since it suggested that group-selection evolutionary algorithms coupled with algorithms similar to the "ant colony" algorithm can potentially be used to develop more powerful algorithms.
Areas of research
Some areas of study in biologically inspired computing, and their biological counterparts:
Population Based Bio-Inspired Algorithms
Bio-inspired algorithms that work on a population of possible solutions, whether in the context of evolutionary algorithms or of swarm intelligence algorithms, are grouped as Population Based Bio-Inspired Algorithms (PBBIA). They include evolutionary algorithms, particle swarm optimization, ant colony optimization algorithms and artificial bee colony algorithms.
Virtual Insect Example
Bio-inspired computing can be used to train a virtual insect. The insect is trained to navigate unknown terrain in search of food using six simple rules (a minimal sketch of these rules follows the list):
turn right for target-and-obstacle left;
turn left for target-and-obstacle right;
turn left for target-left-obstacle-right;
turn right for target-right-obstacle-left;
turn left for target-left without obstacle;
turn right for target-right without obstacle.
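A minimal sketch of the six rules as a plain decision function is shown below. The boolean sensor inputs and the returned steering labels are illustrative assumptions; in the approach described above the mapping is realized by a trained spiking neural network rather than hard-coded.

```python
# Minimal sketch of the six navigation rules as a plain decision function.
# 'target_*' and 'obstacle_*' are assumed boolean sensor readings for the
# left and right side of the virtual insect.

def steer(target_left, target_right, obstacle_left, obstacle_right):
    if target_left and obstacle_left:      # target-and-obstacle left -> turn right
        return "right"
    if target_right and obstacle_right:    # target-and-obstacle right -> turn left
        return "left"
    if target_left and obstacle_right:     # target left, obstacle right -> turn left
        return "left"
    if target_right and obstacle_left:     # target right, obstacle left -> turn right
        return "right"
    if target_left:                        # target left, no obstacle -> turn left
        return "left"
    if target_right:                       # target right, no obstacle -> turn right
        return "right"
    return "straight"

print(steer(target_left=True, target_right=False,
            obstacle_left=False, obstacle_right=True))   # -> "left"
```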
The virtual insect controlled by the trained spiking neural network can find food after training in any unknown terrain. After several generations of rule application it is usually the case that some forms of complex behaviour emerge. Complexity gets built upon complexity until the result is something markedly complex, and quite often completely counterintuitive from what the original rules would be expected to produce (see complex systems). For this reason, when modeling the neural network, it is necessary to accurately model an in vivo network, by live collection of "noise" coefficients that can be used to refine statistical inference and extrapolation as system complexity increases.
Natural evolution is a good analogy to this method–the rules of evolution (selection, recombination/reproduction, mutation and more recently transposition) are in principle simple rules, yet over millions of years have produced remarkably complex organisms. A similar technique is used in genetic algorithms.
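To make the analogy concrete, the sketch below is a toy genetic algorithm that applies selection, recombination and mutation to maximize a trivial fitness function (the number of 1-bits in a genome). Population size, rates and the fitness function are arbitrary illustrative choices, not a reference implementation.

```python
import random

# Toy genetic algorithm illustrating selection, recombination and mutation.
# Genome length, population size, rates and fitness are illustrative choices.

random.seed(0)
GENOME, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.02
fitness = lambda g: sum(g)   # number of 1-bits

population = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]                       # selection: keep the fitter half
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, GENOME)                 # recombination: one-point crossover
        child = a[:cut] + b[cut:]
        child = [1 - bit if random.random() < MUTATION else bit for bit in child]  # mutation
        children.append(child)
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```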
Brain-inspired computing
Brain-inspired computing refers to computational models and methods that are mainly based on the mechanisms of the brain, rather than completely imitating the brain. The goal is to enable machines to realize various cognitive abilities and coordination mechanisms of human beings in a brain-inspired manner, and ultimately to reach or exceed human-level intelligence.
Research
Artificial intelligence researchers are now aware of the benefits of learning from the brain's information processing mechanisms, and progress in brain science and neuroscience provides the necessary basis for artificial intelligence to learn from those mechanisms. Brain and neuroscience researchers are likewise trying to apply their understanding of brain information processing to a wider range of scientific fields. The development of the discipline benefits from the push of information technology and smart technology, and in turn brain science and neuroscience will inspire the next generation of information technology.
The influence of brain science on Brain-inspired computing
Advances in brain and neuroscience, especially with the help of new technologies and new equipment, support researchers to obtain multi-scale, multi-type biological evidence of the brain through different experimental methods, and are trying to reveal the structure of bio-intelligence from different aspects and functional basis. From the microscopic neurons, synaptic working mechanisms and their characteristics, to the mesoscopic network connection model, to the links in the macroscopic brain interval and their synergistic characteristics, the multi-scale structure and functional mechanisms of brains derived from these experimental and mechanistic studies will provide important inspiration for building a future brain-inspired computing model.
Brain-inspired chip
Broadly speaking, brain-inspired chip refers to a chip designed with reference to the structure of human brain neurons and the cognitive mode of human brain. Obviously, the "neuromorphic chip" is a brain-inspired chip that focuses on the design of the chip structure with reference to the human brain neuron model and its tissue structure, which represents a major direction of brain-inspired chip research. Along with the rise and development of “brain plans” in various countries, a large number of research results on neuromorphic chips have emerged, which have received extensive international attention and are well known to the academic community and the industry. For example, EU-backed SpiNNaker and BrainScaleS, Stanford's Neurogrid, IBM's TrueNorth, and Qualcomm's Zeroth.
TrueNorth is a brain-inspired chip that IBM has been developing for nearly 10 years. The US DARPA program has been funding IBM to develop pulsed neural network chips for intelligent processing since 2008. In 2011, IBM first developed two cognitive silicon prototypes by simulating brain structures that could learn and process information like the brain. Each neuron of a brain-inspired chip is cross-connected with massive parallelism. In 2014, IBM released the second-generation brain-inspired chip, TrueNorth. Compared with the first-generation chips, the performance of TrueNorth increased dramatically: the number of neurons increased from 256 to 1 million, and the number of programmable synapses increased from 262,144 to 256 million, with a total power consumption of 70 mW and a power density of 20 mW per square centimeter. At the same time, a TrueNorth processing core occupies only 1/15 of the volume of the first-generation chip. At present, IBM has developed a prototype neuron computer that uses 16 TrueNorth chips and has real-time video processing capabilities. The chip's exceptionally high specifications caused a great stir in the academic world at the time of its release.
In 2012, the Institute of Computing Technology of the Chinese Academy of Sciences (CAS) and the French research institute Inria collaborated to develop the "Cambrian" chip, the first chip in the world to support a deep neural network processor architecture. The work won awards at ASPLOS and MICRO, leading international conferences in the field of computer architecture, and its design method and performance have been recognized internationally. The chip can be regarded as an outstanding representative of research on brain-inspired chips.
Unclear Brain mechanism cognition
The human brain is a product of evolution. Although its structure and information processing mechanism are constantly being optimized, compromises in the evolutionary process are inevitable. The cranial nervous system is a multi-scale structure, and there are still several unresolved problems in the mechanism of information processing at each scale, such as the fine connection structure at the neuron scale and the mechanisms of brain-scale feedback. Therefore, even a comprehensive model covering only 1/1000 of the neurons and synapses of the human brain remains very difficult to study at the current level of scientific research.
Recent advances in brain simulation linked individual variability in human cognitive processing speed and fluid intelligence to the balance of excitation and inhibition in structural brain networks, functional connectivity, winner-take-all decision-making and attractor working memory.
Unclear Brain-inspired computational models and algorithms
In future research on cognitive brain computing models, it will be necessary to model the brain's information processing system based on the results of multi-scale analysis of brain neural system data, to construct a brain-inspired multi-scale neural network computing model, and to simulate, at multiple scales, the brain's multi-modal intelligent behavioral abilities such as perception, self-learning, memory, and choice. Machine learning algorithms are not flexible and require high-quality sample data that is manually labeled on a large scale. Training models requires a great deal of computational overhead. Brain-inspired artificial intelligence still lacks advanced cognitive ability and inferential learning ability.
Constrained Computational architecture and capabilities
Most existing brain-inspired chips are still based on von Neumann architecture research, and most chip manufacturing materials are still traditional semiconductor materials. The neural chip borrows only the most basic unit of brain information processing; more fundamental mechanisms, such as the fusion of storage and computation, the pulse discharge mechanism, and the connection mechanisms between neurons, as well as the interplay between information processing units at different scales, have not yet been integrated into the study of brain-inspired computing architectures. An important international trend is now to develop neural computing components such as brain memristors, memory containers, and sensory sensors based on new materials such as nanomaterials, in order to support the construction of more complex brain-inspired computing architectures. The development of brain-inspired computers and large-scale brain computing systems based on brain-inspired chips also requires a corresponding software environment to support their wide application.
See also
Applications of artificial intelligence
Behavior based robotics
Bioinformatics
Bionics
Cognitive architecture
Cognitive modeling
Cognitive science
Connectionism
Digital morphogenesis
Digital organism
Fuzzy logic
Gene expression programming
Genetic algorithm
Genetic programming
Gerald Edelman
Janine Benyus
Learning classifier system
Mark A. O'Neill
Mathematical biology
Mathematical model
Natural computation
Neuroevolution
Olaf Sporns
Organic computing
Unconventional computing
Lists
List of emerging technologies
Outline of artificial intelligence
References
Further reading
(the following are presented in ascending order of complexity and depth, with those new to the field suggested to start from the top)
"Nature-Inspired Algorithms"
"Biologically Inspired Computing"
"Digital Biology", Peter J. Bentley.
"First International Symposium on Biologically Inspired Computing"
Emergence: The Connected Lives of Ants, Brains, Cities and Software, Steven Johnson.
Dr. Dobb's Journal, Apr-1991. (Issue theme: Biocomputing)
Turtles, Termites and Traffic Jams, Mitchel Resnick.
Understanding Nonlinear Dynamics, Daniel Kaplan and Leon Glass.
Swarms and Swarm Intelligence by Michael G. Hinchey, Roy Sterritt, and Chris Rouff,
Fundamentals of Natural Computing: Basic Concepts, Algorithms, and Applications, L. N. de Castro, Chapman & Hall/CRC, June 2006.
"The Computational Beauty of Nature", Gary William Flake. MIT Press. 1998, hardcover ed.; 2000, paperback ed. An in-depth discussion of many of the topics and underlying themes of bio-inspired computing.
Kevin M. Passino, Biomimicry for Optimization, Control, and Automation, Springer-Verlag, London, UK, 2005.
Recent Developments in Biologically Inspired Computing, L. N. de Castro and F. J. Von Zuben, Idea Group Publishing, 2004.
Nancy Forbes, Imitation of Life: How Biology is Inspiring Computing, MIT Press, Cambridge, MA 2004.
M. Blowers and A. Sisti, Evolutionary and Bio-inspired Computation: Theory and Applications, SPIE Press, 2007.
X. S. Yang, Z. H. Cui, R. B. Xiao, A. H. Gandomi, M. Karamanoglu, Swarm Intelligence and Bio-Inspired Computation: Theory and Applications, Elsevier, 2013.
"Biologically Inspired Computing Lecture Notes", Luis M. Rocha
The portable UNIX programming system (PUPS) and CANTOR: a computational envorionment for dynamical representation and analysis of complex neurobiological data, Mark A. O'Neill, and Claus-C Hilgetag, Phil Trans R Soc Lond B 356 (2001), 1259–1276
"Going Back to our Roots: Second Generation Biocomputing", J. Timmis, M. Amos, W. Banzhaf, and A. Tyrrell, Journal of Unconventional Computing 2 (2007) 349–378.
C-M. Pintea, 2014, Advances in Bio-inspired Computing for Combinatorial Optimization Problem, Springer
"PSA: A novel optimization algorithm based on survival rules of porcellio scaber", Y. Zhang and S. Li
External links
Nature Inspired Computing and Engineering (NICE) Group, University of Surrey, UK
ALife Project in Sussex
Biologically Inspired Computation for Chemical Sensing Neurochem Project
AND Corporation
Centre of Excellence for Research in Computational Intelligence and Applications Birmingham, UK
BiSNET: Biologically-inspired architecture for Sensor NETworks
BiSNET/e: A Cognitive Sensor Networking Architecture with Evolutionary Multiobjective Optimization
Biologically inspired neural networks
NCRA UCD, Dublin Ireland
The PUPS/P3 Organic Computing Environment for Linux
SymbioticSphere: A Biologically-inspired Architecture for Scalable, Adaptive and Survivable Network Systems
The runner-root algorithm
Bio-inspired Wireless Networking Team (BioNet)
Biologically Inspired Intelligence
Theoretical computer science
Natural computation
Bioinspiration | Bio-inspired computing | [
"Mathematics",
"Engineering",
"Biology"
] | 3,123 | [
"Theoretical computer science",
"Applied mathematics",
"Biological engineering",
"Bioinspiration"
] |
361,356 | https://en.wikipedia.org/wiki/Negentropy | In information theory and statistics, negentropy is used as a measure of distance to normality. The concept and phrase "negative entropy" was introduced by Erwin Schrödinger in his 1944 popular-science book What is Life? Later, French physicist Léon Brillouin shortened the phrase to néguentropie (negentropy). In 1974, Albert Szent-Györgyi proposed replacing the term negentropy with syntropy. That term may have originated in the 1940s with the Italian mathematician Luigi Fantappiè, who tried to construct a unified theory of biology and physics. Buckminster Fuller tried to popularize this usage, but negentropy remains common.
In a note to What is Life? Schrödinger explained his use of this phrase.
Information theory
In information theory and statistics, negentropy is used as a measure of distance to normality. Out of all distributions with a given mean and variance, the normal or Gaussian distribution is the one with the highest entropy. Negentropy measures the difference in entropy between a given distribution and the Gaussian distribution with the same mean and variance. Thus, negentropy is always nonnegative, is invariant by any linear invertible change of coordinates, and vanishes if and only if the signal is Gaussian.
Negentropy is defined as J(x) = S(φ_x) − S(x), where S(φ_x) is the differential entropy of the Gaussian density φ_x with the same mean and variance as x, and S(x) is the differential entropy of x:
S(x) = −∫ p_x(u) log p_x(u) du
Negentropy is used in statistics and signal processing. It is related to network entropy, which is used in independent component analysis.
The negentropy of a distribution is equal to the Kullback–Leibler divergence between the distribution of x and a Gaussian distribution with the same mean and variance as x (see for a proof). In particular, it is always nonnegative.
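As a rough numerical illustration of the definition, the sketch below compares a crude histogram estimate of a sample's differential entropy with the closed-form entropy of a Gaussian having the same variance; their difference approximates the negentropy. The bin count, sample sizes and test distributions are arbitrary assumptions, and histogram entropy estimators are biased, so the numbers are only indicative.

```python
import numpy as np

# Minimal sketch: crude histogram estimate of negentropy for a 1-D sample.
# The Gaussian term uses the closed form 0.5*ln(2*pi*e*var); the histogram
# entropy estimator is rough and only meant to illustrate the definition.

def negentropy(samples, bins=60):
    var = np.var(samples)
    h_gauss = 0.5 * np.log(2 * np.pi * np.e * var)         # entropy of matching Gaussian
    counts, edges = np.histogram(samples, bins=bins, density=True)
    widths = np.diff(edges)
    nz = counts > 0
    h_sample = -np.sum(counts[nz] * np.log(counts[nz]) * widths[nz])  # differential entropy estimate
    return h_gauss - h_sample                                # ~0 for Gaussian data, >0 otherwise

rng = np.random.default_rng(0)
print("gaussian:", round(negentropy(rng.normal(size=100_000)), 3))          # close to 0
print("uniform :", round(negentropy(rng.uniform(-1, 1, size=100_000)), 3))  # clearly > 0
```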
Correlation between statistical negentropy and Gibbs' free energy
There is a physical quantity closely linked to free energy (free enthalpy), with a unit of entropy, that is isomorphic to negentropy as known in statistics and information theory. In 1873, Willard Gibbs created a diagram illustrating the concept of free energy corresponding to free enthalpy. On the diagram one can see the quantity called the capacity for entropy. This quantity is the amount by which the entropy may be increased without changing the internal energy or increasing the volume. In other words, it is the difference between the maximum possible entropy, under assumed conditions, and the actual entropy. It corresponds exactly to the definition of negentropy adopted in statistics and information theory. A similar physical quantity was introduced in 1869 by Massieu for the isothermal process (the two quantities differ only in sign) and later by Planck for the isothermal-isobaric process. More recently, the Massieu–Planck thermodynamic potential, known also as free entropy, has been shown to play a great role in the so-called entropic formulation of statistical mechanics, applied among others in molecular biology and thermodynamic non-equilibrium processes.
where:
S is entropy
J is negentropy (Gibbs' "capacity for entropy")
Φ is the Massieu potential
Z is the partition function
k is the Boltzmann constant
In particular, mathematically the negentropy (the negative entropy function, in physics interpreted as free entropy) is the convex conjugate of LogSumExp (in physics interpreted as the free energy).
Brillouin's negentropy principle of information
In 1953, Léon Brillouin derived a general equation stating that changing an information bit value requires at least kT ln 2 of energy. This is the same energy as the work Leó Szilárd's engine produces in the idealistic case. In his book, he further explored this problem, concluding that any cause of this bit value change (measurement, decision about a yes/no question, erasure, display, etc.) will require the same amount of energy.
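For a sense of scale (taking room temperature, about 300 K, as an assumed example), this lower bound evaluates to roughly 3 × 10^-21 joules per bit:

```python
import math

# The Brillouin/Landauer bound kT*ln(2), evaluated at an assumed room
# temperature of 300 K for illustration.
k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0            # K
print(f"{k_B * T * math.log(2):.3e} J per bit")   # ~2.87e-21 J
```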
See also
Exergy
Free entropy
Entropy in thermodynamics and information theory
Notes
Entropy and information
Statistical deviation and dispersion
Thermodynamic entropy | Negentropy | [
"Physics",
"Mathematics"
] | 844 | [
"Physical quantities",
"Thermodynamic entropy",
"Entropy and information",
"Entropy",
"Statistical mechanics",
"Dynamical systems"
] |
8,941,378 | https://en.wikipedia.org/wiki/MIT%20General%20Circulation%20Model | The MIT General Circulation Model (MITgcm) is a numerical computer code that solves the equations of motion governing the ocean or Earth's atmosphere using the finite volume method. It was developed at the Massachusetts Institute of Technology and was one of the first non-hydrostatic models of the ocean. It has an automatically generated adjoint that allows the model to be used for data assimilation. The MITgcm is written in the programming language Fortran.
History
See also
Physical oceanography
Global climate model
References
External links
The MITgcm home page
Department of Earth, Atmospheric and Planetary Science at MIT
The ECCO2 consortium
Physical oceanography
Numerical climate and weather models | MIT General Circulation Model | [
"Physics"
] | 135 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
8,941,394 | https://en.wikipedia.org/wiki/Downs%20cell | Downs' process is an electrochemical method for the commercial preparation of metallic sodium, in which molten NaCl is electrolyzed in a special apparatus called the Downs cell. The Downs cell was invented in 1923 (patented: 1924) by the American chemist James Cloyd Downs (1885–1957).
Operation
The Downs cell uses a carbon anode and an iron cathode. The electrolyte is sodium chloride that has been heated to the liquid state. Although solid sodium chloride is a poor conductor of electricity, when molten the sodium and chloride ions are mobilized and become charge carriers, allowing conduction of electric current.
Some calcium chloride and/or chlorides of barium (BaCl2) and strontium (SrCl2), and, in some processes, sodium fluoride (NaF), are added to the electrolyte to reduce the temperature required to keep the electrolyte liquid. Sodium chloride (NaCl) melts at 801 °C (1074 K), but a salt mixture can be kept liquid at a temperature as low as 600 °C for a mixture containing, by weight, 33.2% NaCl and 66.8% CaCl2. If pure sodium chloride is used, a metallic sodium emulsion forms in the molten NaCl which is impossible to separate. Therefore, one option is to use a NaCl (42%) and CaCl2 (58%) mixture.
The anode reaction is:
2Cl− → Cl2 (g) + 2e−
The cathode reaction is:
2Na+ + 2e− → 2Na (l)
for an overall reaction of
2Na+ + 2Cl− → 2Na (l) + Cl2 (g)
The calcium does not enter into the reaction because its reduction potential of −2.87 volts is lower than that of sodium, which is −2.71 volts. Hence the sodium ions are reduced to metallic form in preference to those of calcium. If the electrolyte contained only calcium ions and no sodium, calcium metal would be produced as the cathode product (which indeed is how metallic calcium is produced).
Both the products of the electrolysis, sodium metal and chlorine gas, are less dense than the electrolyte and therefore float to the surface. Perforated iron baffles are arranged in the cell to direct the products into separate chambers without their ever coming into contact with each other.
Although theory predicts that a potential of a little over 4.07 volts should be sufficient to cause the reaction to go forward, in practice potentials of up to 8 volts are used. This is done in order to achieve useful current densities in the electrolyte despite its inherent electrical resistance. The overvoltage and consequent resistive heating contributes to the heat required to keep the electrolyte in a liquid state.
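As a back-of-envelope illustration of what these voltages imply, Faraday's law gives the minimum electrical energy per kilogram of sodium (one electron per sodium ion). The sketch below uses the 4.07 V theoretical figure and the roughly 8 V operating figure quoted above; it ignores current efficiency and heat losses, so real plant figures will differ.

```python
# Back-of-envelope energy estimate for the Downs cell from Faraday's law.
# Assumes one electron per sodium atom and ideal current efficiency.

F = 96485.0       # C/mol, Faraday constant
M_NA = 22.99e-3   # kg/mol, molar mass of sodium

for volts in (4.07, 8.0):
    energy_per_kg = volts * F / M_NA              # J per kg of sodium
    print(f"{volts:4.2f} V -> {energy_per_kg / 3.6e6:5.2f} kWh per kg Na")
```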
The Downs' process also produces chlorine as a byproduct, although chlorine produced this way accounts for only a small fraction of chlorine produced industrially by other methods.
References
Chemical processes
Electrolytic cells
Metallurgical processes | Downs cell | [
"Chemistry",
"Materials_science"
] | 645 | [
"Metallurgical processes",
"Metallurgy",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
8,941,596 | https://en.wikipedia.org/wiki/Cured-in-place%20pipe | A cured-in-place pipe (CIPP) is a trenchless rehabilitation method used to repair existing pipelines. It is a jointless, seamless pipe lining within an existing pipe. As one of the most widely used rehabilitation methods, CIPP has applications in sewer, water, gas, chemical and district heating pipelines ranging in diameter from 0.1 to 2.8 meters (2–110 inches).
The process of CIPP involves inserting and running a felt lining into a preexisting pipe that is the subject of repair. Resin within the liner is then exposed to a curing element to harden it and make it attach to the inner walls of the pipe. Once fully cured, the lining now acts as a new pipeline.
Process
Installation
A resin impregnated felt tube made of polyester, fiberglass cloth, spread tow carbon fiber or other resin-impregnable substance, is inserted or pulled through a damaged pipe, usually from an upstream access point such as a manhole or excavation. (It is possible to insert the liner from a downstream access point, but this is more risky). CIPP is considered a trenchless technology, meaning little to no digging is typically required, for a potentially more cost-effective and less disruptive method than traditional "dig and replace" pipe repair methods. The liner is inserted using water or air pressure, applied via pressure vessels, scaffolds or a "chip unit".
Curing
Cured-in-place pipes require that their resin be cured after installation to achieve full strength, by hot water or steam or, if a fiberglass tube is used, by UV light. As the resin cures, a tight-fitting, jointless and corrosion-resistant replacement pipe is formed. Service laterals, where present, can be reconnected from within the newly-formed larger-diameter pipe, by cutting replacement openings using robotically controlled cutting devices, then sealed using specially-designed CIPP materials referred to as 'top-hats'. The resins used are typically polyester for mainline lining and epoxy for lateral lines.
Since all resins shrink (epoxy resins shrink far less than poly and vinyl ester versions) and because it is impossible to bond to a sewer line with fats, oils, and grease present, an annular space is always created around the new CIPP liner, between it and the host pipe. Some spaces are large enough to require additional work to prevent water from moving along them and re-entering the waste stream, for example: insertion of hydrophilic material which swells to fill the void; lining of the entire connection and host pipe with continuous repair (YT repair) gaskets; and point repairs placed at the ends of the host pipe.
History
Conception
In 1971, Eric Wood implemented the first cured-in-place pipe technology in London, England. He called the CIPP process insitu form, derived from the Latin meaning "form in place". Wood applied for U.S. patent no. 4009063 on January 29, 1975. The patent was granted February 22, 1977, and was commercialized by Insituform Technologies until it entered the public domain on February 22, 1994.
Implementation
The process began to be used in residential and commercial applications in Japan and Europe in the 1970s and for residential application in the United States in the 1980s.
Advantages
CIPP does not typically require excavation to rehabilitate a leaking or structurally unsound pipeline. (Depending upon design considerations an excavation may be made, but the liner is often installed through a manhole or other existing access point.) CIPP has a smooth, jointless interior.
Disadvantages and limitations
Except for very common sizes, liners are not usually stocked and must be made specifically for each project. CIPP requires bypassing the existing pipeline while the liner is being installed, which may be inconvenient as, depending on diameter and system used (steam, water or UV), curing may take from one to 30 hours and must be carefully monitored, inspected, and tested. Obstructions in the existing pipeline, such as protruding laterals, must be removed prior to installation. CIPP is not always cheaper than similar methods such as Shotcrete, thermoformed pipe, close-fit pipe, spiral wound pipe and sliplining. The CIPP process may release chemical agents into the surrounding environment. The most common liner material, a non-woven felted fabric, does not go around bends well without wrinkling nor maintain roundness going around corners. Once a line is repaired with the CIPP method, it can no longer be cleaned using cables or snakes; instead, high-pressure water blasting (hydrojetting) must be used.
Quality assurance and quality control
Testing of CIPP installations is required to confirm that the materials used comply with the site and engineering requirements. Since ground and ambient installation conditions as well as crew skills can affect the success or failure of a cure cycle, testing is performed by 3rd party laboratories in normal cases and should be requested by the owner.
Samples should be representative of the installation environment since the liner is installed in the ground. Wet sandbags should be used around the restraint where the test sample will be extracted from. As with any specimen preparation for a materials test, it is important to not affect the material properties during the specimen preparation process. Research has shown that test specimen selection can have a significant effect on the CIPP flexural testing results. A technical presentation at the CERIU INFRA 2012 Infrastructures Municipales Conference in Montreal outlined the results of a research project which examined the effects of test specimen preparation on measured flexural properties. Test specimens for ASTM D790 flexural testing must meet the dimensional tolerances of ASTM D790.
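For orientation only, the snippet below sketches the standard three-point-bend relations commonly used when flexural properties of flat coupons are reported (outer-fibre stress sigma = 3FL/2bd^2 and bending modulus E = L^3·m/4bd^3). It is not a substitute for the referenced standards, and the specimen dimensions and loads are invented illustrative numbers.

```python
# Illustrative three-point-bend relations used in flexural testing of flat coupons.
def flexural_stress_mpa(force_n, span_mm, width_mm, depth_mm):
    """Outer-fibre stress at mid-span: sigma = 3 F L / (2 b d^2), in MPa."""
    return 3.0 * force_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

def flexural_modulus_mpa(slope_n_per_mm, span_mm, width_mm, depth_mm):
    """Modulus of elasticity in bending: E = L^3 m / (4 b d^3), in MPa,
    where m is the initial slope of the load-deflection curve."""
    return span_mm ** 3 * slope_n_per_mm / (4.0 * width_mm * depth_mm ** 3)

# Hypothetical coupon: 6 mm thick, 25 mm wide, tested on a 96 mm span (16:1 span-to-depth).
print(flexural_stress_mpa(400, 96, 25, 6))     # ~64 MPa at a 400 N load
print(flexural_modulus_mpa(180, 96, 25, 6))    # ~7400 MPa (~7.4 GPa)
```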
The North American CIPP industry has standardized around the standard ASTM F1216 which uses test specimens oriented parallel with the pipe axis, while Europe uses the standard EN ISO 11296–4 with test specimens oriented in the hoop direction. Research has shown that flexural testing results from the same liner material are usually lower when determined using EN ISO 11296-4 as compared to ASTM F1216.
Environmental, public health, and infrastructure incidents
Testing conducted by the Virginia Department of Transportation and university researchers from 2011 to 2013 showed that some CIPP installations can cause aquatic toxicity. A list of environmental, public health, and infrastructure incidents caused by CIPP installations as of 2013 was published by the Journal of Environmental Engineering. In 2014, university researchers published a more detailed study in Environmental Science & Technology that examined CIPP condensate chemical and aquatic toxicity as well as chemical leaching from stormwater culvert CIPP installations in Alabama. In this new report additional water and air environmental contamination incidents were reported not previously described elsewhere.
In 2017, CALTRANS backed university researchers examined water impacts caused by CIPPs used for stormwater culvert repairs.
In April 2018, a study funded by six state transportation agencies (1) compiled and reviewed CIPP-related surface water contamination incidents from publicly reported data; (2) analyzed CIPP water quality impacts; (3) evaluated current construction practices for CIPP installations as reported by US state transportation agencies; and (4) reviewed current standards, textbooks, and guideline documents. In 2019, another study funded by these agencies identified actions to reduce chemical release from ultraviolet light (UV) CIPP manufacturing sites.
With proper engineering design specifications, contractor installation procedures, and construction oversight many of these problems can likely be prevented.
Worker and public safety concerns
On July 26, 2017, Purdue University researchers published a peer-reviewed study in the American Chemical Society's journal Environmental Science & Technology Letters about material emissions collected and analyzed from steam cured CIPP installations in Indiana and California. To further make the study accessible to the public and CIPP worker community, the study authors established a website and made their publication open-access, freely available for download. Purdue University professors also commented on their study and called for changes to the process to better protect workers, the public, and environment from harm.
On August 25, 2017, the National Association of Sewer Service Companies, Incorporated (NASSCO), which is a 501(c)(6) nonprofit dedicated to "improving the success rate of everyone involved in the pipeline rehabilitation industry through education, technical resources, and industry advocacy", posted a document on its website bringing up several important concerns and unanswered questions regarding the study and its messaging. NASSCO then sent a letter to the researchers, who responded.
On September 22, 2017, NASSCO announced it would fund and coordinate an assessment of previous data and studies, and an additional study and analysis of possible risks related to the CIPP installation and curing process. Later in September, the NASSCO posted a request for proposals to “review of recent publication(s) that propose the presence of organic chemicals and other available literature relating to emissions associated with the CIPP installation process, and a scope of services for additional sampling and analysis of emissions during the field installation of CIPP using the steam cure process.” The request specifically identified the project would review studies conducted by the Virginia Department of Transportation, California Department of Transportation, and Purdue University.
There was also action at the federal and state levels in September 2017. On September 26, the US Centers for Disease Control and Prevention (CDC) National Institute for Occupational Safety and Health (NIOSH) published a Science Blog contribution regarding Inhalation and Dermal Exposure Risks Associated with Sanitary Sewer, Storm Sewer, and Drinking Water Pipe Repairs. That same month, the California Department of Public Health issued a notice to municipalities and health officials about CIPP installations. One of several statements in this document was that "municipalities, engineers, and contractors should not tell residents the exposures are safe."
On October 5, 2017, the National Environmental Health Association sponsored a webinar about the hazards to workers and residents associated with cured-in-place pipe repair. Several questions about the webinar and the study have been raised, and feedback has been noted by industry members.
On October 25, 2017, a 22-year-old CIPP worker died at a sanitary sewer worksite in Streamwood, Illinois. The U.S. Occupational Safety and Health Administration (OSHA) completed its investigation in April 2018 and issued the company a penalty. Chemical exposure was a contributing factor in the worker fatality.
In 2018, NASSCO funded a study on chemical emissions from six CIPP installations. In 2020, the study was completed. A few locations and worker tasks were identified of potential chemical exposure concern and worksite recommendations were provided.
In 2019 and 2021, the U.S. National Institute for Occupational Safety and Health published safety evaluations of UV, steam, and hot-water CIPP worksites. A UV CIPP company was the first to engage NIOSH; study results indicated several worker chemical exposure conditions that exceeded recommended limits, and the agency recommended several actions to reduce worker exposures. Two years later, NIOSH published results of a steam and hot-water CIPP worksite study, which again indicated several worker chemical exposure conditions that exceeded recommended limits and led to further recommended actions to reduce worker exposures.
In 2020, the Florida Department of Health issued their own factsheet about CIPP to municipalities and health departments. The document explained the CIPP process, health concerns, chemicals used and created, how persons living nearby can protect themselves from exposure, and biomonitoring and blood testing considerations after exposure.
In 2022, researchers made several additional discoveries. In the Journal of Hazardous Materials, a study funded by the National Institute of Environmental Health Sciences and National Science Foundation revealed that CIPP pressure makes blowback from sinks and toilets in nearby buildings possible, and provided recommendations for emergency responders and health officials. Later that year, a study in the Journal of Cleaner Production revealed that by modifying the initiator loading, an ingredient in thermally cured CIPP resins, the pollution potential of the process could be reduced by 33–42%. The same study also found that a nominally non-styrene CIPP resin contained styrene, attributed to handling at the resin processing facility. In October, researchers discovered that steam-based CIPP creates and emits nanoplastics into the air during plastic manufacture. Results of these investigations help to better characterize the occupational safety, bystander safety, and environmental pollution risks associated with current practices, and to improve technology and practice to reduce undesirable consequences.
See also
Hobas
References
External links
New research on CIPP published by a scientific journal
Information on U.S. Patent no. 4009063
Related information on CIPP patents
Trenchless technology: Wayback Machine
How CIPP is installed
North American & European Test Methods - Impact on CIPP Flexural Properties
Water quality and aquatic toxicity impacts of CIPP sites from the ASCE Journal of Environmental Engineering: Whelton, A., Salehi, M., Tabor, M., Donaldson, B., and Estaba, J. (2013). ”Impact of Infrastructure Coating Materials on Storm-Water Quality: Review and Experimental Study.” J. Environ. Eng., 139(5), 746–756.
CIPP Worker and Public Safety Study - Chemical Air Emissions: Sendesi S.M.T., Ra K., Conkling E.N., Boor B.E., Nuruddin M., Howarter J.A., Youngblood Y.P., Kobo L.M., Shannahan J.H., Jafvert C.T., Whelton A.J. (2017). "Worksite Chemical Air Emissions and Worker Exposure during Sanitary Sewer and Stormwater Pipe Rehabilitation Using Cured-in-Place-Pipe (CIPP)." Environ. Sci. Technol. Letters, DOI: 10.1021/acs.estlett.7b00237
CIPP Worker and Public Safety Website
Piping
Trenchless technology | Cured-in-place pipe | [
"Chemistry",
"Engineering"
] | 2,873 | [
"Piping",
"Chemical engineering",
"Mechanical engineering",
"Building engineering"
] |
8,942,120 | https://en.wikipedia.org/wiki/Vertical%20resistance | The term vertical resistance, used commonly in the context of plant selection, was first used in 1963 by James Edward Van der Plank to describe single-gene resistance. This contrasted with the term horizontal resistance which was used to describe many-gene resistance.
In 1976, Raoul A. Robinson adapted the original definition of vertical resistance and argued that in vertical resistance there were individual genes for resistance in the host plant and also individual genes for parasitic ability in the parasite. This phenomenon is known as the gene-for-gene relationship, and it was the defining character of vertical resistance.
References
Phytopathology
Molecular biology | Vertical resistance | [
"Chemistry",
"Biology"
] | 123 | [
"Biochemistry",
"Molecular biology"
] |
8,942,272 | https://en.wikipedia.org/wiki/Horizontal%20resistance | In genetics, the term horizontal resistance was first used by J. E. Vanderplank to describe many-gene resistance, which is sometimes also called generalized resistance. This contrasts with the term vertical resistance which was used to describe single-gene resistance. Raoul A. Robinson further refined the definition of horizontal resistance. Unlike vertical resistance and parasitic ability, horizontal resistance and horizontal parasitic ability are entirely independent of each other in genetic terms.
In the first round of breeding for horizontal resistance, plants are exposed to pathogens and selected for partial resistance. Those with no resistance die, and plants unaffected by the pathogen have vertical resistance and are removed. The remaining plants have partial resistance, and their seed is stored and bred back up to sufficient volume for further testing. The hope is that these remaining plants carry multiple types of partial-resistance genes, and that by crossbreeding this pool back on itself, multiple partial-resistance genes will come together and provide resistance to a larger variety of pathogens.
Successive rounds of breeding for horizontal resistance proceed in a more traditional fashion, selecting plants for disease resistance as measured by yield. These plants are exposed to native regional pathogens, and given minimal assistance in fighting them.
References
Phytopathology
Molecular biology | Horizontal resistance | [
"Chemistry",
"Biology"
] | 248 | [
"Biochemistry",
"Molecular biology"
] |
8,943,611 | https://en.wikipedia.org/wiki/Prompt%20gamma%20neutron%20activation%20analysis | Prompt-gamma neutron activation analysis (PGAA) is a very widely applicable technique for determining the presence and amount of many elements simultaneously in samples ranging in size from micrograms to many grams. It is a non-destructive method, and the chemical form and shape of the sample are relatively unimportant. Typical measurements take from a few minutes to several hours per sample.
The technique can be described as follows. The sample is continuously irradiated with a beam of neutrons. The constituent elements of the sample absorb some of these neutrons and emit prompt gamma rays which are measured with a gamma ray spectrometer. The energies of these gamma rays identify the neutron-capturing elements, while the intensities of the peaks at these energies reveal their concentrations. The amount of analyte element is given by the ratio of count rate of the characteristic peak in the sample to the rate in a known mass of the appropriate elemental standard irradiated under the same conditions.
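As a minimal sketch of the comparator relation just described (with invented numbers; the count rates and standard mass below are hypothetical), the analyte mass follows from the ratio of characteristic-peak count rates in the sample and in a standard of known mass measured under the same conditions:

```python
# Illustrative comparator calculation for PGAA.
def analyte_mass_g(rate_sample_cps, rate_standard_cps, mass_standard_g):
    # mass_sample = mass_standard * (count rate in sample / count rate in standard)
    return mass_standard_g * rate_sample_cps / rate_standard_cps

# Hypothetical example: a characteristic prompt-gamma peak gives 12.4 counts/s in the
# sample and 310 counts/s from a 1.00 mg elemental standard -> ~0.040 mg of analyte.
print(analyte_mass_g(12.4, 310.0, 1.00e-3))
```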
Typically, the sample will not acquire significant long-lived radioactivity, and the sample may be removed from the facility and used for other purposes. One of the typical applications of PGAA is an online belt elemental analyzer or bulk material analyzer used in cement, coal and mineral industries.
References
External links
https://www.nist.gov/manuscript-publication-search.cfm?pub_id=903948
Video on the PGAA-NIPS at the Budapest Neutron Centre
Analytical chemistry
Neutron | Prompt gamma neutron activation analysis | [
"Physics",
"Chemistry"
] | 303 | [
" and optical physics stubs",
" molecular",
"nan",
"Analytical chemistry stubs",
"Atomic",
"Physical chemistry stubs",
" and optical physics"
] |
8,946,593 | https://en.wikipedia.org/wiki/CAFASP | CAFASP, or the Critical Assessment of Fully Automated Structure Prediction, is a large-scale blind experiment in protein structure prediction that studies the performance of automated structure prediction webservers in homology modeling, fold recognition, and ab initio prediction of protein tertiary structures based only on amino acid sequence. The experiment runs once every two years in parallel with CASP, which focuses on predictions that incorporate human intervention and expertise. Compared to related benchmarking techniques LiveBench and EVA, which run weekly against newly solved protein structures deposited in the Protein Data Bank, CAFASP generates much less data, but has the advantage of producing predictions that are directly comparable to those produced by human prediction experts. Recently CAFASP has been run essentially integrated into the CASP results rather than as a separate experiment.
References
External links
Protein Structure Prediction Center
CAFASP4 (2004)
CAFASP5 (2006)
Bioinformatics
Protein methods | CAFASP | [
"Chemistry",
"Engineering",
"Biology"
] | 190 | [
"Biochemistry methods",
"Biological engineering",
"Bioinformatics stubs",
"Protein methods",
"Biotechnology stubs",
"Protein biochemistry",
"Biochemistry stubs",
"Bioinformatics"
] |
8,947,095 | https://en.wikipedia.org/wiki/Solar%20energetic%20particles | Solar energetic particles (SEP), formerly known as solar cosmic rays, are high-energy, charged particles originating in the solar atmosphere and solar wind. They consist of protons, electrons and heavy ions with energies ranging from a few tens of keV to many GeV. The exact processes involved in transferring energy to SEPs is a subject of ongoing study.
SEPs are relevant to the field of space weather, as they are responsible for SEP events and ground level enhancements.
History
SEPs were first detected indirectly by Scott Forbush in February and March 1942, in the form of ground level enhancements.
Solar particle events
SEPs are accelerated during solar particle events. These can originate either at a solar flare site or at shock waves associated with coronal mass ejections (CMEs). However, only about 1% of CMEs produce strong SEP events.
Two main mechanisms of acceleration are possible: diffusive shock acceleration (DSA, an example of first-order Fermi acceleration) or the shock-drift mechanism. SEPs can be accelerated to energies of several tens of MeV within 5–10 solar radii (about 5% of the Sun–Earth distance) and can reach Earth in a few minutes in extreme cases. This makes prediction and warning of SEP events quite challenging.
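As a rough, hedged illustration of the travel times mentioned above (not from the source): the straight-line Sun–Earth transit time of a proton of a given kinetic energy follows from special relativity. Real SEPs spiral along the interplanetary magnetic field (path lengths of roughly 1.2 AU or more), so these values are lower bounds.

```python
# Illustrative straight-line travel time for a solar proton of given kinetic energy.
import math

AU_M = 1.496e11            # astronomical unit, m
C = 2.998e8                # speed of light, m/s
PROTON_REST_MEV = 938.272  # proton rest energy, MeV

def travel_minutes(kinetic_energy_mev):
    gamma = 1.0 + kinetic_energy_mev / PROTON_REST_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return AU_M / (beta * C) / 60.0

for e_mev in (10, 100, 1000):
    print(f"{e_mev:5d} MeV proton: ~{travel_minutes(e_mev):.0f} min (light takes ~8.3 min)")
```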
In March 2021, NASA reported that scientists had located the source of several SEP events, potentially leading to improved predictions in the future.
Research
SEPs are of interest to scientists because they provide a good sample of solar material. Despite the nuclear fusion occurring in the core, the majority of solar material is representative of the material that formed the solar system. By studying SEP's isotopic composition, scientists can indirectly measure the material that formed the solar system.
See also
Solar wind
References
Reames D.V., Solar Energetic Particles, Springer, Berlin, (2017a) , doi: 10.1007/978-3-319-50871-9.
External links
Solar Energetic Particles (Rainer Schwenn)
NASA
The Isotopic Composition of Solar Energetic Particles
Solar phenomena | Solar energetic particles | [
"Physics"
] | 422 | [
"Physical phenomena",
"Stellar phenomena",
"Solar phenomena"
] |
8,949,285 | https://en.wikipedia.org/wiki/Ringwoodite | Ringwoodite is a high-pressure phase of Mg2SiO4 (magnesium silicate) formed at high temperatures and pressures of the Earth's mantle between depth. It may also contain iron and hydrogen. It is polymorphous with the olivine phase forsterite (a magnesium iron silicate).
Ringwoodite is notable for being able to contain hydroxide ions (oxygen and hydrogen atoms bound together) within its structure. In this case two hydroxide ions usually take the place of a magnesium ion and two oxide ions.
Combined with evidence of its occurrence deep in the Earth's mantle, this suggests that there is from one to three times the world ocean's equivalent of water in the mantle transition zone from 410 to 660 km deep.
This mineral was first identified in the Tenham meteorite in 1969, and is inferred to be present in large quantities in the Earth's mantle.
Olivine, wadsleyite, and ringwoodite are polymorphs found in the upper mantle of the Earth. At depths greater than about 660 km, other minerals, including some with the perovskite structure, are stable. The properties of these minerals determine many of the properties of the mantle.
Ringwoodite was named after the Australian earth scientist Ted Ringwood (1930–1993), who studied polymorphic phase transitions in the common mantle minerals olivine and pyroxene at pressures equivalent to depths as great as about 600 km.
Characteristics
Ringwoodite is polymorphous with forsterite, Mg2SiO4, and has a spinel structure. Spinel group minerals crystallize in the isometric system with an octahedral habit. Olivine is most abundant in the upper mantle, above about 410 km; the olivine polymorphs wadsleyite and ringwoodite are thought to dominate the transition zone of the mantle, a zone present from about 410 to 660 km depth.
Ringwoodite is thought to be the most abundant mineral phase in the lower part of Earth's transition zone. The physical and chemical property of this mineral partly determine properties of the mantle at those depths. The pressure range for stability of ringwoodite lies in the approximate range from 18 to 23 GPa.
Natural ringwoodite has been found in many shocked chondritic meteorites, in which the ringwoodite occurs as fine-grained polycrystalline aggregates.
Natural ringwoodite generally contains much more magnesium than iron and can form a gapless solid solution series from the pure magnesium endmember to the pure iron endmember. The latter, the iron-rich endmember of the γ-olivine solid solution series, γ-Fe2SiO4, was named ahrensite in honor of US mineral physicist Thomas J. Ahrens (1936–2010).
Geological occurrences
In meteorites, ringwoodite occurs in the veinlets of quenched shock-melt cutting the matrix and replacing olivine probably produced during shock metamorphism.
In Earth's interior, olivine occurs in the upper mantle at depths less than about 410 km, and ringwoodite is inferred to be present within the transition zone from about 520 to 660 km depth. Seismic discontinuities at about 410 km, 520 km, and 660 km depth have been attributed to phase changes involving olivine and its polymorphs.
The 520-km depth discontinuity is generally believed to be caused by the transition of the olivine polymorph wadsleyite (beta-phase) to ringwoodite (gamma-phase), while the 660-km depth discontinuity is attributed to the phase transformation of ringwoodite (gamma-phase) to silicate perovskite plus magnesiowüstite.
Ringwoodite in the lower half of the transition zone is inferred to play a pivotal role in mantle dynamics, and the plastic properties of ringwoodite are thought to be critical in determining flow of material in this part of the mantle. The ability of ringwoodite to incorporate hydroxide is important because of its effect on rheology.
Ringwoodite has been synthesized at conditions appropriate to the transition zone, containing up to 2.6 weight percent water.
Because the transition zone between the Earth's upper and lower mantle helps govern the scale of mass and heat transport throughout the Earth, the presence of water within this region, whether global or localized, may have a significant effect on mantle rheology and therefore mantle circulation. In subduction zones, the ringwoodite stability field hosts high levels of seismicity.
An "ultradeep" diamond (one that has risen from a great depth) found in Juína in western Brazil contained an inclusion of ringwoodite — at the time the only known sample of natural terrestrial origin — thus providing evidence of significant amounts of water as hydroxide in the Earth's mantle. The gemstone, about 5mm long, was brought up by a diatreme eruption. The ringwoodite inclusion is too small to see with the naked eye. A second such diamond was later found.
The mantle reservoir could contain about three times more water, in the form of hydroxide contained within the wadsleyite and ringwoodite crystal structure, than the Earth's oceans combined.
Synthetic
For experiments, hydrous ringwoodite has been synthesized by mixing powders of forsterite (Mg2SiO4), brucite (Mg(OH)2), and silica (SiO2) so as to give the desired final elemental composition. Holding this mixture at a pressure of 20 gigapascals and high temperature for three or four hours turns it into ringwoodite, which can then be cooled and depressurized.
Crystal structure
Ringwoodite has the spinel structure, in the isometric crystal system with space group Fd3m (or F43m). On an atomic scale, magnesium and silicon are in octahedral and tetrahedral coordination with oxygen, respectively. The Si-O and Mg-O bonds have mixed ionic and covalent character. The cubic unit cell parameter is 8.063 Å for pure Mg2SiO4 and 8.234 Å for pure Fe2SiO4.
Chemical composition
Ringwoodite compositions range from pure Mg2SiO4 to Fe2SiO4 in synthesis experiments. Ringwoodite can incorporate up to 2.6 percent by weight H2O.
Physical properties
The physical properties of ringwoodite are affected by pressure and temperature. At the pressure and temperature condition of the Mantle Transition Zone, the calculated density value of ringwoodite is 3.90 g/cm3 for pure Mg2SiO4; 4.13 g/cm3 for (Mg0.91,Fe0.09)2SiO4 of pyrolitic mantle; and 4.85 g/cm3 for Fe2SiO4. It is an isotropic mineral with an index of refraction n = 1.768.
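As an illustrative cross-check (an assumption-laden sketch, not a statement from the article): the ambient-pressure X-ray density implied by the cubic cell parameter quoted in the Crystal structure section (8.063 Å, with Z = 8 formula units per spinel-type cell) comes out near 3.56 g/cm3 for the pure-Mg endmember, lower than the 3.90 g/cm3 value above because that value is calculated at mantle transition-zone pressure and temperature.

```python
# Illustrative X-ray density of Mg2SiO4 ringwoodite from the cubic cell parameter.
N_A = 6.02214e23                                  # Avogadro constant, 1/mol
M_MG2SIO4 = 2 * 24.305 + 28.086 + 4 * 15.999      # molar mass, g/mol (~140.7)
Z = 8                                             # formula units per spinel unit cell

def xray_density_g_cm3(a_angstrom, molar_mass=M_MG2SIO4, z=Z):
    a_cm = a_angstrom * 1e-8
    return z * molar_mass / (N_A * a_cm ** 3)

print(xray_density_g_cm3(8.063))   # ~3.56 g/cm3 at ambient conditions
```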
The colour of ringwoodite varies between the meteorites, between different ringwoodite bearing aggregates, and even in one single aggregate. The ringwoodite aggregates can show every shade of blue, purple, grey and green, or have no colour at all.
A closer look at coloured aggregates shows that the colour is not homogeneous, but seems to originate from something with a size similar to the ringwoodite crystallites. In synthetic samples, pure Mg ringwoodite is colourless, whereas samples containing more than one mole percent Fe2SiO4 are deep blue in colour. The colour is thought to be due to Fe2+–Fe3+ charge transfer.
References
Magnesium minerals
Iron minerals
Nesosilicates
Polymorphism (materials science)
Spinel group
Cubic minerals
Minerals in space group 227
High pressure science | Ringwoodite | [
"Physics",
"Materials_science",
"Engineering"
] | 1,591 | [
"High pressure science",
"Applied and interdisciplinary physics",
"Materials science",
"Polymorphism (materials science)"
] |
8,950,098 | https://en.wikipedia.org/wiki/Soluforce | SoluForce is a type of Reinforced Thermoplastic Pipe (RTP, also known as flexible composite pipe or FCP).
Introduction
SoluForce is a brand name of Pipelife Nederland B.V. (part of Wienerberger AG), with its main offices and production facilities located in Enkhuizen, The Netherlands. It develops, manufactures and markets RTP, which is a flexible high-pressure pipe. It is supplied in long coils of up to 400 m and has design pressure ratings from 36 to 450 bar.
This pipe has a faster installation time than conventional steel pipe: speeds of up to 2,000 m per day have been reached installing RTP at the surface (average speeds are approximately 1,000 m per day for normal RTP installations). The pipe mainly benefits applications where steel fails due to corrosion and installation time is an issue.
History
RTP was developed in the early 1990s by Wavin Repox, Akzo Nobel and by Tubes d'Aquitaine from France. They developed the first pipes reinforced with synthetic fibre to replace medium pressure steel pipes in response to growing demand for non-corrosive conduits for application in the onshore oil and gas industry, particularly from Shell in the Middle East. Because of its expertise in producing pipes, Pipelife Netherlands was involved in the project to develop long length RTP in 1998. The resulting system is marketed today under the name SoluForce.
SoluForce was the first ever RTP to be installed and used in the year 2000.
Properties
The Soluforce RTP has a three layer pipe construction:
A HDPE liner pipe (different composition of material for low or high operating temperatures)
A reinforcement layer, typically Aramid (Twaron or Kevlar) or high strength steel wire
A white HDPE protective outer layer for UV, damage and abrasion protection
In some SoluForce pipe versions, an extra bonded aluminium layer is added to prevent light components and gases from permeating.
SoluForce pipes are available in 4- and 6-inch versions. Depending on the reinforcement layer, SoluForce pipes have design pressures of up to 450 bar (6,527 psi).
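For a sense of scale only, the sketch below applies the textbook thin-wall (Barlow) hoop-stress relation to a pipe of roughly this size; it is not the manufacturer's design or qualification method, and the 100 mm bore and 6 mm effective reinforcement wall are assumed numbers.

```python
# Illustrative thin-wall hoop-stress estimate for a reinforced thermoplastic pipe.
def hoop_stress_mpa(pressure_bar, diameter_mm, wall_mm):
    p_mpa = pressure_bar * 0.1          # 1 bar = 0.1 MPa
    return p_mpa * diameter_mm / (2.0 * wall_mm)

# Assumed 100 mm bore at 450 bar with a 6 mm effective reinforcement wall:
print(hoop_stress_mpa(450, 100, 6))     # ~375 MPa, carried largely by the reinforcement
```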
Typical applications
Soluforce is used for the following applications:
Oil and/or gas flowlines
Oil field waste water disposal lines
Oil field injection lines
Offshore water injection risers
Offshore oil flowlines
High pressure Water injection lines
High pressure gas transport lines
Relining existing pipes
Although these kinds of pipes were developed for the oil and gas industry, they are also used for domestic gas, mining, and hydrogen applications.
Testing and qualification
Soluforce RTP is tested and acknowledged by the following organisations:
DNV Certification D-2615 - Soluforce System 4" and 5" with in-line couplings and end fittings
ASTM - WK11803
API - RP 15S (oil field service)
ISO/TS 18226:2006 (gas service)
DVGW VP 642 (German gas service)
NYSEARCH project by the Northeast Gas Association (USA)
See also
Plastic Pressure Pipe Systems
Pipeline transport
Reinforced Thermoplastic Pipes
Water injection (oil production)
Wienerberger
References
External links
Official website
JIP proposal 1999 from Newcastle University
Conference paper 23rd World Gas Conference
Bibliography
Piping
Pipeline transport
Petroleum production
Composite materials
Brand name materials | Soluforce | [
"Physics",
"Chemistry",
"Engineering"
] | 741 | [
"Building engineering",
"Chemical engineering",
"Composite materials",
"Materials",
"Mechanical engineering",
"Piping",
"Matter"
] |
8,950,551 | https://en.wikipedia.org/wiki/Operational%20calculus | Operational calculus, also known as operational analysis, is a technique by which problems in analysis, in particular differential equations, are transformed into algebraic problems, usually the problem of solving a polynomial equation.
History
The idea of representing the processes of calculus, differentiation and integration, as operators
has a long history that goes back to Gottfried Wilhelm Leibniz. The mathematician Louis François Antoine Arbogast was one of the first to manipulate these symbols independently of the function to which they were applied.
This approach was further developed by François-Joseph Servois, who developed convenient notations. Servois was followed by a school of British and Irish mathematicians including Charles James Hargreave, George Boole, Bronwin, Carmichael, Donkin, Graves, Murphy, William Spottiswoode and Sylvester.
Treatises describing the application of operator methods to ordinary and partial differential equations were written by Robert Bell Carmichael in 1855 and by Boole in 1859.
This technique was fully developed by the physicist Oliver Heaviside in 1893, in connection with his work in telegraphy.
Guided greatly by intuition and his wealth of knowledge on the physics behind his circuit studies, [Heaviside] developed the operational calculus now ascribed to his name.
At the time, Heaviside's methods were not rigorous, and his work was not further developed by mathematicians.
Operational calculus first found applications in electrical engineering problems, for
the calculation of transients in linear circuits after 1910, driven by the work of Ernst Julius Berg, John Renshaw Carson and Vannevar Bush.
A rigorous mathematical justification of Heaviside's operational methods came only
after the work of Bromwich that related operational calculus with
Laplace transformation methods (see the books by Jeffreys, by Carslaw or by MacLachlan for a detailed exposition).
Other ways of justifying the operational methods of Heaviside were introduced in the mid-1920s using
integral equation techniques (as done by Carson) or Fourier transformation (as done by Norbert Wiener).
A different approach to operational calculus was developed in the 1930s by Polish mathematician
Jan Mikusiński, using algebraic reasoning.
Norbert Wiener laid the foundations for operator theory in his review of the existential status of the operational calculus in 1926:
The brilliant work of Heaviside is purely heuristic, devoid of even the pretense to mathematical rigor. Its operators apply to electric voltages and currents, which may be discontinuous and certainly need not be analytic. For example, the favorite corpus vile on which he tries out his operators is a function which vanishes to the left of the origin and is 1 to the right. This excludes any direct application of the methods of Pincherle…
Although Heaviside’s developments have not been justified by the present state of the purely mathematical theory of operators, there is a great deal of what we may call experimental evidence of their validity, and they are very valuable to the electrical engineers. There are cases, however, where they lead to ambiguous or contradictory results.
Principle
The key element of the operational calculus is to consider differentiation as an operator p = d/dt acting on functions. Linear differential equations can then be recast in the form of "functions" F(p) of the operator p acting on the unknown function equaling the known function. Here, F is defining something that takes in an operator p and returns another operator F(p).
Solutions are then obtained by making the inverse operator of F act on the known function. The operational calculus generally is typified by two symbols: the operator p, and the unit (step) function H(t). The operator in its use probably is more mathematical than physical, the unit function more physical than mathematical. The operator p in the Heaviside calculus initially is to represent the time differentiator d/dt. Further, it is desired for this operator to bear the reciprocal relation such that 1/p denotes the operation of integration.
In electrical circuit theory, one is trying to determine the response of an electrical circuit to an impulse. Due to linearity, it is enough to consider a unit step function H(t), such that H(t) = 0 for t < 0 and H(t) = 1 for t > 0.
The simplest example of application of the operational calculus is to solve p y = H(t), which gives
y = (1/p) H(t) = ∫_0^t H(u) du = t H(t).
From this example, one sees that 1/p represents integration. Furthermore, n iterated integrations are represented by 1/p^n, so that
(1/p^n) H(t) = (t^n / n!) H(t).
Continuing to treat p as if it were a variable,
(p / (p - a)) H(t) = (1 / (1 - a/p)) H(t),
which can be rewritten by using a geometric series expansion:
(p / (p - a)) H(t) = Σ_{n≥0} (a^n / p^n) H(t) = Σ_{n≥0} (a^n t^n / n!) H(t) = e^(a t) H(t).
Using partial fraction decomposition, one can define any fraction in the operator p and compute its action on H(t).
Moreover, if the function 1/F(p) has a series expansion of the form
1/F(p) = Σ_{n≥0} a_n p^(-n),
it is straightforward to find
(1/F(p)) H(t) = Σ_{n≥0} a_n (t^n / n!) H(t).
Applying this rule, solving any linear differential equation is reduced to a purely algebraic problem.
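A small, hedged cross-check of this reduction (illustrative only; the equation is chosen for the example) can be run with SymPy, solving a constant-coefficient equation with unit-step forcing both by the operational/Laplace route and by direct integration:

```python
# Illustrative check: operational (Laplace) solution vs direct solution of
# y'' - 3 y' + 2 y = 1 with y(0) = y'(0) = 0.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
y = sp.Function('y')

# Operational route: (s**2 - 3*s + 2) Y(s) = 1/s, then partial fractions and inversion.
Y = 1 / (s * (s**2 - 3*s + 2))
y_operational = sp.inverse_laplace_transform(sp.apart(Y, s), s, t).subs(sp.Heaviside(t), 1)

# Direct route for comparison.
ode = sp.Eq(y(t).diff(t, 2) - 3*y(t).diff(t) + 2*y(t), 1)
y_direct = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 0}).rhs

print(sp.simplify(y_operational - y_direct))   # expected: 0
```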
Heaviside went further and defined fractional powers of p, thus establishing a connection between operational calculus and fractional calculus.
Using the Taylor expansion, one can also verify the Lagrange–Boole translation formula, e^(a p) f(t) = f(t + a), so the operational
calculus is also applicable to finite-difference equations and to electrical engineering problems with delayed signals.
See also
Calculus of finite differences
Umbral calculus
References
Further sources
During Heaviside's lifetime
— Some historical references on the precursor work up to Carmichael].
After Heaviside's death
External links
IV Lindell HEAVISIDE OPERATIONAL RULES APPLICABLE TO ELECTROMAGNETIC PROBLEMS
Ron Doerfler Heaviside's Calculus
Jack Crenshaw essay showing use of operators More On the Rosetta Stone
Linear operators
Electrical engineering
Differential equations | Operational calculus | [
"Mathematics",
"Engineering"
] | 1,071 | [
"Functions and mappings",
"Mathematical objects",
"Linear operators",
"Equations",
"Differential equations",
"Mathematical relations",
"Electrical engineering"
] |
7,432,107 | https://en.wikipedia.org/wiki/Subgrain%20rotation%20recrystallization | In metallurgy, materials science and structural geology, subgrain rotation recrystallization is recognized as an important mechanism for dynamic recrystallisation. It involves the rotation of initially low-angle sub-grain boundaries until the mismatch between the crystal lattices across the boundary is sufficient for them to be regarded as grain boundaries. This mechanism has been recognized in many minerals (including quartz, calcite, olivine, pyroxenes, micas, feldspars, halite, garnets and zircons) and in metals (various magnesium, aluminium and nickel alloys).
Structure
In metals and minerals, grains are ordered regions of the crystal structure with different crystallographic orientations. Subgrains are defined as grains that are misoriented by less than about 10–15 degrees relative to their neighbours, so that the boundary between them is a low-angle grain boundary (LAGB). Because the boundary energy per dislocation decreases as the misorientation (and thus the dislocation content) of a boundary increases, there is a driving force for a few high-angle grain boundaries (HAGB) to form and grow in place of a larger number of LAGBs. The energetics of the transformation depend on the interfacial energy at the boundaries, the lattice geometry (atomic and planar spacing, structure [i.e. FCC/BCC/HCP]) of the material, and the degrees of freedom of the grains involved (misorientation, inclination). The recrystallized material has less total grain boundary area, which means that failure via brittle fracture along the grain boundary is less probable.
Mechanism
Subgrain rotation recrystallization is a type of continuous dynamic recrystallization. Continuous dynamic recrystallization involves the evolution of low-angle grains into high-angle grains, increasing their degree of misorientation. One mechanism could be the migration and agglomeration of like-sign dislocations in the LAGB, followed by grain boundary shearing. The transformation occurs when the subgrain boundaries contain small precipitates, which pin them in place. As the subgrain boundaries absorb dislocations, the subgrains transform into grains by rotation, instead of growth. This process generally occurs at elevated temperatures, which allows dislocations to both glide and climb; at low temperatures, dislocation movement is more difficult and the grains are less mobile.
By contrast, discontinuous dynamic recrystallization involves nucleation and growth of new grains, where due to increased temperature and/or pressure, new grains grow at high angles compared to the surrounding grains.
Mechanical properties
Strengthening by grain size generally follows the Hall–Petch relation, which states that yield strength increases in proportion to the inverse square root of the grain size (sigma_y = sigma_0 + k d^(-1/2)). A higher number of smaller subgrains therefore leads to a higher yield stress, so some materials may be purposefully manufactured to have many subgrains; in this case subgrain rotation recrystallization should be avoided.
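The toy calculation below (invented friction stress and Hall–Petch coefficient, purely illustrative) shows how the relation predicts a higher yield stress for finer grains or subgrains:

```python
# Illustrative Hall-Petch estimate: sigma_y = sigma_0 + k / sqrt(d).
import math

def hall_petch_mpa(sigma0_mpa, k_mpa_sqrt_m, grain_size_m):
    return sigma0_mpa + k_mpa_sqrt_m / math.sqrt(grain_size_m)

SIGMA0, K = 50.0, 0.15   # assumed friction stress (MPa) and coefficient (MPa*m^0.5)
for d in (100e-6, 10e-6, 1e-6):   # 100 um, 10 um, 1 um grains
    print(f"d = {d*1e6:5.1f} um -> sigma_y ~ {hall_petch_mpa(SIGMA0, K, d):5.0f} MPa")
```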
Precipitates may also form in grain boundaries. It has been observed that precipitates in subgrain boundaries grow in a more elongated shape parallel to the adjacent grains, whereas precipitates in HAGB are blockier. This difference in aspect ratio may provide different strengthening effects to the material; long plate-like precipitates in the LAGB may delaminate and cause brittle failure under stress. Subgrain rotation recrystallization reduces the number of LAGB, thus reducing the number of flat, long precipitates, and also reducing the number of available pathways for this brittle failure.
Experimental techniques
Different grains and their orientations can be observed using scanning electron microscope (SEM) techniques such as electron backscatter diffraction (EBSD) or polarized optical microscopy (POM). Samples are initially cold- or hot-rolled to introduce a high dislocation density, and then deformed at different strain rates so that dynamic recrystallization occurs. The deformation may be in the form of compression, tension, or torsion. The grains elongate in the direction of applied stress and the misorientation angle of subgrain boundaries increases.
References
Metallurgy
Structural geology | Subgrain rotation recrystallization | [
"Chemistry",
"Materials_science",
"Engineering"
] | 860 | [
"Metallurgy",
"Materials science",
"nan"
] |
7,436,045 | https://en.wikipedia.org/wiki/6061%20aluminium%20alloy | 6061 aluminium alloy (Unified Numbering System (UNS) designation A96061) is a precipitation-hardened aluminium alloy, containing magnesium and silicon as its major alloying elements. Originally called "Alloy 61S", it was developed in 1935. It has good mechanical properties, exhibits good weldability, and is very commonly extruded (second in popularity only to 6063). It is one of the most common alloys of aluminium for general-purpose use.
It is commonly available in pre-tempered grades such as 6061-O (annealed), tempered grades such as 6061-T6 (solutionized and artificially aged) and 6061-T651 (solutionized, stress-relieved stretched and artificially aged).
Chemical composition
6061 Aluminium alloy composition by mass:
Properties
The mechanical properties of 6061 greatly depend on the temper, or heat treatment, of the material. Young's modulus is 68.9 GPa (10,000 ksi) regardless of temper.
6061-O
Annealed 6061 (6061-O temper) has an ultimate tensile strength of no more than 150 MPa (22 ksi), and a maximum yield strength of no more than 83 MPa (12 ksi) or 110 MPa (16 ksi). The material has elongation (stretch before ultimate failure) of 10–18%. To obtain the annealed condition, the alloy is typically heat soaked at 415 °C for 2-3 hours.
6061-T4
T4 temper 6061 has an ultimate tensile strength of at least 180 MPa (26 ksi) or 210 MPa (30 ksi) and yield strength of at least 110 MPa (16 ksi). It has elongation of 10-16%.
6061-T6
T6 temper 6061 has been treated to provide the maximum precipitation hardening (and therefore maximum yield strength) for a 6061 aluminium alloy. It has an ultimate tensile strength of at least 290 MPa (42 ksi) and yield strength of at least 240 MPa (35 ksi). More typical values are 310 MPa (45 ksi) and 270 MPa (39 ksi), respectively. This can exceed the yield strength of certain types of stainless steel. In thicknesses of 6.35 mm (0.250 in) or less, it has elongation of 8% or more; in thicker sections, it has elongation of 10%. T651 temper has similar mechanical properties.
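To put the stainless-steel comparison in context, the sketch below contrasts yield strength and strength-to-weight for 6061-T6 and an annealed austenitic stainless steel; the stainless figures (about 205 MPa yield, 8.0 g/cm3) are commonly quoted handbook values assumed for illustration, not taken from this article.

```python
# Illustrative comparison of yield strength and specific strength (breaking length).
G = 9.81  # m/s^2
alloys = {
    "6061-T6 aluminium":        {"yield_mpa": 240, "density_kg_m3": 2700},
    "304 stainless (annealed)": {"yield_mpa": 205, "density_kg_m3": 8000},
}
for name, p in alloys.items():
    breaking_length_km = p["yield_mpa"] * 1e6 / (p["density_kg_m3"] * G) / 1000.0
    print(f"{name:26s} yield {p['yield_mpa']:3d} MPa, strength-to-weight ~{breaking_length_km:4.1f} km")
```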
The typical value for the thermal conductivity of 6061-T6 is around 152 W/(m·K).
The fatigue limit under cyclic load is about 97 MPa (14 ksi) for 500,000,000 completely reversed cycles using a standard RR Moore test machine and specimen. Note that aluminium does not exhibit a well-defined "knee" on its S-N curve, so there is some debate as to how many cycles equates to "infinite life". Also note that the actual value of the fatigue limit for an application can be dramatically affected by the conventional de-rating factors of loading, gradient, and surface finish.
Microstructure
Different aluminium heat treatments control the size and dispersion of precipitates in the material. Grain boundary sizes also change, but do not have as important an impact on strength as the precipitates. Grain sizes can change by orders of magnitude depending on processing and stress; grains can be as small as a few hundred nanometres, but are typically a few micrometres to hundreds of micrometres in diameter. Iron-, manganese-, and chromium-bearing secondary phases often form as inclusions in the material.
Grain sizes in aluminium alloys are heavily dependent upon the processing techniques and heat treatment. Different cross-sections of material which has been stressed can cause order of magnitude differences in grain size. Some specially processed aluminium alloys have grain diameters which are hundreds of nanometres, but most range from a few micrometres to hundreds of micrometres.
Uses
6061 is commonly used for the following:
construction of aircraft structures, such as wings and fuselages, more commonly in homebuilt aircraft than commercial or military aircraft. 2024 alloy is somewhat stronger, but 6061 is more easily worked and remains resistant to corrosion even when the surface is abraded. This is not the case for 2024, which is usually used with a thin Alclad coating for corrosion resistance.
yacht construction, including small utility boats.
automotive parts, such as the chassis of the Audi A8 and the Plymouth Prowler.
flashlights
Scuba tanks and other high pressure gas storage cylinders (post 1995)
6061-T6 is used for:
bicycle frames and components
middle to high-end recurve risers
many fly fishing reels.
the Pioneer plaque
the secondary chambers and baffle systems in firearm sound suppressors (primarily pistol suppressors for reduced weight and improved mechanical functionality), while the primary expansion chambers usually require 17-4PH or 303 stainless steel or titanium.
the upper and lower receivers of many non mil-spec AR-15 rifle variants.
many aluminium docks and gangways, welded into place.
material used in some ultra-high vacuum (UHV) chambers
many parts for remote controlled model aircraft, notably helicopter rotor components.
large amateur radio antennas.
fire department rescue ladders
Welding
6061 is highly weldable, for example using tungsten inert gas welding (TIG) or metal inert gas welding (MIG). Typically, after welding, the properties near the weld are those of 6061-T4, a loss of strength of around 40%. The material can be re-heat-treated to restore near -T6 temper for the whole piece. After welding, the material can naturally age and restore some of its strength as well. Most strength is recovered in the first few days to a few weeks. Nevertheless, the Aluminum Design Manual (Aluminum Association) recommends the design strength of the material adjacent to the weld to be taken as 165 MPa/24000 PSI without proper heat treatment after the welding. Typical filler material is 4043 or 5356.
Extrusions
6061 is an alloy used in the production of extrusions—long constant–cross-section structural shapes produced by pushing metal through a shaped die.
Cold and Hot Stamping
6061 sheet in the T4 condition can be formed with limited ductility in the cold state. For deep draw and complex shapes, and for the avoidance of spring-back, an aluminium hot stamping process (Hot Form Quench) can be used, which forms a blank at an elevated temperature (~550 °C) in a cooled die, leaving a part in the W-temper condition before artificial aging to the T6 full-strength state.
Forgings
6061 is an alloy that is suitable for hot forging. The billet is heated through an induction furnace and forged using a closed die process. This particular alloy is suitable for open die forgings. Automotive parts, ATV parts, and industrial parts are just some of the uses as a forging. Aluminium 6061 can be forged into flat or round bars, rings, blocks, discs and blanks, hollows, and spindles. 6061 can be forged into special and custom shapes.
Castings
6061 is not an alloy that is traditionally cast due to its low silicon content affecting the fluidity in casting. It can be suitably cast using a specialized centrifugal casting method. Centrifugally cast 6061 is ideal for larger rings and sleeve applications that exceed the limitations of most wrought offerings.
Equivalent materials
6061 Aluminium Equivalent Table
Standards
Different forms and tempers of 6061 aluminium alloy are discussed in the following standards:
ASTM B209: Standard Specification for Aluminum and Aluminum-Alloy Sheet and Plate
ASTM B210: Standard Specification for Aluminum and Aluminum-Alloy Drawn Seamless Tubes
ASTM B211: Standard Specification for Aluminum and Aluminum-Alloy Bar, Rod, and Wire
ASTM B221: Standard Specification for Aluminum and Aluminum-Alloy Extruded Bars, Rods, Wire, Profiles, and Tubes
ASTM B308/308M: Standard Specification for Aluminum-Alloy 6061-T6 Standard Structural Profiles
ASTM B483: Standard Specification for Aluminum and Aluminum-Alloy Drawn Tube and Pipe for General Purpose Applications
ASTM B547: Standard Specification for Aluminum and Aluminum-Alloy Formed and Arc-Welded Round Tube
ISO 6361: Wrought Aluminium and Aluminium Alloy Sheets, Strips and Plates
References
Further reading
"Properties of Wrought Aluminum and Aluminum Alloys: 6061 Alclad 6061", Properties and Selection: Nonferrous Alloys and Special-Purpose Materials, Vol 2, ASM Handbook, ASM International, 1990, p. 102-103.
External links
Aluminum 6061 Properties
6061 Aluminum vs 5052 Aluminum
Aluminium alloy table
6061
Aerospace materials
Silicon alloys
Aluminium–silicon alloys
Aluminium–magnesium–silicon alloys | 6061 aluminium alloy | [
"Chemistry",
"Engineering"
] | 1,766 | [
"Aerospace materials",
"Aluminium alloys",
"Silicon alloys",
"Alloys",
"Aerospace engineering"
] |
14,769,219 | https://en.wikipedia.org/wiki/C.%20N.%20Yang%20Institute%20for%20Theoretical%20Physics |
The C. N. Yang Institute for Theoretical Physics (YITP) is a research center at Stony Brook University. In 1965, it was the vision of then University President J.S. Toll and Physics Department chair T.A. Pond to create an institute for theoretical physics and invite the famous physicist Chen Ning Yang from the Institute for Advanced Study to serve as its director with the Albert Einstein Professorship of Physics. While the center is often referred to as "YITP", this can be confusing as YITP also stands for the Yukawa Institute for Theoretical Physics in Japan.
The active research areas of the institute include: quantum field theory, string theory, conformal field theory, mathematical physics and statistical mechanics. The YITP is situated on top of the Math Tower, home to the Department of Mathematics which is connected to the Department of Physics and the Simons Center for Geometry and Physics—therefore the physicists enjoy intimate interactions with the mathematicians. This close relationship dates back to the friendship of C.N. Yang and the mathematician James Harris Simons.
Founded in 1967, YITP celebrated its 50th anniversary in 2017. During the time span, the YITP has produced significant results in different areas, most notably was the discovery of supergravity in 1976 by Peter van Nieuwenhuizen, Daniel Z. Freedman, and Sergio Ferrara, who were all working there at the time.
It houses two Breakthrough Prize in Fundamental Physics laureates: Peter van Nieuwenhuizen (2019) and Alexander Zamolodchikov (2024). Former director Chen Ning Yang is a Nobel Prize in Physics laureate (1957).
Directors
Chen Ning Yang - First director (1967-1999) and 1957 Nobel Laureate.
Peter van Nieuwenhuizen - Second director (1999-2002) and co-discoverer of supergravity.
George Sterman - Third director (2002-) and noted field theorist
Notable tenants
Luis Álvarez-Gaumé - String theory
Gerald E. Brown - Nuclear physics, theoretical astrophysics
Michael Creutz - Lattice gauge theory, computational physics
Michael Douglas - String theory
Ephraim Fischbach - Nuclear physics
Zohar Komargodski - Conformal field theory
Vladimir Korepin - Mathematical physics, quantum information
Barry M. McCoy - Statistical mechanics, conformal field theory
Nikita Nekrasov - Mathematical physics
Peter van Nieuwenhuizen - Field theory, string theory, co-discoverer of supergravity
Martin Roček - Mathematical physics, string theory
Warren Siegel - Field theory, string theory
George Sterman - Field theory, quantum chromodynamics
Alexander Zamolodchikov - Quantum field theory, statistical mechanics, conformal field theory
See also
Institute for Theoretical Physics (disambiguation)
Center for Theoretical Physics (disambiguation)
References
External links
YITP website
8th Simons Workshop in Mathematics and Physics
Yang Chen-Ning
Physics research institutes
Stony Brook University
Brookhaven, New York
Research institutes in New York (state)
1967 establishments in New York (state)
Theoretical physics institutes | C. N. Yang Institute for Theoretical Physics | [
"Physics"
] | 621 | [
"Theoretical physics",
"Theoretical physics institutes"
] |
14,771,723 | https://en.wikipedia.org/wiki/PKNOX1 | PBX/Knotted 1 Homeobox 1 (PKNOX1) is a protein that in humans is encoded by the PKNOX1 gene.
An important paralog of this gene is PKNOX2.
Function
PKNOX1 belongs to the three amino acid loop extension (TALE) class of homeodomain transcription factors that form transcriptionally active complexes involved in development and organogenesis. PKNOX1 is essential for embryogenesis, but it can also act as a tumor suppressor in adulthood.
References
Further reading
External links
Transcription factors | PKNOX1 | [
"Chemistry",
"Biology"
] | 120 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,771,908 | https://en.wikipedia.org/wiki/PROX1 | Prospero homeobox protein 1 is a protein that in humans is encoded by the PROX1 gene. The Prox1 gene is critical for the development of multiple
tissues. Prox1 activity is necessary and sufficient to specify a lymphatic endothelial cell fate in endothelial progenitors located in the embryonic veins.
Interactions
PROX1 has been shown to interact with EP300.
Production
PROX1 is produced primarily in the dentate gyrus in the mouse, and in the dentate gyrus and white matter in humans. Gene expression data for mouse, human and macaque from the Allen Brain Atlases can be found here.
Clinical significance
PROX1 is used as a marker for lymphatic endothelium in biopsy samples.
Homologous gene
PROX2
References
Further reading
External links
Transcription factors | PROX1 | [
"Chemistry",
"Biology"
] | 179 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,771,927 | https://en.wikipedia.org/wiki/GeneMark | GeneMark is a generic name for a family of ab initio gene prediction algorithms and software programs developed at the Georgia Institute of Technology in Atlanta. Developed in 1993, original GeneMark was used in 1995 as a primary gene prediction tool for annotation of the first completely sequenced bacterial genome of Haemophilus influenzae, and in 1996 for the first archaeal genome of Methanococcus jannaschii. The algorithm introduced inhomogeneous three-periodic Markov chain models of protein-coding DNA sequence that became standard in gene prediction as well as Bayesian approach to gene prediction in two DNA strands simultaneously. Species specific parameters of the models were estimated from training sets of sequences of known type (protein-coding and non-coding). The major step of the algorithm computes for a given DNA fragment posterior probabilities of either being "protein-coding" (carrying genetic code) in each of six possible reading frames (including three frames in the complementary DNA strand) or being "non-coding". The original GeneMark (developed before the advent of the HMM applications in Bioinformatics) was an HMM-like algorithm; it could be viewed as approximation to known in the HMM theory posterior decoding algorithm for appropriately defined HMM model of DNA sequence.
Further improvements in the algorithms for gene prediction in prokaryotic genomes
The GeneMark.hmm algorithm (1998) was designed to improve the accuracy of prediction of short genes and gene starts. The idea was to use the inhomogeneous Markov chain models introduced in GeneMark for computing likelihoods of the sequences emitted by the states of a hidden Markov model (or rather a semi-Markov HMM, or generalized HMM) describing the genomic sequence. The borders between coding and non-coding regions were formally interpreted as transitions between hidden states. Additionally, a ribosome binding site model was added to the GHMM to improve the accuracy of gene start prediction. The next important step in the algorithm's development was the introduction of self-training, or unsupervised training, of the model parameters in the new gene prediction tool GeneMarkS (2001). Rapid accumulation of prokaryotic genomes in the following years showed that the structure of sequence patterns related to gene expression regulation signals near gene starts may vary. It was also observed that a prokaryotic genome may exhibit GC content variability due to lateral gene transfer. The new algorithm, GeneMarkS-2, was designed to make automatic adjustments to the types of gene expression patterns and to GC content changes along the genomic sequence. GeneMarkS and, later, GeneMarkS-2 have been used in the NCBI pipeline for prokaryotic genome annotation (PGAP).
Heuristic Models and Gene Prediction in Metagenomes and Metatranscriptomes
Accurate identification of species-specific parameters of a gene finding algorithm is a necessary condition for making accurate gene predictions. However, in studies of viral genomes one needs to estimate parameters from a rather short sequence that has no large genomic context. Importantly, starting in 2004, the same question had to be addressed for gene prediction in short metagenomic sequences. A surprisingly accurate answer was found by introducing parameter-generating functions depending on a single variable, the sequence G+C content (the "heuristic method", 1999). Subsequently, analysis of several hundred prokaryotic genomes led to the development of a more advanced heuristic method in 2010 (implemented in MetaGeneMark). Further on, the need to predict genes in RNA transcripts led to the development of GeneMarkS-T (2015), a tool that identifies intronless genes in long transcript sequences assembled from RNA-Seq reads.
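The idea of deriving all model parameters from a single G+C value can be sketched as follows. The linear fits below are hypothetical placeholders, not the published MetaGeneMark dependencies; the sketch only illustrates how a parameter-generating function maps the observed GC content of a short fragment to nucleotide frequencies at one codon position.

```python
# Sketch of a "heuristic model": derive position-specific nucleotide frequencies
# from the G+C content of a short fragment via precomputed linear dependencies.
# Slopes and intercepts are placeholders, not the published MetaGeneMark values.

def gc_content(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Hypothetical fits: frequency of each nucleotide at codon position 3 versus genome GC.
POS3_FIT = {"A": (-0.45, 0.47), "C": (0.45, 0.03), "G": (0.45, 0.03), "T": (-0.45, 0.47)}

def derive_pos3_frequencies(gc):
    raw = {nt: max(slope * gc + intercept, 1e-4) for nt, (slope, intercept) in POS3_FIT.items()}
    total = sum(raw.values())
    return {nt: round(v / total, 3) for nt, v in raw.items()}

fragment = "ATGGCGCGCATTGGCGGCAGCAAATAA"
print(derive_pos3_frequencies(gc_content(fragment)))
```

In the real method, analogous dependencies estimated from many sequenced genomes generate the full set of Markov model parameters used for gene prediction in the fragment.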
Eukaryotic gene prediction
In eukaryotic genomes, modeling of exon borders with introns and intergenic regions presents a major challenge. The GHMM architecture of the eukaryotic GeneMark.hmm includes hidden states for initial, internal, and terminal exons, introns, intergenic regions and single-exon genes located in both DNA strands. The initial version of the eukaryotic GeneMark.hmm needed manual compilation of training sets of protein-coding sequences for estimation of the algorithm parameters. However, in 2005, the first self-training eukaryotic gene finder, GeneMark-ES, was developed. A fungal version of GeneMark-ES developed in 2008 features a more complex intron model and a hierarchical strategy of self-training. In 2014, in GeneMark-ET, the self-training of parameters was aided by extrinsic hints generated by mapping short RNA-Seq reads to the genome. Extrinsic evidence is not limited to the 'native' RNA sequences. The cross-species proteins collected in the vast protein databases can be a source of external hints, if homologous relationships are established between the already known proteins and the proteins encoded by yet unknown genes in the novel genome. This task was solved with the new algorithm GeneMark-EP+ (2020). Integration of the RNA and protein sources of extrinsic hints was done in GeneMark-ETP (2023). The versatility and accuracy of the eukaryotic gene finders of the GeneMark family have led to their incorporation into a number of genome annotation pipelines. Also, since 2016, the pipelines BRAKER1, BRAKER2 and BRAKER3 have been developed to combine the strongest features of GeneMark and AUGUSTUS.
Notably, gene prediction in eukaryotic transcripts can be done by the algorithm GeneMarkS-T (2015).
GeneMark Family of Gene Prediction Programs
Bacteria, Archaea
GeneMark
GeneMarkS
GeneMarkS-2
Metagenomes and Metatranscriptomes
MetaGeneMark
GeneMarkS-T
Eukaryotes
GeneMark
GeneMark.hmm
GeneMark-ES: ab initio gene finding algorithm for eukaryotic genomes with automatic (unsupervised) training.
GeneMark-ET: augments GeneMark-ES by integrating RNA-Seq read alignments into the self-training procedure.
GeneMark-EP+: augments GeneMark-ES by iterative finding genes in a novel genome, detecting similarities of predicted genes to known proteins, splice-aligning of the known proteins to the genome and generating hints for the next round of prediction, and correction based on the external evidence.
GeneMark-ETP: integrates genomic, transcript and protein evidence into the gene prediction
Viruses, phages and plasmids
Heuristic models
Transcripts assembled from RNA-Seq reads
GeneMarkS-T
See also
List of gene prediction software
Gene prediction
References
Borodovsky M. and McIninch J. "GeneMark: parallel gene recognition for both DNA strands." Computers & Chemistry (1993) 17 (2): 123–133. DOI
Lukashin A. and Borodovsky M. "GeneMark.hmm: new solutions for gene finding." Nucleic Acids Research (1998) 26 (4): 1107–1115. DOI PMID
Besemer J. and Borodovsky M. "Heuristic approach to deriving models for gene finding." Nucleic Acids Research (1999) 27 (19): 3911–3920. DOI PMID
Besemer J., Lomsadze A., and Borodovsky M. "GeneMarkS: a self-training method for prediction of gene starts in microbial genomes. Implications for finding sequence motifs in regulatory regions." Nucleic Acids Research (2001) 29 (12): 2607–2618. DOI PMID
Mills R., Rozanov M., Lomsadze A., Tatusova T., and Borodovsky M. "Improving gene annotation in complete viral genomes." Nucleic Acids Research (2003) 31 (23): 7041–7055. DOI PMID
Besemer J. and Borodovsky M. "GeneMark: web software for gene finding in prokaryotes, eukaryotes and viruses." Nucleic Acids Research (2005) 33 (Web Server Issue): W451-454. DOI PMID
Lomsadze A., Ter-Hovhannisyan V., Chernoff Y., and Borodovsky M. "Gene identification in novel eukaryotic genomes by self-training algorithm." Nucleic Acids Research (2005) 33 (20): 6494–6506. DOI PMID
Ter-Hovhannisyan V., Lomsadze A., Chernoff Y., and Borodovsky M. "Gene prediction in novel fungal genomes using an ab initio algorithm with unsupervised training." Genome Research (2008) 18 (12): 1979-1990. DOI PMID
Zhu W., Lomsadze A., and Borodovsky M. "Ab initio gene identification in metagenomic sequences." Nucleic Acids Research (2010) 38 (12): e132. DOI PMID
Lomsadze A., Burns P.D., and Borodovsky M. "Integration of mapped RNA-Seq reads into automatic training of eukaryotic gene finding algorithm." Nucleic Acids Research (2014) 42 (15): e119. DOI PMID
Tang S., Lomsadze A., and Borodovsky M. "Identification of protein coding regions in RNA transcripts." Nucleic Acids Research (2015) 43 (12): e78. DOI PMID
Tatusova T., DiCuccio M., Badretdin A., Chetvernin V., Nawrocki E., Zaslavsky L., Lomsadze A., Pruitt K., Borodovsky M., and Ostell J. "NCBI prokaryotic genome annotation pipeline." Nucleic Acids Research (2016) 44 (14): 6614-6624. DOI PMID
Hoff K., Lange S., Lomsadze A., Borodovsky M., and Stanke M. "BRAKER1: Unsupervised RNA-Seq-Based Genome Annotation with GeneMark-ET and AUGUSTUS." Bioinformatics (2016) 32 (5): 767-769. DOI PMID
Lomsadze A., Gemayel K., Tang S., and Borodovsky M. "Modeling leaderless transcription and atypical genes results in more accurate gene prediction in prokaryotes." Genome Research (2018) 28 (7): 1079-1089. DOI PMID
Bruna T., Hoff K., Lomsadze A., Stanke M., and Borodovsky M. "BRAKER2: automatic eukaryotic genome annotation with GeneMark-EP+ and AUGUSTUS supported by a protein database." NAR Genomics and Bioinformatics (2021) 3 (1): lqaa108 DOI PMID
Bruna T., Lomsadze A., and Borodovsky M. "GeneMark-EP+: eukaryotic gene prediction with self-training in the space of genes and proteins." NAR Genomics and Bioinformatics (2022) 2 (2): lqaa026 DOI PMID
Bruna T., Lomsadze A., and Borodovsky M. "GeneMark-ETP: Automatic Gene Finding in Eukaryotic Genomes in Consistence with Extrinsic Data." bioRxiv (Jan 5, 2023) DOI PMID
Gabriel L., Brůna T., Hoff K., Ebel M., Lomsadze A., Borodovsky M., and Stanke M. "BRAKER3: Fully automated genome annotation using RNA-Seq and protein evidence with GeneMark-ETP, AUGUSTUS and TSEBRA." bioRxiv (Nov 27, 2023) DOI PMID
External links
Metagenomics software
Mathematical and theoretical biology
Genomics
Bioinformatics software
zh:基因识别 | GeneMark | [
"Mathematics",
"Biology"
] | 2,528 | [
"Bioinformatics",
"Applied mathematics",
"Bioinformatics software",
"Mathematical and theoretical biology"
] |
14,773,056 | https://en.wikipedia.org/wiki/ETV1 | ETS translocation variant 1 is a protein that in humans is encoded by the ETV1 gene.
References
Further reading
External links
Transcription factors | ETV1 | [
"Chemistry",
"Biology"
] | 31 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,773,279 | https://en.wikipedia.org/wiki/PPARGC1B | Peroxisome proliferator-activated receptor gamma coactivator 1-beta is a protein that in humans is encoded by the PPARGC1B gene.
See also
PPARGC1A
Peroxisome proliferator-activated receptor
Peroxisome proliferator-activated receptor alpha
Peroxisome proliferator-activated receptor delta
Peroxisome proliferator-activated receptor gamma
Transcription coregulator
References
Further reading
External links
Gene expression
Transcription coregulators | PPARGC1B | [
"Chemistry",
"Biology"
] | 102 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
14,773,579 | https://en.wikipedia.org/wiki/HIC1 | Hypermethylated in cancer 1 protein is a protein that in humans is encoded by the HIC1 gene.
References
Further reading
External links
Transcription factors | HIC1 | [
"Chemistry",
"Biology"
] | 32 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,773,607 | https://en.wikipedia.org/wiki/HLF%20%28gene%29 | Hepatic leukemia factor is a protein that in humans is encoded by the HLF gene.
Function
This gene encodes a member of the proline and acidic-rich (PAR) protein family, a subset of the bZIP transcription factors. The encoded protein forms homodimers or heterodimers with other PAR family members and binds sequence-specific promoter elements to activate transcription. Chromosomal translocations fusing portions of this gene with the E2A gene cause a subset of childhood B-lineage acute lymphoid leukemias. Alternatively spliced transcript variants have been described, but their biological validity has not been determined.
References
Further reading
External links
Transcription factors | HLF (gene) | [
"Chemistry",
"Biology"
] | 141 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,773,643 | https://en.wikipedia.org/wiki/CRTC1 | CREB-regulated transcription coactivator 1 (CRTC1), previously referred to as TORC1, is a protein that in humans is encoded by the CRTC1 gene. It is expressed in a limited number of tissues that include fetal brain and liver and adult heart, skeletal muscles, liver and salivary glands and various regions of the adult central nervous system.
Clinical significance
Production of CRTC1 is blocked in Alzheimer's disease.
See also
Transcription coregulator
References
Further reading
External links
Gene expression
Transcription coregulators
Immunology
Alzheimer's disease research | CRTC1 | [
"Chemistry",
"Biology"
] | 120 | [
"Gene expression",
"Immunology",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
14,774,442 | https://en.wikipedia.org/wiki/RFX5 | DNA-binding protein RFX5 is a protein that in humans is encoded by the RFX5 gene.
Function
A lack of MHC-II expression results in a severe immunodeficiency syndrome called MHC-II deficiency, or the bare lymphocyte syndrome (BLS; MIM 209920). At least 4 complementation groups have been identified in B-cell lines established from patients with BLS. The molecular defects in complementation groups B, C, and D all lead to a deficiency in RFX, a nuclear protein complex that binds to the Xbox of MHC-II promoters. The lack of RFX binding activity in complementation group C results from mutations in the RFX5 gene encoding the 75-kD subunit of RFX (Steimle et al., 1995). RFX5 is the fifth member of the growing family of DNA-binding proteins sharing a novel and highly characteristic DNA-binding domain called the RFX motif. Multiple alternatively spliced transcript variants have been found but the full-length natures of only two have been determined.
Interactions
RFX5 has been shown to interact with CIITA.
References
Further reading
External links
Transcription factors | RFX5 | [
"Chemistry",
"Biology"
] | 247 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,775,326 | https://en.wikipedia.org/wiki/MTA2 | Metastasis-associated protein MTA2 is a protein that in humans is encoded by the MTA2 gene.
MTA2 is the second member of the MTA family of genes. MTA2 protein localizes in the nucleus and is a component of the nucleosome remodeling and the deacetylation complex (NuRD). Similar to the founding family member MTA1, MTA2 functions as a chromatin remodeling factor and regulates gene expression. MTA2 is overexpressed in human cancer and its dysregulated level correlates well with cancer invasiveness and aggressive phenotypes.
Discovery
MTA2 was initially recognized as an MTA1-like 1 gene, named MTA1-L1, from a large-scale sequencing of randomly selected clones from human cDNA libraries in 1999. Clues about the role of MTA2 in gene expression came from the association of MTA2 polypeptides with the NuRD complex in a proteomic study. This was followed by targeted cloning of murine Mta2 in 2001.
Gene and spliced variants
MTA2 is localized on chromosome 11q12-q13.1 in human and on 19B in mice. The 8.6-kb long human MTA2 gene contains 20 exons and seven transcripts inclusive of three protein-coding transcripts but predicted to code for two polypeptides of 688 amino acids and 495 amino acids. The remaining four MTA2 transcripts are non-coding RNA transcripts ranging from 532-bp to 627-bp. The murine Mta2 consists of a 3.1-kb protein-coding transcript to code a protein of 668 amino acids, and five non-coding RNAs transcripts, ranging from 620-bp to 839-bp.
Structure
The amino acid sequence of MTA2 shares 68.2% homology with that of MTA1. MTA2 domains include a BAH (bromo-adjacent homology) domain, an ELM2 (egl-27 and MTA1 homology) domain, a SANT domain (SWI, ADA2, N-CoR, TFIIIB-B), and a GATA-like zinc finger. MTA2 is acetylated at lysine 152 within the BAH domain.
Function
This gene encodes a protein that has been identified as a component of NuRD, a nucleosome remodeling deacetylase complex identified in the nucleus of human cells. It shows a very broad expression pattern and is strongly expressed in many tissues. It may represent one member of a small gene family that encode different but related proteins involved either directly or indirectly in transcriptional regulation. Their indirect effects on transcriptional regulation may include chromatin remodeling.
MTA2 inhibits estrogen receptor transactivation functions and participates in the development of hormone-independent breast cancer cells. MTA2 participates in the circadian rhythm through the CLOCK-BMAL1 complex. MTA2 inhibits the expression of target genes owing to its ability to interact with chromatin remodeling complexes, and modulates pathways involved in cellular functions, including invasion, apoptosis, epithelial-to-mesenchymal transition, and growth of normal and cancer cells.
Regulation
Expression of MTA2 is stimulated by the Sp1 transcription factor and repressed by Kaiso. The growth-regulatory activity of MTA2 is modulated through its acetylation by the histone acetylase p300. The expression of MTA2 is inhibited by Rho GDIa in breast cancer cells and by human β-defensins in colon cancer cells. MicroRNA-146a and miR-34a also regulate the levels of MTA2 mRNA through a post-transcriptional mechanism.
Targets
MTA2 deacetylates the estrogen receptor alpha and p53 and inhibits their transactivation functions. MTA2 represses the expression of E-cadherin in non-small-cell lung cancer cells, but stimulates the expression of IL-11 in gastric cancer cells. The MTA2-containing chromatin remodeling complex targets the CLOCK-BMAL1 complex.
Interactions
MTA2 has been shown to interact with:
CHD4,
HDAC1,
HDAC2,
MBD3
MTA1,
RBBP4,
RBBP7, and
SATB1.
Notes
References
External links
Transcription factors | MTA2 | [
"Chemistry",
"Biology"
] | 931 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,775,483 | https://en.wikipedia.org/wiki/ZBTB33 | Transcriptional regulator Kaiso is a protein that in humans is encoded by the ZBTB33 gene. This gene encodes a transcriptional regulator with bimodal DNA-binding specificity, which binds to methylated CGCG and also to the non-methylated consensus KAISO-binding site TCCTGCNA. The protein contains an N-terminal POZ/BTB domain and 3 C-terminal zinc finger motifs. It recruits the N-CoR repressor complex to promote histone deacetylation and the formation of repressive chromatin structures in target gene promoters. It may contribute to the repression of target genes of the Wnt signaling pathway, and may also activate transcription of a subset of target genes by the recruitment of catenin delta-2 (CTNND2). Its interaction with catenin delta-1 (CTNND1) inhibits binding to both methylated and non-methylated DNA. It also interacts directly with the nuclear import receptor Importin-α2 (also known as karyopherin alpha2 or RAG cohort 1), which may mediate nuclear import of this protein. Alternatively spliced transcript variants encoding the same protein have been identified.
The KAISO gene was named by Dr. Juliet Daniel after 'calypso' music, which is popular in the Caribbean, including Trinidad and Tobago.
Interactions
ZBTB33 has been shown to interact with HDAC3, Nuclear receptor co-repressor 1 and CTNND1.
References
Further reading
External links
Transcription factors | ZBTB33 | [
"Chemistry",
"Biology"
] | 329 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,775,592 | https://en.wikipedia.org/wiki/OLIG2 | Oligodendrocyte transcription factor (OLIG2) is a basic helix-loop-helix (bHLH) transcription factor encoded by the OLIG2 gene. The protein is of 329 amino acids in length, 32 kDa in size and contains one basic helix-loop-helix DNA-binding domain. It is one of the three members of the bHLH family. The other two members are OLIG1 and OLIG3. The expression of OLIG2 is mostly restricted in central nervous system, where it acts as both an anti-neurigenic and a neurigenic factor at different stages of development. OLIG2 is well known for determining motor neuron and oligodendrocyte differentiation, as well as its role in sustaining replication in early development. It is mainly involved in diseases such as brain tumor and Down syndrome.
Function
OLIG2 is mostly expressed in restricted domains of the brain and spinal cord ventricular zone, which give rise to oligodendrocytes and specific types of neurons. In the spinal cord, the pMN region sequentially generates motor neurons and oligodendrocytes. During embryogenesis, OLIG2 first directs motor neuron fate by establishing a ventral domain of motor neuron progenitors and promoting neuronal differentiation. OLIG2 then switches to promoting the formation of oligodendrocyte precursors and oligodendrocyte differentiation at later stages of development. Apart from functioning as a neurogenic factor in the specification and differentiation of motor neurons and oligodendrocytes, OLIG2 also functions as an anti-neurogenic factor at early time points in pMN progenitors to sustain the cycling progenitor pool. This anti-neurogenic side of OLIG2 later plays a larger role in malignancies such as glioma.
The role of phosphorylation has been highlighted recently to account for the multifaceted functions of OLIG2 in differentiation and proliferation. Studies showed that the phosphorylation state of OLIG2 at Ser30 determines the fate of cortical progenitor cells, in which cortical progenitor cells will either differentiate into astrocytes or remain as neuronal progenitors. Phosphorylation at a triple serine motif (Ser10, Ser13 and Ser14) on the other hand was shown to regulate the proliferative function of OLIG2. Another phosphorylation site Ser147 predicted by bioinformatics was found to regulate motor neuron development by regulating the binding between OLIG2 and NGN2. Further, OLIG2 contains a ST box composed of a string of 12 contiguous serine and threonine residues at position Ser77-Ser88. It is believed that phosphorylation at ST box is biologically functional, yet the role of it still remains to be elucidated in vivo.
OLIG2 has also been implicated in bovine horn ontogenesis. It was the only gene in the bovine polled locus to show differential expression between the putative horn bud and the frontal forehead skin.
Clinical Significance
OLIG2 in Cancer
OLIG2 is well recognized for its importance in cancer research, particularly in brain tumors and leukemia. OLIG2 is universally expressed in glioblastoma and other diffuse gliomas (astrocytomas, oligodendrogliomas and oligoastrocytomas), and is a useful positive diagnostic marker of these brain tumors. Although in normal brain tissue OLIG2 is expressed mostly in oligodendrocytes and not in mature astrocytes, in adult glioma it is expressed at similar levels in IDH1- or IDH2-mutant lower-grade astrocytoma and oligodendroglioma, but at a lower level in IDH-wildtype glioblastoma. OLIG2 overexpression is a good surrogate marker for IDH mutation with an AUC of 0.90, but predicts poorly (AUC = 0.55) for 1p/19q co-deletion, a class-defining chromosomal alteration of oligodendroglioma. In survival analysis, higher mRNA levels of OLIG2 were associated with better overall survival, but this association was completely dependent on IDH mutation status.
In particular, OLIG2 is selectively expressed in a subgroup of glioma cells that are highly tumorigenic, and is shown to be required for proliferation of human glioma cells implanted in the brain of severe combined immunodeficiency (SCID) mice.
Though the molecular mechanism behind this tumorigenesis is not entirely clear, more studies have recently been published pinpointing diverse evidence and potential roles for OLIG2 in glioma progression. It is believed that OLIG2 promotes neural stem cell and progenitor cell proliferation by opposing p53 pathway, which potentially contributes to glioma progression. OLIG2 has been shown to directly repress the p53 tumor-suppressor pathway effector p21WAF1/CIP1, suppress p53 acetylation and impede the binding of p53 to several enhancer sites. It is further found that the phosphorylation of triple-serine motif in OLIG2 is present in several glioma lines and is more tumorigenic than the unphosphorylated status. In a study using the U12-1 cell line for controlled expression of OLIG2, researchers showed that OLIG2 can suppress the proliferation of U12-1 by transactivating the p27Kip1 gene and can inhibit the motility of the cell by activating RhoA.
Besides glioma, OLIG2 is also involved in leukemogenesis. The Olig2 gene was actually first identified in a study in T-cell acute lymphoblastic leukemia, in which the expression of OLIG2 was found elevated after t(14;21)(q11.2;q22) chromosomal translocation. The overexpression of OLIG2 was later shown present in malignancies beyond glioma and leukemia, such as breast cancer, melanoma and non-small cell lung carcinoma cell lines. It also has been shown that up-regulation of OLIG2 together with LMO1 and Notch1 helps to provide proliferation signals.
OLIG2 in Neural Diseases
OLIG2 is also associated with Down syndrome, as it is located on chromosome 21, within or near the Down syndrome critical region on the long arm. This region is believed to contribute to the cognitive defects of Down syndrome. The substantial increase in the number of forebrain inhibitory neurons often observed in the Ts65Dn mouse (a murine model of trisomy 21) could lead to an imbalance between excitation and inhibition and to behavioral abnormalities. However, genetic reduction of OLIG2 and OLIG1 from three copies to two rescued the overproduction of interneurons, indicating the pivotal role of OLIG2 expression level in Down syndrome. The association between OLIG2 and neural diseases (e.g., schizophrenia and Alzheimer's disease) is under scrutiny, as several single nucleotide polymorphisms (SNPs) in OLIG2 associated with these diseases were identified by genome-wide association work.
OLIG2 also plays a functional role in neural repair. Studies showed that the number of OLIG2-expressing cells increased in the lesion after cortical stab-wound injury, supporting the role for OLIG2 in reactive gliosis. OLIG2 was also implicated in generating reactive astrocytes possibly in a transient re-expression manner, but the mechanisms are unclear.
References
Further reading
External links
Transcription factors | OLIG2 | [
"Chemistry",
"Biology"
] | 1,612 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
9,633,335 | https://en.wikipedia.org/wiki/Gauss%E2%80%93Codazzi%20equations | In Riemannian geometry and pseudo-Riemannian geometry, the Gauss–Codazzi equations (also called the Gauss–Codazzi–Weingarten-Mainardi equations or Gauss–Peterson–Codazzi formulas) are fundamental formulas that link together the induced metric and second fundamental form of a submanifold of (or immersion into) a Riemannian or pseudo-Riemannian manifold.
The equations were originally discovered in the context of surfaces in three-dimensional Euclidean space. In this context, the first equation, often called the Gauss equation (after its discoverer Carl Friedrich Gauss), says that the Gauss curvature of the surface, at any given point, is dictated by the derivatives of the Gauss map at that point, as encoded by the second fundamental form. The second equation, called the Codazzi equation or Codazzi-Mainardi equation, states that the covariant derivative of the second fundamental form is fully symmetric. It is named for Gaspare Mainardi (1856) and Delfino Codazzi (1868–1869), who independently derived the result, although it was discovered earlier by Karl Mikhailovich Peterson.
Formal statement
Let be an n-dimensional embedded submanifold of a Riemannian manifold P of dimension . There is a natural inclusion of the tangent bundle of M into that of P by the pushforward, and the cokernel is the normal bundle of M:
The metric splits this short exact sequence, and so
Relative to this splitting, the Levi-Civita connection of P decomposes into tangential and normal components. For each and vector field Y on M,
Let
The Gauss formula now asserts that is the Levi-Civita connection for M, and is a symmetric vector-valued form with values in the normal bundle. It is often referred to as the second fundamental form.
An immediate corollary is the Gauss equation for the curvature tensor. For ,
where is the Riemann curvature tensor of P and R is that of M.
The Weingarten equation is an analog of the Gauss formula for a connection in the normal bundle. Let and a normal vector field. Then decompose the ambient covariant derivative of along X into tangential and normal components:
Then
Weingarten's equation:
DX is a metric connection in the normal bundle.
There are thus a pair of connections: ∇, defined on the tangent bundle of M; and D, defined on the normal bundle of M. These combine to form a connection on any tensor product of copies of TM and T⊥M. In particular, they defined the covariant derivative of :
The Codazzi–Mainardi equation is
Since every immersion is, in particular, a local embedding, the above formulas also hold for immersions.
Gauss–Codazzi equations in classical differential geometry
Statement of classical equations
In classical differential geometry of surfaces, the Codazzi–Mainardi equations are expressed via the second fundamental form (L, M, N):
The Gauss formula, depending on how one chooses to define the Gaussian curvature, may be a tautology. It can be stated as
where (e, f, g) are the components of the first fundamental form.
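The displayed equations referenced in this subsection were lost in extraction. In the classical notation used here — first fundamental form (e, f, g), second fundamental form (L, M, N), Christoffel symbols Γ^k_{ij} — the standard statements read as follows (a sketch of the usual sign conventions, which vary between textbooks):

```latex
% Codazzi--Mainardi equations:
\begin{align}
L_v - M_u &= L\,\Gamma^1_{12} + M\left(\Gamma^2_{12} - \Gamma^1_{11}\right) - N\,\Gamma^2_{11},\\
M_v - N_u &= L\,\Gamma^1_{22} + M\left(\Gamma^2_{22} - \Gamma^1_{12}\right) - N\,\Gamma^2_{12}.
\end{align}
% Gauss equation for the Gaussian curvature:
\begin{equation}
K = \frac{LN - M^2}{eg - f^2}.
\end{equation}
```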
Derivation of classical equations
Consider a parametric surface in Euclidean 3-space,
where the three component functions depend smoothly on ordered pairs (u,v) in some open domain U in the uv-plane. Assume that this surface is regular, meaning that the vectors ru and rv are linearly independent. Complete this to a basis {ru,rv,n}, by selecting a unit vector n normal to the surface. It is possible to express the second partial derivatives of r (vectors of ) with the Christoffel symbols and the elements of the second fundamental form. We choose the first two components of the basis as they are intrinsic to the surface and intend to prove intrinsic property of the Gaussian curvature. The last term in the basis is extrinsic.
Clairaut's theorem states that partial derivatives commute:
If we differentiate ruu with respect to v and ruv with respect to u, we get:
Now substitute the above expressions for the second derivatives and equate the coefficients of n:
Rearranging this equation gives the first Codazzi–Mainardi equation.
The second equation may be derived similarly.
Mean curvature
Let M be a smooth m-dimensional manifold immersed in the (m + k)-dimensional smooth manifold P. Let be a local orthonormal frame of vector fields normal to M. Then we can write,
If, now, is a local orthonormal frame (of tangent vector fields) on the same open subset of M, then we can define the mean curvatures of the immersion by
In particular, if M is a hypersurface of P, i.e. , then there is only one mean curvature to speak of. The immersion is called minimal if all the are identically zero.
Observe that the mean curvature is a trace, or average, of the second fundamental form, for any given component. Sometimes mean curvature is defined by multiplying the sum on the right-hand side by .
We can now write the Gauss–Codazzi equations as
Contracting the components gives us
When M is a hypersurface, this simplifies to
where and . In that case, one more contraction yields,
where and are the scalar curvatures of P and M respectively, and
If , the scalar curvature equation might be more complicated.
We can already use these equations to draw some conclusions. For example, any minimal immersion into the round sphere must be of the form
where runs from 1 to and
is the Laplacian on M, and is a positive constant.
See also
Darboux frame
Notes
References
Historical references
("General Discussions about Curved Surfaces")
Textbooks
do Carmo, Manfredo P. Differential geometry of curves & surfaces. Revised & updated second edition. Dover Publications, Inc., Mineola, NY, 2016. xvi+510 pp.
do Carmo, Manfredo Perdigão. Riemannian geometry. Translated from the second Portuguese edition by Francis Flaherty. Mathematics: Theory & Applications. Birkhäuser Boston, Inc., Boston, MA, 1992. xiv+300 pp.
Kobayashi, Shoshichi; Nomizu, Katsumi. Foundations of differential geometry. Vol. II. Interscience Tracts in Pure and Applied Mathematics, No. 15 Vol. II Interscience Publishers John Wiley & Sons, Inc., New York-London-Sydney 1969 xv+470 pp.
O'Neill, Barrett. Semi-Riemannian geometry. With applications to relativity. Pure and Applied Mathematics, 103. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York, 1983. xiii+468 pp.
Articles
Simons, James. Minimal varieties in riemannian manifolds. Ann. of Math. (2) 88 (1968), 62–105.
External links
Peterson–Mainardi–Codazzi Equations – from Wolfram MathWorld
Peterson–Codazzi Equations
Differential geometry of surfaces
Riemannian geometry
Curvature (mathematics)
Surfaces | Gauss–Codazzi equations | [
"Physics"
] | 1,491 | [
"Geometric measurement",
"Physical quantities",
"Curvature (mathematics)"
] |
9,634,115 | https://en.wikipedia.org/wiki/Business%20informatics | Business informatics (BI) is a discipline combining economics, the economics of digitization, business administration, accounting, internal auditing, information technology (IT), and concepts of computer science. Business informatics centers on creating the software and hardware systems that enable an organization to operate effectively on the basis of information technology. This constructive focus on software and hardware complements the analysis of the economic and information-technology aspects. The BI discipline was created in Germany (in German: Wirtschaftsinformatik). It is an established academic discipline, including bachelor, master, diploma, and PhD programs in Austria, Belgium, Egypt, France, Germany, Hungary, Ireland, The Netherlands, Russia, Slovakia, Sweden, Switzerland, and Turkey, and is establishing itself in an increasing number of other countries as well, including Finland, Australia, Bosnia and Herzegovina, Malaysia, Mexico, Poland, India and South Africa.
Business informatics as an integrative discipline
Business informatics shows similarities to information systems (IS), which is a well-established discipline originating in North America. However, there are a few differences that make business informatics a unique discipline:
Business informatics includes information technology, like the relevant portions of applied computer science, to a significantly larger extent than information systems do.
Business informatics includes significant construction- and implementation-oriented elements. One major focus lies in the development of solutions for business problems rather than the ex post investigation of their impact.
Information systems (IS) focus on empirically explaining the phenomena of the real world. Information systems has been said to have an "explanation-oriented" focus in contrast to the "solution-oriented" focus that dominates business informatics. Information systems researchers make an effort to explain the phenomena of acceptance and influence of IT in organizations and society by applying an empirical approach. In order to do that, usually qualitative and quantitative empirical studies are conducted and evaluated. In contrast to that, business informatics researchers mainly focus on the creation of IT solutions for challenges they have observed or assumed, and thereby they focus more on the possible future uses of IT.
Tight integration between research and teaching, following the Humboldtian ideal, is a major goal in business informatics. Insights gained in actual research projects become part of the curricula quite quickly, since most researchers are also lecturers at the same time. The pace of scientific and technological progress in business informatics is quite rapid; therefore, subjects taught are under permanent reconsideration and revision. The business informatics discipline is still fairly young; therefore, significant hurdles have to be overcome in order to further establish its vision.
Career prospects
Specialists in business informatics can work both in research and in commerce. In business, there are various uses, which may vary depending on professional experience. Fields of employment may include:
Management consulting
Information technology consulting
IT account manager
Systems analysis and organization
Business analyst
IT project manager
IT auditor
Solution architect
Enterprise architect
Information technology management
In consulting, a clear line must be drawn between strategic and IT consulting.
Tertiary institutions providing business informatics degrees
University of Louisiana at Lafayette
Idaho State University
Northern Kentucky College
University of South Africa (Unisa)
An increasing number of universities and community colleges provide business informatics degrees.
Journal
Business & Information Systems Engineering
See also
Bachelor of Business Information Systems
Master of Business Informatics
References
Academic disciplines
Information systems
Information technology management | Business informatics | [
"Technology"
] | 677 | [
"Information systems",
"Information technology",
"Information technology management"
] |
12,098,816 | https://en.wikipedia.org/wiki/Damage%20mechanics | Damage mechanics is concerned with the representation, or modeling, of damage of materials that is suitable for making engineering predictions about the initiation, propagation, and fracture of materials without resorting to a microscopic description that would be too complex for practical engineering analysis.
Damage mechanics illustrates the typical engineering approach to model complex phenomena. To quote Dusan Krajcinovic, "It is often argued that the ultimate task of engineering research is to provide not so much a better insight into the examined phenomenon but to supply a rational predictive tool applicable in design." Damage mechanics is a topic of applied mechanics that relies heavily on continuum mechanics. Most of the work on damage mechanics uses state variables to represent the effects of damage on the stiffness and remaining life of the material that is damaging as a result of thermomechanical load and ageing. The state variables may be measurable, e.g., crack density, or inferred from the effect they have on some macroscopic property, such as stiffness, coefficient of thermal expansion, remaining life, etc. The state variables have conjugate thermodynamic forces that motivate further damage. Initially the material is pristine, or intact. A damage activation criterion is needed to predict damage initiation. Damage evolution does not progress spontaneously after initiation, thus requiring a damage evolution model. In plasticity-like formulations, the damage evolution is controlled by a hardening function, but this requires additional phenomenological parameters that must be found through experimentation, which is expensive, time-consuming, and rarely done in practice. On the other hand, micromechanics-of-damage formulations are able to predict both damage initiation and evolution without additional material properties.
Creep continuum damage mechanics
When mechanical structures are exposed to temperatures exceeding one-third of the melting temperature of the material of construction, time-dependent deformation (creep) and associated material degradation mechanisms become dominant modes of structural failure. While these deformation and damage mechanisms originate at the microscale where discrete processes dominate, practical application of failure theories to macroscale components is most readily achieved using the formalism of continuum mechanics. In this context, microscopic damage is idealized as a continuous state variable defined at all points within a structure. State equations are defined which govern the time evolution of damage. These equations may be readily integrated into finite element codes to analyze the damage evolution in complex 3D structures and calculate how long a component may safely be used before failure occurs.
Lumped damage state variable
L. M. Kachanov and Y. N. Rabotnov suggested the following evolution equations for the creep strain ε and a lumped damage state variable ω:
where is the creep strain rate, is the creep-rate multiplier, is the applied stress, is the creep stress exponent of the material of interest, is the rate of damage accumulation, is the damage-rate multiplier, and is the damage stress exponent.
In this simple case, the strain rate is governed by power-law creep with the stress enhanced by the damage state variable as damage accumulates. The damage term ω is interpreted as a distributed loss of load bearing area which results in an increased local stress at the microscale. The time to failure is determined by integrating the damage evolution equation from an initial undamaged state to a specified critical damage . If is taken to be 1, this results in the following prediction for a structure loaded under a constant uniaxial stress :
Model parameters and n are found by fitting the creep strain rate equation at zero damage to minimum creep rate measurements. Model parameters and m are found by fitting the above equation to creep rupture life data.
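The displayed evolution equations and the rupture-time expression referenced above were lost in extraction. A standard form of the Kachanov–Rabotnov model consistent with the surrounding definitions is sketched below, with A and B the creep- and damage-rate multipliers and n and m the corresponding stress exponents; conventions with additional exponents also appear in the literature.

```latex
% Kachanov--Rabotnov evolution equations (sketch):
\begin{align}
\dot{\varepsilon} &= A \left( \frac{\sigma}{1-\omega} \right)^{n}, &
\dot{\omega} &= B \left( \frac{\sigma}{1-\omega} \right)^{m}.
\end{align}
% Integrating the damage equation from \omega = 0 to the critical value \omega_f = 1
% at constant uniaxial stress \sigma_0 gives the rupture time:
\begin{equation}
t_f = \frac{1}{(m+1)\, B\, \sigma_0^{\,m}}.
\end{equation}
```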
Mechanistically informed damage state variables
While easy to apply, the lumped damage model proposed by Kachanov and Rabotnov is limited by the fact that the damage state variable cannot be directly tied to a specific mechanism of strain and damage evolution. Correspondingly, extrapolation of the model beyond the original dataset of test data is not justified. This limitation was remedied by researchers such as A.C.F. Cocks, M.F. Ashby, and B.F. Dyson, who proposed mechanistically informed strain and damage evolution equations. Extrapolation using such equations is justified if the dominant damage mechanism remains the same at the conditions of interest.
Void-growth by power-law creep
In the power-law creep regime, global deformation is controlled by glide and climb of dislocations. If internal voids are present within the microstructure, global structural continuity requires that the voids must both elongate and expand laterally, further reducing the local section. When cast in the damage mechanics formalism, the growth of internal voids by power-law creep can be represented by the following equations.
where is the creep-rate multiplier, is the applied stress, n is the creep stress exponent, is the average initial void radius, and d is the grain size.
Void-growth by boundary diffusion
At very high temperature and/or low stresses, void growth on grain boundaries is primarily controlled by the diffusive flux of vacancies along the grain boundary. As matter diffuses away from the void and plates onto the adjacent grain boundaries, a roughly spherical void is maintained by rapid diffusion of vacancies along the surface of the void. When cast in the damage mechanics formalism, the growth of internal voids by boundary diffusion can be represented by the following equations.
where is the creep-rate multiplier, is the applied stress, is the center-to-center void spacing, is the grain size, is the grain-boundary diffusion coefficient, is the grain boundary thickness, is the atomic volume, is the Boltzmann constant, and is the absolute temperature. It is noted that the factors present in are very similar to the Coble creep pre-factors due to the similarity of the two mechanisms.
Precipitate coarsening
Many modern steels and alloys are designed such that precipitates will precipitate either within the matrix or along grain boundaries during casting. These precipitates restrict dislocation motion and, if present on grain boundaries, grain boundary sliding during creep. Many precipitates are not thermodynamically stable and grow via diffusion when exposed to elevated temperatures. As the precipitates coarsen, their ability to restrict dislocation motion decreases as the average spacing between particles increases, thus decreasing the required Orowan stress for bowing. In the case of grain boundary precipitates, precipitate growth means that fewer grain boundaries are impeded from grain boundary sliding. When cast into the damage mechanics formalism, precipitation coarsening and its effect on strain rate may be represented by the following equations.
where is the creep-rate multiplier, is the applied stress, is the creep-rate stress exponent, is a parameter linking the precipitation damage to the strain rate, determines the rate of precipitate coarsening.
Combining damage mechanisms
Multiple damage mechanism can be combined to represent a broader range of phenomena. For instance, if both void-growth by power-law creep and precipitate coarsening are relevant mechanisms, the following combined set of equations may be used:
Note that both damage mechanisms are included in the creep strain rate equation. The precipitate coarsening damage mechanisms influences the void-growth damage mechanism as the void-growth mechanism depends on the global strain rate. The precipitate growth mechanisms is only time and temperature dependent and hence does not depend on the void-growth damage .
Multiaxial effects
The preceding equations are valid under uniaxial tension only. When a multiaxial state of stress is present in the system, each equation must be adapted so that the driving multiaxial stress is considered. For void-growth by power-law creep, the relevant stress is the von Mises stress as this drives the global creep deformation; however, for void-growth by boundary diffusion, the maximum principal stress drives the vacancy flux.
See also
Lumped damage mechanics
Failure analysis
Critical plane analysis
References
Continuum mechanics
Materials degradation
Mechanical failure | Damage mechanics | [
"Physics",
"Materials_science",
"Engineering"
] | 1,653 | [
"Continuum mechanics",
"Classical mechanics",
"Materials science",
"Mechanical engineering",
"Materials degradation",
"Mechanical failure"
] |
12,101,027 | https://en.wikipedia.org/wiki/Holstein%E2%80%93Primakoff%20transformation | In quantum mechanics, the Holstein–Primakoff transformation is a mapping from boson creation and annihilation operators to the spin operators, effectively truncating their infinite-dimensional Fock space to finite-dimensional subspaces.
One important aspect of quantum mechanics is the occurrence of—in general—non-commuting operators which represent observables, quantities that can be measured.
A standard example of a set of such operators are the three components of the angular momentum operators, which are crucial in many quantum systems.
These operators are complicated, and one would like to find a simpler representation, which can be used to generate approximate calculational schemes.
The transformation was developed in 1940 by Theodore Holstein, a graduate student at the time, and Henry Primakoff. This method has found widespread applicability and has been extended in many different directions.
There is a close link to other methods of boson mapping of operator algebras: in particular, the (non-Hermitian) Dyson–Maleev technique, and to a lesser extent the Jordan–Schwinger map. There is, furthermore, a close link to the theory of (generalized) coherent states in Lie algebras.
Description
The basic idea can be illustrated for the basic example of spin operators of quantum mechanics.
For any set of right-handed orthogonal axes, define the components of this vector operator as , and , which are mutually noncommuting, i.e., and its cyclic permutations.
In order to uniquely specify the states of a spin, one may diagonalise any set of commuting operators. Normally one uses the SU(2) Casimir operators and , which leads to
states with the quantum numbers ,
The projection quantum number takes on all the values .
Consider a single particle of spin (i.e., look at a single irreducible representation of SU(2)). Now take the state with maximal projection , the extremal weight state as a vacuum for a set of boson operators, and each subsequent state with lower projection quantum number as a boson excitation of the previous one,
Each additional boson then corresponds to a decrease of in the spin projection. Thus, the spin raising and lowering operators
and , so that , correspond (in the sense detailed below) to the bosonic annihilation and creation operators, respectively.
The precise relations between the operators must be chosen to ensure the correct commutation relations for the spin operators, such that they act on a finite-dimensional space, unlike the original Fock space.
The resulting Holstein–Primakoff transformation can be written as
The transformation is particularly useful in the case where is large, when the square roots can be expanded as Taylor series, to give an expansion in decreasing powers of .
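The displayed form of the transformation was lost in extraction. In the common convention in which the boson vacuum corresponds to the maximal-projection state described above, it reads (a minimal sketch; sign and ordering conventions vary between authors):

```latex
\begin{align}
S^{+} &= \hbar\sqrt{2s}\,\sqrt{1-\frac{a^{\dagger}a}{2s}}\;a, &
S^{-} &= \hbar\sqrt{2s}\;a^{\dagger}\sqrt{1-\frac{a^{\dagger}a}{2s}}, &
S_{z} &= \hbar\left(s-a^{\dagger}a\right).
\end{align}
```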
As an alternative to a Taylor expansion, there has been recent progress with a resummation of the series that yields expressions which are polynomial in bosonic operators but still mathematically exact (on the physical subspace). The first method develops a resummation that is exact for spin , while the latter employs a Newton series (finite difference) expansion with an identical result, as shown below.
While the expression above is not exact for spins higher than 1/2, it is an improvement over the Taylor series. Exact expressions also exist for higher spins and include additional terms. Much like the result above, the expressions for higher spins are Hermitian, and therefore the resummation is Hermitian.
There also exists a non-Hermitian Dyson–Maleev variant realization (due to Freeman Dyson and S. V. Maleev), related to the above and valid for all spins,
satisfying the same commutation relations and characterized by the same Casimir invariant.
The technique can be further extended to the Witt algebra, which is the centerless Virasoro algebra.
See also
Spin wave
Jordan–Wigner transformation
Jordan–Schwinger transformation
Bogoliubov–Valatin transformation
Klein transformation
References
Quantum mechanics | Holstein–Primakoff transformation | [
"Physics"
] | 801 | [
"Theoretical physics",
"Quantum mechanics"
] |
4,276,393 | https://en.wikipedia.org/wiki/Group%20with%20operators | In abstract algebra, a branch of mathematics, a group with operators or Ω-group is an algebraic structure that can be viewed as a group together with a set Ω that operates on the elements of the group in a special way.
Groups with operators were extensively studied by Emmy Noether and her school in the 1920s. She employed the concept in her original formulation of the three Noether isomorphism theorems.
Definition
A group with operators can be defined as a group together with an action of a set on :
that is distributive relative to the group law:
For each , the application is then an endomorphism of G. From this, it results that a Ω-group can also be viewed as a group G with an indexed family of endomorphisms of G.
is called the operator domain. The associated endomorphisms are called the homotheties of G.
Given two groups G, H with same operator domain , a homomorphism of groups with operators from to is a group homomorphism satisfying
for all and
A subgroup S of G is called a stable subgroup, -subgroup or -invariant subgroup if it respects the homotheties, that is
for all and
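The displayed conditions in this definition were lost in extraction. Written out in one standard notation (a sketch; the action is often written as exponentiation g ↦ g^ω), the definition amounts to:

```latex
% Action of the operator domain \Omega on G and distributivity over the group law:
\mu : \Omega \times G \to G, \qquad (\omega, g) \mapsto g^{\omega}, \qquad
(gh)^{\omega} = g^{\omega} h^{\omega} \quad \text{for all } \omega \in \Omega,\ g, h \in G.
% A homomorphism of groups with operators \phi : G \to H and a stable subgroup S \leq G satisfy
\phi\left(g^{\omega}\right) = \phi(g)^{\omega}, \qquad s^{\omega} \in S
\quad \text{for all } \omega \in \Omega,\ g \in G,\ s \in S.
```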
Category-theoretic remarks
In category theory, a group with operators can be defined as an object of a functor category GrpM where M is a monoid (i.e. a category with one object) and Grp denotes the category of groups. This definition is equivalent to the previous one, provided is a monoid (if not, we may expand it to include the identity and all compositions).
A morphism in this category is a natural transformation between two functors (i.e., two groups with operators sharing same operator domain M ). Again we recover the definition above of a homomorphism of groups with operators (with f the component of the natural transformation).
A group with operators is also a mapping
where is the set of group endomorphisms of G.
Examples
Given any group G, (G, ∅) is trivially a group with operators
Given a module M over a ring R, R acts by scalar multiplication on the underlying abelian group of M, so (M, R) is a group with operators.
As a special case of the above, every vector space over a field K is a group with operators (V, K).
Applications
The Jordan–Hölder theorem also holds in the context of groups with operators. The requirement that a group have a composition series is analogous to that of compactness in topology, and can sometimes be too strong a requirement. It is natural to talk about "compactness relative to a set", i.e. talk about composition series where each (normal) subgroup is an operator-subgroup relative to the operator set X, of the group in question.
See also
Group action
Notes
References
Group actions (mathematics)
Universal algebra | Group with operators | [
"Physics",
"Mathematics"
] | 596 | [
"Fields of abstract algebra",
"Universal algebra",
"Group actions",
"Symmetry"
] |
4,277,015 | https://en.wikipedia.org/wiki/Captopril%20challenge%20test |
The captopril challenge test (CCT) is a non-invasive medical test that measures the change in renin plasma-levels in response to administration of captopril, an angiotensin converting enzyme inhibitor. It is used to assist in the diagnosis of renal artery stenosis. It is not generally considered a useful test for children, and more suitable options are available for adult cases.
Procedure
Plasma concentration of renin is measured prior to and following the administration of captopril. The CCT is considered positive if the renin levels increase substantially or the baseline renin level is abnormally high.
In adults
CCT in adults is known to have high sensitivity, but a low specificity.
Subtraction angiography is considered a more suitable test for renal artery stenosis in adults.
See also
Captopril suppression test - used to diagnose primary aldosteronism
References
Blood tests
Dynamic endocrine function tests | Captopril challenge test | [
"Chemistry"
] | 190 | [
"Blood tests",
"Chemical pathology"
] |
4,277,086 | https://en.wikipedia.org/wiki/Wigner%E2%80%93Weyl%20transform | In quantum mechanics, the Wigner–Weyl transform or Weyl–Wigner transform (after Hermann Weyl and Eugene Wigner) is the invertible mapping between functions in the quantum phase space formulation and Hilbert space operators in the Schrödinger picture.
Often the mapping from functions on phase space to operators is called the Weyl transform or Weyl quantization, whereas the inverse mapping, from operators to functions on phase space, is called the Wigner transform. This mapping was originally devised by Hermann Weyl in 1927 in an attempt to map symmetrized classical phase space functions to operators, a procedure known as Weyl quantization. It is now understood that Weyl quantization does not satisfy all the properties one would require for consistent quantization and therefore sometimes yields unphysical answers. On the other hand, some of the nice properties described below suggest that if one seeks a single consistent procedure mapping functions on the classical phase space to operators, the Weyl quantization is the best option: a sort of normal coordinates of such maps. (Groenewold's theorem asserts that no such map can have all the ideal properties one would desire.)
Regardless, the Weyl–Wigner transform is a well-defined integral transform between the phase-space and operator representations, and yields insight into the workings of quantum mechanics. Most importantly, the Wigner quasi-probability distribution is the Wigner transform of the quantum density matrix, and, conversely, the density matrix is the Weyl transform of the Wigner function.
In contrast to Weyl's original intentions in seeking a consistent quantization scheme, this map merely amounts to a change of representation within quantum mechanics; it need not connect "classical" with "quantum" quantities. For example, the phase-space function may depend explicitly on the reduced Planck constant ħ, as it does in some familiar cases involving angular momentum. This invertible representation change then allows one to express quantum mechanics in phase space, as was appreciated in the 1940s by Hilbrand J. Groenewold and José Enrique Moyal.
In more generality, Weyl quantization is studied in cases where the phase space is a symplectic manifold, or possibly a Poisson manifold. Related structures include the Poisson–Lie groups and Kac–Moody algebras.
Definition of the Weyl quantization of a general observable
The following explains the Weyl transformation on the simplest, two-dimensional Euclidean phase space. Let the coordinates on phase space be , and let be a function defined everywhere on phase space. In what follows, we fix operators P and Q satisfying the canonical commutation relations, such as the usual position and momentum operators in the Schrödinger representation. We assume that the exponentiated operators and constitute an irreducible representation of the Weyl relations, so that the Stone–von Neumann theorem (guaranteeing uniqueness of the canonical commutation relations) holds.
Basic formula
The Weyl transform (or Weyl quantization) of the function is given by the following operator in Hilbert space,
Throughout, ħ is the reduced Planck constant.
It is instructive to perform the and integrals in the above formula first, which has the effect of computing the ordinary Fourier transform of the function , while leaving the operator . In that case, the Weyl transform can be written as
.
We may therefore think of the Weyl map as follows: We take the ordinary Fourier transform of the function , but then when applying the Fourier inversion formula, we substitute the quantum operators and for the original classical variables and , thus obtaining a "quantum version of ."
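The displayed formula for the Weyl transform was lost in extraction. In one common convention (a sketch; prefactors and the placement of ħ vary between references), it reads:

```latex
\Phi[f] = \frac{1}{(2\pi)^{2}} \iiiint f(q,p)\,
e^{\,i\left[a\,(Q-q) + b\,(P-p)\right]}\,
\mathrm{d}q\,\mathrm{d}p\,\mathrm{d}a\,\mathrm{d}b .
```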
A less symmetric form, but handy for applications, is the following,
In the position representation
The Weyl map may then also be expressed in terms of the integral kernel matrix elements of this operator,
Inverse map
The inverse of the above Weyl map is the Wigner map (or Wigner transform), which was introduced by Eugene Wigner, which takes the operator back to the original phase-space kernel function ,
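The displayed expression for the Wigner map was also lost in extraction; a standard form (a sketch, with the same caveat about conventions) recovering the phase-space function from an operator \hat{G} is:

```latex
g(q,p) = \int_{-\infty}^{\infty}
\left\langle q + \tfrac{y}{2} \right| \hat{G} \left| q - \tfrac{y}{2} \right\rangle
e^{-i p y / \hbar}\, \mathrm{d}y .
```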
For example, the Wigner map of the oscillator thermal distribution operator is
If one replaces in the above expression with an arbitrary operator, the resulting function may depend on the reduced Planck constant , and may well describe quantum-mechanical processes, provided it is properly composed through the star product, below.
In turn, the Weyl map of the Wigner map is summarized by Groenewold's formula,
Weyl quantization of polynomial observables
While the above formulas give a nice understanding of the Weyl quantization of a very general observable on phase space, they are not very convenient for computing on simple observables, such as those that are polynomials in and . In later sections, we will see that on such polynomials, the Weyl quantization represents the totally symmetric ordering of the noncommuting operators and .
For example, the Wigner map of the quantum angular-momentum-squared operator L2 is not just the classical angular momentum squared, but it further contains an offset term , which accounts for the nonvanishing angular momentum of the ground-state Bohr orbit.
Properties
Weyl quantization of polynomials
The action of the Weyl quantization on polynomial functions of and is completely determined by the following symmetric formula:
for all complex numbers and . From this formula, it is not hard to show that the Weyl quantization on a function of the form gives the average of all possible orderings of factors of and factors of :where , and is the set of permutations on N elements.
For example, we have
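The example equation referenced above was lost in extraction. An illustrative instance of the totally symmetric ordering (not necessarily the one originally displayed) is:

```latex
\Phi\!\left(q^{2} p\right) = \frac{1}{3}\left(Q^{2} P + Q P Q + P Q^{2}\right).
```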
While this result is conceptually natural, it is not convenient for computations when and are large. In such cases, we can use instead McCoy's formula
This expression gives an apparently different answer for the case of from the totally symmetric expression above. There is no contradiction, however, since the canonical commutation relations allow for more than one expression for the same operator. (The reader may find it instructive to use the commutation relations to rewrite the totally symmetric formula for the case of in terms of the operators , , and and verify the first expression in McCoy's formula with .)
It is widely thought that the Weyl quantization, among all quantization schemes, comes as close as possible to mapping the Poisson bracket on the classical side to the commutator on the quantum side. (An exact correspondence is impossible, in light of Groenewold's theorem.) For example, Moyal showed the
Theorem: If is a polynomial of degree at most 2 and is an arbitrary polynomial, then we have .
Weyl quantization of general functions
If is a real-valued function, then its Weyl-map image is self-adjoint.
If is an element of Schwartz space, then is trace-class.
More generally, is a densely defined unbounded operator.
The map is one-to-one on the Schwartz space (as a subspace of the square-integrable functions).
See also
Canonical commutation relation
Deformation quantization
Heisenberg group
Moyal bracket
Weyl algebra
Functor
Pseudo-differential operator
Wigner quasi-probability distribution
Stone–von Neumann theorem
Phase space formulation of quantum mechanics
Kontsevich quantization formula
Gabor–Wigner transform
Oscillator representation
References
Further reading
(Sections I to IV of this article provide an overview over the Wigner–Weyl transform, the Wigner quasiprobability distribution, the phase space formulation of quantum mechanics and the example of the quantum harmonic oscillator.)
Terence Tao's 2012 notes on Weyl ordering
Mathematical quantization
Mathematical physics
Foundational quantum physics
Concepts in physics | Wigner–Weyl transform | [
"Physics",
"Mathematics"
] | 1,589 | [
"Applied mathematics",
"Theoretical physics",
"Foundational quantum physics",
"Quantum mechanics",
"Mathematical quantization",
"nan",
"Mathematical physics"
] |