| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
8,155,146 | https://en.wikipedia.org/wiki/Hedetniemi%27s%20conjecture | In graph theory, Hedetniemi's conjecture, formulated by Stephen T. Hedetniemi in 1966, concerns the connection between graph coloring and the tensor product of graphs. The conjecture states that
χ(G × H) = min {χ(G), χ(H)}.
Here χ(G) denotes the chromatic number of an undirected finite graph G.
The inequality χ(G × H) ≤ min {χ(G), χ(H)} is easy: if G is k-colored, one can k-color G × H by using the same coloring for each copy of G in the product; symmetrically if H is k-colored. Thus, Hedetniemi's conjecture amounts to the assertion that tensor products cannot be colored with an unexpectedly small number of colors.
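The easy direction of this inequality can be made concrete in code. The following is a minimal sketch (plain Python; the edge-list representation and helper names are illustrative assumptions, not taken from the literature) that builds the tensor product of two small graphs and colors it by projecting a proper coloring of one factor:

```python
from itertools import product

def tensor_product(G_edges, H_edges):
    """Edge set of the tensor product G x H: (g, h) ~ (g', h')
    exactly when g ~ g' in G and h ~ h' in H."""
    E = set()
    for (g, g2) in G_edges:
        for (h, h2) in H_edges:
            E.add(((g, h), (g2, h2)))
            E.add(((g, h2), (g2, h)))   # the other pairing of the two factor edges
    return E

def project_coloring(coloring, factor):
    """A proper coloring of one factor gives a proper coloring of the product:
    color (g, h) by the color of its g-coordinate (factor=0) or h-coordinate (factor=1)."""
    return lambda vertex: coloring[vertex[factor]]

def is_proper(edges, color):
    return all(color(u) != color(v) for (u, v) in edges)

# G = triangle K3 (chromatic number 3), H = a single edge K2 (chromatic number 2).
G_edges = [(0, 1), (1, 2), (0, 2)]
H_edges = [("a", "b")]
product_edges = tensor_product(G_edges, H_edges)

coloring_H = {"a": 0, "b": 1}                  # a 2-coloring of H
color = project_coloring(coloring_H, factor=1)
print(is_proper(product_edges, color))         # True: chi(G x H) <= min(chi(G), chi(H)) = 2
```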
A counterexample to the conjecture was discovered by Yaroslav Shitov in 2019, thus disproving the conjecture in general.
Known cases
Any graph with a nonempty set of edges requires at least two colors; if G and H are not 1-colorable, that is, they both contain an edge, then their product also contains an edge, and is hence not 1-colorable either. In particular, the conjecture is true when G or H is a bipartite graph, since then its chromatic number is either 1 or 2.
Similarly, if two graphs G and H are not 2-colorable, that is, not bipartite, then both contain a cycle of odd length. Since the product of two odd cycle graphs contains an odd cycle, the product G × H is not 2-colorable either. In other words, if G × H is 2-colorable, then at least one of G and H must be 2-colorable as well.
The next case was proved long after the conjecture's statement, by El-Zahar and Sauer in 1985: if the product G × H is 3-colorable, then one of G or H must also be 3-colorable. In particular, the conjecture is true whenever G or H is 4-colorable (since then the inequality χ(G × H) ≤ min {χ(G), χ(H)} can only be strict when G × H is 3-colorable).
In the remaining cases, both graphs in the tensor product are at least 5-chromatic and progress has only been made for very restricted situations.
Weak Hedetniemi Conjecture
The following function (known as the Poljak–Rödl function) measures how low the chromatic number of products of n-chromatic graphs can be:
f(n) = min {χ(G × H) : χ(G) = χ(H) = n}.
Hedetniemi's conjecture is then equivalent to saying that f(n) = n for every n.
The Weak Hedetniemi Conjecture instead states merely that the function f(n) is unbounded.
In other words, if the tensor product of two graphs can be colored with few colors, this should imply some bound on the chromatic number of one of the factors.
The main result in this direction, due to Poljak and Rödl and independently improved by Poljak, by James H. Schmerl, and by Zhu, states that if the function f(n) is bounded, then it is bounded by at most 9.
Thus a proof of Hedetniemi's conjecture for 10-chromatic graphs would already imply the Weak Hedetniemi Conjecture for all graphs.
Multiplicative graphs
The conjecture is studied in the more general context of graph homomorphisms, especially because of interesting relations to the category of graphs (with graphs as objects and homomorphisms as arrows). For any fixed graph K, one considers graphs G that admit a homomorphism to K, written G → K. These are also called K-colorable graphs. This generalizes the usual notion of graph coloring, since it follows from the definitions that a k-coloring is the same as a K_k-coloring (a homomorphism into the complete graph K_k on k vertices).
A graph K is called multiplicative if for any graphs G, H, the fact that G × H → K holds implies that G → K or H → K holds.
As with classical colorings, the reverse implication always holds: if G (or H, symmetrically) is K-colorable, then G × H is easily K-colored by using the same values independently of H.
Hedetniemi's conjecture is then equivalent to the statement that each complete graph is multiplicative.
The above known cases are equivalent to saying that K_1, K_2, and K_3 are multiplicative.
The case of K_4 remains wide open.
On the other hand, the proof of El-Zahar and Sauer has been generalized to show that all cycle graphs are multiplicative.
Later, Tardif proved more generally that all circular cliques K_{n/k} with n/k < 4 are multiplicative.
In terms of the circular chromatic number χc, this means that if χc(G × H) < 4, then χc(G × H) = min {χc(G), χc(H)}. Wrochna has shown that square-free graphs are multiplicative.
Examples of non-multiplicative graphs can be constructed from two graphs G and H that are not comparable in the homomorphism order (that is, neither G→H nor H→G holds). In this case, letting K=G×H, we trivially have G×H→K, but neither G nor H can admit a homomorphism into K, since composed with the projection K→H or K→G it would give a contradiction.
Exponential graph
Since the tensor product of graphs is the category-theoretic product in the category of graphs (with graphs as objects and homomorphisms as arrows), the conjecture can be rephrased in terms of the following construction on graphs K and G.
The exponential graph K^G is the graph with all functions V(G) → V(K) as vertices (not only the homomorphisms) and two functions f, g adjacent when
f(v) is adjacent to g(v′) in K, for all adjacent vertices v, v′ of G.
In particular, there is a loop at a function f (it is adjacent to itself) if and only if the function gives a homomorphism from G to K.
Seen differently, there is an edge between f and g whenever the two functions define a homomorphism from G × K2 (the bipartite double cover of G) to K.
The exponential graph K^G is the exponential object in the category of graphs. This means homomorphisms from G × H to a graph K correspond to homomorphisms from H to K^G.
Moreover, there is a homomorphism eval : G × K^G → K given by the evaluation map eval(v, f) = f(v).
These properties allow one to conclude that the multiplicativity of K is equivalent to the statement:
either G or K^G is K-colorable, for every graph G.
In other words, Hedetniemi's conjecture can be seen as a statement on exponential graphs: for every integer k, the graph (K_k)^G is either k-colorable, or it contains a loop (meaning G is k-colorable).
One can also see these exponential graphs as the hardest instances of Hedetniemi's conjecture: if the product G × H were a counterexample, then G × (K_k)^G would also be a counterexample.
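The exponential-graph construction described above is small enough to experiment with directly. Below is a minimal sketch (plain Python; the tuple-based encoding of functions and the helper names are illustrative assumptions) that builds K^G for two small graphs and checks that a vertex of K^G carries a loop exactly when the corresponding function is a homomorphism G → K:

```python
from itertools import product

def adjacent(edges, u, v):
    return (u, v) in edges or (v, u) in edges

def exponential_graph(K_vertices, K_edges, G_vertices, G_edges):
    """K^G: vertices are all functions V(G) -> V(K), encoded as tuples of images;
    f and g are adjacent iff f(v) is adjacent to g(w) in K for every edge (v, w) of G."""
    functions = list(product(K_vertices, repeat=len(G_vertices)))
    index = {v: i for i, v in enumerate(G_vertices)}
    def adj(f, g):
        return all(adjacent(K_edges, f[index[v]], g[index[w]]) and
                   adjacent(K_edges, f[index[w]], g[index[v]])
                   for (v, w) in G_edges)
    edges = {(f, g) for f in functions for g in functions if adj(f, g)}
    return functions, edges, index

def is_homomorphism(f, index, G_edges, K_edges):
    return all(adjacent(K_edges, f[index[v]], f[index[w]]) for (v, w) in G_edges)

# K = K2 (a single edge), G = the path on three vertices.
K_vertices, K_edges = [0, 1], [(0, 1)]
G_vertices, G_edges = ["x", "y", "z"], [("x", "y"), ("y", "z")]
functions, edges, index = exponential_graph(K_vertices, K_edges, G_vertices, G_edges)

# A function has a loop in K^G (is adjacent to itself) iff it is a homomorphism G -> K.
for f in functions:
    assert ((f, f) in edges) == is_homomorphism(f, index, G_edges, K_edges)
print("loop <-> homomorphism check passed for all", len(functions), "functions")
```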
Generalizations
Generalized to directed graphs, the conjecture has simple counterexamples. Here, the chromatic number of a directed graph is just the chromatic number of the underlying graph, but the tensor product has exactly half the number of edges (for directed edges g→g′ in G and h→h′ in H, the tensor product G × H has only one edge, from (g,h) to (g′,h′), while the product of the underlying undirected graphs would have an edge between (g,h′) and (g′,h) as well).
However, the Weak Hedetniemi Conjecture turns out to be equivalent in the directed and undirected settings.
The problem cannot be generalized to infinite graphs: there is an example of two infinite graphs, each requiring an uncountable number of colors, such that their product can be colored with only countably many colors. It has also been proved that in the constructible universe, for every infinite cardinal κ, there exists a pair of graphs of chromatic number greater than κ whose product can still be colored with only countably many colors.
Related problems
A similar equality for the Cartesian product of graphs was proven early on and has been rediscovered several times afterwards.
An exact formula is also known for the lexicographic product of graphs.
Two stronger conjectures involving unique colorability have also been introduced.
References
External links
A breakthrough in graph theory, Numberphile
Graph products
Graph coloring
Disproved conjectures | Hedetniemi's conjecture | [
"Mathematics"
] | 1,768 | [
"Graph coloring",
"Mathematical relations",
"Graph theory"
] |
8,156,507 | https://en.wikipedia.org/wiki/Fixation%20%28histology%29 | In the fields of histology, pathology, and cell biology, fixation is the preservation of biological tissues from decay due to autolysis or putrefaction. It terminates any ongoing biochemical reactions and may also increase the treated tissues' mechanical strength or stability. Tissue fixation is a critical step in the preparation of histological sections, its broad objective being to preserve cells and tissue components and to do this in such a way as to allow for the preparation of thin, stained sections. This allows the investigation of the tissues' structure, which is determined by the shapes and sizes of such macromolecules (in and around cells) as proteins and nucleic acids.
Purposes
In performing their protective role, fixatives denature proteins by coagulation, by forming additive compounds, or by a combination of coagulation and additive processes. A compound that adds chemically to macromolecules stabilizes structure most effectively if it is able to combine with parts of two different macromolecules, an effect known as cross-linking.
Fixation of tissue is done for several reasons. One reason is to kill the tissue so that postmortem decay (autolysis and putrefaction) is prevented.
Fixation preserves biological material (tissue or cells) as close to its natural state as possible in the process of preparing tissue for examination. To achieve this, several conditions usually must be met.
First, a fixative usually acts to disable intrinsic biomolecules—particularly proteolytic enzymes—which otherwise digest or damage the sample.
Second, a fixative typically protects a sample from extrinsic damage. Fixatives are toxic to most common microorganisms (bacteria in particular) that might exist in a tissue sample or which might otherwise colonize the fixed tissue. In addition, many fixatives chemically alter the fixed material to make it less palatable (either indigestible or toxic) to opportunistic microorganisms.
Finally, fixatives often alter the cells or tissues on a molecular level to increase their mechanical strength or stability. This increased strength and rigidity can help preserve the morphology (shape and structure) of the sample as it is processed for further analysis.
Even the most careful fixation does alter the sample and introduce artifacts that can interfere with interpretation of cellular ultrastructure. A prominent example is the bacterial mesosome, which was thought to be an organelle in gram-positive bacteria in the 1970s, but was later shown by new techniques developed for electron microscopy to be simply an artifact of chemical fixation. Standardization of fixation and other tissue processing procedures takes this introduction of artifacts into account, by establishing what procedures introduce which kinds of artifacts. Researchers who know what types of artifacts to expect with each tissue type and processing technique can accurately interpret sections with artifacts, or choose techniques that minimize artifacts in areas of interest.
Choosing a fixative procedure
Fixation is usually the first stage in a multistep process to prepare a sample of biological material for microscopy or other analysis. Therefore, the choice of fixative and fixation protocol may depend on the additional processing steps and final analyses that are planned. For example, immunohistochemistry uses antibodies that bind to a specific protein target. Prolonged fixation can chemically mask these targets and prevent antibody binding. In these cases, a 'quick fix' method using cold formalin for around 24 hours is typically used. Methanol (100%) can also be used for quick fixation, and that time can vary depending on the biological material. For example, MDA-MB 231 human breast cancer cells can be fixed for only 3 minutes with cold methanol (-20 °C). For enzyme localization studies, the tissues should either be pre-fixed lightly only, or post-fixed after the enzyme activity product has formed.
Types of fixation and processes
There are generally three types of fixation processes depending on the sample that needs to be fixed.
Heat fixation
Heat fixation is used for the fixation of single cell organisms, most commonly bacteria and archaea. The organisms are typically mixed with water or physiological saline, which helps to evenly spread out the sample. Once diluted, the sample is spread onto a microscope slide. This diluted bacteria sample is commonly referred to as a smear after it is placed on a slide. After a smear has dried at room temperature, the slide is gripped by tongs or a clothespin and passed through the flame of a Bunsen burner several times to heat-kill and adhere the organism to the slide. A microincinerating device can also be used. After heating, samples are typically stained and then imaged using a microscope. Heat fixation generally preserves overall morphology but not internal structures. Heat denatures proteolytic enzymes and prevents autolysis. Heat fixation cannot be used in the capsular stain method, as heat fixation will shrink or destroy the capsule (glycocalyx), which then cannot be seen in stains.
Immersion
Immersion can be used to fix histological samples from a single cell to an entire organism. The sample of tissue is immersed in fixative solution for a set period of time. The fixative solution must have a volume at least 10 times greater than the volume of the tissue. In order for fixation to be successful, the fixative must diffuse throughout the entire tissue, so tissue size and density, as well as type of fixative must be considered. This is a common technique for cellular applications, but can be used for larger tissues as well. Using a larger sample means it must be immersed longer for the fixative to reach the deeper tissue.
Perfusion
Perfusion is the passage of fluid through the blood vessels or natural channels of an organ or organism. In tissue fixation via perfusion, the fixative is pumped into the circulatory system, usually through a needle inserted into the left ventricle. This can be done via ultrasound guidance, or by opening the chest cavity of the subject. The fixative is injected into the heart with the injection volume matching the typical cardiac output. Using the innate circulatory system, the fixative is distributed throughout the entire body, and the tissue doesn't die until it is fixed. When this method is used, a drainage port must also be added somewhere in the circulatory system to account for the addition of the volume of the fixative and buffer, this is typically done in the right atrium. The fixative is pumped into the circulatory system until it has replaced all of the blood. Using perfusion has the advantage of preserving morphology, but the disadvantages are that the subject dies and the volume of fixative needed for larger organisms is high, potentially raising costs. It is possible to decrease the necessary volume of fluid to perform a perfusion fixation by pinching off arteries that feed tissues not of interest to the research involved. Perfusion fixation is commonly used to image brain, lung, and kidney tissues in rodents, and is also used in performing autopsies in humans.
Chemical fixation
In both immersion and perfusion fixation processes, chemical fixatives are used to preserve structures in a state (both chemically and structurally) as close to living tissue as possible. This requires a chemical fixative.
Crosslinking fixatives – aldehydes
Crosslinking fixatives act by creating covalent chemical bonds between proteins in tissue. This anchors soluble proteins to the cytoskeleton, and lends additional rigidity to the tissue. Preservation of transient or fine cytoskeletal structure such as contractions during embryonic differentiation waves is best achieved by a pretreatment using microwaves before the addition of a cross linking fixative.
The most commonly used fixative in histology is formaldehyde. It is usually used as a 10% neutral buffered formalin (NBF), that is approx. 3.7%–4.0% formaldehyde in phosphate buffer, pH 7. Since formaldehyde is a gas at room temperature, formalin – formaldehyde gas dissolved in water (~37% w/v) – is used when making the former fixative. Formaldehyde fixes tissue by cross-linking the proteins, primarily the residues of the basic amino acid lysine. Its effects are reversible by excess water and it avoids formalin pigmentation. Paraformaldehyde is also commonly used and will depolymerize back to formalin when heated, also making it an effective fixative. Other benefits to paraformaldehyde include long term storage and good tissue penetration. It is particularly good for immunohistochemistry techniques. The formaldehyde vapor can also be used as a fixative for cell smears.
Another popular aldehyde for fixation is glutaraldehyde. It operates similarly to formaldehyde, causing the deformation of proteins' α-helices. However glutaraldehyde is a larger molecule than formaldehyde, and so permeates membranes more slowly. Consequently, glutaraldehyde fixation on thicker tissue samples can be difficult; this can be troubleshot by reducing the size of the tissue sample. One of the advantages of glutaraldehyde fixation is that it may offer a more rigid or tightly linked fixed product—its greater length and two aldehyde groups allow it to 'bridge' and link more distant pairs of protein molecules. It causes rapid and irreversible changes, is well suited for electron microscopy, works well at 4 °C, and gives the best overall cytoplasmic and nuclear detail. It is, however, not ideal for immunohistochemistry staining.
Some fixation protocols call for a combination of formaldehyde and glutaraldehyde so that their respective strengths complement one another.
These crosslinking fixatives, especially formaldehyde, tend to preserve the secondary structure of proteins and may also preserve most tertiary structure.
Precipitating fixatives – alcohols
Precipitating (or denaturing) fixatives act by reducing the solubility of protein molecules and often by disrupting the hydrophobic interactions that give many proteins their tertiary structure. The precipitation and aggregation of proteins is a very different process from the crosslinking that occurs with aldehyde fixatives.
The most common precipitating fixatives are ethanol and methanol. They are commonly used to fix frozen sections and smears. Acetone is also used and has been shown to produce better histological preservation than frozen sections when employed in the Acetone Methylbenzoate Xylene (AMEX) technique.
Protein-denaturing methanol, ethanol and acetone are rarely used alone for fixing blocks unless studying nucleic acids.
Acetic acid is a denaturant that is sometimes used in combination with the other precipitating fixatives, such as Davidson's AFA. The alcohols, by themselves, are known to cause considerable shrinkage and hardening of tissue during fixation while acetic acid alone is associated with tissue swelling; combining the two may result in better preservation of tissue morphology.
Oxidizing agents
The oxidizing fixatives can react with the side chains of proteins and other biomolecules, allowing the formation of crosslinks that stabilize tissue structure. However they cause extensive denaturation despite preserving fine cell structure and are used mainly as secondary fixatives.
Osmium tetroxide is often used as a secondary fixative when samples are prepared for electron microscopy. (It is not used for light microscopy as it penetrates thick sections of tissue very poorly.)
Potassium dichromate, chromic acid, and potassium permanganate all find use in certain specific histological preparations.
Mercurials
Mercurials such as B-5 and Zenker's fixative have an unknown mechanism that increases staining brightness and give excellent nuclear detail. Despite being fast, mercurials penetrate poorly and produce tissue shrinkage. Their best application is for fixation of hematopoietic and reticuloendothelial tissues. Also note that since they contain mercury, care must be taken with disposal.
Picrates
Picrates penetrate tissue well to react with histones and basic proteins, forming crystalline picrates with amino acids and precipitating all proteins. It is a good fixative for connective tissue, preserves glycogen well, and extracts lipids to give superior results to formaldehyde in immunostaining of biogenic and polypeptide hormones. However, it causes a loss of basophilia unless the specimen is thoroughly washed following fixation.
HOPE fixative
Hepes-glutamic acid buffer-mediated organic solvent protection effect (HOPE) gives formalin-like morphology, excellent preservation of protein antigens for immunohistochemistry and enzyme histochemistry, good RNA and DNA yields and absence of crosslinking proteins.
See also
Karnovsky fixative
References
External links
Fixing specimens for making permanent slides
Fixation strategies and formulations for immunohistochemical staining
Pathology
Histology
Biotechnology | Fixation (histology) | [
"Chemistry",
"Biology"
] | 2,713 | [
"Pathology",
"Biotechnology",
"Histology",
"nan",
"Microscopy"
] |
8,157,026 | https://en.wikipedia.org/wiki/Tarjan%27s%20algorithm | Tarjan's algorithm may refer to one of several algorithms attributed to Robert Tarjan, including:
Tarjan's strongly connected components algorithm
Tarjan's off-line lowest common ancestors algorithm
Tarjan's algorithm for finding bridges in an undirected graph
Tarjan's algorithm for finding simple circuits in a directed graph
See also
List of algorithms
References
Algorithms
Mathematics-related lists | Tarjan's algorithm | [
"Mathematics"
] | 79 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
15,642,481 | https://en.wikipedia.org/wiki/Furaneol | Furaneol, or strawberry furanone, is an organic compound used in the flavor and perfume industry. It is formally a derivative of furan. It is a white or colorless solid that is soluble in water and organic solvents.
Odor and occurrence
Although malodorous at high concentrations, it exhibits a sweet strawberry aroma when dilute. It is found in strawberries and a variety of other fruits and it is partly responsible for the smell of fresh pineapple.
It is also an important component of the odours of buckwheat, and tomato. Furaneol accumulation during ripening has been observed in strawberries and can reach a high concentration of 37 μg/g.
Furaneol acetate
The acetate ester of furaneol, also known as caramel acetate and strawberry acetate, is also popular with flavorists to achieve a fatty toffee taste and it is used in traces in perfumery to add a sweet gourmand note.
Stereoisomerism
Furaneol has two enantiomers, (R)-(+)-furaneol and (S)-(−)-furaneol. The (R)-form is mainly responsible for the smell.
Biosynthesis
It is one of several products from the dehydration of glucose. Its immediate biosynthetic precursor is the glucoside, derived from dehydration of sucrose.
References
Flavors
Enones
Enols
Dihydrofurans
Sweet-smelling chemicals | Furaneol | [
"Chemistry"
] | 313 | [
"Enols",
"Functional groups"
] |
15,647,049 | https://en.wikipedia.org/wiki/Ramaria%20formosa | Ramaria formosa, commonly known as the pinkish coral mushroom, salmon coral, beautiful clavaria, handsome clavaria, yellow-tipped- or pink coral fungus, is a coral fungus found in Europe. Similar forms collected in North America are considered to represent a different species.
It is a pinkish, much-branched fungus with a coral-like shape. It is widely held to be mildly poisonous if consumed, giving rise to acute gastrointestinal symptoms of nausea, vomiting, diarrhea and colicky pain.
Taxonomy
The fungus was initially described by Christian Hendrik Persoon in 1797 as Clavaria formosa. In 1821, Elias Magnus Fries sanctioned the genus name Clavaria, and treated Ramaria as a section of Clavaria. It was placed in its current genus by French mycologist Lucien Quélet in 1888. Synonyms have resulted from transfers of the fungus to the now obsolete genera Merisma by Harald Othmar Lenz in 1831, and to Corallium by Gotthold Hahn in 1883.
The generic name is derived from Latin rāmus 'branch', while the specific epithet comes from the Latin formōsus 'beautiful'. Common names include salmon coral, beautiful clavaria, handsome clavaria, yellow-tipped- or pink coral fungus. There is some confusion over its classification as there is evidence the binomial name has been applied loosely to any coral fungus fitting the description, and thus the collections from North America are now considered to be a different species.
Description
The fruit body of Ramaria formosa is a many-branched, coral-like structure, the yellow-tipped pinkish branches arising from a thick base. The flesh is white, with pink in the middle, or pale orange. It may turn wine-coloured or blackish when bruised. Old specimens fade, so the original colour is hard to distinguish. The smell is unpleasant and the taste bitter.
The spores have a cylindrical to elliptical shape, and measure 8–15 by 4–6 μm. The spore surface features small warts that are arranged in confluent lines. Basidia (spore-bearing cells) are club-shaped, measuring 40–60 by 7–10 μm. Clamp connections are present in the hyphae.
Similar species
There are several other Ramaria species with yellow-tipped, salmon-coloured branches, including R. leptoformosa, R. neoformosa, R. raveneliana and R. rubricarnata. These are distinguished from R. formosa most reliably using microscopic characteristics. One guide recommends that all old coral fungi should be avoided for consumption.
Distribution and habitat
Fruiting in autumn, Ramaria formosa is associated with beech and is found in Europe. In Cyprus, the fungus is thought to form mycorrhizal associations with golden oak (Quercus alnifolia).
Toxicity
Consumption of the fungus results in acute gastrointestinal symptoms of nausea, vomiting, colicky abdominal pain and diarrhea. The toxins responsible are unknown to date. It has been reported as edible if the acrid tips are removed.
References
External links
Gomphaceae
Poisonous fungi
Fungi described in 1797
Fungi of Asia
Fungi of Europe
Fungi of North America
Fungus species | Ramaria formosa | [
"Biology",
"Environmental_science"
] | 682 | [
"Poisonous fungi",
"Fungi",
"Toxicology",
"Fungus species"
] |
15,648,386 | https://en.wikipedia.org/wiki/Fluoromethylidyne | Fluoromethylidyne is not a stable chemical species but a metastable radical containing one highly reactive carbon atom bound to one fluorine atom with the formula CF. The carbon atom has a lone-pair and a single unpaired (radical) electron in the ground state.
Ground-state fluoromethylidyne radicals can be produced by the ultraviolet photodissociation of dibromodifluoromethane at 248 nanometer wavelength.
It readily and irreversibly dimerises to difluoroacetylene, also known as difluoroethyne, perfluoroacetylene, or perfluoroethyne. Under certain conditions it can hexamerise to hexafluorobenzene.
See also
Carbyne
References
Free radicals
Reactive intermediates | Fluoromethylidyne | [
"Chemistry",
"Biology"
] | 176 | [
"Free radicals",
"Organic compounds",
"Senescence",
"Biomolecules",
"Physical organic chemistry",
"Reactive intermediates",
"Organic compound stubs",
"Organic chemistry stubs"
] |
462,138 | https://en.wikipedia.org/wiki/Particle%20in%20a%20one-dimensional%20lattice | In quantum mechanics, the particle in a one-dimensional lattice is a problem that occurs in the model of a periodic crystal lattice. The potential is caused by ions in the periodic structure of the crystal creating an electromagnetic field so electrons are subject to a regular potential inside the lattice. It is a generalization of the free electron model, which assumes zero potential inside the lattice.
Problem definition
When talking about solid materials, the discussion is mainly around crystals – periodic lattices. Here we will discuss a 1D lattice of positive ions. Assuming the spacing between two ions is a, the potential in the lattice is a periodic succession of potential wells centred on the ions.
The mathematical representation of the potential is a periodic function with period a. According to Bloch's theorem, the wavefunction solution of the Schrödinger equation when the potential is periodic can be written as
ψ(x) = e^(ikx) u(x),
where u(x) is a periodic function which satisfies u(x + a) = u(x). It is the Bloch factor with Floquet exponent k which gives rise to the band structure of the energy spectrum of the Schrödinger equation with a periodic potential like the Kronig–Penney potential or a cosine function, as was shown in 1928 by Strutt. The solutions can be given with the help of the Mathieu functions.
When nearing the edges of the lattice, there are problems with the boundary condition. Therefore, we can represent the ion lattice as a ring following the Born–von Karman boundary conditions. If L is the length of the lattice, so that L ≫ a, then the number of ions in the lattice is so big that, when considering one ion, its surroundings are almost linear, and the wavefunction of the electron is unchanged. So now, instead of two boundary conditions, we get one circular boundary condition:
ψ(0) = ψ(L).
If N is the number of ions in the lattice, then we have the relation aN = L. Replacing this in the boundary condition and applying Bloch's theorem results in a quantization of k:
e^(ikL) = 1, so k must be of the form k = 2πn/L = (2π/a)(n/N), with n an integer.
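A minimal numerical sketch of this quantization (Python; the lattice spacing and ion count below are arbitrary assumed values) lists the allowed values of k on the ring:

```python
import numpy as np

a = 1.0          # lattice spacing (arbitrary units, assumed)
N = 8            # number of ions on the ring (assumed small so the list is readable)
L = N * a        # circumference of the ring

# The Born-von Karman condition psi(x) = psi(x + L) together with the Bloch form
# e^{ikx} u(x) forces e^{ikL} = 1, i.e. k = 2*pi*n / L for integer n.
n = np.arange(-N // 2, N // 2)
k = 2 * np.pi * n / L
print(k)         # N allowed k-values, evenly spaced over one Brillouin zone of width 2*pi/a
```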
Kronig–Penney model
The Kronig–Penney model (named after Ralph Kronig and William Penney) is a simple, idealized quantum-mechanical system that consists of an infinite periodic array of rectangular potential barriers.
The potential function is approximated by a rectangular periodic potential: a well of depth V₀ and width b around each ion, with the potential equal to zero over the remaining width a − b of each period.
Using Bloch's theorem, we only need to find a solution for a single period, make sure it is continuous and smooth, and make sure the function u(x) is also continuous and smooth.
Considering a single period of the potential:
We have two regions here. We will solve for each independently:
Let E be an energy value above the well (E>0)
For 0 < x < a − b (where the potential vanishes): ψ(x) = A e^(iαx) + A′ e^(−iαx), with α² = 2mE/ħ².
For −b < x < 0 (inside the well of depth V₀): ψ(x) = B e^(iβx) + B′ e^(−iβx), with β² = 2m(E + V₀)/ħ².
To find u(x) in each region, we need to manipulate the electron's wavefunction: u(x) = e^(−ikx) ψ(x) = e^(−ikx) (A e^(iαx) + A′ e^(−iαx)) in the first region.
And in the same manner: u(x) = e^(−ikx) (B e^(iβx) + B′ e^(−iβx)) in the second region.
To complete the solution we need to make sure the probability function is continuous and smooth, i.e.: ψ(0⁻) = ψ(0⁺) and ψ′(0⁻) = ψ′(0⁺).
And that u(x) and u′(x) are periodic: u(−b) = u(a − b) and u′(−b) = u′(a − b).
These conditions yield a homogeneous system of four linear equations in the four coefficients A, A′, B, B′.
For us to have a non-trivial solution, the determinant of the coefficient matrix must be 0. This leads to a transcendental expression relating the crystal momentum k to the energy E (through α and β).
To further simplify the expression, we perform the following approximations: the wells are made very narrow and very deep, b → 0 and V₀ → ∞, with the product V₀b held constant.
The expression then reduces to a compact relation between cos(ka) and αa involving the single dimensionless strength parameter P = m V₀ b a / ħ².
For energy values inside the well (E < 0), the solution in the region between the wells becomes a combination of growing and decaying real exponentials,
with κ² = −2mE/ħ² and β² = 2m(E + V₀)/ħ².
Following the same approximations as above (b → 0, V₀ → ∞ with V₀b constant), we arrive at an analogous relation in which the trigonometric functions of αa are replaced by hyperbolic functions of κa,
with the same formula for P as in the previous case.
Band gaps in the Kronig–Penney model
In the previous paragraph, the only variables not determined by the parameters of the physical system are the energy E and the crystal momentum k. By picking a value for E, one can compute the right-hand side of the dispersion relation, and then compute k by taking the arccos of both sides. Thus, the expression gives rise to the dispersion relation.
The right-hand side of the last expression above can sometimes be greater than 1 or less than −1, in which case there is no value of k that can make the equation true. Since cos(ka) must lie between −1 and 1, that means there are certain values of E for which there are no eigenfunctions of the Schrödinger equation. These values constitute the band gap.
Thus, the Kronig–Penney model is one of the simplest periodic potentials to exhibit a band gap.
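As a concrete illustration, the band gaps can be located numerically. The sketch below (Python; it uses the textbook delta-limit form of the Kronig–Penney relation for repulsive barriers, cos(ka) = cos(αa) + P·sin(αa)/(αa), and the parameter values are arbitrary assumptions) scans energies and flags those for which the right-hand side leaves the interval [−1, 1], i.e. for which no real k exists:

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg
eV   = 1.602176634e-19   # J

a = 5.0e-10   # lattice period in metres (assumed value)
P = 3.0       # dimensionless barrier-strength parameter (assumed value)

def rhs(E):
    """Right-hand side of cos(ka) = cos(alpha*a) + P*sin(alpha*a)/(alpha*a), with E in joules."""
    alpha = np.sqrt(2.0 * m_e * E) / hbar
    x = alpha * a
    return np.cos(x) + P * np.sin(x) / x

energies = np.linspace(0.01, 20.0, 20000) * eV    # scan 0.01 eV to 20 eV
allowed = np.abs(rhs(energies)) <= 1.0            # True inside an allowed energy band

# Band edges are where 'allowed' switches between True and False.
switches = np.flatnonzero(np.diff(allowed.astype(int)))
print("band edges (eV):", np.round(energies[switches] / eV, 3))
```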
Kronig–Penney model: alternative solution
An alternative treatment of a similar problem is given here. We have a delta periodic potential (a Dirac comb):
V(x) = A · Σ_n δ(x − na).
A is some constant, and a is the lattice constant (the spacing between each site). Since this potential is periodic, we could expand it as a Fourier series:
V(x) = Σ_K V_K e^(iKx),
where the Fourier coefficient over one period is V_K = A/a.
The wave-function, using Bloch's theorem, is equal to ψ_k(x) = e^(ikx) u_k(x), where u_k(x) is a function that is periodic in the lattice, which means that we can expand it as a Fourier series as well:
u_k(x) = Σ_K a_K e^(iKx).
Thus the wave function is:
ψ_k(x) = Σ_K a_K e^(i(k+K)x).
Putting this into the Schrödinger equation, we get:
or rather:
Now we recognize that:
Plug this into the Schrödinger equation:
Solving this for we get:
We sum this last equation over all values of to arrive at:
Or:
Conveniently, cancels out and we get:
Or:
To save ourselves some unnecessary notational effort we define a new variable:
and finally our expression is:
Now, K is a reciprocal lattice vector, which means that a sum over K is actually a sum over integer multiples of 2π/a:
We can juggle this expression a little bit to make it more suggestive (use partial fraction decomposition):
If we use a nice identity of a sum of the cotangent function (Equation 18) which says:
and plug it into our expression we get to:
We use the sum of the two cotangents and then the product of the two sines (which appears in the formula for that sum) to arrive at:
This equation shows the relation between the energy and the wave-vector k, and, as you can see, since the left-hand side of the equation can only range from −1 to 1, there are limits on the values that the energy can take; that is, at some ranges of values of the energy there is no solution according to this equation, and thus the system cannot have those energies: energy gaps. These are the so-called band gaps, which can be shown to exist in any shape of periodic potential (not just delta or square barriers).
For a different and detailed calculation of the gap formula (i.e. for the gap between bands) and the level splitting of eigenvalues of the one-dimensional Schrödinger equation see Müller-Kirsten. Corresponding results for the cosine potential (Mathieu equation) are also given in detail in this reference.
Finite lattice
In some cases, the Schrödinger equation can be solved analytically on a one-dimensional lattice of finite length using the theory of periodic differential equations. The length of the lattice is assumed to be L = Na, where a is the potential period and the number of periods N is a positive integer. The two ends of the lattice are placed at positions fixed by a choice of termination point, and the wavefunction vanishes outside this interval.
The eigenstates of the finite system can be found in terms of the Bloch states of an infinite system with the same periodic potential. If there is a band gap between two consecutive energy bands of the infinite system, there is a sharp distinction between two types of states in the finite lattice. For each energy band of the infinite system, there are bulk states whose energies depend on the length L but not on the termination. These states are standing waves constructed as a superposition of two Bloch states with momenta k and −k, where k is chosen so that the wavefunction vanishes at the boundaries. The energies of these states match the energy bands of the infinite system.
For each band gap, there is one additional state. The energies of these states depend on the point of termination but not on the length L. The energy of such a state can lie either at the band edge or within the band gap. If the energy is within the band gap, the state is a surface state localized at one end of the lattice, but if the energy is at the band edge, the state is delocalized across the lattice.
See also
Empty lattice approximation
Nearly free electron model
Crystal structure
References
External links
"The Kronig–Penney Model" by Michael Croucher, an interactive calculation of 1d periodic potential band structure using Mathematica, from The Wolfram Demonstrations Project.
Condensed matter physics
Electronics concepts
Quantum models | Particle in a one-dimensional lattice | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,712 | [
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Quantum models",
"Condensed matter physics",
"Matter"
] |
462,396 | https://en.wikipedia.org/wiki/Baryogenesis | In physical cosmology, baryogenesis (also known as baryosynthesis) is the physical process that is hypothesized to have taken place during the early universe to produce baryonic asymmetry, i.e. the imbalance of matter (baryons) and antimatter (antibaryons) in the observed universe.
One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density. Since it is assumed in cosmology that the particles we see were created using the same physics we measure today, it would normally be expected that the overall baryon number should be zero, as matter and antimatter should have been created in equal amounts. A number of theoretical mechanisms are proposed to account for this discrepancy, namely identifying conditions that favour symmetry breaking and the creation of normal matter (as opposed to antimatter). This imbalance has to be exceptionally small, on the order of one extra particle per few billion matter–antimatter pairs, a small fraction of a second after the Big Bang. After most of the matter and antimatter was annihilated, what remained was all the baryonic matter in the current universe, along with a much greater number of photons. Experiments reported in 2010 at Fermilab, however, seem to show that this imbalance is much greater than previously assumed. These experiments involved a series of particle collisions and found that the amount of generated matter was approximately 1% larger than the amount of generated antimatter. The reason for this discrepancy is not yet known.
Most grand unified theories explicitly break the baryon number symmetry, which would account for this discrepancy, typically invoking reactions mediated by very massive X bosons or massive Higgs bosons (). The rate at which these events occur is governed largely by the mass of the intermediate or particles, so by assuming these reactions are responsible for the majority of the baryon number seen today, a maximum mass can be calculated above which the rate would be too slow to explain the presence of matter today. These estimates predict that a large volume of material will occasionally exhibit a spontaneous proton decay, which has not been observed. Therefore, the imbalance between matter and antimatter remains a mystery.
Baryogenesis theories are based on different descriptions of the interaction between fundamental particles. Two main theories are electroweak baryogenesis (Standard Model), which would occur during the electroweak phase transition, and the GUT baryogenesis, which would occur during or shortly after the grand unification epoch. Quantum field theory and statistical physics are used to describe such possible mechanisms.
Baryogenesis is followed by primordial nucleosynthesis, when atomic nuclei began to form.
Background
The majority of ordinary matter in the universe is found in atomic nuclei, which are made of neutrons and protons. These nucleons are made up of smaller particles called quarks, and antimatter equivalents for each were predicted to exist by the Dirac equation in 1928. Since then, each kind of antiquark has been experimentally verified. Hypotheses investigating the first few instants of the universe predict a composition with an almost equal number of quarks and antiquarks. Once the universe expanded and cooled below the quark–hadron transition temperature (on the order of 10¹² kelvin), quarks combined into normal matter and antimatter and proceeded to annihilate up to the small initial asymmetry of about one part in five billion, leaving the matter around us. Free and separate individual quarks and antiquarks have never been observed in experiments—quarks and antiquarks are always found in groups of three (baryons), or bound in quark–antiquark pairs (mesons). Likewise, there is no experimental evidence that there are any significant concentrations of antimatter in the observable universe.
There are two main interpretations for this disparity: either the universe began with a small preference for matter (total baryonic number of the universe different from zero), or the universe was originally perfectly symmetric, but somehow a set of phenomena contributed to a small imbalance in favour of matter over time. The second point of view is preferred, although there is no clear experimental evidence indicating either of them to be the correct one.
GUT baryogenesis under Sakharov conditions
In 1967, Andrei Sakharov proposed a set of three necessary conditions that a baryon-generating interaction must satisfy to produce matter and antimatter at different rates. These conditions were inspired by the recent discoveries of the cosmic microwave background and CP-violation in the neutral kaon system. The three necessary "Sakharov conditions" are:
Baryon number violation.
C-symmetry and CP-symmetry violation.
Interactions out of thermal equilibrium.
Baryon number violation is a necessary condition to produce an excess of baryons over anti-baryons. But C-symmetry violation is also needed so that the interactions which produce more baryons than anti-baryons will not be counterbalanced by interactions which produce more anti-baryons than baryons. CP-symmetry violation is similarly required because otherwise equal numbers of left-handed baryons and right-handed anti-baryons would be produced, as well as equal numbers of left-handed anti-baryons and right-handed baryons. Finally, the interactions must be out of thermal equilibrium, since otherwise CPT symmetry would assure compensation between processes increasing and decreasing the baryon number.
Currently, there is no experimental evidence of particle interactions where the conservation of baryon number is broken perturbatively: this would appear to suggest that all observed particle reactions have equal baryon number before and after. Mathematically, the commutator of the baryon number quantum operator with the (perturbative) Standard Model Hamiltonian is zero: [B, H] = BH − HB = 0. However, the Standard Model is known to violate the conservation of baryon number only non-perturbatively: a global U(1) anomaly. To account for baryon violation in baryogenesis, such events (including proton decay) can occur in Grand Unification Theories (GUTs) and supersymmetric (SUSY) models via hypothetical massive bosons such as the X boson.
The second condition – violation of CP-symmetry – was discovered in 1964 (direct CP-violation, that is violation of CP-symmetry in a decay process, was discovered later, in 1999). Due to CPT symmetry, violation of CP-symmetry demands violation of time inversion symmetry, or T-symmetry.
In the out-of-equilibrium decay scenario, the last condition states that the rate of a reaction which generates baryon-asymmetry must be less than the rate of expansion of the universe. In this situation the particles and their corresponding antiparticles do not achieve thermal equilibrium due to rapid expansion decreasing the occurrence of pair-annihilation.
In the Standard Model
The Standard Model can incorporate baryogenesis, though the amount of net baryons (and leptons) thus created may not be sufficient to account for the present baryon asymmetry. There is a required one excess quark per billion quark-antiquark pairs in the early universe in order to provide all the observed matter in the universe. This insufficiency has not yet been explained, theoretically or otherwise.
Baryogenesis within the Standard Model requires the electroweak symmetry breaking to be a first-order cosmological phase transition, since otherwise sphalerons wipe off any baryon asymmetry that happened up to the phase transition. Beyond this, the remaining amount of baryon non-conserving interactions is negligible.
The phase transition domain wall breaks the P-symmetry spontaneously, allowing for CP-symmetry violating interactions to break C-symmetry on both its sides. Quarks tend to accumulate on the broken phase side of the domain wall, while anti-quarks tend to accumulate on its unbroken phase side. Due to CP-symmetry violating electroweak interactions, some amplitudes involving quarks are not equal to the corresponding amplitudes involving anti-quarks, but rather have opposite phase (see CKM matrix and Kaon); since time reversal takes an amplitude to its complex conjugate, CPT-symmetry is conserved in this entire process.
Though some of their amplitudes have opposite phases, both quarks and anti-quarks have positive energy, and hence acquire the same phase as they move in space-time. This phase also depends on their mass, which is identical but depends both on flavor and on the Higgs VEV which changes along the domain wall. Thus certain sums of amplitudes for quarks have different absolute values compared to those of anti-quarks. In all, quarks and anti-quarks may have different reflection and transmission probabilities through the domain wall, and it turns out that more quarks coming from the unbroken phase are transmitted compared to anti-quarks.
Thus there is a net baryonic flux through the domain wall. Due to sphaleron transitions, which are abundant in the unbroken phase, the net anti-baryonic content of the unbroken phase is wiped off as anti-baryons are transformed into leptons. However, sphalerons are rare enough in the broken phase as not to wipe off the excess of baryons there. In total, there is net creation of baryons (as well as leptons).
In this scenario, non-perturbative electroweak interactions (i.e. the sphaleron) are responsible for the B-violation, the perturbative electroweak Lagrangian is responsible for the CP-violation, and the domain wall is responsible for the lack of thermal equilibrium and the P-violation; together with the CP-violation it also creates a C-violation in each of its sides.
Matter content in the universe
The central question of baryogenesis is what causes the preference for matter over antimatter in the universe, as well as the magnitude of this asymmetry. An important quantifier is the asymmetry parameter, given by
η = (n_B − n_B̄) / n_γ,
where n_B and n_B̄ refer to the number densities of baryons and antibaryons respectively and n_γ is the number density of cosmic background radiation photons.
According to the Big Bang model, matter decoupled from the cosmic background radiation (CBR) at a temperature of roughly 3,000 kelvin, corresponding to an average kinetic energy of a few tenths of an electronvolt. After the decoupling, the total number of CBR photons remains constant. Therefore, due to space-time expansion, the photon density decreases. The photon density at equilibrium temperature T is
n_γ = (2 ζ(3) / π²) (k_B T / (ħ c))³,
with k_B as the Boltzmann constant, ħ as the Planck constant divided by 2π, c as the speed of light in vacuum, and ζ(3) as Apéry's constant. At the current CBR photon temperature of 2.725 K, this corresponds to a photon density of around 411 CBR photons per cubic centimeter.
Therefore, the asymmetry parameter η, as defined above, is not the "best" parameter. Instead, the preferred asymmetry parameter uses the entropy density s,
η_s = (n_B − n_B̄) / s,
because the entropy density of the universe remained reasonably constant throughout most of its evolution. The entropy density is (in natural units)
s = (p + ρ) / T = (2π²/45) g_*(T) T³,
with p and ρ as the pressure and density from the energy density tensor, and g_* as the effective number of degrees of freedom for "massless" particles at temperature T (in so far as mc² ≪ k_B T holds),
g_*(T) = Σ_bosons g_i (T_i/T)³ + (7/8) Σ_fermions g_j (T_j/T)³,
for bosons and fermions with g_i and g_j degrees of freedom at temperatures T_i and T_j respectively. At the present epoch, s = 7.04 n_γ.
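The present-day numbers quoted in this section can be checked directly. The sketch below (Python; the constants are standard CODATA values and the baryon-to-photon ratio used at the end is an assumed representative figure, not a value taken from this article) evaluates the photon number density at the current CMB temperature and the corresponding baryon density:

```python
import math

k_B   = 1.380649e-23       # J/K
hbar  = 1.054571817e-34    # J*s
c     = 2.99792458e8       # m/s
zeta3 = 1.2020569031595942 # Apery's constant, zeta(3)

def photon_number_density(T):
    """n_gamma = (2*zeta(3)/pi^2) * (k_B*T / (hbar*c))^3, in photons per cubic metre."""
    return (2.0 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3

T_cmb = 2.725                      # K, present CMB temperature
n_gamma = photon_number_density(T_cmb)
print(f"n_gamma = {n_gamma:.3e} m^-3 = {n_gamma / 1e6:.0f} cm^-3")   # ~411 photons per cm^3

eta = 6e-10                        # assumed representative baryon-to-photon ratio
print(f"n_B ~ eta * n_gamma = {eta * n_gamma / 1e6:.2e} baryons per cm^3")
```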
Ongoing research efforts
Ties to dark matter
A possible explanation for the cause of baryogenesis is the decay reaction of B-mesogenesis. This phenomenon suggests that in the early universe, particles such as the B-meson decay into a visible Standard Model baryon as well as a dark antibaryon that is invisible to current observation techniques. The process begins by assuming a massive, long-lived scalar particle that exists in the early universe before Big Bang nucleosynthesis. The exact behavior of this particle is as yet unknown, but it is assumed to decay into b quarks and antiquarks in conditions outside of thermal equilibrium, thus satisfying one Sakharov condition. These b quarks form into B-mesons, which immediately hadronize into oscillating CP-violating states, thus satisfying another Sakharov condition. These oscillating mesons then decay into the baryon–dark antibaryon pair previously mentioned, together with any extra light meson daughters required to satisfy other conservation laws in this particle decay. If this process occurs fast enough, the CP-violation effect gets carried over to the dark matter sector. However, this contradicts (or at least challenges) the last Sakharov condition, since the expected matter preference in the visible universe is balanced by a new antimatter preference in the dark matter of the universe and total baryon number is conserved.
B-mesogenesis results in missing energy between the initial and final states of the decay process, which, if recorded, could provide experimental evidence for dark matter. Particle laboratories equipped with B-meson factories such as Belle and BaBar are extremely sensitive to B-meson decays involving missing energy and currently have the capability to detect the channel. The LHC is also capable of searching for this interaction since it produces several orders of magnitude more B-mesons than Belle or BaBar, but there are more challenges from the decreased control over B-meson initial energy in the accelerator.
See also
Affleck–Dine mechanism
Anthropic principle
Big Bang
Chronology of the universe
CP violation
Leptogenesis (physics)
Lepton
References
Particle physics
Baryons
Unsolved problems in physics
Big Bang
Concepts in astronomy
Antimatter
Physical cosmological concepts | Baryogenesis | [
"Physics",
"Astronomy"
] | 2,892 | [
"Physical cosmological concepts",
"Antimatter",
"Cosmogony",
"Concepts in astrophysics",
"Concepts in astronomy",
"Big Bang",
"Unsolved problems in physics",
"Particle physics",
"Matter"
] |
462,421 | https://en.wikipedia.org/wiki/Oxygen%20toxicity | Oxygen toxicity is a condition resulting from the harmful effects of breathing molecular oxygen () at increased partial pressures. Severe cases can result in cell damage and death, with effects most often seen in the central nervous system, lungs, and eyes. Historically, the central nervous system condition was called the Paul Bert effect, and the pulmonary condition the Lorrain Smith effect, after the researchers who pioneered the discoveries and descriptions in the late 19th century. Oxygen toxicity is a concern for underwater divers, those on high concentrations of supplemental oxygen, and those undergoing hyperbaric oxygen therapy.
The result of breathing increased partial pressures of oxygen is hyperoxia, an excess of oxygen in body tissues. The body is affected in different ways depending on the type of exposure. Central nervous system toxicity is caused by short exposure to high partial pressures of oxygen at greater than atmospheric pressure. Pulmonary and ocular toxicity result from longer exposure to increased oxygen levels at normal pressure. Symptoms may include disorientation, breathing problems, and vision changes such as myopia. Prolonged exposure to above-normal oxygen partial pressures, or shorter exposures to very high partial pressures, can cause oxidative damage to cell membranes, collapse of the alveoli in the lungs, retinal detachment, and seizures. Oxygen toxicity is managed by reducing the exposure to increased oxygen levels. Studies show that, in the long term, a robust recovery from most types of oxygen toxicity is possible.
Protocols for avoidance of the effects of hyperoxia exist in fields where oxygen is breathed at higher-than-normal partial pressures, including underwater diving using compressed breathing gases, hyperbaric medicine, neonatal care and human spaceflight. These protocols have resulted in the increasing rarity of seizures due to oxygen toxicity, with pulmonary and ocular damage being largely confined to the problems of managing premature infants.
In recent years, oxygen has become available for recreational use in oxygen bars. The US Food and Drug Administration has warned those who have conditions such as heart or lung disease not to use oxygen bars. Scuba divers use breathing gases containing up to 100% oxygen, and should have specific training in using such gases.
Classification
The effects of oxygen toxicity may be classified by the organs affected, producing three principal forms:
Central nervous system, characterised by convulsions followed by unconsciousness, occurring under hyperbaric conditions;
Pulmonary (lungs), characterised by difficulty in breathing and pain within the chest, occurring when breathing increased pressures of oxygen for extended periods;
Ocular (retinopathic conditions), characterised by alterations to the eyes, occurring when breathing increased pressures of oxygen for extended periods.
Central nervous system oxygen toxicity can cause seizures, brief periods of rigidity followed by convulsions and unconsciousness, and is of concern to divers who encounter greater than atmospheric pressures. Pulmonary oxygen toxicity results in damage to the lungs, causing pain and difficulty in breathing. Oxidative damage to the eye may lead to myopia or partial detachment of the retina. Pulmonary and ocular damage are most likely to occur when supplemental oxygen is administered as part of a treatment, particularly to newborn infants, but are also a concern during hyperbaric oxygen therapy.
Oxidative damage may occur in any cell in the body but the effects on the three most susceptible organs will be the primary concern. It may also be implicated in damage to red blood cells (haemolysis), the liver, heart, endocrine glands (adrenal glands, gonads, and thyroid), or kidneys, and general damage to cells.
In unusual circumstances, effects on other tissues may be observed: it is suspected that during spaceflight, high oxygen concentrations may contribute to bone damage. Hyperoxia can also indirectly cause carbon dioxide narcosis in patients with lung ailments such as chronic obstructive pulmonary disease or with central respiratory depression. Hyperventilation of atmospheric air at atmospheric pressures does not cause oxygen toxicity, because sea-level air has a partial pressure of oxygen of about 0.21 bar (21 kPa), well below the levels at which toxicity occurs.
Signs and symptoms
Central nervous system
Central nervous system oxygen toxicity manifests as symptoms such as visual changes (especially tunnel vision), ringing in the ears (tinnitus), nausea, twitching (especially of the face), behavioural changes (irritability, anxiety, confusion), and dizziness. This may be followed by a tonic–clonic seizure consisting of two phases: intense muscle contraction occurs for several seconds (tonic phase); followed by rapid spasms of alternate muscle relaxation and contraction producing convulsive jerking (clonic phase). The seizure ends with a period of unconsciousness (the postictal state). The onset of seizure depends upon the partial pressure of oxygen in the breathing gas and exposure duration. However, exposure time before onset is unpredictable, as tests have shown a wide variation, both amongst individuals, and in the same individual from day to day. In addition, many external factors, such as underwater immersion, exposure to cold, and exercise will decrease the time to onset of central nervous system symptoms. Decrease of tolerance is closely linked to retention of carbon dioxide. Other factors, such as darkness and caffeine, increase tolerance in test animals, but these effects have not been proven in humans.
Lungs
Exposure to oxygen pressures greater than 0.5 bar, such as during diving, oxygen prebreathing prior to flight, or hyperbaric therapy, is associated with the onset of pulmonary toxicity symptoms, also referred to as chronic oxygen toxicity. Pulmonary toxicity symptoms result from an inflammation that starts in the airways leading to the lungs and then spreads into the lungs (tracheobronchial tree). The symptoms appear in the upper chest region (substernal and carinal regions). This begins as a mild tickle on inhalation and progresses to frequent coughing. If breathing increased partial pressures of oxygen continues, subjects experience a mild burning on inhalation along with uncontrollable coughing and occasional shortness of breath (dyspnea). Physical findings related to pulmonary toxicity have included bubbling sounds heard through a stethoscope (bubbling rales), fever, and increased blood flow to the lining of the nose (hyperaemia of the nasal mucosa). Initially, there is an exudative phase that results in pulmonary edema. An increase in the width of the interstitial space may be seen in histological examination. X-rays of the lungs show little change in the short term, but extended exposure leads to increasing diffuse shadowing throughout both lungs. Pulmonary function measurements are reduced, as indicated by a reduction in the amount of air that the lungs can hold (vital capacity) and changes in expiratory function and lung elasticity. Lung diffusing capacity decreases, leading eventually to hypoxaemia. Tests in animals have indicated a variation in tolerance similar to that found in central nervous system toxicity, as well as significant variations between species. When the exposure to oxygen above 0.5 bar is intermittent, it permits the lungs to recover and delays the onset of toxicity. A similar progression is common to all mammalian species. If death from hypoxaemia has not occurred after exposure for several days, a proliferative phase occurs, developing a chronic thickening of the alveolar membrane and a decrement in lung diffusing capacity. These changes are mostly reversible on return to normoxia, but the time required for complete recovery is not known.
Eyes
In premature babies, signs of damage to the eye (retinopathy of prematurity, or ROP) are observed via an ophthalmoscope as a demarcation between the vascularised and non-vascularised regions of an infant's retina. The degree of this demarcation is used to designate four stages: (I) the demarcation is a line; (II) the demarcation becomes a ridge; (III) growth of new blood vessels occurs around the ridge; (IV) the retina begins to detach from the inner wall of the eye (choroid).
Causes
Oxygen toxicity is caused by hyperoxia, exposure to oxygen at partial pressures greater than those to which the body is normally exposed. This occurs in three principal settings: underwater diving, hyperbaric oxygen therapy, and the provision of supplemental oxygen in critical care, in the long-term treatment of chronic disorders, and particularly to premature infants. In each case, the risk factors are markedly different.
Under normal or reduced ambient pressures, the effects of hyperoxia are initially restricted to the lungs, which are directly exposed, but after prolonged exposure or at hyperbaric pressures, other organs can be at risk. At normal partial pressures of inhaled oxygen, most of the oxygen transported in the blood is carried by haemoglobin, but the amount of dissolved oxygen will increase at partial pressures of arterial oxygen exceeding , when oxyhemoglobin saturation is nearly complete. At higher concentrations the effects of hyperoxia are more widespread in the body tissues beyond the lungs.
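A rough calculation makes the haemoglobin-versus-dissolved-oxygen point concrete. The sketch below uses the conventional oxygen-content approximation (about 1.34 ml of O2 per gram of haemoglobin, plus about 0.003 ml of O2 per dl of blood per mmHg of arterial oxygen tension); the haemoglobin level and the arterial oxygen tensions chosen are illustrative assumptions, not figures taken from this article.

```python
# Rough comparison of haemoglobin-bound vs physically dissolved oxygen,
# using the conventional approximation:
#   CaO2 (ml O2 per dl blood) ~= 1.34 * Hb * SaO2 + 0.003 * PaO2
# with Hb in g/dl, SaO2 as a fraction and PaO2 in mmHg.
# All numeric inputs below are assumed, illustrative values.

def oxygen_content(hb_g_dl, sao2_fraction, pao2_mmhg):
    bound = 1.34 * hb_g_dl * sao2_fraction   # carried on haemoglobin
    dissolved = 0.003 * pao2_mmhg            # physically dissolved in plasma
    return bound, dissolved

cases = [
    ("air at sea level", 0.97, 100),
    ("100% oxygen at 1 atm", 1.00, 600),
    ("100% oxygen at 3 atm (hyperbaric)", 1.00, 2000),
]
for label, sao2, pao2 in cases:
    bound, dissolved = oxygen_content(15.0, sao2, pao2)
    print(f"{label}: bound {bound:.1f} ml/dl, dissolved {dissolved:.1f} ml/dl")
# Haemoglobin is nearly saturated even on air, so raising the partial pressure
# mainly increases the dissolved fraction (about 0.3 -> 1.8 -> 6.0 ml/dl here).
```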
Central nervous system toxicity
Exposures, from minutes to a few hours, to partial pressures of oxygen above about —about eight times normal atmospheric partial pressure—are usually associated with central nervous system oxygen toxicity, also known as acute oxygen toxicity, and are most likely to occur among patients undergoing hyperbaric oxygen therapy and divers. Since sea level atmospheric pressure is about , central nervous system toxicity can only occur under hyperbaric conditions, where ambient pressure is above normal. Divers breathing air at depths beyond face an increasing risk of an oxygen toxicity "hit" (seizure). Divers breathing a gas mixture enriched with oxygen, such as nitrox, similarly increase the risk of a seizure at shallower depths, should they descend below the maximum operating depth accepted for the mixture.
CNS toxicity is aggravated by a high partial pressure of carbon dioxide, stress, fatigue, and cold, all of which are much more likely in diving than in hyperbaric therapy.
Lung toxicity
The lungs and the remainder of the respiratory tract are exposed to the highest concentration of oxygen in the human body and are therefore the first organs to show chronic toxicity. Pulmonary toxicity occurs only with exposure to partial pressures of oxygen greater than , corresponding to an oxygen fraction of 50% at normal atmospheric pressure. The earliest signs of pulmonary toxicity begin with evidence of tracheobronchitis, or inflammation of the upper airways, after an asymptomatic period between 4 and 22 hours at greater than 95% oxygen, with some studies suggesting symptoms usually begin after approximately 14 hours at this level of oxygen.
At partial pressures of oxygen of —100% oxygen at 2 to 3 times atmospheric pressure—these symptoms may begin as early as 3 hours into exposure to oxygen. Experiments on rats breathing oxygen at pressures between suggest that pulmonary manifestations of oxygen toxicity may not be the same for normobaric conditions as they are for hyperbaric conditions. Evidence of decline in lung function as measured by pulmonary function testing can occur as quickly as 24 hours of continuous exposure to 100% oxygen, with evidence of diffuse alveolar damage and the onset of acute respiratory distress syndrome usually occurring after 48 hours on 100% oxygen. Breathing 100% oxygen also eventually leads to collapse of the alveoli (atelectasis), while—at the same partial pressure of oxygen—the presence of significant partial pressures of inert gases, typically nitrogen, will prevent this effect.
Preterm newborns are known to be at higher risk for bronchopulmonary dysplasia with extended exposure to high concentrations of oxygen. Other groups at higher risk for oxygen toxicity are patients on mechanical ventilation with exposure to levels of oxygen greater than 50%, and patients exposed to chemicals that increase the risk of oxygen toxicity, such as the chemotherapeutic agent bleomycin. Therefore, current guidelines for patients on mechanical ventilation in intensive care recommend keeping oxygen concentration less than 60%. Likewise, divers who undergo treatment of decompression sickness are at increased risk of oxygen toxicity as treatment entails exposure to long periods of oxygen breathing under hyperbaric conditions, in addition to any oxygen exposure during the dive.
Ocular toxicity
Prolonged exposure to high inspired fractions of oxygen causes damage to the retina. Damage to the developing eye of infants exposed to high oxygen fraction at normal pressure has a different mechanism and effect from the eye damage experienced by adult divers under hyperbaric conditions. Hyperoxia may be a contributing factor for the disorder called retrolental fibroplasia or retinopathy of prematurity (ROP) in infants. In preterm infants, the retina is often not fully vascularised. Retinopathy of prematurity occurs when the development of the retinal vasculature is arrested and then proceeds abnormally. Associated with the growth of these new vessels is fibrous tissue (scar tissue) that may contract to cause retinal detachment. Supplemental oxygen exposure, while a risk factor, is not the main risk factor for development of this disease. Restricting supplemental oxygen use does not necessarily reduce the rate of retinopathy of prematurity, and may raise the risk of hypoxia-related systemic complications.
A myopic shift (hyperoxic myopia) has occurred in closed circuit oxygen rebreather divers with prolonged exposures. It also occurs frequently in those undergoing repeated hyperbaric oxygen therapy. This is due to an increase in the refractive power of the lens, since axial length and keratometry readings do not reveal a corneal or length basis for a myopic shift. It is usually reversible with time.
A possible side effect of hyperbaric oxygen therapy is the initial or further development of cataracts, which are an increase in opacity of the lens of the eye which reduces visual acuity, and can eventually result in blindness. This is a rare event, associated with lifetime exposure to raised oxygen concentration, and may be under-reported as it develops very slowly, and cataracts are a common disorder of advanced age. The cause is not fully understood, but evidence suggests that raised oxygen levels at the lens may be caused by deterioration of the vitreous humour due to age, and this causes degradation of lens crystallins by cross-linking, forming aggregates capable of scattering light. This may be an end-state development of the more commonly observed myopic shift associated with hyperbaric treatment.
Mechanism
The biochemical basis for the toxicity of oxygen is the partial reduction of oxygen by one or two electrons to form reactive oxygen species, which are natural by-products of the normal metabolism of oxygen and have important roles in cell signalling. One species produced by the body, the superoxide anion (O2−), is possibly involved in iron acquisition. Higher than normal concentrations of oxygen lead to increased levels of reactive oxygen species. Oxygen is necessary for cell metabolism, and the blood supplies it to all parts of the body. When oxygen is breathed at high partial pressures, a hyperoxic condition will rapidly spread, with the most vascularised tissues being most vulnerable. During times of environmental stress, levels of reactive oxygen species can increase dramatically, which can damage cell structures and produce oxidative stress.
While all the reaction mechanisms of these species within the body are not yet fully understood, one of the most reactive products of oxidative stress is the hydroxyl radical (•OH), which can initiate a damaging chain reaction of lipid peroxidation in the unsaturated lipids within cell membranes. High concentrations of oxygen also increase the formation of other free radicals, such as nitric oxide, peroxynitrite, and trioxidane, which harm DNA and other biomolecules. Although the body has many antioxidant systems such as glutathione that guard against oxidative stress, these systems are eventually overwhelmed at very high concentrations of free oxygen, and the rate of cell damage exceeds the capacity of the systems that prevent or repair it. Cell damage and cell death then result.
Diagnosis
Diagnosis of central nervous system oxygen toxicity in divers prior to seizure is difficult as the symptoms of visual disturbance, ear problems, dizziness, confusion and nausea can be due to many factors common to the underwater environment such as narcosis, congestion and coldness. However, these symptoms may be helpful in diagnosing the first stages of oxygen toxicity in patients undergoing hyperbaric oxygen therapy. In either case, unless there is a prior history of epilepsy or tests indicate hypoglycaemia, a seizure occurring in the setting of breathing oxygen at partial pressures greater than suggests a diagnosis of oxygen toxicity.
Diagnosis of bronchopulmonary dysplasia in newborn infants with breathing difficulties is difficult in the first few weeks. However, if the infant's breathing does not improve during this time, blood tests and x-rays may be used to confirm bronchopulmonary dysplasia. In addition, an echocardiogram can help to eliminate other possible causes such as congenital heart defects or pulmonary arterial hypertension.
The diagnosis of retinopathy of prematurity in infants is typically suggested by the clinical setting. Prematurity, low birth weight, and a history of oxygen exposure are the principal indicators, while no hereditary factors have been shown to yield a pattern.
Differential diagnosis
Clinical diagnosis can be confirmed with arterial oxygen levels.
A number of other conditions can be confused with oxygen toxicity; these include:
Carbon monoxide poisoning
Cerebrovascular event (stroke)
Envenomation or toxin ingestion
Hypercapnia (Carbon dioxide narcosis)
Hyperventilation
Hypoglycemia
Infection
Migraine
Multiple sclerosis
Seizure disorder (epilepsy)
Prevention
The prevention of oxygen toxicity depends entirely on the setting. Both underwater and in space, proper precautions can eliminate the most pernicious effects. Premature infants commonly require supplemental oxygen to treat complications of preterm birth. In this case prevention of bronchopulmonary dysplasia and retinopathy of prematurity must be carried out without compromising a supply of oxygen adequate to preserve the infant's life.
Underwater
Oxygen toxicity is a catastrophic hazard in scuba diving, because a seizure results in high risk of death by drowning. The seizure may occur suddenly and with no warning symptoms. The effects are sudden convulsions and unconsciousness, during which victims can lose their regulator and drown. One of the advantages of a full-face diving mask is prevention of regulator loss in the event of a seizure. Mouthpiece retaining straps are a relatively inexpensive alternative with a similar but less effective function. As there is an increased risk of central nervous system oxygen toxicity on deep dives, long dives and dives where oxygen-rich breathing gases are used, divers are taught to calculate a maximum operating depth for oxygen-rich breathing gases, and cylinders containing such mixtures should be clearly marked with that depth.
The risk of seizure appears to be a function of dose – a cumulative combination of partial pressure and duration. The threshold for oxygen partial pressure below which seizures never occur has not been established, and may depend on many variables, some of them personal. The risk to a specific person can vary considerably depending on individual sensitivity, level of exercise, and carbon dioxide retention, which is influenced by work of breathing.
In some diver training courses for modes of diving in which exposure may reach levels with significant risk, divers are taught to plan and monitor what is called the 'oxygen clock' of their dives. This is a notional alarm clock, which ticks more quickly at increased oxygen pressure and is set to activate at the maximum single exposure limit recommended in the National Oceanic and Atmospheric Administration Diving Manual. For the following partial pressures of oxygen the limits are: 45 minutes at , 120 minutes at , 150 minutes at , 180 minutes at and 210 minutes at , but it is impossible to predict with any reliability whether or when toxicity symptoms will occur. Many nitrox-capable dive computers calculate an oxygen loading and can track it across multiple dives. The aim is to avoid activating the alarm by reducing the partial pressure of oxygen in the breathing gas or by reducing the time spent breathing gas of greater oxygen partial pressure. As the partial pressure of oxygen increases with the fraction of oxygen in the breathing gas and the depth of the dive, the diver obtains more time on the oxygen clock by diving at a shallower depth, by breathing a less oxygen-rich gas, or by shortening the duration of exposure to oxygen-rich gases. This function is provided by some technical diving decompression computers and rebreather control and monitoring hardware.
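A minimal sketch of how such an oxygen clock might be tracked is shown below, assuming the quoted single-exposure limits are stored as a lookup table keyed by oxygen partial pressure. Because the partial-pressure values themselves are not reproduced in this text, the keys (in bar) are assumed values chosen for illustration, and the whole scheme is a simplification of what dive computers actually implement.

```python
# Sketch of an "oxygen clock": exposure accumulates as a fraction of the
# single-exposure time limit for the oxygen partial pressure being breathed.
# The time limits follow the pattern quoted above (45 to 210 minutes);
# the partial-pressure keys (bar) are assumed here for illustration only.

SINGLE_EXPOSURE_LIMITS_MIN = {
    1.6: 45,
    1.5: 120,
    1.4: 150,
    1.3: 180,
    1.2: 210,
}

def cns_clock_fraction(segments):
    """segments: iterable of (ppO2 in bar, minutes). Returns the accumulated
    fraction of the single-exposure limit; 1.0 means the alarm level."""
    total = 0.0
    for ppo2, minutes in segments:
        # use the most conservative (shortest) limit tabulated at or below this ppO2
        applicable = [limit for p, limit in SINGLE_EXPOSURE_LIMITS_MIN.items() if ppo2 >= p]
        if applicable:
            total += minutes / min(applicable)
        # exposures below the lowest tabulated pressure are not counted here
    return total

# Example: 30 minutes at 1.4 bar followed by 20 minutes at 1.2 bar
print(cns_clock_fraction([(1.4, 30), (1.2, 20)]))   # ~0.30, well below the alarm
```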
Diving below on air would expose a diver to increasing danger of oxygen toxicity as the partial pressure of oxygen exceeds , so a gas mixture should be used which contains less than 21% oxygen (termed a hypoxic mixture). Increasing the proportion of nitrogen is not viable, since it would produce a strongly narcotic mixture. However, helium is not narcotic, and a usable mixture may be blended either by completely replacing nitrogen with helium (the resulting mix is called heliox), or by replacing part of the nitrogen with helium, producing a trimix.
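The reason richer mixtures become hazardous with depth is that the oxygen partial pressure is simply the oxygen fraction multiplied by the absolute ambient pressure. A minimal sketch of a maximum operating depth calculation is given below; the 1.4 bar working limit and the rule of thumb of one bar of pressure per 10 metres of seawater (plus one bar at the surface) are common assumptions used for illustration, not values taken from this article.

```python
# Maximum operating depth (MOD) from the relation ppO2 = fO2 * P_ambient,
# with ambient pressure approximated as P = 1 + depth/10 (bar, seawater).
# The 1.4 bar working limit is an assumed, commonly used planning value.

def mod_metres(fo2, ppo2_limit_bar=1.4):
    p_ambient = ppo2_limit_bar / fo2     # absolute pressure at which the limit is reached
    return (p_ambient - 1.0) * 10.0      # depth in metres of seawater

print(round(mod_metres(0.21), 1))   # air (21% O2): ~56.7 m
print(round(mod_metres(0.32), 1))   # EAN32 nitrox: ~33.8 m
print(round(mod_metres(0.16), 1))   # a hypoxic (16% O2) mix: ~77.5 m
```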
Pulmonary oxygen toxicity is an entirely avoidable event while diving. The limited duration and naturally intermittent nature of most diving makes this a relatively rare (and even then, reversible) complication for divers. Established guidelines enable divers to calculate when they are at risk of pulmonary toxicity. In saturation diving it can be avoided by limiting the oxygen content of gas in living areas to below 0.4 bar.
Screening
The intention of screening using an oxygen tolerance test is to identify divers with low tolerance to high partial pressures of hyperbaric oxygen who may be more prone to oxygen convulsions during diving operations or during hyperbaric treatment for decompression sickness. The value of this test has been questioned, and statistical studies have shown a low incidence of seizures during standard hyperbaric treatment schedules, so some navies have discontinued its use, though others continue to require the test for all candidate divers.
The variability in tolerance and other variable factors such as workload have resulted in the U.S. Navy abandoning screening for oxygen tolerance. Of the 6,250 oxygen-tolerance tests performed between 1976 and 1997, only 6 episodes of oxygen toxicity were observed (0.1%).
The oxygen tolerance test used by the Indian Navy, which follows recommendations of the US Navy and US National Oceanic and Atmospheric Administration, is to breathe 100% oxygen delivered by BIBS mask at an ambient pressure of 2.8 bar absolute (18 msw) for 30 minutes, at rest in a dry hyperbaric chamber. The test is passed if the attendant observes no symptoms of CNS oxygen toxicity.
Hyperbaric setting
The presence of a fever or a history of seizure is a relative contraindication to hyperbaric oxygen treatment. The schedules used for treatment of decompression illness allow for periods of breathing air rather than 100% oxygen (air breaks) to reduce the chance of seizure or lung damage. The U.S. Navy uses treatment tables based on periods alternating between 100% oxygen and air. For example, USN table 6 requires 75 minutes (three periods of 20 minutes oxygen/5 minutes air) at an ambient pressure of , equivalent to a depth of . This is followed by a slow reduction in pressure to over 30 minutes on oxygen. The patient then remains at that pressure for a further 150 minutes, consisting of two periods of 15 minutes air/60 minutes oxygen, before the pressure is reduced to atmospheric over 30 minutes on oxygen.
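Tallying the periods described above shows how the air breaks are interleaved with the oxygen breathing. The sketch below covers only the two stages mentioned here, ignoring descent, ascents and any table extensions; it is an illustration of the arithmetic, not a statement of the full treatment table.

```python
# Oxygen and air-break minutes for the two stages of the USN Table 6 profile
# described above (simplified: descents, ascents and extensions are ignored).

stages = [
    # (number of cycles, minutes of oxygen per cycle, minutes of air per cycle)
    (3, 20, 5),    # first stage: 3 x (20 min O2 + 5 min air) = 75 minutes
    (2, 60, 15),   # second stage: 2 x (60 min O2 + 15 min air) = 150 minutes
]

oxygen_min = sum(n * o2 for n, o2, _ in stages)
air_min = sum(n * a for n, _, a in stages)
print(f"oxygen: {oxygen_min} min, air breaks: {air_min} min, "
      f"total: {oxygen_min + air_min} min")
# oxygen: 180 min, air breaks: 45 min, total: 225 min
```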
Vitamin E and selenium were proposed and later rejected as a potential method of protection against pulmonary oxygen toxicity. There is however some experimental evidence in rats that vitamin E and selenium aid in preventing in vivo lipid peroxidation and free radical damage, and therefore prevent retinal changes following repetitive hyperbaric oxygen exposures.
Normobaric setting
Bronchopulmonary dysplasia is reversible in the early stages by use of break periods on lower pressures of oxygen, but it may eventually result in irreversible lung injury if allowed to progress to severe damage. One or two days of exposure without oxygen breaks are needed to cause such damage.
Retinopathy of prematurity is largely preventable by screening. Current guidelines require that all babies of less than 32 weeks gestational age or having a birth weight less than should be screened for retinopathy of prematurity at least every two weeks. The National Cooperative Study in 1954 showed a causal link between supplemental oxygen and retinopathy of prematurity, but subsequent curtailment of supplemental oxygen caused an increase in infant mortality. To balance the risks of hypoxia and retinopathy of prematurity, modern protocols now require monitoring of blood oxygen levels in premature infants receiving oxygen.
Careful titration of dosage to minimise delivered concentration while achieving the desired level of oxygenation will both minimise the risk of oxygen toxicity damage and the amount of oxygen used for long term therapy. A typical target for oxygen saturation when receiving oxygen therapy, would be in the range of 91-95%, in both term and preterm infants.
Hypobaric setting
In low-pressure environments oxygen toxicity may be avoided since the toxicity is caused by high partial pressure of oxygen, not by high oxygen fraction. This is illustrated by the use of pure oxygen in spacesuits, which must operate at low pressure, and by the use of a high oxygen fraction with a cabin pressure lower than normal atmospheric pressure in early spacecraft, for example, the Gemini and Apollo spacecraft. In such applications as extra-vehicular activity, high-fraction oxygen is non-toxic, even at breathing mixture fractions approaching 100%, because the oxygen partial pressure is not allowed to chronically exceed .
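The same fraction-times-pressure relation makes the point numerically: it is the product, not the fraction, that determines toxicity. The suit and cabin pressures used below are assumed, illustrative values, since the actual figures are not given in this text.

```python
# Oxygen partial pressure is the oxygen fraction times the absolute pressure.
# The suit and cabin pressures below are assumed values used for illustration.

def ppo2_bar(fo2, ambient_pressure_bar):
    return fo2 * ambient_pressure_bar

print(round(ppo2_bar(0.21, 1.013), 2))  # air at sea level: ~0.21 bar
print(round(ppo2_bar(1.00, 0.30), 2))   # pure O2 in an assumed low-pressure suit: 0.30 bar
print(round(ppo2_bar(1.00, 0.34), 2))   # pure O2 in an assumed low-pressure cabin: 0.34 bar
# Despite a 100% oxygen fraction, the partial pressure stays close to the
# sea-level value, which is why such exposures are tolerated.
```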
Management
During hyperbaric oxygen therapy, the patient will usually breathe 100% oxygen from a mask while inside a hyperbaric chamber pressurised with air to about . Seizures during the therapy are managed by removing the mask from the patient, thereby dropping the partial pressure of oxygen inspired below .
A seizure underwater requires that the diver be brought to the surface as soon as practicable. Although for many years the recommendation has been not to raise the diver during the seizure itself, owing to the danger of arterial gas embolism (AGE), there is some evidence that the glottis does not fully obstruct the airway. This has led to the current recommendation by the Diving Committee of the Undersea and Hyperbaric Medical Society that a diver should be raised during the seizure's clonic (convulsive) phase if the regulator is not in the diver's mouth—as the danger of drowning is then greater than that of AGE—but the ascent should be delayed until the end of the clonic phase otherwise. Rescuers ensure that their own safety is not compromised during the convulsive phase. They then ensure that where the victim's air supply is established it is maintained, and carry out a controlled buoyant lift. Lifting an unconscious body is taught by most recreational diver training agencies as an advanced skill, and for professional divers it is a basic skill, as it is one of the primary functions of the standby diver. Upon reaching the surface, emergency services are always contacted as there is a possibility of further complications requiring medical attention. If symptoms develop other than a seizure underwater the diver should immediately switch to a gas with a lower oxygen fraction or ascend to a shallower depth if decompression obligations allow. If a chamber is available at the surface, surface decompression is a recommended option. The U.S. Navy has published procedures for completing decompression stops where a recompression chamber is not immediately available. Some dive computers will recalculate decompression requirements for alternative mixtures provided the actual gas setting is activated.
The occurrence of symptoms of bronchopulmonary dysplasia or acute respiratory distress syndrome is treated by lowering the fraction of oxygen administered, along with a reduction in the periods of exposure and an increase in the break periods where normal air is supplied. Where supplemental oxygen is required for treatment of another disease (particularly in infants), a ventilator may be needed to ensure that the lung tissue remains inflated. Reductions in pressure and exposure will be made progressively, and medications such as bronchodilators and pulmonary surfactants may be used.
Divers manage the risk of pulmonary damage by limiting exposure to levels shown to be generally acceptable by experimental evidence, using a system of accumulated oxygen toxicity units which are based on exposure time at specified partial pressures. In the event of emergency treatment for decompression illness, it may be necessary to exceed normal exposure limits to manage more critical symptoms.
Retinopathy of prematurity may regress spontaneously, but should the disease progress beyond a threshold (defined as five contiguous or eight cumulative hours of stage 3 retinopathy of prematurity), both cryosurgery and laser surgery have been shown to reduce the risk of blindness as an outcome. Where the disease has progressed further, techniques such as scleral buckling and vitrectomy surgery may assist in re-attaching the retina.
Repetitive exposure
Repeated exposure to potentially toxic oxygen concentrations in breathing gas is fairly common in hyperbaric activity, particularly in hyperbaric medicine, saturation diving, underwater habitats, and repetitive decompression diving. Research at the National Oceanic and Atmospheric Administration (NOAA) by R.W. Hamilton and others determined acceptable levels of exposure for single and repeated exposures. A distinction is made between acceptable exposure for acute and chronic toxicity, but these are really the extremes of a possible continuous range of exposures. A further distinction can be made between routine exposure and exposure required for emergency treatment, where a higher risk of oxygen toxicity may be justified to achieve a reduction of a more critical injury, particularly when in a relatively safe controlled and monitored environment.
The Repex (repetitive exposure) method, developed in 1988, allows oxygen toxicity dosage to be calculated using a single dose value equivalent to 1 minute of 100% oxygen at atmospheric pressure called an Oxygen Tolerance Unit (OTU), and is used to avoid toxic effects over several days of operational exposure. Some dive computers will automatically track the dosage based on measured depth and selected gas mixture. The limits allow a greater exposure when the person has not been exposed recently, and daily allowable dose decreases with an increase in consecutive days with exposure. These values may not be fully supported by current data.
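The article does not reproduce the Repex dose formula itself, so the sketch below uses the form commonly quoted for OTU accounting, OTU = t × ((pO2 − 0.5)/0.5)^0.83 for pO2 above 0.5 bar; both the exponent and the 0.5 bar threshold should be read as illustrative assumptions rather than a statement of the published method.

```python
# Repex-style oxygen toxicity dose accounting (sketch).
# The dose formula below is the commonly quoted form
#   OTU = t * ((pO2 - 0.5) / 0.5) ** 0.83   for pO2 > 0.5 bar,
# used here as an illustrative assumption; it is not taken from this article.

def otu(minutes, ppo2_bar):
    if ppo2_bar <= 0.5:
        return 0.0                                   # assumed no accumulation below 0.5 bar
    return minutes * ((ppo2_bar - 0.5) / 0.5) ** 0.83

# One minute of 100% oxygen at atmospheric pressure is one unit by definition:
print(round(otu(1, 1.0), 2))     # 1.0
# A 40-minute dive segment at 1.3 bar ppO2 accumulates considerably more:
print(round(otu(40, 1.3), 1))    # ~59 OTU
```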
A more recent proposal uses a simple power equation, Toxicity Index (TI) = t² × PO2^c, where t is time, PO2 is the oxygen partial pressure, and c is a power-law exponent. This was derived from the chemical reactions producing reactive oxygen or nitrogen species, and has been shown to give good predictions with c = 6.8 for CNS toxicity and c = 4.57 for pulmonary toxicity.
For pulmonary toxicity, time is in hours and PO2 in atmospheres absolute; TI should be limited to 250.
For CNS toxicity, time is in minutes, PO2 in atmospheres absolute, and a TI of 26,108 indicates a 1% risk.
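A minimal sketch of the toxicity-index calculation described above, using the stated units (minutes for CNS toxicity, hours for pulmonary toxicity, PO2 in atmospheres absolute); the example exposures are arbitrary values chosen for illustration.

```python
# Toxicity Index (TI) = t**2 * PO2**c, as described above.
# CNS:       t in minutes, c = 6.8,  TI of 26,108 corresponds to ~1% seizure risk
# Pulmonary: t in hours,   c = 4.57, TI should stay below 250

def toxicity_index(t, po2_ata, c):
    return t ** 2 * po2_ata ** c

# Illustrative exposures (assumed values, not from the article):
print(round(toxicity_index(30, 1.6, 6.8)))    # ~22,000: below the 1% CNS figure
print(round(toxicity_index(10, 1.0, 4.57)))   # 100: below the pulmonary limit of 250
```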
Prognosis
Although the convulsions caused by central nervous system oxygen toxicity may lead to incidental injury to the victim, it remained uncertain for many years whether damage to the nervous system following the seizure could occur and several studies searched for evidence of such damage. An overview of these studies by Bitterman in 2004 concluded that following removal of breathing gas containing high fractions of oxygen, no long-term neurological damage from the seizure remains.
The majority of infants who have survived an episode of bronchopulmonary dysplasia will eventually recover near-normal lung function, since lungs continue to grow during the first 5–7 years and the damage caused by bronchopulmonary dysplasia is to some extent reversible (even in adults). However, they are likely to be more susceptible to respiratory infections for the rest of their lives and the severity of later infections is often greater than that in their peers.
Retinopathy of prematurity (ROP) in infants frequently regresses without intervention and eyesight may be normal in later years. Where the disease has progressed to the stages requiring surgery, the outcomes are generally good for the treatment of stage 3 ROP, but are much worse for the later stages. Although surgery is usually successful in restoring the anatomy of the eye, damage to the nervous system by the progression of the disease leads to comparatively poorer results in restoring vision. The presence of other complicating diseases also reduces the likelihood of a favourable outcome.
Provision of supplementary oxygen remains of life-saving importance in critical care, and can increase survival in some chronic conditions, but hyperoxia and the formation of reactive oxygen species are involved in the pathogenesis of several life-threatening diseases. The toxic effects of hyperoxia are particularly prevalent in the pulmonary compartment, and the cerebral and coronary circulations are at risk when vascular changes occur. Long-term hyperoxia impairs immune responses, and susceptibility to infectious complications and tissue injury is increased.
Epidemiology
The incidence of central nervous system toxicity among divers has decreased since the Second World War, as protocols have developed to limit exposure and partial pressure of oxygen inspired. In 1947, Donald recommended limiting the depth allowed for breathing pure oxygen to , which equates to an oxygen partial pressure of . Over time this limit has been reduced, until today a limit of during a recreational dive and during shallow decompression stops is generally recommended, though military divers using oxygen rebreathers may operate to greater depths for limited periods, at greater risk. Oxygen toxicity has now become a rare occurrence other than when caused by equipment malfunction and human error. Historically, the U.S. Navy has refined its Navy Diving Manual air and mixed gas tables to reduce oxygen toxicity incidents. Between 1995 and 1999, reports showed 405 surface-supported dives using the helium–oxygen tables; of these, oxygen toxicity symptoms were observed on 6 dives (1.5%). As a result, the U.S. Navy in 2000 modified the schedules and conducted field tests of 150 dives, none of which produced symptoms of oxygen toxicity. Revised tables were published in 2001.
Central nervous system oxygen toxicity among patients undergoing hyperbaric oxygen therapy is rare, and is influenced by a number of factors: individual sensitivity and treatment protocol, and probably therapy indication and equipment used. A study by Welslau in 1996 reported 16 incidents out of a population of 107,264 patients (0.015%), while Hampson and Atik in 2003 found a rate of 0.03%. Yildiz, Ay and Qyrdedi, in a summary of 36,500 patient treatments between 1996 and 2003, reported only 3 oxygen toxicity incidents, giving a rate of 0.008%. A later review of over 80,000 patient treatments revealed an even lower rate: 0.0024%. The reduction in incidence may be partly due to use of a mask rather than a hood to deliver oxygen as there is less dead space in a mask.
The overall risk of CNS toxicity may be as high as 1 in 2000 to 3000 treatments, but it varies with the pressure and may be as high as 1 in 200 at higher pressure treatment schedules of 2.8 to 3.0 ATA, or as low as 1 in 10,000 for schedules at 2 ATA or less.
Bronchopulmonary dysplasia is among the most common complications of prematurely born infants and its incidence has grown as the survival of extremely premature infants has increased. Nevertheless, the severity has decreased as better management of supplemental oxygen has resulted in the disease now being related mainly to factors other than hyperoxia.
In 1997 a summary of studies of neonatal intensive care units in industrialised countries showed that up to 60% of low birth weight babies developed retinopathy of prematurity, which rose to 72% in extremely low birth weight babies, defined as less than at birth. However, severe outcomes are much less frequent: for very low birth weight babies—those less than at birth—the incidence of blindness was found to be no more than 8%.
Administration of supplemental oxygen is extensively and effectively used in emergency and intensive care medicine, but the reactive oxygen species caused by excessive oxygenation tend to cause a vicious cycle of tissue injury, characterized by cell damage, cell death, and inflammation, mostly in the lungs, which can exacerbate problems of tissue oxygenation for which the supplemental oxygen was intended as a treatment. Similar problems can occur in oxygen therapy for chronic conditions which involve hypoxia. Careful titration of oxygen supply to minimise the excess over physiological need also reduces pulmonary hyperoxic exposure to the reasonably practicable minimum. The incidence of pulmonary symptoms of oxygen toxicity is about 5%, and some drugs can increase the risk, such as the chemotherapeutic agent bleomycin.
History
Central nervous system toxicity was first described by Paul Bert in 1878. He showed that oxygen was toxic to insects, arachnids, myriapods, molluscs, earthworms, fungi, germinating seeds, birds, and other animals. Central nervous system toxicity may be referred to as the "Paul Bert effect".
Pulmonary oxygen toxicity was first described by J. Lorrain Smith in 1899 when he noted central nervous system toxicity and discovered in experiments in mice and birds that a moderately raised oxygen pressure had no effect but a higher partial pressure of oxygen was a pulmonary irritant. Pulmonary toxicity may be referred to as the "Lorrain Smith effect". The first recorded human exposure was undertaken in 1910 by Bornstein when two men breathed oxygen at for 30 minutes, while he went on to 48 minutes with no symptoms. In 1912, Bornstein developed cramps in his hands and legs while breathing oxygen at for 51 minutes. Smith then went on to show that intermittent exposure to a breathing gas with less oxygen permitted the lungs to recover and delayed the onset of pulmonary toxicity.
Albert R. Behnke et al. in 1935 were the first to observe visual field contraction (tunnel vision) on dives between and . During World War II, Donald and Yarbrough et al. performed over 2,000 experiments on oxygen toxicity to support the initial use of closed circuit oxygen rebreathers. Naval divers in the early years of oxygen rebreather diving developed a mythology about a monster called "Oxygen Pete", who lurked in the bottom of the Admiralty Experimental Diving Unit "wet pot" (a water-filled hyperbaric chamber) to catch unwary divers. They called having an oxygen toxicity attack "getting a Pete".
In the decade following World War II, Lambertsen et al. made further discoveries on the effects of breathing oxygen under pressure and methods of prevention. Their work on intermittent exposures for extension of oxygen tolerance and on a model for prediction of pulmonary oxygen toxicity based on pulmonary function are key documents in the development of standard operating procedures when breathing increased pressures of oxygen. Lambertsen's work showing the effect of carbon dioxide in decreasing time to onset of central nervous system symptoms has influenced work from current exposure guidelines to future breathing apparatus design.
Retinopathy of prematurity was not observed before World War II, but with the availability of supplemental oxygen in the decade following, it rapidly became one of the principal causes of infant blindness in developed countries. By 1960 the use of oxygen had become identified as a risk factor and its administration restricted. The resulting fall in retinopathy of prematurity was accompanied by a rise in infant mortality and hypoxia-related complications. Since then, more sophisticated monitoring and diagnosis have established protocols for oxygen use which aim to balance between hypoxic conditions and problems of retinopathy of prematurity.
Bronchopulmonary dysplasia was first described by Northway in 1967, who outlined the conditions that would lead to the diagnosis. This was later expanded by Bancalari and in 1988 by Shennan, who suggested the need for supplemental oxygen at 36 weeks could predict long-term outcomes. Nevertheless, Palta et al. in 1998 concluded that radiographic evidence was the most accurate predictor of long-term effects.
Bitterman et al. in 1986 and 1995 showed that darkness and caffeine would delay the onset of changes to brain electrical activity in rats. In the years since, research on central nervous system toxicity has centred on methods of prevention and safe extension of tolerance. Sensitivity to central nervous system oxygen toxicity has been shown to be affected by factors such as circadian rhythm, drugs, age, and gender. In 1988, Hamilton et al. wrote procedures for the National Oceanic and Atmospheric Administration to establish oxygen exposure limits for habitat operations. Even today, models for the prediction of pulmonary oxygen toxicity do not explain all the results of exposure to high partial pressures of oxygen.
Society and culture
Recreational scuba divers commonly breathe nitrox containing up to 40% oxygen, while technical divers use pure oxygen or nitrox containing up to 80% oxygen to accelerate decompression. Divers who breathe oxygen fractions greater than that of air (21%) need to be educated on the dangers of oxygen toxicity and how to manage the risk. To buy nitrox, a diver may be required to show evidence of relevant qualification.
Since the late 1990s the recreational use of oxygen has been promoted by oxygen bars, where customers breathe oxygen through a nasal cannula. Claims have been made that this reduces stress, increases energy, and lessens the effects of hangovers and headaches, despite the lack of any scientific evidence to support them. There are also devices on sale that offer "oxygen massage" and "oxygen detoxification" with claims of removing body toxins and reducing body fat. The American Lung Association has stated "there is no evidence that oxygen at the low flow levels used in bars can be dangerous to a normal person's health", but the U.S. Center for Drug Evaluation and Research cautions that people with heart or lung disease need their supplementary oxygen carefully regulated and should not use oxygen bars.
Victorian society had a fascination for the rapidly expanding field of science. In "Dr. Ox's Experiment", a short story written by Jules Verne in 1872, the eponymous doctor uses electrolysis of water to separate oxygen and hydrogen. He then pumps the pure oxygen throughout the town of Quiquendone, causing the normally tranquil inhabitants and their animals to become aggressive and plants to grow rapidly. An explosion of the hydrogen and oxygen in Dr Ox's factory brings his experiment to an end. Verne summarised his story by explaining that the effects of oxygen described in the tale were his own invention (they are not in any way supported by empirical evidence). There is also a brief episode of oxygen intoxication in his "From the Earth to the Moon".
See also
References
Sources
Further reading
External links
The following external sites contain resources specific to particular topics:
2008 Divers Alert Network Technical Diving Conference – Video of "Oxygen Toxicity" lecture by Dr. Richard Vann (free download, mp4, 86MB).
Underwater diving medicine
Element toxicology
Intensive care medicine
Oxygen
Respiratory diseases
Neurobiological brain disorders
Underwater diving disorders | Oxygen toxicity | [
"Chemistry"
] | 8,918 | [
"Biology and pharmacology of chemical elements",
"Element toxicology"
] |
463,044 | https://en.wikipedia.org/wiki/Addition%20polymer | In polymer chemistry, an addition polymer is a polymer that forms by simple linking of monomers without the co-generation of other products. Addition polymerization differs from condensation polymerization, which does co-generate a product, usually water. Addition polymers can be formed by chain polymerization, when the polymer is formed by the sequential addition of monomer units to an active site in a chain reaction, or by polyaddition, when the polymer is formed by addition reactions between species of all degrees of polymerization. Addition polymers are formed by the repeated addition of simple monomer units. Generally the monomers are unsaturated compounds such as alkenes and alkynes. Addition polymerization most commonly proceeds by a free-radical mechanism, which is completed in three steps: initiation of a free radical, chain propagation, and chain termination.
Polyolefins
Many common addition polymers are formed from unsaturated monomers (usually having a C=C double bond). The most prevalent addition polymers are polyolefins, i.e. polymers derived by the conversion of olefins (alkenes) to long-chain alkanes. The stoichiometry is simple:
n RCH=CH2 → [RCH-CH2]n
This conversion can be induced by a variety of catalysts including free radicals, acids, carbanions and metal complexes.
Examples of such polyolefins are polyethenes, polypropylene, PVC, Teflon, Buna rubbers, polyacrylates, polystyrene, and PCTFE.
Copolymers
When two or more types of monomers undergo addition polymerization, the resulting polymer is an addition copolymer. Saran wrap, formed from polymerization of vinyl chloride and vinylidene chloride, is an addition copolymer.
Ring-opening polymerization
Ring-opening polymerization is an additive process that tends to give condensation-like polymers, but it follows the stoichiometry of addition polymerization. For example, polyethylene glycol is formed by opening ethylene oxide rings:
HOCH2CH2OH + n C2H4O → HO(CH2CH2O)n+1H
Nylon 6 (developed to thwart the patent on nylon 6,6) is produced by addition polymerization, but chemically resembles typical polyamides.
Further contrasts with condensation polymers
One universal distinction between polymerization types is development of molecular weight by the different modes of propagation. Addition polymers form high molecular weight chains rapidly, with much monomer remaining. Since addition polymerization has rapidly growing chains and free monomer as its reactants, and condensation polymerization occurs in step-wise fashion between monomers, dimers, and other smaller growing chains, the effect of a polymer molecule's current size on a continuing reaction is profoundly different in these two cases. This has important effects on the distribution of molecular weights, or polydispersity, in the finished polymer.
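The contrast can be illustrated with the classical textbook idealisations: in step-growth (condensation-like) polymerization the number-average degree of polymerization follows the Carothers equation, Xn = 1/(1 − p), so high molecular weight appears only at very high conversion p, whereas in an idealised chain-growth (addition) polymerization the polymer already formed is long even at low conversion, with the remainder of the material still monomer. The sketch below is only an illustration of those textbook relations, not a simulation of any particular polymer; the kinetic chain length of 1000 is an arbitrary assumed value.

```python
# Textbook contrast of molecular-weight development with conversion p.
# Step growth:  Carothers equation, Xn = 1 / (1 - p), averaged over all species.
# Chain growth: each initiated chain is assumed to reach a fixed kinetic chain
#               length almost instantly, so polymer formed early is already long.

def step_growth_xn(p):
    # number-average degree of polymerization of the whole reaction mixture
    return 1.0 / (1.0 - p)

def chain_growth_polymer_xn(p, kinetic_chain_length=1000):
    # degree of polymerization of the polymer formed so far (monomer excluded);
    # in this idealisation it is independent of conversion
    return kinetic_chain_length

for p in (0.10, 0.50, 0.99):
    print(f"conversion {p:.2f}: step-growth Xn = {step_growth_xn(p):6.1f}, "
          f"chain-growth polymer Xn = {chain_growth_polymer_xn(p)}")
```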
Biodegradation
Addition polymers are generally chemically inert because their backbones consist of strong C-C bonds. For this reason they are non-biodegradable and difficult to recycle. In contrast, condensation polymers tend to be more readily bio-degradable because their backbones contain weaker bonds.
History
The first useful addition polymer was made by accident in 1933 by ICI chemists Reginald Gibson and Eric Fawcett. They were carrying out a series of experiments that involved reacting organic compounds under high temperatures and high pressures. They set up an experiment to react ethene with benzaldehyde in the hope of producing a ketone. They left the reaction vessel overnight, and the next morning they found a small amount of a white waxy solid. It was shown later that this solid was polyethylene.
The term "addition polymerization" is deprecated by IUPAC (International Union of Pure and Applied Chemistry) which recommends the alternative term chain polymerization.
References
Polymer chemistry
Polymerization reactions | Addition polymer | [
"Chemistry",
"Materials_science",
"Engineering"
] | 836 | [
"Polymerization reactions",
"Polymer chemistry",
"Materials science"
] |
463,271 | https://en.wikipedia.org/wiki/Salbutamol | Salbutamol, also known as albuterol and sold under the brand name Ventolin among others, is a medication that opens up the medium and large airways in the lungs. It is a short-acting β2 adrenergic receptor agonist that causes relaxation of airway smooth muscle. It is used to treat asthma, including asthma attacks and exercise-induced bronchoconstriction, as well as chronic obstructive pulmonary disease (COPD). It may also be used to treat high blood potassium levels. Salbutamol is usually used with an inhaler or nebulizer, but it is also available in a pill, liquid, and intravenous solution. Onset of action of the inhaled version is typically within 15 minutes and lasts for two to six hours.
Common side effects include shakiness, headache, fast heart rate, dizziness, and feeling anxious. Serious side effects may include worsening bronchospasm, irregular heartbeat, and low blood potassium levels. It can be used during pregnancy and breastfeeding, but safety is not entirely clear.
Salbutamol was patented in 1966 in Britain and became commercially available in the UK in 1969. It was approved for medical use in the United States in 1982. It is on the World Health Organization's List of Essential Medicines. Salbutamol is available as a generic medication. In 2022, it was the seventh most commonly prescribed medication in the United States, with more than 59 million prescriptions.
Medical uses
Salbutamol is typically used to treat bronchospasm (due to any cause—allergic asthma or exercise-induced), as well as chronic obstructive pulmonary disease. It is also one of the most common medicines used in rescue inhalers (short-term bronchodilators to alleviate asthma attacks).
As a β2 agonist, salbutamol also has use in obstetrics. Intravenous salbutamol can be used as a tocolytic to relax the uterine smooth muscle to delay premature labor. While preferred over agents such as atosiban and ritodrine, its role has largely been replaced by the calcium channel blocker nifedipine, which is more effective and better tolerated.
Salbutamol has been used to treat acute hyperkalemia, as it stimulates potassium flow into cells, thus lowering the potassium in the blood.
Two recent studies have suggested that salbutamol reduces the symptoms of newborns and adolescents with myasthenia gravis and transient neonatal myasthenia gravis.
Adverse effects
The most common side effects are fine tremor, anxiety, headache, muscle cramps, dry mouth, and palpitation. Other symptoms may include tachycardia, arrhythmia, flushing of the skin, myocardial ischemia (rare), and disturbances of sleep and behaviour. Rarely occurring, but of importance, are allergic reactions of paradoxical bronchospasms, urticaria (hives), angioedema, hypotension, and collapse. High doses or prolonged use may cause hypokalemia, which is of concern especially in patients with kidney failure and those on certain diuretics and xanthine derivatives.
Salbutamol metered dose inhalers have been described as the "single biggest source of carbon emissions from NHS medicines prescribing" due to the propellants used in the inhalers. Dry powder inhalers are recommended as a low-carbon alternative.
Pharmacology
The tertiary butyl group in salbutamol makes it more selective for β2 receptors, which are the predominant receptors on the bronchial smooth muscles. Activation of these receptors causes adenylyl cyclase to convert ATP to cAMP, beginning the signalling cascade that ends with the inhibition of myosin phosphorylation and lowering the intracellular concentration of calcium ions (myosin phosphorylation and calcium ions are necessary for muscle contractions). The increase in cAMP also inhibits inflammatory cells in the airway, such as basophils, eosinophils, and most especially mast cells, from releasing inflammatory mediators and cytokines. Salbutamol and other β2 receptor agonists also increase the conductance of channels sensitive to calcium and potassium ions, leading to hyperpolarization and relaxation of bronchial smooth muscles.
Salbutamol is either filtered out by the kidneys directly or is first metabolized into the 4′-O-sulfate, which is excreted in the urine.
Chemistry
Salbutamol is sold as a racemic mixture. The (R)-(−)-enantiomer (CIP nomenclature) is shown in the image at right (top), and is responsible for the pharmacologic activity; the (S)-(+)-enantiomer (bottom) blocks metabolic pathways associated with elimination of itself and of the pharmacologically active enantiomer (R). The slower metabolism of the (S)-(+)-enantiomer also causes it to accumulate in the lungs, which can cause airway hyperreactivity and inflammation. Potential formulation of the R form as an enantiopure drug is complicated by the fact that the stereochemistry is not stable, but rather the compound undergoes racemization within a few days to weeks, depending on pH.
The direct separation of salbutamol enantiomers and the control of enantiomeric purity have been described by thin-layer chromatography.
History
Salbutamol was discovered in 1966, by a research team led by David Jack at the Allen and Hanburys laboratory (now a subsidiary of Glaxo) in Ware, Hertfordshire, England, and was launched as Ventolin in 1969.
The 1972 Munich Olympics were the first Olympics where anti-doping measures were deployed, and at that time β2 agonists were considered to be stimulants with high risk of abuse for doping. Inhaled salbutamol was banned from those games, but by 1986 was permitted (although oral β2 agonists were not). After a steep rise in the number of athletes taking β2 agonists for asthma in the 1990s, Olympic athletes were required to provide proof that they had asthma in order to be allowed to use inhaled β2 agonists.
In February 2020, the U.S. Food and Drug Administration (FDA) approved the first generic of an albuterol sulfate inhalation aerosol for the treatment or prevention of bronchospasm in people four years of age and older with reversible obstructive airway disease and the prevention of exercise-induced bronchospasm in people four years of age and older. The FDA granted approval of the generic albuterol sulfate inhalation aerosol to Perrigo Pharmaceutical.
In April 2020, the FDA approved the first generic of Proventil HFA (albuterol sulfate) metered dose inhaler, 90 μg per inhalation, for the treatment or prevention of bronchospasm in patients four years of age and older who have reversible obstructive airway disease, as well as the prevention of exercise-induced bronchospasm in this age group. The FDA granted approval of this generic albuterol sulfate inhalation aerosol to Cipla Limited.
Society and culture
In 2020, generic versions were approved in the United States.
Names
Salbutamol is the international nonproprietary name (INN) while albuterol is the United States Adopted Name (USAN). The drug is usually manufactured and distributed as the sulfate salt (salbutamol sulfate).
It was first sold by Allen & Hanburys (UK) under the brand name Ventolin, and has been used for the treatment of asthma ever since. The drug is marketed under many names worldwide.
Misuse
Albuterol and other beta-2 adrenergic agonists are used by some recreational bodybuilders.
Doping
There was no evidence that an increase in physical performance occurs after inhaling salbutamol, but there are various reports of benefit when delivered orally or intravenously. In spite of this, salbutamol required "a declaration of Use in accordance with the International Standard for Therapeutic Use Exemptions" under the 2010 WADA prohibited list. This requirement was relaxed when the 2011 list was published to permit the use of "salbutamol (maximum 1600 micrograms over 24 hours) and salmeterol when taken by inhalation in accordance with the manufacturers' recommended therapeutic regimen."
Abuse of the drug may be confirmed by detection of its presence in plasma or urine, typically exceeding 1,000 ng/mL. The window of detection for urine testing is on the order of just 24 hours, given the relatively short elimination half-life of the drug, estimated at between 5 and 6 hours following oral administration of 4 mg.
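A rough first-order elimination calculation, assuming the roughly 5-6 hour half-life quoted above, illustrates why the urine detection window is only on the order of a day. The peak level relative to the reporting threshold is an arbitrary assumed value used purely for illustration.

```python
# First-order elimination: fraction remaining after t hours is 0.5 ** (t / t_half).
# The 6 h half-life follows the figure quoted above; the assumed peak of 20x the
# reporting threshold is an illustrative value, not taken from the article.

def fraction_remaining(t_hours, t_half_hours=6.0):
    return 0.5 ** (t_hours / t_half_hours)

peak_over_threshold = 20.0
for t in (6, 12, 24, 36):
    level = peak_over_threshold * fraction_remaining(t)
    status = "detectable" if level >= 1.0 else "below threshold"
    print(f"{t:2d} h: {level:5.2f}x threshold ({status})")
# At ~24 h the level has fallen to about 1.25x the threshold, and by 36 h it is
# well below it, consistent with a detection window of roughly one day.
```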
Research
Salbutamol has been studied in subtypes of congenital myasthenic syndrome associated with mutations in Dok-7.
It has also been tested in a trial aimed at treatment of spinal muscular atrophy; it is speculated to modulate the alternative splicing of the SMN2 gene, increasing the amount of the SMN protein, the deficiency of which is regarded as a cause of the disease.
Albuterol increases energy expenditure by 10-15 percent at a therapeutic dose for asthma and around 25 percent at a higher, oral dose. In several human studies, albuterol increased lean body mass, reduced fat mass, and caused lipolysis; it has been studied for use as an anti-obesity and anti-muscle wasting medication when taken orally.
Veterinary use
Salbutamol's low toxicity makes it safe for other animals and thus is the medication of choice for treating acute airway obstruction in most species. It is usually used to treat bronchospasm or coughs in cats and dogs and used as a bronchodilator in horses with recurrent airway obstruction; it can also be used in emergencies to treat asthmatic cats.
Toxic effects require an extremely high dose, and most overdoses are due to dogs chewing on and puncturing an inhaler or nebulizer vial.
References
Antiasthmatic drugs
Beta-adrenergic agonists
Chemical substances for emergency medicine
Drugs developed by GSK plc
Drugs developed by Merck & Co.
Phenols
Phenylethanolamines
Racemic mixtures
Sympathomimetic amines
Tert-butyl compounds
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Salbutamol | [
"Chemistry"
] | 2,218 | [
"Racemic mixtures",
"Chemical substances for emergency medicine",
"Stereochemistry",
"Chemical mixtures",
"Chemicals in medicine"
] |
463,407 | https://en.wikipedia.org/wiki/Taxiway | A taxiway is a path for aircraft at an airport connecting runways with aprons, hangars, terminals and other facilities. They mostly have a hard surface such as asphalt or concrete, although smaller general aviation airports sometimes use gravel or grass.
Most airports do not have a specific speed limit for taxiing (though some do). There is a general rule on safe speed based on obstacles. Operators and aircraft manufacturers might have limits. Typical taxi speeds are 20–30 knots (37–56 km/h; 23–35 mph).
High-speed exit
Busy airports typically construct high-speed or rapid-exit taxiways to allow aircraft to leave the runway at higher speeds. This allows the aircraft to vacate the runway quicker, permitting another to land or take off in a shorter interval of time. This is accomplished by reducing the angle at which the exiting taxiway meets the runway from 90 degrees to 30 degrees, thus increasing the speed at which the aircraft can exit the runway onto the taxiway.
Markings
Normal Centerline A single continuous yellow line, in width.
Enhanced Centerline The enhanced taxiway center line marking consists of a parallel line of yellow dashes on either side of the taxiway centerline. Taxiway centerlines are enhanced for 150 feet (46 m) before a runway holding position marking. The enhanced taxiway centerline is standard at all FAR Part 139 certified airports in the US.
Taxiway Edge Markings Used to define the edge of the taxiway when the edge does not correspond with the edge of the pavement.
Continuous markings consist of a continuous double yellow line, with each line being at least in width, spaced apart. They divide the taxiway edge from the shoulder or some other abutting paved surface not intended for use by aircraft.
Dashed markings define the edge of a taxiway on a paved surface where the adjoining pavement to the taxiway edge is intended for use by aircraft, e.g., an apron. These markings consist of a broken double yellow line, with each line being at least in width, spaced apart (edge to edge). These lines are 15 feet (4.6 m) in length with 25 foot (7.6 m) gaps.
Taxi Shoulder Markings Taxiways, holding bays, and aprons are sometimes provided with paved shoulders to prevent blast and water erosion. Shoulders are not intended for use by aircraft, and may be unable to carry the aircraft load. Taxiway shoulder markings are yellow lines perpendicular to the taxiway edge, from taxiway edge to pavement edge, about 3 metres.
Surface Painted Taxiway Direction Signs Yellow background with a black inscription, provided when it is not possible to provide taxiway direction signs at intersections, or when necessary to supplement such signs. These markings are located on either side of the taxiway.
Surface Painted Location Signs Black background with a yellow inscription and yellow and black border. Where necessary, these markings supplement location signs located alongside the taxiway and assist the pilot in confirming the designation of the taxiway on which the aircraft is located. These markings are located on the right side of the centerline.
Geographic Position Markings These markings are located at points along low visibility taxi routes (when Runway visual range is below 1200 feet (370 m)). They are positioned to the left of the taxiway centerline in the direction of taxiing. Black inscription centered on pink circle with black inner and white outer ring. If the pavement is a light colour then the border is white with a black outer ring.
Runway Holding Position Markings These show where an aircraft should stop when approaching a runway from a taxiway. They consist of four yellow lines, two solid and two dashed, spaced six or twelve inches (15 or 30 cm) apart, and extending across the width of the taxiway or runway. The solid lines are always on the side where the aircraft is to hold. There are three locations where runway holding position markings are encountered: Runway holding position markings on taxiways; runway holding position markings on runways; taxiways located in runway approach areas.
Holding Position Markings for Instrument Landing System (ILS) These consist of two yellow solid lines spaced two feet (60 cm) apart connected by pairs of solid lines spaced ten feet (3 metres) apart extending across the width of the taxiway.
Holding Position Markings for Taxiway/Taxiway Intersections These consist of a single dashed line extending across the width of the taxiway.
Surface Painted Holding Position Signs Red background signs with a white inscription to supplement the signs located at the holding position.
The taxiways are given alphanumeric identification. These taxiway IDs are shown on black and yellow signboards along the taxiways.
Signs
Airport guidance signs provide direction and information to taxiing aircraft and airport vehicles. Smaller airports may have few or no signs, relying instead on airport diagrams and charts.
There are two classes of signage at airports, with several types of each:
Operational guidance signs
Location signs – yellow on black background. Identifies the runway or taxiway the aircraft is currently on or is entering.
Direction/Runway exit signs – black on yellow. Identifies the intersecting taxiways the aircraft is approaching, with an arrow indicating the direction to turn.
Destination signs – black on yellow, similar to the direction signs. They indicate a destination at the airport and always have an arrow showing the direction of the taxi route to that destination. When the inscription for two or more destinations having a common taxi route are placed on a sign, the destinations are separated by an interpunct (•) and only one arrow is used.
Stop Bar signs – white on blue background. The designation consists of the letter S followed by designation of the taxiway on which the Stop Bar is positioned. This sign is not standard.
Other – many airports use conventional traffic signs such as stop and yield signs throughout the airport.
Mandatory instruction signs
Mandatory instruction signs are white on red. They show entrances to runways or critical areas. Vehicles and aircraft are required to stop at these signs until the control tower gives clearance to proceed.
Runway signs – White text on a red background. These signs identify a runway intersection ahead, e.g., runway 12-30.
Frequency change signs – Usually a stop sign and an instruction to change to another frequency. These signs are used at airports with different areas of ground control.
Holding position signs – A single solid yellow bar across a taxiway indicates a position where ground control may require a stop. If two solid yellow bars and two dashed yellow bars are encountered, this indicates a holding position for a runway intersection ahead; runway holding lines must never be crossed without permission. At some airports, a line of red lights across a taxiway is used during low visibility operations to indicate holding positions. An "interrupted ladder" type marking with an "ILS" sign in white on red indicates a holding position before an ILS critical area.
Lights
For night operations, taxiways at major airports are equipped with lights, although many small airports are not equipped with taxiway lighting.
Taxiway Edge Lights: used to outline the edges of taxiways during periods of darkness or restricted visibility conditions. These fixtures may be elevated or in-pavement and emit blue light normally. Where a four-way intersection crosses, the light at the centre of the crossing may be omnidirectional and emit yellow light. Where a road for ground vehicles only meets a taxiway or at an end of usable service area for a ramp or taxiway, the light at the edge of the road or the final taxiway edge light may emit red light. Taxiway edge lights are spaced at a minimum of 50 to a maximum of 200 feet apart. On straightaways, the spacing is typically 200 feet. These lights can be closer together at taxiway intersections.
Taxiway Centerline Lights: They are steady burning and emit green light located along the taxiway centerline. Where a taxiway crosses a runway, or where a "lead-off" taxiway centreline leads off of a runway to join a taxiway, these lights will alternate yellow and green. Taxiway Centerline Lights are spaced at either 50 or 100 foot intervals depending on the minimum authorized visibility. On curved taxiway segments, Taxiway Centerline Lights may be required to be closer together.
Clearance Bar Lights: Three in-pavement steady-burning yellow lights installed at holding positions on taxiways
Runway Guard Lights: Either a pair of elevated flashing yellow lights installed on either side of the taxiway, or a row of in-pavement yellow lights installed across the entire taxiway, at the runway holding position marking at taxiway/runway intersections.
Stop Bar Lights: A row of red, unidirectional, steady-burning in-pavement lights installed across the entire taxiway at the runway holding position, and elevated steady-burning red lights on each side used in low visibility conditions (below 1,200 ft RVR). A controlled stop bar is operated in conjunction with the taxiway centerline lead-on lights which extend from the stop bar toward the runway. Following the ATC clearance to proceed, the stop bar is turned off and the lead-on lights are turned on.
See also
Aviation
Runway
Pavement Classification Number (PCN)
References
External links
Airport infrastructure
https://www.faa.gov/documentLibrary/media/Advisory_Circular/150-5340-30J.pdf | Taxiway | [
"Engineering"
] | 1,896 | [
"Airport infrastructure",
"Aerospace engineering"
] |
463,408 | https://en.wikipedia.org/wiki/Airframe | The mechanical structure of an aircraft is known as the airframe. This structure is typically considered to include the fuselage, undercarriage, empennage and wings, and excludes the propulsion system.
Airframe design is a field of aerospace engineering that combines aerodynamics, materials technology and manufacturing methods with a focus on weight, strength and aerodynamic drag, as well as reliability and cost.
History
Modern airframe history began in the United States with the Wright Flyer's maiden flight, which showed the potential of fixed-wing designs in aircraft.
In 1912 the Deperdussin Monocoque pioneered the light, strong and streamlined monocoque fuselage, formed of thin plywood layers over a circular frame.
First World War
Many early developments were spurred by military needs during World War I. Well-known aircraft from that era include the Dutch designer Anthony Fokker's combat aircraft for the German Empire, U.S. Curtiss flying boats, and the German/Austrian Taube monoplanes. These used hybrid wood and metal structures.
By the 1915/16 timeframe, the German Luft-Fahrzeug-Gesellschaft firm had devised a fully monocoque all-wood structure with only a skeletal internal frame, using strips of plywood laboriously "wrapped" in a diagonal fashion in up to four layers, around concrete male molds in "left" and "right" halves, known as Wickelrumpf (wrapped-body) construction - this first appeared on the 1916 LFG Roland C.II, and would later be licensed to Pfalz Flugzeugwerke for its D-series biplane fighters.
In 1916 the German Albatros D.III biplane fighters featured semi-monocoque fuselages with load-bearing plywood skin panels glued to longitudinal longerons and bulkheads; it was replaced by the prevalent stressed skin structural configuration as metal replaced wood. Similar methods to the Albatros firm's concept were used by both Hannoversche Waggonfabrik for their light two-seat CL.II through CL.V designs, and by Siemens-Schuckert for their later Siemens-Schuckert D.III and higher-performance D.IV biplane fighter designs. The Albatros D.III construction was of much less complexity than the patented LFG Wickelrumpf concept for their outer skinning.
German engineer Hugo Junkers first flew all-metal airframes in 1915 with the all-metal, cantilever-wing, stressed-skin monoplane Junkers J 1 made of steel. The approach developed further using lighter-weight duralumin (invented by Alfred Wilm in Germany before the war) in the airframe of the Junkers D.I of 1918, whose techniques were adopted almost unchanged after the war by both American engineer William Bushnell Stout and Soviet aerospace engineer Andrei Tupolev, proving to be useful for aircraft up to 60 meters in wingspan by the 1930s.
Between World wars
The J 1 of 1915, and the D.I fighter of 1918, were followed in 1919 by the first all-metal transport aircraft, the Junkers F.13 made of Duralumin as the D.I had been; 300 were built, along with the first four-engine, all-metal passenger aircraft, the sole Zeppelin-Staaken E-4/20. Commercial aircraft development during the 1920s and 1930s focused on monoplane designs using Radial engines. Some were produced as single copies or in small quantity such as the Spirit of St. Louis flown across the Atlantic by Charles Lindbergh in 1927. William Stout designed the all-metal Ford Trimotors in 1926.
The Hall XFH naval fighter prototype, flown in 1929, was the first aircraft with a riveted metal fuselage: an aluminium skin over steel tubing. Hall also pioneered flush rivets and butt joints between skin panels in the Hall PH flying boat, which also flew in 1929. Based on the Italian Savoia-Marchetti S.56, the 1931 Budd BB-1 Pioneer experimental flying boat was constructed of corrosion-resistant stainless steel assembled with newly developed spot welding by U.S. railcar maker Budd Company.
The original Junkers corrugated duralumin-covered airframe philosophy culminated in the 1932-origin Junkers Ju 52 trimotor airliner, used throughout World War II by the Nazi German Luftwaffe for transport and paratroop needs. Andrei Tupolev, in Joseph Stalin's Soviet Union, designed a series of all-metal aircraft of steadily increasing size, culminating in the largest aircraft of its era, the eight-engined Tupolev ANT-20, in 1934, and Donald Douglas' firm developed the iconic Douglas DC-3 twin-engined airliner in 1936. They were among the most successful designs to emerge from the era through the use of all-metal airframes.
In 1937, the Lockheed XC-35 was specifically constructed with cabin pressurization to undergo extensive high-altitude flight tests, paving the way for the Boeing 307 Stratoliner, which would be the first aircraft with a pressurized cabin to enter commercial service.
Second World War
During World War II, military needs again dominated airframe designs. Among the best known were the US C-47 Skytrain, B-17 Flying Fortress, B-25 Mitchell and P-38 Lightning, and British Vickers Wellington that used a geodesic construction method, and Avro Lancaster, all revamps of original designs from the 1930s. The first jets were produced during the war but not made in large quantity.
Due to wartime scarcity of aluminium, the de Havilland Mosquito fighter-bomber was built from wood—plywood facings bonded to a balsawood core and formed using molds to produce monocoque structures, leading to the development of metal-to-metal bonding used later for the de Havilland Comet and Fokker F27 and F28.
Postwar
Postwar commercial airframe design focused on airliners, on turboprop engines, and then on jet engines. The generally higher speeds and tensile stresses of turboprops and jets were major challenges. Newly developed aluminium alloys with copper, magnesium and zinc were critical to these designs.
Flown in 1952 and designed to cruise at Mach 2 where skin friction required its heat resistance, the Douglas X-3 Stiletto was the first titanium aircraft but it was underpowered and barely supersonic; the Mach 3.2 Lockheed A-12 and SR-71 were also mainly titanium, as was the cancelled Boeing 2707 Mach 2.7 supersonic transport.
Because heat-resistant titanium is hard to weld and difficult to work with, welded nickel steel was used for the Mach 2.8 Mikoyan-Gurevich MiG-25 fighter, first flown in 1964; and the Mach 3.1 North American XB-70 Valkyrie used brazed stainless steel honeycomb panels and titanium but was cancelled by the time it flew in 1964.
A computer-aided design system was developed in 1969 for the McDonnell Douglas F-15 Eagle, which first flew in 1974 alongside the Grumman F-14 Tomcat; both used boron fiber composites in the tails. Less expensive carbon-fiber-reinforced polymers were used for wing skins on the McDonnell Douglas AV-8B Harrier II, F/A-18 Hornet and Northrop Grumman B-2 Spirit.
Modern era
The vertical stabilizer of the Airbus A310-300, first flown in 1985, was the first carbon-fiber primary structure used in a commercial aircraft; composites are increasingly used since in Airbus airliners: the horizontal stabilizer of the A320 in 1987 and A330/A340 in 1994, and the center wing-box and aft fuselage of the A380 in 2005.
The Cirrus SR20, type certificated in 1998, was the first widely produced general aviation aircraft manufactured with all-composite construction, followed by several other light aircraft in the 2000s.
The Boeing 787, first flown in 2009, was the first commercial aircraft with 50% of its structure weight made of carbon-fiber composites, along with 20% aluminium and 15% titanium: the material allows for a lower-drag, higher wing aspect ratio and higher cabin pressurization; the competing Airbus A350, flown in 2013, is 53% carbon-fiber by structure weight. It has a one-piece carbon fiber fuselage, said to replace "1,200 sheets of aluminium and 40,000 rivets."
The 2013 Bombardier CSeries have a dry-fiber resin transfer infusion wing with a lightweight aluminium-lithium alloy fuselage for damage resistance and repairability, a combination which could be used for future narrow-body aircraft. In 2016, the Cirrus Vision SF50 became the first certified light jet made entirely from carbon-fiber composites.
In February 2017, Airbus installed a 3D printing machine for titanium aircraft structural parts using electron beam additive manufacturing from Sciaky, Inc.
Safety
Airframe production has become an exacting process. Manufacturers operate under strict quality control and government regulations. Departures from established standards become objects of major concern.
A landmark in aeronautical design, the world's first jet airliner, the de Havilland Comet, first flew in 1949. Early models suffered from catastrophic airframe metal fatigue, causing a series of widely publicised accidents. The Royal Aircraft Establishment investigation at Farnborough Airport founded the science of aircraft crash reconstruction. After 3000 pressurisation cycles in a specially constructed pressure chamber, airframe failure was found to be due to stress concentration, a consequence of the square shaped windows. The windows had been engineered to be glued and riveted, but had been punch riveted only. Unlike drill riveting, the imperfect nature of the hole created by punch riveting may cause the start of fatigue cracks around the rivet.
The Lockheed L-188 Electra turboprop, first flown in 1957, became a costly lesson in controlling oscillation and planning around metal fatigue. The 1959 crash of Braniff Flight 542 showed the difficulties that the airframe industry and its airline customers can experience when adopting new technology.
The incident bears comparison with the 2001 crash of American Airlines Flight 587, an Airbus A300 whose vertical stabilizer broke away from the fuselage shortly after takeoff; that accident called attention to operation, maintenance and design issues involving composite materials that are used in many recent airframes. The A300 had experienced other structural problems, but none of this magnitude.
Alloys for airframe components
As the twentieth century progressed, aluminum became an essential metal in aircraft. The cylinder block of the engine that powered the Wright brothers’ plane at Kitty Hawk in 1903 was a one-piece casting in an aluminum alloy containing 8% copper; aluminum propeller blades appeared as early as 1907; and aluminum covers, seats, cowlings, cast brackets, and similar parts were common by the beginning of the First World War. In 1916, L. Breguet designed a reconnaissance bomber that marked the initial use of aluminum in the working structure of an airplane. By war’s end, the Allies and Germany employed aluminum alloys for the structural framework of fuselage and wing assemblies.
The aircraft airframe has been the most demanding application for aluminum alloys; to chronicle the development of the high-strength alloys is also to record the development of airframes. Duralumin, the first high-strength, heat treatable aluminum alloy, was employed initially for the framework of rigid airships, by Germany and the Allies during World War I. Duralumin was an aluminum-copper-magnesium alloy; it was originated in Germany and developed in the United States as Alloy 17S-T (2017-T4). It was utilized primarily as sheet and plate.
Alloy 7075-T6 (70,000-psi yield strength), an Al-Zn-Mg-Cu alloy, was introduced in 1943. Since then, most aircraft structures have been specified in alloys of this type. The first aircraft designed in 7075-T6 was the Navy’s P2V patrol bomber. A higher-strength alloy in the same series, 7178-T6 (78,000-psi yield strength), was developed in 1951; it has not generally displaced 7075-T6, which has superior fracture toughness.
Alloy 7178-T6 is used primarily in structural members where performance is critical under compressive loading.
Alloy 7079-T6 was introduced in the United States in 1954. In forged sections over 3 in. thick, it provides higher strength and greater transverse ductility than 7075-T6. It now is available in sheet, plate, extrusions, and forgings.
Alloy X7080-T7, with higher resistance to stress corrosion than 7079-T6, is being developed for thick parts. Because it is relatively insensitive to quenching rate, good strengths with low quenching stresses can be produced in thick sections.
Cladding of aluminum alloys was developed initially to increase the corrosion resistance of 2017-T4 sheet and thus to reduce aluminum aircraft maintenance requirements. The coating on 2017 sheet - and later on 2024-T3 - consisted of commercial-purity aluminum metallurgically bonded to one or both surfaces of the sheet.
Electrolytic protection, present under wet or moist conditions, is based on the appreciably higher electrode potential of commercial-purity aluminum compared to alloy 2017 or 2024 in the T3 or T4 temper. When 7075-T6 and other Al-Zn-Mg-Cu alloys appeared, an aluminum-zinc cladding alloy 7072 was developed to provide a relative electrode potential sufficient to protect the new strong alloys.
However, the high-performance aircraft designed since 1945 have made extensive use of skin structures machined from thick plate and extrusions, precluding the use of alclad exterior skins. Maintenance requirements increased as a result, and these stimulated research and development programs seeking higher-strength alloys with improved resistance to corrosion without cladding.
Aluminum alloy castings traditionally have been used in nonstructural airplane hardware, such as pulley brackets, quadrants, doublers, clips and ducts. They also have been employed extensively in complex valve bodies of hydraulic control systems. The philosophy of some aircraft manufacturers still is to specify castings only in places where failure of the part cannot cause loss of the airplane. Redundancy in cable and hydraulic control systems permits the use of castings.
Casting technology has made great advances in the last decade. Time-honored alloys such as 355 and 356 have been modified to produce higher levels of strength and ductility. New alloys such as 354, A356, A357, 359 and Tens 50 were developed for premium-strength castings. The high strength is accompanied by enhanced structural integrity and performance reliability.
Electric resistance spot and seam welding are used to join secondary structures, such as fairings, engine cowls, and doublers, to bulkheads and skins. Difficulties in quality control have resulted in low utilization of electric resistance welding for primary structure.
Ultrasonic welding offers some economic and quality-control advantages for production joining, particularly for thin sheet. However, the method has not yet been developed extensively in the aerospace industry.
Adhesive bonding is a common method of joining in both primary and secondary structures. Its selection is dependent on the design philosophy of the aircraft manufacturer. It has proven satisfactory in attaching stiffeners, such as hat sections to sheet, and face sheets to honeycomb cores. Also, adhesive bonding has withstood adverse exposures such as sea-water immersion and atmospheres.
Fusion welded aluminum primary structures in airplanes are virtually nonexistent, because the high-strength alloys utilized have low weldability and low weld-joint efficiencies. Some of the alloys, such as 2024-T4, also have their corrosion resistance lowered in the heat-affected zone if left in the as-welded condition.
The improved welding processes and higher-strength weldable alloys developed during the past decade offer new possibilities for welded primary structures. For example, the weldability and strength of alloys 2219 and 7039, and the brazeability and strength of X7005, open new avenues for design and manufacture of aircraft structures.
Light aircraft
Light aircraft have airframes primarily of all-aluminum semi-monocoque construction; however, a few light planes have tubular truss load-carrying construction with fabric or aluminum skin, or both.
Aluminum skin is normally of the minimum practical thickness: 0.015 to 0.025 in. Although design strength requirements are relatively low, the skin needs moderately high yield strength and hardness to minimize ground damage from stones, debris, mechanics’ tools, and general handling. Other primary factors involved in selecting an alloy for this application are corrosion resistance, cost, and appearance. Alloys 6061-T6 and alclad 2024-T3 are the primary choices.
Skin sheet on light airplanes of recent design and construction generally is alclad 2024-T3. The internal structure comprises stringers, spars, bulkheads, chord members, and various attaching fittings made of aluminum extrusions, formed sheet, forgings, and castings.
The alloys most used for extruded members are 2024-T4 for sections less than 0.125 in. thick and for general application, and 2014-T6 for thicker, more highly stressed sections. Alloy 6061-T6 has considerable application for extrusions requiring thin sections and excellent corrosion resistance. Alloy 2014-T6 is the primary forging alloy, especially for landing gear and hydraulic cylinders. Alloy 6061-T6 and its forging counterpart 6151-T6 often are utilized in miscellaneous fittings for reasons of economy and increased corrosion performance, when the parts are not highly stressed.
Alloys 356-T6 and A356-T6 are the primary casting alloys employed for brackets, bellcranks, pulleys, and various fittings. Wheels are produced in these alloys as permanent mold or sand castings. Die castings in alloy A380 also are satisfactory for wheels for light aircraft.
For low-stressed structure in light aircraft, alloys 3003-H12, H14, and H16; 5052-O, H32, H34, and H36; and 6061-T4 and T6 are sometimes employed. These alloys are also primary selections for fuel, lubricating oil, and hydraulic oil tanks, piping, and instrument tubing and brackets, especially where welding is required. Alloys 3003, 6061, and 6951 are utilized extensively in brazed heat exchangers and hydraulic accessories. Recently developed alloys, such as 5086, 5454, 5456, 6070, and the new weldable aluminum-magnesium-zinc alloys, offer strength advantages over those previously mentioned.
Sheet assembly of light aircraft is accomplished predominantly with rivets of alloys 2017-T4, 2117-T4, or 2024-T4. Self-tapping sheet metal screws are available in aluminum alloys, but cadmium-plated steel screws are employed more commonly to obtain higher shear strength and driveability. Alloy 2024-T4 with an anodic coating is standard for aluminum screws, bolts, and nuts made to military specifications. Alloy 6262-T9, however, is superior for nuts, because of its virtual immunity to stress-corrosion cracking.
See also
Longeron
Former
Chord (aeronautics)
Aircraft fairing
Vertical stabilizer
References
Further reading
Aircraft components
Structural system | Airframe | [
"Technology",
"Engineering"
] | 4,011 | [
"Structural system",
"Structural engineering",
"Building engineering"
] |
463,721 | https://en.wikipedia.org/wiki/Space%20group | In mathematics, physics and chemistry, a space group is the symmetry group of a repeating pattern in space, usually in three dimensions. The elements of a space group (its symmetry operations) are the rigid transformations of the pattern that leave it unchanged. In three dimensions, space groups are classified into 219 distinct types, or 230 types if chiral copies are considered distinct. Space groups are discrete cocompact groups of isometries of an oriented Euclidean space in any number of dimensions. In dimensions other than 3, they are sometimes called Bieberbach groups.
In crystallography, space groups are also called the crystallographic or Fedorov groups, and represent a description of the symmetry of the crystal. A definitive source regarding 3-dimensional space groups is the International Tables for Crystallography.
History
Space groups in 2 dimensions are the 17 wallpaper groups which have been known for several centuries, though the proof that the list was complete was only given in 1891, after the much more difficult classification of space groups had largely been completed.
In 1879 the German mathematician Leonhard Sohncke listed the 65 space groups (called Sohncke groups) whose elements preserve the chirality. More accurately, he listed 66 groups, but both the Russian mathematician and crystallographer Evgraf Fedorov and the German mathematician Arthur Moritz Schoenflies noticed that two of them were really the same. The space groups in three dimensions were first enumerated in 1891 by Fedorov (whose list had two omissions (I3d and Fdd2) and one duplication (Fmm2)), and shortly afterwards in 1891 were independently enumerated by Schönflies (whose list had four omissions (I3d, Pc, Cc, ?) and one duplication (P21m)). The correct list of 230 space groups was found by 1892 during correspondence between Fedorov and Schönflies. Barlow later enumerated the groups with a different method, but omitted four groups (Fdd2, I2d, P21d, and P21c) even though he already had the correct list of 230 groups from Fedorov and Schönflies; the common claim that Barlow was unaware of their work is incorrect. The history of the discovery of the space groups has been described in detail in the literature.
Elements
The space groups in three dimensions are made from combinations of the 32 crystallographic point groups with the 14 Bravais lattices, each of the latter belonging to one of 7 lattice systems. What this means is that the action of any element of a given space group can be expressed as the action of an element of the appropriate point group followed optionally by a translation. A space group is thus some combination of the translational symmetry of a unit cell (including lattice centering), the point group symmetry operations of reflection, rotation and improper rotation (also called rotoinversion), and the screw axis and glide plane symmetry operations. The combination of all these symmetry operations results in a total of 230 different space groups describing all possible crystal symmetries.
The number of replicates of the asymmetric unit in a unit cell is thus the number of lattice points in the cell times the order of the point group. This ranges from 1 in the case of space group P1 to 192 for a space group like Fm3m, the NaCl structure (the face-centered cell contains 4 lattice points and the point group m3m has order 48, so 4 × 48 = 192).
Elements fixing a point
The elements of the space group fixing a point of space are the identity element, reflections, rotations and improper rotations, including inversion points.
Translations
The translations form a normal abelian subgroup of rank 3, called the Bravais lattice (so named after French physicist Auguste Bravais). There are 14 possible types of Bravais lattice. The quotient of the space group by the Bravais lattice is a finite group which is one of the 32 possible point groups.
Glide planes
A glide plane is a reflection in a plane, followed by a translation parallel with that plane. This is noted by a, b, or c, depending on which axis the glide is along. There is also the n glide, which is a glide along the half of a diagonal of a face, and the d glide, which is a fourth of the way along either a face or space diagonal of the unit cell. The latter is called the diamond glide plane as it features in the diamond structure. In 17 space groups, due to the centering of the cell, the glides occur in two perpendicular directions simultaneously, i.e. the same glide plane can be called b or c, a or b, a or c. For example, group Abm2 could be also called Acm2, group Ccca could be called Cccb. In 1992, it was suggested to use symbol e for such planes. The symbols for five space groups have been modified:
Screw axes
A screw axis is a rotation about an axis, followed by a translation along the direction of the axis. These are noted by a number, n, to describe the degree of rotation, where the number is how many operations must be applied to complete a full rotation (e.g., 3 would mean a rotation one third of the way around the axis each time). The degree of translation is then added as a subscript showing how far along the axis the translation is, as a portion of the parallel lattice vector. So, 21 is a twofold rotation followed by a translation of 1/2 of the lattice vector.
General formula
The general formula for the action of an element of a space group is
y = M.x + D
where M is its matrix, D is its vector, and where the element transforms point x into point y. In general, D = D(lattice) + D(M), where D(M) is a unique function of M that is zero for M being the identity. The matrices M form a point group that is a basis of the space group; the lattice must be symmetric under that point group, but the crystal structure itself may not be symmetric under that point group as applied to any particular point (that is, without a translation). For example, the diamond cubic structure does not have any point where the cubic point group applies.
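As a concrete illustration of the formula, the sketch below applies one such operation, a twofold screw axis along c (the 21 operation described below: a twofold rotation about z combined with a translation of half a lattice vector), to a point in fractional coordinates. The particular matrix, vector and test point are illustrative assumptions, not values taken from this article.

    import numpy as np

    # Illustrative space-group operation y = M.x + D, in fractional coordinates.
    # Assumed example: a 2_1 screw axis along c (twofold rotation about z plus
    # a translation of half the c lattice vector).
    M = np.array([[-1,  0, 0],
                  [ 0, -1, 0],
                  [ 0,  0, 1]])        # point-group part
    D = np.array([0.0, 0.0, 0.5])      # translation part

    def apply_op(x):
        # Apply the operation and wrap the result back into the unit cell.
        return (M @ np.asarray(x) + D) % 1.0

    x = [0.10, 0.25, 0.40]             # arbitrary test point
    print(apply_op(x))                 # -> [0.9, 0.75, 0.9]
    print(apply_op(apply_op(x)))       # -> [0.1, 0.25, 0.4]: applying it twice gives x back, modulo a lattice translation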
The lattice dimension can be less than the overall dimension, resulting in a "subperiodic" space group. For (overall dimension, lattice dimension):
(1,1): One-dimensional line groups
(2,1): Two-dimensional line groups: frieze groups
(2,2): Wallpaper groups
(3,1): Three-dimensional line groups; with the 3D crystallographic point groups, the rod groups
(3,2): Layer groups
(3,3): The space groups discussed in this article
Chirality
The 65 "Sohncke" space groups, not containing any mirrors, inversion points, improper rotations or glide planes, yield chiral crystals, not identical to their mirror image; whereas space groups that do include at least one of those give achiral crystals. Achiral molecules sometimes form chiral crystals, but chiral molecules always form chiral crystals, in one of the space groups that permit this.
Among the 65 Sohncke groups are 22 that come in 11 enantiomorphic pairs.
Combinations
Only certain combinations of symmetry elements are possible in a space group. Translations are always present, and the space group P1 has only translations and the identity element. The presence of mirrors implies glide planes as well, and the presence of rotation axes implies screw axes as well, but the converses are not true. An inversion and a mirror implies two-fold screw axes, and so on.
Notation
There are at least ten methods of naming space groups. Some of these methods can assign several different names to the same space group, so altogether there are many thousands of different names.
Number The International Union of Crystallography publishes tables of all space group types, and assigns each a unique number from 1 to 230. The numbering is arbitrary, except that groups with the same crystal system or point group are given consecutive numbers.
Hall notation
Space group notation with an explicit origin. Rotation, translation and axis-direction symbols are clearly separated and inversion centers are explicitly defined. The construction and format of the notation make it particularly suited to computer generation of symmetry information. For example, group number 3 has three Hall symbols: P 2y (P 1 2 1), P 2 (P 1 1 2), P 2x (P 2 1 1).
Schönflies notation The space groups with given point group are numbered by 1, 2, 3, ... (in the same order as their international number) and this number is added as a superscript to the Schönflies symbol for the point group. For example, groups numbers 3 to 5 whose point group is C2 have Schönflies symbols C2¹, C2², C2³.
Coxeter notation Spatial and point symmetry groups, represented as modifications of the pure reflectional Coxeter groups.
Geometric notation
A geometric algebra notation.
Classification systems
There are (at least) 10 different ways to classify space groups into classes. The relations between some of these are described in the following table. Each classification system is a refinement of the ones below it. To understand an explanation given here it may be necessary to understand the next one down.
gave another classification of the space groups, called a fibrifold notation, according to the fibrifold structures on the corresponding orbifold. They divided the 219 affine space groups into reducible and irreducible groups. The reducible groups fall into 17 classes corresponding to the 17 wallpaper groups, and the remaining 35 irreducible groups are the same as the cubic groups and are classified separately.
In other dimensions
Bieberbach's theorems
In n dimensions, an affine space group, or Bieberbach group, is a discrete subgroup of isometries of n-dimensional Euclidean space with a compact fundamental domain. Bieberbach proved that the subgroup of translations of any such group contains n linearly independent translations, and is a free abelian subgroup of finite index, and is also the unique maximal normal abelian subgroup. He also showed that in any dimension n there are only a finite number of possibilities for the isomorphism class of the underlying group of a space group, and moreover the action of the group on Euclidean space is unique up to conjugation by affine transformations. This answers part of Hilbert's eighteenth problem. Zassenhaus showed that conversely any group that is the extension of Zn by a finite group acting faithfully is an affine space group. Combining these results shows that classifying space groups in n dimensions up to conjugation by affine transformations is essentially the same as classifying isomorphism classes for groups that are extensions of Zn by a finite group acting faithfully.
It is essential in Bieberbach's theorems to assume that the group acts as isometries; the theorems do not generalize to discrete cocompact groups of affine transformations of Euclidean space. A counter-example is given by the 3-dimensional Heisenberg group of the integers acting by translations on the Heisenberg group of the reals, identified with 3-dimensional Euclidean space. This is a discrete cocompact group of affine transformations of space, but does not contain a subgroup Z3.
Classification in small dimensions
This table gives the number of space group types in small dimensions, including the numbers of various classes of space group. The numbers of enantiomorphic pairs are given in parentheses.
Magnetic groups and time reversal
In addition to crystallographic space groups there are also magnetic space groups (also called two-color (black and white) crystallographic groups or Shubnikov groups). These symmetries contain an element known as time reversal. They treat time as an additional dimension, and the group elements can include time reversal as reflection in it. They are of importance in magnetic structures that contain ordered unpaired spins, i.e. ferro-, ferri- or antiferromagnetic structures as studied by neutron diffraction. The time reversal element flips a magnetic spin while leaving all other structure the same and it can be combined with a number of other symmetry elements. Including time reversal there are 1651 magnetic space groups in 3D . It has also been possible to construct magnetic versions for other overall and lattice dimensions (Daniel Litvin's papers, , ). Frieze groups are magnetic 1D line groups and layer groups are magnetic wallpaper groups, and the axial 3D point groups are magnetic 2D point groups. Number of original and magnetic groups by (overall, lattice) dimension:
Table of space groups in 2 dimensions (wallpaper groups)
Table of the wallpaper groups using the classification of the 2-dimensional space groups:
For each geometric class, the possible arithmetic classes are
None: no reflection lines
Along: reflection lines along lattice directions
Between: reflection lines halfway in between lattice directions
Both: reflection lines both along and between lattice directions
Table of space groups in 3 dimensions
Note: An e plane is a double glide plane, one having glides in two different directions. They are found in seven orthorhombic, five tetragonal and five cubic space groups, all with centered lattice. The use of the symbol e became official with .
The lattice system can be found as follows. If the crystal system is not trigonal then the lattice system is of the same type. If the crystal system is trigonal, then the lattice system is hexagonal unless the space group is one of the seven in the rhombohedral lattice system consisting of the 7 trigonal space groups in the table above whose name begins with R. (The term rhombohedral system is also sometimes used as an alternative name for the whole trigonal system.) The hexagonal lattice system is larger than the hexagonal crystal system, and consists of the hexagonal crystal system together with the 18 groups of the trigonal crystal system other than the seven whose names begin with R.
The Bravais lattice of the space group is determined by the lattice system together with the initial letter of its name, which for the non-rhombohedral groups is P, I, F, A or C, standing for the primitive, body centered, face centered, A-face centered or C-face centered lattices. There are seven rhombohedral space groups, with initial letter R.
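This rule is essentially a lookup on the leading letter of the symbol. The following sketch is an illustrative helper (not part of any standard crystallographic library) mapping that letter to the centering type.

    # Illustrative mapping from the leading letter of a Hermann-Mauguin
    # space-group symbol to its lattice centering.
    CENTERING = {
        "P": "primitive",
        "I": "body centered",
        "F": "face centered",
        "A": "A-face centered",
        "C": "C-face centered",
        "R": "rhombohedral",
    }

    def centering_of(symbol):
        # e.g. centering_of("Fm3m") -> "face centered"
        return CENTERING[symbol[0].upper()]

    print(centering_of("Fm3m"))    # face centered (the NaCl structure mentioned earlier)
    print(centering_of("P21/c"))   # primitive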
Derivation of the crystal class from the space group
Leave out the Bravais type
Convert all symmetry elements with translational components into their respective symmetry elements without translation symmetry (Glide planes are converted into simple mirror planes; Screw axes are converted into simple axes of rotation)
Axes of rotation, rotoinversion axes and mirror planes remain unchanged.
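A minimal sketch of the three steps above, assuming the short Hermann-Mauguin symbol is supplied as a plain string with screw-axis subscripts marked by an underscore (e.g. "P2_1/c"); the substitutions cover only the common glide letters and are not an exhaustive treatment.

    import re

    def point_group_from_space_group(symbol):
        s = symbol.replace(" ", "")
        s = s[1:]                          # 1. leave out the Bravais (centering) letter
        s = re.sub(r"_[0-9]", "", s)       # 2a. screw axes -> plain rotation axes (2_1 -> 2)
        s = re.sub(r"[abcden]", "m", s)    # 2b. glide planes -> mirror planes
        return s                           # 3. rotations and rotoinversions are unchanged

    print(point_group_from_space_group("P2_1/c"))   # 2/m
    print(point_group_from_space_group("Fdd2"))     # mm2
    print(point_group_from_space_group("Ia-3d"))    # m-3m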
References
English translation:
External links
International Union of Crystallography
Point Groups and Bravais Lattices
Bilbao Crystallographic Server
Space Group Info (old)
Space Group Info (new)
Crystal Lattice Structures: Index by Space Group
Full list of 230 crystallographic space groups
Interactive 3D visualization of all 230 crystallographic space groups
The Geometry Center: 2.1 Formulas for Symmetries in Cartesian Coordinates (two dimensions)
The Geometry Center: 10.1 Formulas for Symmetries in Cartesian Coordinates (three dimensions)
Symmetry
Crystallography
Finite groups
Discrete groups
Molecular geometry | Space group | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 3,113 | [
"Symmetry",
"Mathematical structures",
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Finite groups",
"Materials science",
"Crystallography",
"Algebraic structures",
"Condensed matter physics",
"Geometry",
"Matter"
] |
464,331 | https://en.wikipedia.org/wiki/Y-intercept | In analytic geometry, using the common convention that the horizontal axis represents a variable x and the vertical axis represents a variable y, a y-intercept or vertical intercept is a point where the graph of a function or relation intersects the y-axis of the coordinate system. As such, these points satisfy x = 0.
Using equations
If the curve in question is given as y = f(x), the y-coordinate of the y-intercept is found by calculating f(0). Functions which are undefined at x = 0 have no y-intercept.
If the function is linear and is expressed in slope-intercept form as y = ax + b, the constant term b is the y-coordinate of the y-intercept.
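For instance (an illustrative example, not taken from this article), the line y = 3x + 4 has y-intercept (0, 4), since substituting x = 0 leaves only the constant term:

    f = lambda x: 3*x + 4    # an example line in slope-intercept form
    print(f(0))              # 4, so the y-intercept is the point (0, 4)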
Multiple y-intercepts
Some 2-dimensional mathematical relationships such as circles, ellipses, and hyperbolas can have more than one y-intercept. Because functions associate x-values to no more than one y-value as part of their definition, they can have at most one y-intercept.
x-intercepts
Analogously, an x-intercept is a point where the graph of a function or relation intersects with the x-axis. As such, these points satisfy y = 0. The zeros, or roots, of such a function or relation are the x-coordinates of these x-intercepts.
Functions of the form y = f(x) have at most one y-intercept, but may contain multiple x-intercepts. The x-intercepts of functions, if any exist, are often more difficult to locate than the y-intercept, as finding the y-intercept involves simply evaluating the function at x = 0.
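The contrast can be sketched in a few lines: the y-intercept is a single evaluation at x = 0, while an x-intercept generally requires root finding. The function and search interval below are illustrative assumptions.

    def f(x):
        return x**3 - 2*x - 5           # an example curve y = f(x)

    # y-intercept: evaluate at x = 0
    print((0, f(0)))                    # (0, -5)

    # x-intercept: locate a root by bisection on [2, 3], where f changes sign
    lo, hi = 2.0, 3.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    print(((lo + hi) / 2, 0))           # approximately (2.0945515, 0)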
In higher dimensions
The notion may be extended for 3-dimensional space and higher dimensions, as well as for other coordinate axes, possibly with other names. For example, one may speak of the I-intercept of the current–voltage characteristic of, say, a diode. (In electrical engineering, I is the symbol used for electric current.)
See also
Regression intercept
References
Elementary mathematics
Functions and mappings | Y-intercept | [
"Mathematics"
] | 363 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Elementary mathematics",
"Mathematical relations"
] |
465,001 | https://en.wikipedia.org/wiki/700%20%28number%29 | 700 (seven hundred) is the natural number following 699 and preceding 701.
It is the sum of four consecutive primes (167 + 173 + 179 + 181), the perimeter of a Pythagorean triangle (75 + 308 + 317) and a Harshad number.
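Each of these properties is easy to verify directly; the snippet below is an illustrative check, not part of the article's sources.

    n = 700

    # Harshad number: divisible by its digit sum (7 + 0 + 0 = 7)
    print(n % sum(int(d) for d in str(n)) == 0)                # True

    # Sum of four consecutive primes
    print(167 + 173 + 179 + 181 == n)                          # True

    # Perimeter of the Pythagorean triangle with sides 75, 308, 317
    print(75**2 + 308**2 == 317**2 and 75 + 308 + 317 == n)    # True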
Integers from 701 to 799
Nearly all of the palindromic integers between 700 and 800 (i.e. nearly all numbers in this range that have both the hundreds and units digit be 7) are used as model numbers for Boeing Commercial Airplanes.
700s
701 = prime number, sum of three consecutive primes (229 + 233 + 239), Chen prime, Eisenstein prime with no imaginary part
702 = 2 × 3³ × 13, pronic number, nontotient, Harshad number
703 = 19 × 37, the 37th triangular number, a hexagonal number, smallest number requiring 73 fifth powers for Waring representation, Kaprekar number, area code for Northern Virginia along with 571, a number commonly found in the formula for body mass index
704 = 2⁶ × 11, Harshad number, lazy caterer number, area code for the Charlotte, NC area.
705 = 3 × 5 × 47, sphenic number, smallest Bruckman-Lucas pseudoprime
706 = 2 × 353, nontotient, Smith number
707 = 7 × 101, sum of five consecutive primes (131 + 137 + 139 + 149 + 151), palindromic number, number of lattice paths from (0,0) to (5,5) with steps (0,1), (1,0) and, when on the diagonal, (1,1).
708 = 2² × 3 × 59, number of partitions of 28 that do not contain 1 as a part
709 = prime number; happy number. It is the seventh in the series 2, 3, 5, 11, 31, 127, 709 where each number is the nth prime with n being the number preceding it in the series, therefore, it is a prime index number.
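The chain described for 709 can be reproduced with a short script; the trial-division helpers below are illustrative and chosen for self-containment rather than speed.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def nth_prime(n):
        count, candidate = 0, 1
        while count < n:
            candidate += 1
            if is_prime(candidate):
                count += 1
        return candidate

    term, chain = 1, []
    for _ in range(7):
        term = nth_prime(term)   # each term is the k-th prime, where k is the previous term
        chain.append(term)
    print(chain)                 # [2, 3, 5, 11, 31, 127, 709]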
710s
710 = 2 × 5 × 71, sphenic number, nontotient, number of forests with 11 vertices
711 = 3² × 79, Harshad number, number of planar Berge perfect graphs on 7 nodes. Also the phone number of Telecommunications Relay Service, commonly used by the deaf and hard-of-hearing.
712 = 2³ × 89, refactorable number, sum of the first twenty-one primes, totient sum for first 48 integers. It is the largest known number such that it and its 8th power (66,045,000,696,445,844,586,496) have no common digits.
713 = 23 × 31, Blum integer, main area code for Houston, TX. In Judaism there are 713 letters on a Mezuzah scroll.
714 = 2 × 3 × 7 × 17, sum of twelve consecutive primes (37 + 41 + 43 + 47 + 53 + 59 + 61 + 67 + 71 + 73 + 79 + 83), nontotient, balanced number, member of Ruth–Aaron pair (either definition); area code for Orange County, California.
Flight 714 to Sidney is a Tintin graphic novel.
714 is the badge number of Sergeant Joe Friday.
715 = 5 × 11 × 13, sphenic number, pentagonal number, pentatope number (a binomial coefficient), Harshad number, member of Ruth-Aaron pair (either definition)
The product of 714 and 715 is the product of the first 7 prime numbers (2, 3, 5, 7, 11, 13, and 17)
716 = 2² × 179, area code for Buffalo, NY
717 = 3 × 239, palindromic number
718 = 2 × 359, area code for Brooklyn, NY and Bronx, NY
719 = prime number, factorial prime (6! − 1), Sophie Germain prime, safe prime, sum of seven consecutive primes (89 + 97 + 101 + 103 + 107 + 109 + 113), Chen prime, Eisenstein prime with no imaginary part
720s
720 = 2⁴ × 3² × 5.
6 factorial, highly composite number, Harshad number in every base from binary to decimal, highly totient number.
two round angles (= 2 × 360).
five gross (= 500 duodecimal, 5 × 144).
241-gonal number.
721 = 7 × 103, sum of nine consecutive primes (61 + 67 + 71 + 73 + 79 + 83 + 89 + 97 + 101), centered hexagonal number, smallest number that is the difference of two positive cubes in two ways,
722 = 2 × 19², nontotient, number of odd parts in all partitions of 15, area of a square with diagonal 38
G.722 is a freely available file format for audio file compression. The files are often named with the extension "722".
723 = 3 × 241, side length of an almost-equilateral Heronian triangle
724 = 2² × 181, sum of four consecutive primes (173 + 179 + 181 + 191), sum of six consecutive primes (107 + 109 + 113 + 127 + 131 + 137), nontotient, side length of an almost-equilateral Heronian triangle, the number of n-queens problem solutions for n = 10,
725 = 5² × 29, side length of an almost-equilateral Heronian triangle
726 = 2 × 3 × 11², pentagonal pyramidal number
727 = prime number, palindromic prime, lucky prime,
728 = 2³ × 7 × 13, nontotient, Smith number, cabtaxi number, 728!! - 1 is prime, number of cubes of edge length 1 required to make a hollow cube of edge length 12, 728⁶⁴ + 1 is prime, number of connected graphs on 5 labelled vertices
729 = 27² = 9³ = 3⁶.
the square of 27, the cube of 9, and the sixth power of three, and because of these properties, a perfect totient number.
centered octagonal number, Smith number
the number of times a philosopher's pleasure is greater than a tyrant's pleasure according to Plato in the Republic
the largest three-digit cube. (9 x 9 x 9)
the only three-digit sixth power. (3 x 3 x 3 x 3 x 3 x 3)
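A perfect totient number equals the sum of its iterated totients down to 1; the sketch below (an illustrative check, with a straightforward totient implementation) confirms this for 729.

    def totient(n):
        result, p, m = n, 2, n
        while p * p <= m:
            if m % p == 0:
                while m % p == 0:
                    m //= p
                result -= result // p
            p += 1
        if m > 1:
            result -= result // m
        return result

    n, total = 729, 0
    while n > 1:
        n = totient(n)
        total += n
    print(total)    # 729: the iterated totients 486 + 162 + 54 + 18 + 6 + 2 + 1 sum to 729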
730s
730 = 2 × 5 × 73, sphenic number, nontotient, Harshad number, number of generalized weak orders on 5 points
731 = 17 × 43, sum of three consecutive primes (239 + 241 + 251), number of Euler trees with total weight 7
732 = 2² × 3 × 61, sum of eight consecutive primes (73 + 79 + 83 + 89 + 97 + 101 + 103 + 107), sum of ten consecutive primes (53 + 59 + 61 + 67 + 71 + 73 + 79 + 83 + 89 + 97), Harshad number, number of collections of subsets of {1, 2, 3, 4} that are closed under union and intersection
733 = prime number, emirp, balanced prime, permutable prime, sum of five consecutive primes (137 + 139 + 149 + 151 + 157)
734 = 2 × 367, nontotient, number of traceable graphs on 7 nodes
735 = 3 × 5 × 7², Harshad number, Zuckerman number, smallest number that uses the same digits as its distinct prime factors
736 = 2⁵ × 23, centered heptagonal number, happy number, nice Friedman number since 736 = 7 + 3⁶, Harshad number
737 = 11 × 67, palindromic number, blum integer.
738 = 2 × 3² × 41, Harshad number.
739 = prime number, strictly non-palindromic number, lucky prime, happy number, prime index prime
740s
740 = 2² × 5 × 37, nontotient, number of connected squarefree graphs on 9 nodes
741 = 3 × 13 × 19, sphenic number, 38th triangular number
742 = 2 × 7 × 53, sphenic number, decagonal number, icosahedral number. It is the smallest number that is one more than triple its reverse. Lazy caterer number. Number of partitions of 30 into divisors of 30.
743 = prime number, Sophie Germain prime, Chen prime, Eisenstein prime with no imaginary part
744 = 2³ × 3 × 31, sum of four consecutive primes (179 + 181 + 191 + 193). It is the coefficient of the first degree term of the expansion of Klein's j-invariant, and the zeroth degree term of the Laurent series of the J-invariant. Furthermore, 744 = 3 × 248 where 248 is the dimension of the Lie algebra E8.
745 = 5 × 149 = 2⁴ + 3⁶, number of non-connected simple labeled graphs covering 6 vertices
746 = 2 × 373 = 1⁵ + 2⁴ + 3⁶ = 1⁷ + 2⁴ + 3⁶, nontotient, number of non-normal semi-magic squares with sum of entries equal to 6
747 = 3² × 83, palindromic number.
748 = 2² × 11 × 17, nontotient, happy number, primitive abundant number
749 = 7 × 107, sum of three consecutive primes (241 + 251 + 257), blum integer
750s
750 = 2 × 3 × 5³, enneagonal number.
751 = prime number, Chen prime, emirp
752 = 2⁴ × 47, nontotient, number of partitions of 11 into parts of 2 kinds
753 = 3 × 251, blum integer
754 = 2 × 13 × 29, sphenic number, nontotient, totient sum for first 49 integers, number of different ways to divide a 10 × 10 square into sub-squares
755 = 5 × 151, number of vertices in a regular drawing of the complete bipartite graph K9,9.
756 = 2² × 3³ × 7, sum of six consecutive primes (109 + 113 + 127 + 131 + 137 + 139), pronic number, Harshad number
757 = prime number, palindromic prime, sum of seven consecutive primes (97 + 101 + 103 + 107 + 109 + 113 + 127), happy number.
"The 757" is a local nickname for the Hampton Roads area in the U.S. state of Virginia, derived from the telephone area code that covers almost all of the metropolitan area
758 = 2 × 379, nontotient, prime number of measurement
759 = 3 × 11 × 23, sphenic number, sum of five consecutive primes (139 + 149 + 151 + 157 + 163), a q-Fibonacci number for q=3
760s
760 = 2³ × 5 × 19, centered triangular number, number of fixed heptominoes.
761 = prime number, emirp, Sophie Germain prime, Chen prime, Eisenstein prime with no imaginary part, centered square number
762 = 2 × 3 × 127, sphenic number, sum of four consecutive primes (181 + 191 + 193 + 197), nontotient, Smith number, admirable number, number of 1's in all partitions of 25 into odd parts, see also Six nines in pi
763 = 7 × 109, sum of nine consecutive primes (67 + 71 + 73 + 79 + 83 + 89 + 97 + 101 + 103), number of degree-8 permutations of order exactly 2
764 = 22 × 191, telephone number
765 = 3² × 5 × 17, octagonal pyramidal number
a Japanese word-play for Namco;
766 = 2 × 383, centered pentagonal number, nontotient, sum of twelve consecutive primes (41 + 43 + 47 + 53 + 59 + 61 + 67 + 71 + 73 + 79 + 83 + 89)
767 = 13 × 59, Thabit number (2⁸ × 3 − 1), palindromic number.
768 = 2⁸ × 3, sum of eight consecutive primes (79 + 83 + 89 + 97 + 101 + 103 + 107 + 109)
769 = prime number, Chen prime, lucky prime, Proth prime
770s
770 = 2 × 5 × 7 × 11, nontotient, Harshad number
is prime
Famous room party in New Orleans hotel room 770, giving the name to a well known science fiction fanzine called File 770
Holds special importance in the Chabad-Lubavitch Hasidic movement.
771 = 3 × 257, sum of three consecutive primes in arithmetic progression (251 + 257 + 263). Since 771 is the product of the distinct Fermat primes 3 and 257, a regular polygon with 771 sides can be constructed using compass and straightedge, and can be written in terms of square roots.
772 = 2² × 193, 772!!!!!!+1 is prime
773 = prime number, Eisenstein prime with no imaginary part, tetranacci number, prime index prime, sum of the number of cells that make up the convex, regular 4-polytopes
774 = 2 × 3² × 43, nontotient, totient sum for first 50 integers, Harshad number
775 = 5² × 31, member of the Mian–Chowla sequence
776 = 2³ × 97, refactorable number, number of compositions of 6 whose parts equal to q can be of q² kinds
777 = 3 × 7 × 37, sphenic number, Harshad number, palindromic number, 3333 in senary (base 6) counting.
The numbers 3 and 7 are considered both "perfect numbers" under Hebrew tradition.
778 = 2 × 389, nontotient, Smith number
779 = 19 × 41, highly cototient number
780s
780 = 2² × 3 × 5 × 13, sum of four consecutive primes in a quadruplet (191, 193, 197, and 199); sum of ten consecutive primes (59 + 61 + 67 + 71 + 73 + 79 + 83 + 89 + 97 + 101), 39th triangular number, a hexagonal number, Harshad number
780 and 990 are the fourth smallest pair of triangular numbers whose sum and difference (1770 and 210) are also triangular.
781 = 11 × 71. 781 is the sum of powers of 5/repdigit in base 5 (11111), Mertens function(781) = 0, lazy caterer number
782 = 2 × 17 × 23, sphenic number, nontotient, pentagonal number, Harshad number, also, 782 gear used by U.S. Marines
783 = 3³ × 29, heptagonal number
784 = 2⁴ × 7² = 28², the sum of the cubes of the first seven positive integers, happy number
785 = 5 × 157, Mertens function(785) = 0, number of series-reduced planted trees with 6 leaves of 2 colors
786 = 2 × 3 × 131, sphenic number, admirable number. See also its use in Muslim numerological symbolism.
787 = prime number, sum of five consecutive primes (149 + 151 + 157 + 163 + 167), Chen prime, lucky prime, palindromic prime.
788 = 2² × 197, nontotient, number of compositions of 12 into parts with distinct multiplicities
789 = 3 × 263, sum of three consecutive primes (257 + 263 + 269), Blum integer
790s
790 = 2 × 5 × 79, sphenic number, nontotient, a Harshad number in bases 2, 7, 14 and 16, an aspiring number, the aliquot sum of 1574.
791 = 7 × 113, centered tetrahedral number, sum of the first twenty-two primes, sum of seven consecutive primes (101 + 103 + 107 + 109 + 113 + 127 + 131)
792 = 2³ × 3² × 11, number of integer partitions of 21, a binomial coefficient, Harshad number, sum of the nontriangular numbers between successive triangular numbers
793 = 13 × 61, Mertens function(793) = 0, star number, happy number
794 = 2 × 397 = 1⁶ + 2⁶ + 3⁶, nontotient
795 = 3 × 5 × 53, sphenic number, Mertens function(795) = 0, number of permutations of length 7 with 2 consecutive ascending pairs
796 = 2² × 199, sum of six consecutive primes (113 + 127 + 131 + 137 + 139 + 149), Mertens function(796) = 0
797 = prime number, Chen prime, Eisenstein prime with no imaginary part, palindromic prime, two-sided prime, prime index prime.
798 = 2 × 3 × 7 × 19, Mertens function(798) = 0, nontotient, product of primes indexed by the prime exponents of 10!
799 = 17 × 47, smallest number with digit sum 25
References
Integers | 700 (number) | [
"Mathematics"
] | 3,658 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
465,128 | https://en.wikipedia.org/wiki/Chloroflexia | The Chloroflexia are a class of bacteria in the phylum Chloroflexota. Chloroflexia are typically filamentous, and can move about through bacterial gliding. It is named after the order Chloroflexales.
Etymology
The name "Chloroflexi" is a Neolatin plural of "Chloroflexus", which is the name of the first genus described. The noun is a combination of the Greek chloros (χλωρός) meaning "greenish-yellow" and the Latin flexus (of flecto) meaning "bent" to mean "a green bending". The name is not due to chlorine, an element confirmed as such in 1810 by Sir Humphry Davy and named after its pale green colour.
Taxonomy and molecular signatures
The Chloroflexia class is a group of deep branching photosynthetic bacteria (with the exception of Herpetosiphon and Kallotenue species) that currently consist of three orders: Chloroflexales, Herpetosiphonales, and Kallotenuales. The Herpetosiphonales and Kallotenuales each consist of a single genus within its own family, Herpetosiphonaceae (Herpetosiphon) and Kallotenuaceae (Kallotenue), respectively, whereas the Chloroflexales are more phylogenetically diverse.
Microscopic distinguishing characteristics
Members of the phylum Chloroflexota are monoderms and stain mostly Gram negative, whereas most bacteria species are diderms and stain Gram negative, with the Gram positive exceptions of the Bacillota (low GC Gram positives), Actinomycetota (high GC, Gram positives), and the Deinococcota (Gram positive, diderms with thick peptidoglycan).
Genetic distinguishing characteristics
Comparative genomic analysis has recently refined the taxonomy of the class Chloroflexia, dividing the Chloroflexales into the suborder Chloroflexineae consisting of the families Oscillachloridaceae and Chloroflexaceae, and the suborder Roseiflexineae containing family Roseiflexaceae. The revised taxonomy was based on the identification of a number of conserved signature indels (CSIs) which serve as highly reliable molecular markers of shared ancestry.
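A conserved signature indel is, in essence, an insertion or deletion found at a fixed, flanked position of a protein alignment in all members of one clade and absent from others. The toy sketch below uses hypothetical sequences and an assumed window position purely to illustrate the kind of check involved; it is not data from the cited analyses.

    # Toy illustration of screening an alignment for a clade-specific indel
    # (hypothetical sequences; "-" marks alignment gaps).
    alignment = {
        "clade_member_1": "MKLVAGDEINSPQRWT",   # 4-residue insert present
        "clade_member_2": "MKLVAGDEVNSPQRWT",
        "outgroup_1":     "MKLVAGDE----QRWT",   # insert absent
        "outgroup_2":     "MKLVAGDE----QRWT",
    }

    window = slice(8, 12)   # assumed position of the candidate indel
    for name, seq in alignment.items():
        status = "insert present" if "-" not in seq[window] else "insert absent"
        print(name, status)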
Physiological distinguishing characteristics
Additional support for the division of the Chloroflexales into two suborders is the observed differences in physiological characteristics where each suborder is characterized by distinct carotenoids, quinones, and fatty acid profiles that are consistently absent in the other suborder.
In addition to demarcating taxonomic ranks, CSIs may play a role in the unique characteristics of members within the clade: In particular, a four-amino-acid insert in the protein pyruvate flavodoxin/ferredoxin oxidoreductase, a protein which plays important roles in photosynthetic organisms, has been found exclusively among all members in the genus Chloroflexus, and is thought to play an important functional role.
Additional work has been done using CSIs to demarcate the phylogenetic position of Chloroflexia relative to other photosynthetic groups such as the Cyanobacteria. Chloroflexia shares a number of CSIs with Chlorobiota in the chlorophyll-synthesizing proteins. As the two lineages are not otherwise closely related, the interpretation is that the CSIs are the result of a horizontal gene transfer event between the two. Chloroflexia in turn acquired these proteins by another HGT from a "Clade C" marine cyanobacteria.
Phylogeny
Taxonomy
The currently accepted taxonomy is as follows:
Class Chloroflexia Gupta et al. 2013
Genus ?"Candidatus Chlorohelix" Tsuji et al. 2024
Genus ?"Dehalobium" Wu et al. 2002
Genus ?"Candidatus Lithoflexus" Saghai et al. 2020
Genus ?"Candidatus Sarcinithrix" Nierychlo et al. 2019
Order "Thermobaculales" Chuvochina et al. 2023
Family "Thermobaculaceae" Chuvochina et al. 2023
Genus "Thermobaculum" Botero et al. 2004
Order Kallotenuales Cole et al. 2013
Family Kallotenuaceae Cole et al. 2013
Genus Kallotenue Cole et al. 2013
Order Herpetosiphonales Gupta et al. 2013
Family Herpetosiphonaceae Gupta et al. 2013
Genus "Candidatus Anthektikosiphon" Ward, Fischer & McGlynn 2020
Genus Herpetosiphon Holt & Lewin 1968
Order Chloroflexales Gupta et al. 2013
Suborder Roseiflexineae Gupta et al. 2013
Family Roseiflexaceae Gupta et al. 2013 ["Kouleotrichaceae" Mehrshad et al. 2018]
Genus ?Heliothrix Pierson et al. 1986
Genus "Kouleothrix" Kohno et al. 2002
Genus "Candidatus Ribeiella" Petriglieri et al. 2023
Genus Roseiflexus Hanada et al. 2002
Suborder Chloroflexineae Gupta et al. 2013
Family Chloroflexaceae Gupta et al. 2013
Genus ?"Candidatus Chloranaerofilum" Thiel et al. 2016
Genus Chloroflexus Pierson & Castenholz 1974 ["Chlorocrinis" Ward et al. 1998]
Family Oscillochloridaceae Gupta et al. 2013
Genus ?Chloronema Dubinina & Gorlenko 1975
Genus "Candidatus Chloroploca" Gorlenko et al. 2014
Genus Oscillochloris Gorlenko & Pivovarova 1989
Genus "Candidatus Viridilinea" Grouzdev et al. 2018
See also
List of bacteria genera
List of bacterial orders
Green sulfur bacteria
References
Further reading
External links
Phototrophic bacteria | Chloroflexia | [
"Chemistry",
"Biology"
] | 1,309 | [
"Bacteria",
"Photosynthesis",
"Phototrophic bacteria"
] |
6,188,564 | https://en.wikipedia.org/wiki/Ocean%20acoustic%20tomography | Ocean acoustic tomography is a technique used to measure temperatures and currents over large regions of the ocean. On ocean basin scales, this technique is also known as acoustic thermometry. The technique relies on precisely measuring the time it takes sound signals to travel between two instruments, one an acoustic source and one a receiver, separated by ranges of hundreds to thousands of kilometres. If the locations of the instruments are known precisely, the measurement of time-of-flight can be used to infer the speed of sound, averaged over the acoustic path. Changes in the speed of sound are primarily caused by changes in the temperature of the ocean, hence the measurement of the travel times is equivalent to a measurement of temperature. A change in temperature of 1 °C corresponds to roughly a 4 m/s change in sound speed. An oceanographic experiment employing tomography typically uses several source-receiver pairs in a moored array that measures an area of ocean.
Motivation
Seawater is an electrical conductor, so the oceans are opaque to electromagnetic energy (e.g., light or radar). The oceans are fairly transparent to low-frequency acoustics, however. The oceans conduct sound very efficiently, particularly sound at low frequencies, i.e., less than a few hundred hertz. These properties motivated Walter Munk and Carl Wunsch to suggest "acoustic tomography" for ocean measurement in the late 1970s. The advantages of the acoustical approach to measuring temperature are twofold. First, large areas of the ocean's interior can be measured by remote sensing. Second, the technique naturally averages over the small scale fluctuations of temperature (i.e., noise) that dominate ocean variability.
From its beginning, the idea of observations of the ocean by acoustics was married to estimation of the ocean's state using modern numerical ocean models and the techniques assimilating data into numerical models. As the observational technique has matured, so too have the methods of data assimilation and the computing power required to perform those calculations.
Multipath arrivals and tomography
One of the intriguing aspects of tomography is that it exploits the fact that acoustic signals travel along a set of generally stable ray paths. From a single transmitted acoustic signal, this set of rays gives rise to multiple arrivals at the receiver, the travel time of each arrival corresponding to a particular ray path. The earliest arrivals correspond to the deeper-traveling rays, since these rays travel where sound speed is greatest. The ray paths are easily calculated using computers ("ray tracing"), and each ray path can generally be identified with a particular travel time. The multiple travel times measure the sound speed averaged over each of the multiple acoustic paths. These measurements make it possible to infer aspects of the structure of temperature or current variations as a function of depth. The solution for sound speed, hence temperature, from the acoustic travel times is an inverse problem.
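To make the inverse problem concrete, the following is a minimal, purely illustrative sketch (not from any published experiment): the layer geometry, ray-path lengths and travel-time anomalies are invented numbers, and the forward model is the usual small-perturbation linearisation of travel time about a reference sound speed.

```python
# Illustrative sketch: inverting multipath travel-time perturbations for
# layer-wise sound-speed perturbations with a least-squares fit.
# All numbers below are made up for demonstration only.
import numpy as np

# G[i, j] = length (km) of ray path i inside depth layer j
G = np.array([
    [120.0,  40.0, 10.0],   # deep-turning ray, mostly in layer 0
    [ 60.0,  90.0, 20.0],
    [ 30.0,  60.0, 80.0],   # shallow ray, mostly in layer 2
])

c0 = 1500.0                                # reference sound speed, m/s
dt = np.array([-0.012, -0.020, -0.031])    # measured travel-time anomalies, s

# Linearised forward model: dt_i ~= -sum_j G_ij * dc_j / c0**2  (G converted to metres)
A = -(G * 1000.0) / c0**2
dc, *_ = np.linalg.lstsq(A, dt, rcond=None)   # sound-speed perturbation per layer, m/s
dT = dc / 4.0                                  # rough temperature change, assuming ~4 m/s per degree C
print(dc, dT)
```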
The integrating property of long-range acoustic measurements
Ocean acoustic tomography integrates temperature variations over large distances, that is, the measured travel times result from the accumulated effects of all the temperature variations along the acoustic path, hence measurements by the technique are inherently averaging. This is an important, unique property, since the ubiquitous small-scale turbulent and internal-wave features of the ocean usually dominate the signals in measurements at single points. For example, measurements by thermometers (i.e., moored thermistors or Argo drifting floats) have to contend with this 1-2 °C noise, so that large numbers of instruments are required to obtain an accurate measure of average temperature. For measuring the average temperature of ocean basins, therefore, the acoustic measurement is quite cost effective. Tomographic measurements also average variability over depth as well, since the ray paths cycle throughout the water column.
Reciprocal tomography
"Reciprocal tomography" employs the simultaneous transmissions between two acoustic transceivers. A "transceiver" is an instrument incorporating both an acoustic source and a receiver. The slight differences in travel time between the reciprocally-traveling signals are used to measure ocean currents, since the reciprocal signals travel with and against the current. The average of these reciprocal travel times is the measure of temperature, with the small effects from ocean currents entirely removed. Ocean temperatures are inferred from the sum of reciprocal travel times, while the currents are inferred from the difference of reciprocal travel times. Generally, ocean currents (typically ) have a much smaller effect on travel times than sound speed variations (typically ), so "one-way" tomography measures temperature to good approximation.
Applications
In the ocean, large-scale temperature changes can occur over time intervals from minutes (internal waves) to decades (oceanic climate change). Tomography has been employed to measure variability over this wide range of temporal scales and over a wide range of spatial scales. Indeed, tomography has been contemplated as a measurement of ocean climate using transmissions over antipodal distances.
Tomography has come to be a valuable method of ocean observation, exploiting the characteristics of long-range acoustic propagation to obtain synoptic measurements of average ocean temperature or current. One of the earliest applications of tomography in ocean observation occurred in 1988-9. A collaboration between groups at the Scripps Institution of Oceanography and the Woods Hole Oceanographic Institution deployed a six-element tomographic array in the abyssal plain of the Greenland Sea gyre to study deep water formation and the gyre circulation. Other applications include the measurement of ocean tides, and the estimation of ocean mesoscale dynamics by combining tomography, satellite altimetry, and in situ data with ocean dynamical models.
In addition to the decade-long measurements obtained in the North Pacific, acoustic thermometry has been employed to measure temperature changes of the upper layers of the Arctic Ocean basins, which continues to be an area of active interest. Acoustic thermometry has also recently been used to determine changes to global-scale ocean temperatures using data from acoustic pulses sent from one end of the Earth to the other.
Acoustic thermometry
Acoustic thermometry is an idea to observe the world's ocean basins, and the ocean climate in particular, using trans-basin acoustic transmissions. "Thermometry", rather than "tomography", has been used to indicate basin-scale or global scale measurements. Prototype measurements of temperature have been made in the North Pacific Basin and across the Arctic Basin.
Starting in 1983, John Spiesberger of the Woods Hole Oceanographic Institution, and Ted Birdsall and Kurt Metzger of the University of Michigan developed the use of sound to infer information about the ocean's large-scale temperatures, and in particular to attempt the detection of global warming in the ocean. This group transmitted sounds from Oahu that were recorded at about ten receivers stationed around the rim of the Pacific Ocean, over distances of several thousand kilometres.
These experiments demonstrated that changes in temperature could be measured with an accuracy of about 20 millidegrees. Spiesberger et al. did not detect global warming. Instead they discovered that other natural climatic fluctuations, such as El Niño, were responsible in part for substantial fluctuations in temperature that may have masked any slower and smaller trends that may have occurred from global warming.
The Acoustic Thermometry of Ocean Climate (ATOC) program was implemented in the North Pacific Ocean, with acoustic transmissions from 1996 through fall 2006. The measurements terminated when agreed-upon environmental protocols ended. The decade-long deployment of the acoustic source showed that the observations are sustainable on even a modest budget. The transmissions have been verified to provide an accurate measurement of ocean temperature on the acoustic paths, with uncertainties that are far smaller than any other approach to ocean temperature measurement.
Repeating earthquakes acting as naturally-occurring acoustic sources have also been used in acoustic thermometry, which may be particularly useful for inferring temperature variability in the deep ocean which is presently poorly sampled by in-situ instruments.
Acoustic transmissions and marine mammals
The ATOC project was embroiled in issues concerning the effects of acoustics on marine mammals (e.g. whales, porpoises, sea lions, etc.). Public discussion was complicated by technical issues from a variety of disciplines (physical oceanography, acoustics, marine mammal biology, etc.) that made understanding the effects of acoustics on marine mammals difficult for the experts, let alone the general public. Many of the issues concerning acoustics in the ocean and their effects on marine mammals were unknown. Finally, there were a variety of public misconceptions initially, such as a confusion of the definition of sound levels in air vs. sound levels in water. If a given number of decibels in water are interpreted as decibels in air, the sound level will seem to be orders of magnitude larger than it really is - at one point the ATOC sound levels were erroneously interpreted as so loud the signals would kill 500,000 animals. The sound power employed, 250 W, was comparable to that made by blue or fin whales, although those whales vocalize at much lower frequencies. The ocean carries sound so efficiently that sounds do not have to be that loud to cross ocean basins. Other factors in the controversy were the extensive history of activism where marine mammals are concerned, stemming from the ongoing whaling conflict, and the sympathy that much of the public feels toward marine mammals.
As a result of this controversy, the ATOC program conducted a $6 million study of the effects of the acoustic transmissions on a variety of marine mammals. The acoustic source was mounted on the bottom about a half mile deep, hence marine mammals, which are bound to the surface, were generally further than a half mile from the source. The source level was modest, less than the sound level of large whales, and the duty cycle was 2% (i.e., the sound is on only 2% of the day). After six years of study the official, formal conclusion from this study was that the ATOC transmissions have "no biologically significant effects".
Other acoustics activities in the ocean may not be so benign insofar as marine mammals are concerned. Various types of man-made sounds have been studied as potential threats to marine mammals, such as airgun shots for geophysical surveys, or transmissions by the U.S. Navy for various purposes. The actual threat depends on a variety of factors beyond noise levels: sound frequency, frequency and duration of transmissions, the nature of the acoustic signal (e.g., a sudden pulse, or coded sequence), depth of the sound source, directionality of the sound source, water depth and local topography, reverberation, etc.
Types of transmitted acoustic signals
Tomographic transmissions consist of long coded signals (e.g., "m-sequences") lasting 30 seconds or more. The frequencies employed range from 50 to 1000 Hz and source powers range from 100 to 250 W, depending on the particular goals of the measurements. With precise timing such as from GPS, travel times can be measured to a nominal accuracy of 1 millisecond. While these transmissions are audible near the source, beyond a range of several kilometers the signals are usually below ambient noise levels, requiring sophisticated spread-spectrum signal processing techniques to recover them.
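As a rough illustration of how a coded signal that sits below the noise can still yield a precise travel time, the sketch below correlates a received noisy trace against the known transmitted code; the code, noise level and delay are arbitrary stand-ins rather than parameters of any real transmission.

```python
# Illustrative sketch: recovering the delay of a known pseudorandom code buried in
# noise by cross-correlation (all numbers are arbitrary, not from a real system).
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=1023)      # stand-in for an m-sequence
true_delay = 4000                               # samples

received = np.zeros(8192)
received[true_delay:true_delay + code.size] += code
received += 3.0 * rng.standard_normal(received.size)   # signal well below the noise

# Matched filter: correlate the received trace with the known code
corr = np.correlate(received, code, mode="valid")
print("estimated delay:", int(np.argmax(corr)), "samples (true:", true_delay, ")")
```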
See also
Acoustical oceanography
Ray tracing
SOFAR channel
SOSUS
Speed of sound
TOPEX/Poseidon satellite altimetry
Underwater acoustics
References
Further reading
B. D. Dushaw, 2013. "Ocean Acoustic Tomography" in Encyclopedia of Remote Sensing, E. G. Njoku, Ed., Springer, Springer-Verlag Berlin Heidelberg, 2013. .
W. Munk, P. Worcester, and C. Wunsch (1995). Ocean Acoustic Tomography. Cambridge: Cambridge University Press. .
P. F. Worcester, 2001: "Tomography," in Encyclopedia of Ocean Sciences, J. Steele, S. Thorpe, and K. Turekian, Eds., Academic Press Ltd., 2969–2986.
External links
Oceans toolbox for Matlab by Rich Pawlowicz.
Ocean Acoustics Lab (OAL) - the Woods Hole Oceanographic Institution.
The North Pacific Acoustic Laboratory (NPAL) - the Scripps Institution of Oceanography, La Jolla, CA.
Acoustic Thermometry of Ocean Climate - the Scripps Institution of Oceanography, La Jolla, CA.
Discovery of Sound in the Sea - DOSITS is an educational website concerned with acoustics in the ocean.
Sounds of acoustic signals employed for tomography - the DOSITS web page.
A day in the life of a tomography mooring - University of Washington, Seattle, WA.
Sounding Out the Ocean's Secrets - National Academy of Sciences.
Sound Measures the Ocean's Secrets - Acoustical Society of America.
The Acoustic Thermometry of Ocean Climate/Marine Mammal Research Program Cornell University Laboratory of Ornithology, Bioacoustics Research Program
Physical oceanography
Oceanographic instrumentation
Tomography
Ocean currents
Oceanographical terminology | Ocean acoustic tomography | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 2,614 | [
"Ocean currents",
"Oceanographic instrumentation",
"Applied and interdisciplinary physics",
"Measuring instruments",
"Physical oceanography",
"Fluid dynamics"
] |
6,188,802 | https://en.wikipedia.org/wiki/Leader%20%28spark%29 | In electromagnetism, a leader is a hot, highly conductive channel of plasma that plays a critical part during dielectric breakdown within a long electric spark.
Mechanism
When a gas is subjected to high voltage stress, the electric field is often quite non-uniform near one, or both, of the high voltage electrodes making up a spark gap. Breakdown initially begins with the formation of corona discharges near the electrode with the highest electrical stress. If the electrical field is further increased, longer length cold discharges (called streamers or burst corona) sporadically form near the stressed electrode. Streamers attract multiple electron avalanches into a single channel, propagating forward quickly via photon emission which leads to photoelectrons producing new avalanches. Streamers redistribute charge within the surrounding gas, temporarily forming regions of excess charge (space charges) in the regions surrounding the discharges.
If the electrical field is sufficiently high, the individual currents from multitudes of streamers combine to create a hot, highly conductive path that projects from the electrode, going some distance into the gap. The projecting channel of hot plasma is called a leader, and it can have an electrical conductivity approaching that of an electric arc. The leader effectively projects the electrical field from the nearby electrode further into the gap, similar to introducing a short length of wire into the gap. The tip of the conductive leader now forms a new region from which streamers can extend even further into the gap. As new streamer discharges feed the tip of the leader, the streamer currents help to keep the leader hot and conductive. Under sufficiently high voltages, the leader will continue to extend itself further into the gap, doing so in a series of jumps until the entire gap has been bridged. Although leaders are most often associated with the initial formative stages of a lightning stroke, they are characteristic of the development of all long sparks. In the case of a lightning leader, each extension (called a step leader) is typically 10 – 50 meters in length. The blue-white branching discharges from a Tesla Coil are also examples of leaders.
See also
Lightning
Breakdown voltage
Electron avalanche
Lightning
Electrical breakdown
External links
Kurt Feser, Hermann Singer, "From the glow corona into the breakdown", originally published on haefely.com, archived 9 March 2012, translated from Elektrotechnische Zeitschrift A, vol. 93 (1972), pp. 36–39. | Leader (spark) | [
"Physics",
"Materials_science",
"Astronomy"
] | 504 | [
"Physical phenomena",
"Materials science stubs",
"Plasma physics",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Electrical phenomena",
"Plasma physics stubs",
"Electrical breakdown",
"Lightning",
"Electromagnetism stubs"
] |
6,190,932 | https://en.wikipedia.org/wiki/Perifocal%20coordinate%20system | The perifocal coordinate (PQW) system is a frame of reference for an orbit. The frame is centered at the focus of the orbit, i.e. the celestial body about which the orbit is centered. The unit vectors p̂ and q̂ lie in the plane of the orbit: p̂ is directed towards the periapsis of the orbit and q̂ has a true anomaly (ν) of 90 degrees past the periapsis. The third unit vector ŵ points along the orbital angular momentum and is directed orthogonal to the orbital plane such that ŵ = p̂ × q̂.
And, since ŵ is the unit vector in the direction of the angular momentum vector, it may also be expressed as ŵ = h/‖h‖,
where h is the specific relative angular momentum vector and ‖h‖ its magnitude.
The position and velocity vectors can be determined for any location of the orbit. The position vector, r, can be expressed as r = r cos(ν) p̂ + r sin(ν) q̂,
where ν is the true anomaly and the radius r may be calculated from the orbit equation.
The velocity vector, v, is found by taking the time derivative of the position vector:
A derivation from the orbit equation can be made to show that:
where μ is the gravitational parameter of the focus, h is the specific relative angular momentum of the orbital body, e is the eccentricity of the orbit, and ν is the true anomaly. ṙ is the radial component of the velocity vector (pointing inward toward the focus) and rν̇ is the tangential component of the velocity vector. By substituting the equations for ṙ and rν̇ into the velocity vector equation and simplifying, the final form of the velocity vector equation is obtained as shown below.
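For reference, a hedged reconstruction of the standard perifocal-frame expressions this derivation leads to (μ is the gravitational parameter, h the specific relative angular momentum, e the eccentricity and ν the true anomaly):

```latex
\mathbf{r} = \frac{h^{2}/\mu}{1 + e\cos\nu}\,\bigl(\cos\nu\,\hat{\mathbf{p}} + \sin\nu\,\hat{\mathbf{q}}\bigr),
\qquad
\mathbf{v} = \frac{\mu}{h}\,\bigl[-\sin\nu\,\hat{\mathbf{p}} + (e + \cos\nu)\,\hat{\mathbf{q}}\bigr].
```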
Conversion between coordinate systems
The perifocal coordinate system can also be defined using the orbital parameters inclination (i), right ascension of the ascending node () and the argument of periapsis (). The following equations convert from perifocal coordinates to equatorial coordinates and vice versa.
Perifocal to equatorial
In most cases, .
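A minimal sketch of the conversion (using the standard 3-1-3 rotation through the argument of periapsis, inclination and right ascension of the ascending node; the numerical values are arbitrary examples):

```python
# Illustrative sketch: rotating a vector from perifocal (PQW) coordinates to
# equatorial (IJK) coordinates with the standard 3-1-3 rotation sequence.
import numpy as np

def R1(a):  # frame rotation about the x-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def R3(a):  # frame rotation about the z-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def pqw_to_ijk(r_pqw, raan, inc, argp):
    """Transform a vector from perifocal to equatorial coordinates."""
    Q = R3(-raan) @ R1(-inc) @ R3(-argp)   # perifocal -> equatorial matrix
    return Q @ r_pqw

r_pqw = np.array([7000.0, 3000.0, 0.0])    # km, example vector in the orbital plane
print(pqw_to_ijk(r_pqw, np.radians(40), np.radians(30), np.radians(60)))
```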
Equatorial to perifocal
Applications
Perifocal reference frames are most commonly used with elliptical orbits because the p̂ axis must be aligned with the eccentricity vector. Circular orbits, having no eccentricity, give no means by which to orient the coordinate system about the focus.
The perifocal coordinate system may also be used as an inertial frame of reference because the axes do not rotate relative to the fixed stars. This allows the inertia of any orbital bodies within this frame of reference to be calculated. This is useful when attempting to solve problems like the two-body problem.
References
Coordinate systems
Astrodynamics | Perifocal coordinate system | [
"Mathematics",
"Engineering"
] | 501 | [
"Aerospace engineering",
"Astrodynamics",
"Coordinate systems"
] |
6,193,005 | https://en.wikipedia.org/wiki/Rotating%20tank | A rotating tank is a device used for fluid dynamics experiments. Typically cylinders filled with water on a rotating platform, the tanks can be used in various ways to simulate the atmosphere or ocean.
For example, a rotating tank with an ice bucket in the center can represent the Earth, with a cold pole simulated by the ice bucket. Just as in the atmosphere, eddies and a westerly jetstream form in the water.
External links
Rotating tank experiment descriptions and movies
Fluid dynamics | Rotating tank | [
"Chemistry",
"Engineering"
] | 96 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
6,194,406 | https://en.wikipedia.org/wiki/Rectangular%20potential%20barrier | In quantum mechanics, the rectangular (or, at times, square) potential barrier is a standard one-dimensional problem that demonstrates the phenomena of wave-mechanical tunneling (also called "quantum tunneling") and wave-mechanical reflection. The problem consists of solving the one-dimensional time-independent Schrödinger equation for a particle encountering a rectangular potential energy barrier. It is usually assumed, as here, that a free particle impinges on the barrier from the left.
Although classically a particle behaving as a point mass would be reflected if its energy is less than the barrier height, a particle actually behaving as a matter wave has a non-zero probability of penetrating the barrier and continuing its travel as a wave on the other side. In classical wave-physics, this effect is known as evanescent wave coupling. The likelihood that the particle will pass through the barrier is given by the transmission coefficient, whereas the likelihood that it is reflected is given by the reflection coefficient. Schrödinger's wave-equation allows these coefficients to be calculated.
Calculation
The time-independent Schrödinger equation for the wave function ψ(x) reads Hψ(x) = Eψ(x), with H = -(ħ²/2m) d²/dx² + V(x),
where H is the Hamiltonian, ħ is the (reduced)
Planck constant, m is the mass, E the energy of the particle and
V(x) = V₀[Θ(x) - Θ(x - a)] is the barrier potential with height V₀ and width a.
Θ(x) is the Heaviside step function, i.e., Θ(x) = 0 for x < 0 and Θ(x) = 1 for x ≥ 0.
The barrier is positioned between x = 0 and x = a. The barrier can be shifted to any position without changing the results. The first term in the Hamiltonian, -(ħ²/2m) d²/dx², is the kinetic energy.
The barrier divides the space into three parts (x < 0, 0 < x < a, x > a). In any of these parts, the potential is constant, meaning that the particle is quasi-free, and the solution of the Schrödinger equation can be written as a superposition of left- and right-moving waves (see free particle),
where the wave numbers k₀ (outside the barrier) and k₁ (inside the barrier) are related to the energy via k₀ = √(2mE)/ħ and k₁ = √(2m(E - V₀))/ħ.
The index r/l on the coefficients denotes the direction of the velocity vector. Note that, if the energy of the particle is below the barrier height, k₁ becomes imaginary and the wave function is exponentially decaying within the barrier. Nevertheless, we keep the notation r/l even though the waves are not propagating anymore in this case. Here we assumed E ≠ V₀. The case E = V₀ is treated below.
The coefficients have to be found from the boundary conditions of the wave function at x = 0 and x = a. The wave function and its derivative have to be continuous everywhere, so
Inserting the wave functions, the boundary conditions give the following restrictions on the coefficients
Transmission and reflection
At this point, it is instructive to compare the situation to the classical case. In both cases, the particle behaves as a free particle outside of the barrier region. A classical particle with energy larger than the barrier height would always pass the barrier, and a classical particle with E < V₀ incident on the barrier would always get reflected.
To study the quantum case, consider the following situation: a particle incident on the barrier from the left side. It may be reflected or transmitted.
To find the amplitudes for reflection and transmission for incidence from the left, we put in the above equations (incoming particle), (reflection), (no incoming particle from the right), and (transmission). We then eliminate the coefficients from the equation and solve for and
The result is:
Due to the mirror symmetry of the model, the amplitudes for incidence from the right are the same as those from the left. Note that these expressions hold for any energy E > 0. If E = V₀, then k₁ = 0, so there is a singularity in both of these expressions.
Analysis of the obtained expressions
E < V0
The surprising result is that for energies less than the barrier height, E < V₀, there is a non-zero probability for the particle to be transmitted through the barrier, with T > 0. This effect, which differs from the classical case, is called quantum tunneling. The transmission is exponentially suppressed with the barrier width, which can be understood from the functional form of the wave function: Outside of the barrier it oscillates with wave vector k₀, whereas within the barrier it is exponentially damped over a distance 1/κ, where κ = √(2m(V₀ - E))/ħ. If the barrier is much wider than this decay length, the left and right parts are virtually independent and tunneling as a consequence is suppressed.
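The standard closed form for the tunneling probability in this case, added here for reference (it is consistent with the limits discussed in the other subsections):

```latex
T = \left[\,1 + \frac{V_0^{2}\,\sinh^{2}(\kappa a)}{4E\,(V_0 - E)}\right]^{-1},
\qquad
\kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar},
\qquad
T \approx \frac{16\,E\,(V_0 - E)}{V_0^{2}}\, e^{-2\kappa a} \quad (\kappa a \gg 1).
```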
E > V0
In this case the transmission probability is T = [1 + V₀² sin²(k₁a) / (4E(E - V₀))]⁻¹,
where k₁ = √(2m(E - V₀))/ħ.
Equally surprising is that for energies larger than the barrier height, E > V₀, the particle may be reflected from the barrier with a non-zero probability R > 0.
The transmission and reflection probabilities are in fact oscillating with k₁a. The classical result of perfect transmission without any reflection (T = 1, R = 0) is reproduced not only in the limit of high energy but also when the energy and barrier width satisfy k₁a = nπ, where n = 1, 2, …. Note that the probabilities and amplitudes as written are for any energy (above/below) the barrier height.
E = V0
The transmission probability at E = V₀ is T = 1 / (1 + mV₀a²/(2ħ²)).
This expression can be obtained by calculating the transmission coefficient from the constants stated above as for the other cases or by taking the limit of T as E approaches V₀. For this purpose the ratio
is defined, which is used in the function :
In the last equation is defined as follows:
These definitions can be inserted in the expression for which was obtained for the case .
Now, when calculating the limit of as x approaches 1 (using L'Hôpital's rule),
also the limit of as approaches 1 can be obtained:
By plugging in the above expression for in the evaluated value for the limit, the above expression for T is successfully reproduced.
Remarks and applications
The calculation presented above may at first seem unrealistic and hardly useful. However, it has proved to be a suitable model for a variety of real-life systems. One such example is the interface between two conducting materials. In the bulk of the materials, the motion of the electrons is quasi-free and can be described by the kinetic term in the above Hamiltonian with an effective mass. Often the surfaces of such materials are covered with oxide layers or are not ideal for other reasons. This thin, non-conducting layer may then be modeled by a barrier potential as above. Electrons may then tunnel from one material to the other giving rise to a current.
The operation of a scanning tunneling microscope (STM) relies on this tunneling effect. In that case, the barrier is due to the gap between the tip of the STM and the underlying object. Since the tunnel current depends exponentially on the barrier width, this device is extremely sensitive to height variations on the examined sample.
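A small numerical sketch of this sensitivity (illustrative only; the energies and widths are arbitrary example values, not parameters of a real instrument):

```python
# Illustrative sketch: how strongly the tunneling probability through a rectangular
# barrier depends on its width, using the standard E < V0 transmission formula.
import numpy as np

hbar = 1.054571817e-34        # J*s
m_e  = 9.1093837015e-31       # electron mass, kg
eV   = 1.602176634e-19        # J

def transmission(E, V0, a):
    """Transmission probability for E < V0 through a barrier of width a."""
    k1 = np.sqrt(2 * m_e * (V0 - E)) / hbar
    return 1.0 / (1.0 + (V0**2 * np.sinh(k1 * a)**2) / (4 * E * (V0 - E)))

E, V0 = 1.0 * eV, 4.0 * eV            # example energies
for a in (0.3e-9, 0.4e-9, 0.5e-9):    # barrier widths in metres
    print(f"a = {a*1e9:.1f} nm  ->  T = {transmission(E, V0, a):.3e}")
```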
The above model is one-dimensional, while space is three-dimensional. One should solve the Schrödinger equation in three dimensions. On the other hand, many systems only change along one coordinate direction and are translationally invariant along the others; they are separable. The Schrödinger equation may then be reduced to the case considered here by a separable ansatz for the wave function of the type ψ(x, y, z) = ψ(x)φ(y, z).
For another, related model of a barrier, see Delta potential barrier (QM), which can be regarded as a special case of the finite potential barrier. All results from this article immediately apply to the delta potential barrier by taking the limits V₀ → ∞, a → 0 while keeping the product V₀a constant.
See also
Morse/Long-range potential
Step potential
Finite potential well
References
Quantum models
Scattering theory
Schrödinger equation
Quantum mechanical potentials | Rectangular potential barrier | [
"Physics",
"Chemistry"
] | 1,461 | [
"Scattering theory",
"Equations of physics",
"Eponymous equations of physics",
"Quantum mechanics",
"Quantum models",
"Quantum mechanical potentials",
"Scattering",
"Schrödinger equation"
] |
6,196,078 | https://en.wikipedia.org/wiki/Cox%20process | In probability theory, a Cox process, also known as a doubly stochastic Poisson process, is a point process which is a generalization of a Poisson process in which the intensity that varies across the underlying mathematical space (often space or time) is itself a stochastic process. The process is named after the statistician David Cox, who first published the model in 1955.
Cox processes are used to generate simulations of spike trains (the sequence of action potentials generated by a neuron), and also in financial mathematics where they produce a "useful framework for modeling prices of financial instruments in which credit risk is a significant factor."
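A minimal simulation sketch (illustrative; the gamma-distributed amplitude and sinusoidal intensity are arbitrary modelling choices, not part of Cox's definition):

```python
# Illustrative sketch: simulating a Cox process on [0, T] whose intensity function
# is itself random.  Here the random intensity is, purely as an example,
# lambda(t) = A * (1 + sin(w t)) with a gamma-distributed amplitude A; events are
# then drawn by thinning a homogeneous Poisson process.
import numpy as np

rng = np.random.default_rng(0)
T, w = 10.0, 2.0

A = rng.gamma(shape=2.0, scale=5.0)            # random amplitude -> random intensity
lam = lambda t: A * (1.0 + np.sin(w * t))      # realized intensity function
lam_max = 2.0 * A                              # upper bound used for thinning

# Homogeneous Poisson candidates at rate lam_max, then accept with prob lam(t)/lam_max
n_cand = rng.poisson(lam_max * T)
cand = rng.uniform(0.0, T, size=n_cand)
events = np.sort(cand[rng.uniform(size=n_cand) < lam(cand) / lam_max])
print(len(events), events[:5])
```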
Definition
Let ξ be a random measure.
A random measure η is called a Cox process directed by ξ, if the conditional law L(η | ξ = μ) is a Poisson process with intensity measure μ.
Here, L(η | ξ = μ) is the conditional distribution of η, given {ξ = μ}.
Laplace transform
If η is a Cox process directed by ξ, then η has the Laplace transform
E[exp(-∫ f(x) η(dx))] = E[exp(-∫ (1 - e^(-f(x))) ξ(dx))]
for any positive, measurable function f.
See also
Poisson hidden Markov model
Doubly stochastic model
Inhomogeneous Poisson process, where λ(t) is restricted to a deterministic function
Ross's conjecture
Gaussian process
Mixed Poisson process
References
Notes
Bibliography
Cox, D. R. and Isham, V. Point Processes, London: Chapman & Hall, 1980
Donald L. Snyder and Michael I. Miller Random Point Processes in Time and Space Springer-Verlag, 1991 (New York) (Berlin)
Poisson point processes | Cox process | [
"Mathematics"
] | 301 | [
"Point processes",
"Point (geometry)",
"Poisson point processes"
] |
1,235,959 | https://en.wikipedia.org/wiki/Convection%20zone | A convection zone, convective zone or convective region of a star is a layer which is unstable due to convection. Energy is primarily or partially transported by convection in such a region. In a radiation zone, energy is transported by radiation and conduction.
Stellar convection consists of mass movement of plasma within the star which usually forms a circular convection current with the heated plasma ascending and the cooled plasma descending.
The Schwarzschild criterion expresses the conditions under which a region of a star is unstable to convection. A parcel of gas that rises slightly will find itself in an environment of lower pressure than the one it came from. As a result, the parcel will expand and cool. If the rising parcel cools to a lower temperature than its new surroundings, so that it has a higher density than the surrounding gas, then its lack of buoyancy will cause it to sink back to where it came from. However, if the temperature gradient is steep enough (i.e. the temperature changes rapidly with distance from the center of the star), or if the gas has a very high heat capacity (i.e. its temperature changes relatively slowly as it expands) then the rising parcel of gas will remain warmer and less dense than its new surroundings even after expanding and cooling. Its buoyancy will then cause it to continue to rise. The region of the star in which this happens is the convection zone.
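Stated compactly, one common form of the criterion (a standard formulation added for illustration, where ∇ denotes the logarithmic temperature gradient with respect to pressure):

```latex
\left|\frac{dT}{dr}\right|_{\text{actual}} > \left|\frac{dT}{dr}\right|_{\text{adiabatic}}
\qquad\Longleftrightarrow\qquad
\nabla_{\text{rad}} > \nabla_{\text{ad}},
\qquad
\nabla \equiv \frac{d\ln T}{d\ln P}.
```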
Main sequence stars
In main sequence stars more than 1.3 times the mass of the Sun, the high core temperature causes nuclear fusion of hydrogen into helium to occur predominantly via the carbon-nitrogen-oxygen (CNO) cycle instead of the less temperature-sensitive proton–proton chain. The high temperature gradient in the core region forms a convection zone that slowly mixes the hydrogen fuel with the helium product. The core convection zone of these stars is overlaid by a radiation zone that is in thermal equilibrium and undergoes little or no mixing. In the most massive stars, the convection zone may reach all the way from the core to the surface.
In main sequence stars of less than about 1.3 solar masses, the outer envelope of the star contains a region where partial ionization of hydrogen and helium raises the heat capacity. The relatively low temperature in this region simultaneously causes the opacity due to heavier elements to be high enough to produce a steep temperature gradient. This combination of circumstances produces an outer convection zone, the top of which is visible in the Sun as solar granulation. Low-mass main-sequence stars, such as red dwarfs below 0.35 solar masses, as well as pre-main sequence stars on the Hayashi track, are convective throughout and do not contain a radiation zone.
In main sequence stars similar to the Sun, which have a radiative core and convective envelope, the transition region between the convection zone and the radiation zone is called the tachocline.
Red giants
In red giant stars, and particularly during the asymptotic giant branch phase, the surface convection zone varies in depth during the phases of shell burning. This causes dredge-up events, short-lived very deep convection zones that transport fusion products to the surface of the star.
References
Further reading
External links
Animated explanation of the Convection zone (University of South Wales).
Animated explanation of the temperature and density of the Convection zone (University of South Wales).
Zone
Stellar phenomena | Convection zone | [
"Physics",
"Chemistry"
] | 686 | [
"Transport phenomena",
"Physical phenomena",
"Convection",
"Thermodynamics",
"Stellar phenomena"
] |
1,235,977 | https://en.wikipedia.org/wiki/Dirichlet%20integral | In mathematics, there are several integrals known as the Dirichlet integral, after the German mathematician Peter Gustav Lejeune Dirichlet, one of which is the improper integral of the sinc function over the positive real number line.
This integral is not absolutely convergent, meaning that |sin x|/x has an infinite Lebesgue or improper Riemann integral over the positive real line, so the sinc function is not Lebesgue integrable over the positive real line. The sinc function is, however, integrable in the sense of the improper Riemann integral or the generalized Riemann or Henstock–Kurzweil integral. This can be seen by using Dirichlet's test for improper integrals.
It is a good illustration of special techniques for evaluating definite integrals, particularly when it is not useful to directly apply the fundamental theorem of calculus due to the lack of an elementary antiderivative for the integrand, as the sine integral, an antiderivative of the sinc function, is not an elementary function. In this case, the improper definite integral can be determined in several ways: the Laplace transform, double integration, differentiating under the integral sign, contour integration, and the Dirichlet kernel. But since the integrand is an even function, the domain of integration can be extended to the negative real number line as well.
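For reference, the integral in question and its value (a standard result, consistent with every evaluation carried out below; the extension to the whole real line uses the evenness of the integrand):

```latex
\int_{0}^{\infty} \frac{\sin x}{x}\,dx = \frac{\pi}{2},
\qquad
\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx = \pi .
```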
Evaluation
Laplace transform
Let f(t) be a function defined whenever t ≥ 0. Then its Laplace transform is given by
F(s) = L{f(t)} = ∫₀^∞ e^(-st) f(t) dt,
if the integral exists.
A property of the Laplace transform useful for evaluating improper integrals is
L[f(t)/t](s) = ∫_s^∞ F(u) du,
provided lim_{t→0} f(t)/t exists.
In what follows, one needs the result L{sin t}(s) = 1/(s² + 1), which is the Laplace transform of the function sin t (see the section 'Differentiating under the integral sign' for a derivation), as well as a version of Abel's theorem (a consequence of the final value theorem for the Laplace transform).
Therefore,
∫₀^∞ (sin t)/t dt = lim_{s→0} ∫_s^∞ du/(u² + 1) = lim_{s→0} [π/2 - arctan s] = π/2.
Double integration
Evaluating the Dirichlet integral using the Laplace transform is equivalent to calculating the same double definite integral by changing the order of integration, namely,
∫₀^∞ (sin t)/t dt = ∫₀^∞ (∫₀^∞ e^(-st) sin t ds) dt = ∫₀^∞ (∫₀^∞ e^(-st) sin t dt) ds = ∫₀^∞ ds/(s² + 1) = π/2.
The change of order is justified by the fact that for all s > 0, the integral is absolutely convergent.
Differentiation under the integral sign (Feynman's trick)
First rewrite the integral as a function of the additional variable s, namely, the Laplace transform of (sin t)/t. So let
f(s) = ∫₀^∞ e^(-st) (sin t)/t dt.
In order to evaluate the Dirichlet integral, we need to determine f(0). The continuity of f can be justified by applying the dominated convergence theorem after integration by parts. Differentiate with respect to s and apply the Leibniz rule for differentiating under the integral sign to obtain
f′(s) = -∫₀^∞ e^(-st) sin t dt.
Now, using Euler's formula one can express the sine function in terms of complex exponentials: sin t = (e^(it) - e^(-it)) / (2i).
Therefore,
f′(s) = -1/(s² + 1).
Integrating with respect to s gives
f(s) = A - arctan s,
where A is a constant of integration to be determined. Since f(s) → 0 as s → ∞, A = π/2, using the principal value. This means that for s ≥ 0,
f(s) = π/2 - arctan s.
Finally, by continuity at s = 0, we have f(0) = ∫₀^∞ (sin t)/t dt = π/2, as before.
Complex contour integration
Consider the function f(z) = e^(iz)/z.
As a function of the complex variable z, it has a simple pole at the origin, which prevents the application of Jordan's lemma, whose other hypotheses are satisfied.
Define then a new function g(z) = e^(iz)/(z + iε).
The pole has been moved to the negative imaginary axis, so g(z) can be integrated along the semicircle of radius R centered at the origin extending in the positive imaginary direction, and closed along the real axis. One then takes the limit ε → 0.
The complex integral is zero by the residue theorem, as there are no poles inside the integration path :
The second term vanishes as goes to infinity. As for the first integral, one can use one version of the Sokhotski–Plemelj theorem for integrals over the real line: for a complex-valued function defined and continuously differentiable on the real line and real constants and with one finds
where denotes the Cauchy principal value. Back to the above original calculation, one can write
By taking the imaginary part on both sides and noting that the function is even, we get
Finally,
Alternatively, choose as the integration contour for the union of upper half-plane semicircles of radii and together with two segments of the real line that connect them. On one hand the contour integral is zero, independently of and on the other hand, as and the integral's imaginary part converges to (here is any branch of logarithm on upper half-plane), leading to
Dirichlet kernel
Consider the well-known formula for the Dirichlet kernel:
It immediately follows that:
Define
Clearly, is continuous when to see its continuity at 0 apply L'Hopital's Rule:
Hence, fulfills the requirements of the Riemann-Lebesgue Lemma. This means:
(The form of the Riemann-Lebesgue Lemma used here is proven in the article cited.)
We would like to compute:
However, we must justify switching the real limit in to the integral limit in which will follow from showing that the limit does exist.
Using integration by parts, we have:
Now, as and the term on the left converges with no problem. See the list of limits of trigonometric functions. We now show that is absolutely integrable, which implies that the limit exists.
First, we seek to bound the integral near the origin. Using the Taylor-series expansion of the cosine about zero,
Therefore,
Splitting the integral into pieces, we have
for some constant This shows that the integral is absolutely integrable, which implies the original integral exists, and switching from to was in fact justified, and the proof is complete.
See also
Dirichlet distribution
Dirichlet principle
Sinc function
Fresnel integral
References
External links
Special functions
Integral calculus
Mathematical physics | Dirichlet integral | [
"Physics",
"Mathematics"
] | 1,141 | [
"Special functions",
"Calculus",
"Applied mathematics",
"Theoretical physics",
"Combinatorics",
"Mathematical physics",
"Integral calculus"
] |
1,236,616 | https://en.wikipedia.org/wiki/Hedorah | , also known as the Smog Monster, is a fictional monster, or kaiju who first appeared in Toho's 1971 film Godzilla vs. Hedorah. Hedorah was named for , the Japanese word for sludge, slime, vomit or chemical ooze.
Overview
Development
Whereas Godzilla was a symbol of Japanese concerns over nuclear weapons, Hedorah was envisioned as an embodiment of Yokkaichi asthma, caused by Japan's widespread smog and urban pollution at the time. Director Yoshimitsu Banno stated in an interview that his intention in creating Hedorah was to give Godzilla an adversary who was more than just a "giant lobster" and which represented "the most notorious thing in current society". He also stated that Hedorah's vertically tilted eyes were based on vaginas, which he joked were "scary". The monster was originally going to be named "Hedoron", though this changed once the TV series Spectreman introduced a character with an identical name.
The monster was realized via various props and a large sponge rubber suit donned by future Godzilla performer Kenpachiro Satsuma in his first acting role for Toho. Satsuma had been selected on account of his physical fitness, though he stated later that he had been disappointed to receive the role, as he had grown tired of taking non-speaking roles. In performing as Hedorah, Satsuma tried to emphasize Hedorah's otherworldly nature by making its movements seem more grotesque than animal-like. Several authors have noted that, unlike most Toho monsters, Hedorah's violent acts are graphically shown to claim human victims, and the creature shows genuine amusement at Godzilla's suffering. Banno wished to bring back Hedorah in a sequel set in Africa, but the project never materialized, as he was fired by producer and series co-creator Tomoyuki Tanaka, who allegedly accused him of ruining the Godzilla series. Complex listed the character as #8 on its "The 15 Most Badass Kaiju Monsters of All Time" list.
Banno had hoped to revisit Hedorah in his unrealized project Godzilla 3-D, which would have had featured a similar monster named Deathla. Like its predecessor, Deathla would have been a shape-shifting extraterrestrial, though it would have fed on chlorophyll rather than gas emissions, and all of its forms would have incorporated a skull motif.
Shōwa era (1971)
In Godzilla vs. Hedorah, Hedorah originates from the Dark Gas Nebula in the constellation of Orion. It journeys to Earth via a passing comet, and lands in Suruga Bay as a monstrous tadpole-like creature, increasing in size as it feeds on the pollutants contaminating the water. It proceeds to rampage throughout Japan, killing thousands and feeding on gas emissions and toxic waste, gradually gaining power as it advances from a water stage, to a land stage, and finally a bipedal Perfect Form that it can switch out for a smaller, flying form at any time. Godzilla confronts Hedorah at Mount Fuji, but his atomic breath has no effect on Hedorah's amorphous, water-rich body. Hedorah rapidly overpowers Godzilla using a combination of its fearsome strength and incredible durability, and almost kills the King of the Monsters after hurling him into a pit and attempting to drown him under a deluge of chemical ooze. It is later discovered that Hedorah is vulnerable to temperatures high enough to dehydrate it, so the JSDF constructs a pair of gigantic electrodes on the battlefield to use against the alien. Hedorah and Godzilla continue to fight, and the former is subsequently killed when Godzilla uses his atomic breath to power the electrodes, which cripple Hedorah and allow Godzilla to fully dehydrate its body into dust.
Millennium era (2004)
Hedorah briefly reappears in Godzilla: Final Wars as one of several monsters under the Xiliens' control before it is destroyed by Godzilla alongside Ebirah in Tokyo.
Other
Hedorah also appears in Godzilla: Monster Apocalypse, the prequel novel to Godzilla: Planet of the Monsters, in which it was originally a colony of sludge-like microorganisms that lived off dissolved chemicals inside a mine in Hebei, China. After the Chinese government discovered it in 1999, they studied and modified the microorganisms to operate as a giant, mist-like bioweapon with red and yellow eyes called Hedorah. In 2005, the Chinese military commenced "Operation: Hedorah" to kill Anguirus and Rodan with the bio-weapon. Hedorah successfully took down the two monsters, but it attained sentience afterwards and went on a rampage, consumed the pollutants in the surrounding area, then disappeared, leaving an estimated 8.2 million casualties in its wake.
A 2021 short film, "Godzilla vs. Hedorah", created for the character's 50th anniversary, gives the Final Wars incarnations of Hedorah and Godzilla a rematch amid an oil refinery in the daytime.
Hedorah also appears in Chibi Godzilla Raids Again.
Appearances
Films
Godzilla vs. Hedorah (1971)
Godzilla: Final Wars (2004)
Godzilla: Planet of the Monsters (2017)
Godzilla vs. Hedorah (2021, short film)
Television
Godzilla Island (1997-1998)
Godziban (2019–present)
Chibi Godzilla Raids Again (2023-2024)
Video games
Godzilla: Monster of Monsters (NES - 1988)
Godzilla / Godzilla-Kun: Kaijuu Daikessen (Game Boy - 1990)
Godzilla 2: War of the Monsters (NES - 1991)
Kaijū-ō Godzilla / King of the Monsters, Godzilla (Game Boy - 1993)
Godzilla: Battle Legends (Turbo Duo - 1993)
Godzilla Trading Battle (PlayStation - 1998)
Godzilla: Destroy All Monsters Melee (GCN, Xbox - 2002/2003)
Godzilla Unleashed: Double Smash (NDS - 2007)
Godzilla: The Game (PS3 - 2014, PS3/PS4 - 2015)
Godzilla Defense Force (2019)
Godzilla Battle Line (2021)
GigaBash (PS4, PS5, Steam, Epic Games - 2024)
Literature
Godzilla vs. Gigan and the Smog Monster (1996)
Godzilla at World’s End (1998)
Godzilla: Monster Apocalypse (2017)
Comics
Godzilla: Legends (comic - 2011–2012)
Godzilla: Ongoing (comic - 2012)
Godzilla: The Half-Century War (comic - 2012–2013)
Godzilla: Rulers of Earth (comic - 2013–2015)
Godzilla Rivals (comic - 2021)
Music
Hedorah appears on the album cover of Frank Zappa's Sleep Dirt.
Hedorah appears on the album cover of Dinosaur Jr's Sweep It Into Space.
References
Bibliography
Extraterrestrial supervillains
Fantasy film characters
Fictional amorphous creatures
Fictional characters with superhuman strength
Fictional mass murderers
Fictional parasite characters
Fictional superorganisms
Film characters introduced in 1971
Godzilla characters
Mothra characters
Horror film villains
Toho monsters | Hedorah | [
"Biology"
] | 1,461 | [
"Superorganisms",
"Fictional superorganisms"
] |
1,238,415 | https://en.wikipedia.org/wiki/Z22%20%28computer%29 | The Z22 was the seventh computer model Konrad Zuse developed (the first six being the Z1, Z2, Z3, Z4, Z5 and Z11, respectively). One of the early commercial computers, the Z22's design was finished about 1955. The major version jump from Z11 to Z22 was due to the use of vacuum tubes, as opposed to the electromechanical systems used in earlier models. The first machines built were shipped to Berlin and Aachen.
By the end of 1958 the ZMMD-group had built a working ALGOL 58 compiler for the Z22 computer. ZMMD was an abbreviation for Zürich (where Rutishauser worked), München (workplace of Bauer and Samelson), Mainz (location of the Z22 computer), Darmstadt (workplace of Bottenbruch).
In 1961, the Z22 was followed by a logically very similar transistorized version, the Z23. Already in 1954, Zuse had come to an agreement with Heinz Zemanek that his Zuse KG would finance the work of Rudolf Bodo, who helped Zemanek build the early European transistorized computer Mailüfterl, and that after that project Bodo should work for the Zuse KG—there he helped build the transistorized Z23. Furthermore, all circuit diagrams of the Z22 were supplied to Bodo and Zemanek.
The University of Applied Sciences, Karlsruhe still has an operational Z22 which is on permanent loan at the ZKM in Karlsruhe.
Altogether 55 Z22 computers were produced.
In the 1970s, clones of the Z22 using TTL were built by the company Thiemicke Computer.
Technical data
The typical setup of a Z22 was:
14 words of 38 bits each as fast-access RAM, implemented as core memory
8192 words (38 bits each) of magnetic drum memory as RAM
One teletype as console and main input/output device
Additional punch tape devices as fast input/output devices
600 tubes working as flip-flops
electrical cooling unit, needing a water tap connection (water cooling)
380 V 16 A three-phase power supply
The Z22 operated at a 3 kHz clock frequency, which was synchronous with the speed of the drum storage. The input of data and programs was possible via punch-tape reader and console commands. The Z22 also had glow-lamps which showed the memory state and machine state as output.
Programming
The Z22 was designed to be easier to program than previous first generation computers. It was programmed in machine code with 38-bit instruction words, consisting of five fields:
2 bits `10` to mark an instruction
18-bit instruction field, thereof:
5 bits condition symbols
13 bits operation symbols
5-bit fast storage (core) address
13-bit (drum) memory address
The 18-bit instruction field did not contain a single opcode; instead, each bit controlled one functional unit of the CPU, and instructions were constructed from combinations of these bits. For example, the bit 'A', meaning to add the content of a memory location to the accumulator, could be combined with 'N' (Nullstellen, zeroing) to turn the Add instruction into a Load. Many combinations are quite unusual by modern standards; for example, 'LLRA 4' means "multiply the accumulator by three".
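A small sketch of how such a word could be split into the fields listed above (the field widths follow the list; the assumption that the marker bits occupy the most significant positions is made here purely for illustration):

```python
# Illustrative sketch: splitting a 38-bit Z22 instruction word into the five fields
# listed above.  The bit ordering (marker in the top bits) is an assumption made
# only for demonstration, not a statement about the real hardware layout.
def decode_z22_word(word: int) -> dict:
    """Split a 38-bit instruction word into its five fields."""
    assert 0 <= word < (1 << 38)
    return {
        "marker":       (word >> 36) & 0b11,        #  2 bits, '10' marks an instruction
        "condition":    (word >> 31) & 0b11111,     #  5 bits of condition symbols
        "operation":    (word >> 18) & 0x1FFF,      # 13 bits of operation symbols
        "fast_address": (word >> 13) & 0b11111,     #  5-bit fast (core) storage address
        "drum_address": word         & 0x1FFF,      # 13-bit drum memory address
    }

# Example: an arbitrary bit pattern, not a real Z22 program word
print(decode_z22_word(0b10_00001_0000000000001_00010_0000000000011))
```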
There also was an assembly-like programming language called "Freiburger Code". It was designed to make writing programs for solving mathematical problems easier than writing machine code, and reportedly did so.
See also
List of vacuum-tube computers
References
External links
Z22 computer emulator
Homepage of the Z22/13 of the university of Karlsruhe (in German), Google translation
1950s computers
Vacuum tube computers
Computer-related introductions in 1955
Konrad Zuse
Computers designed in Germany
Serial computers | Z22 (computer) | [
"Technology"
] | 770 | [
"Serial computers",
"Computers"
] |
1,239,716 | https://en.wikipedia.org/wiki/Pariser%E2%80%93Parr%E2%80%93Pople%20method | In molecular physics, the Pariser–Parr–Pople method applies semi-empirical quantum mechanical methods to the quantitative prediction of electronic structures and spectra, in molecules of interest in the field of organic chemistry. Previous methods existed—such as the Hückel method which led to Hückel's rule—but were limited in their scope, application and complexity, as is the Extended Hückel method.
This approach was developed in the 1950s by Rudolph Pariser with Robert Parr and co-developed by John Pople.
It is essentially a more efficient method of finding reasonable approximations of molecular orbitals, useful in predicting physical and chemical nature of the molecule under study since molecular orbital characteristics have implications with regards to both the basic structure and reactivity of a molecule. This method used the zero-differential overlap (ZDO) approximation to reduce the problem to reasonable size and complexity but still required modern solid state computers (as opposed to punched card or vacuum tube systems) before becoming fully useful for molecules larger than benzene.
Originally, Pariser's goal of using this method was to predict the characteristics of complex organic dyes, but this was never realized. The method has wide applicability in precise prediction of electronic transitions, particularly lower singlet transitions, and found wide application in theoretical and applied quantum chemistry. The two basic papers on this subject were among the top five chemistry and physics citations reported in ISI, Current Contents 1977 for the period of 1961–1977 with a total of 2450 references.
In contrast to the Hartree–Fock-based semiempirical method counterparts (i.e.: MOPAC), the pi-electron theories have a very strong ab initio basis. The PPP formulation is actually an approximate pi-electron effective operator, and the empirical parameters, in fact, include effective electron correlation effects. A rigorous, ab initio theory of the PPP method is provided by diagrammatic, multi-reference, high order perturbation theory (Freed, Brandow, Lindgren, etc.). (The exact formulation is non-trivial and requires some field theory.) Large scale ab initio calculations (Martin and Birge, Martin and Freed, Sheppard and Freed, etc.) have confirmed many of the approximations of the PPP model and explain why the PPP-like models work so well with such a simple formulation.
References
Molecular physics
Semiempirical quantum chemistry methods | Pariser–Parr–Pople method | [
"Physics",
"Chemistry"
] | 501 | [
"Quantum chemistry stubs",
"Molecular physics",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Computational chemistry",
" molecular",
"nan",
"Atomic",
"Molecular physics stubs",
"Physical chemistry stubs",
"Semiempirical quantum chemistry methods",
" and optical physics"
] |
23,752,090 | https://en.wikipedia.org/wiki/Oulu%20University%20Secure%20Programming%20Group | The Oulu University Secure Programming Group (OUSPG) is a research group at the University of Oulu that studies, evaluates and develops methods of implementing and testing application and system software in order to prevent, discover and eliminate implementation level security vulnerabilities in a pro-active fashion. The focus is on implementation level security issues and software security testing.
History
OUSPG has been active as an independent academic research group in the Computer Engineering Laboratory in the Department of Electrical and Information Engineering in the University of Oulu since summer 1996.
OUSPG is most known for its participation in protocol implementation security testing, which they called robustness testing, using the PROTOS mini-simulation method.
PROTOS was a cooperative project with VTT and a number of industrial partners. The project developed different approaches to testing implementations of protocols using black-box (i.e. functional) testing methods. The goal was to support pro-active elimination of faults with information security implications, promote awareness of these issues and develop methods to support customer-driven evaluation and acceptance testing of implementations. Improving the security robustness of products was attempted through supporting the development process.
The most notable result of the PROTOS project was the result of the c06-snmp test suite, which discovered multiple vulnerabilities in SNMP.
The work done in PROTOS is continued in PROTOS-GENOME, which applies automatic structure inference combined with domain specific reasoning capabilities to enable automated black-box program robustness testing tools without having prior knowledge of the protocol grammar. This work has resulted in a large number of vulnerabilities being found in archive file and antivirus products.
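The general idea of such black-box robustness testing can be sketched in a few lines (this is an illustrative toy, not OUSPG's actual PROTOS or PROTOS-GENOME tooling; the sample format and target parser are invented):

```python
# Illustrative sketch: black-box robustness testing by mutating valid sample inputs,
# feeding them to the system under test, and recording inputs that cause crashes.
import random

def mutate(sample: bytes, n_flips: int = 3) -> bytes:
    """Return a copy of the sample with a few randomly corrupted bytes."""
    data = bytearray(sample)
    for _ in range(n_flips):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

def fuzz(samples, target, iterations=1000):
    """Run the target callable on mutated inputs; collect the ones that crash it."""
    crashes = []
    for _ in range(iterations):
        case = mutate(random.choice(samples))
        try:
            target(case)                      # system under test (a parser, decoder, ...)
        except Exception:
            crashes.append(case)
    return crashes

# Example: fuzz a toy "parser" that requires a 4-byte magic header
samples = [b"PROT" + bytes(16)]
def toy_parser(msg: bytes):
    assert msg[:4] == b"PROT" and len(msg) >= 20
print(len(fuzz(samples, toy_parser)))
```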
Commercial spin-offs
The group has produced two spin-off companies, Codenomicon continues the work of the PROTOS and Clarified Networks the work in FRONTIER.
References
External links
Computer security organizations
Software testing
Secure Programming Group | Oulu University Secure Programming Group | [
"Engineering"
] | 381 | [
"Software engineering",
"Software testing"
] |
23,756,855 | https://en.wikipedia.org/wiki/Severe%20plastic%20deformation | Severe plastic deformation (SPD) is a generic term describing a group of metalworking techniques involving very large strains typically involving a complex stress state or high shear, resulting in a high defect density and equiaxed "ultrafine" grain (UFG) size (d < 1000 nm) or nanocrystalline (NC) structure (d < 100 nm).
History
The significance of SPD was known from the ancient times, at least during the transition from the Bronze Age to the Iron Age, when repeated hammering and folding was employed for processing strategic tools such as swords. The development of the principles underlying SPD techniques goes back to the pioneering work of P.W. Bridgman at Harvard University in the 1930s. This work concerned the effects on solids of combining large hydrostatic pressures with concurrent shear deformation and it led to the award of the Nobel Prize in Physics in 1946. Very successful early implementations of these principles, described in more detail below, are the processes of equal-channel angular pressing (ECAP) developed by V.M. Segal and co-workers in Minsk in the 1970s and high-pressure torsion, derived from Bridgman's work, but not widely developed until the 1980s at the Russian Institute of Metals Physics in modern-day Yekaterinburg.
Some definitions of SPD describe it as a process in which high strain is applied without any significant change in the dimensions of the workpiece, resulting in a large hydrostatic pressure component. However, the mechanisms that lead to grain refinement in SPD are the same as those originally developed for mechanical alloying, a powder process that has been characterized as "severe plastic deformation" by authors as early as 1983. Additionally, some more recent processes such as asymmetric rolling, do result in a change in the dimensions of the workpiece, while still producing an ultrafine grain structure. The principles behind SPD have even been applied to surface treatments.
Methods
SPD methods are classified into three main groups of bulk-SPD methods, surface-SPD methods and powder-SPD methods. Here some popular SPD methods are briefly explained.
Equal channel angular pressing
Equal channel angular extrusion (ECAE, sometimes called Equal channel angular pressing, ECAP) was developed in the 1970s. In this process, a metal billet is pressed through an angled (typically 90 degrees) channel. To achieve optimal results, the process may be repeated several times, changing the orientation of the billet with each pass. This produces a uniform shear throughout the bulk of the material.
High pressure torsion
High pressure torsion (HPT) can be traced back to the experiments that won Percy Bridgman the 1946 Nobel Prize in Physics, though its use in metal processing is considerably more recent. In this method, a disk of the material to be strained is placed between 2 anvils. A large compressive stress (typically several gigapascals) is applied, while one anvil is rotated to create a torsion force. HPT can be performed unconstrained, in which the material is free to flow outward, fully constrained, or to some degree between in which outward flow is allowed, but limited.
Accumulative roll bonding
In accumulative roll bonding (ARB), 2 sheets of the same material are stacked, heated (to below the recrystallization temperature), and rolled, bonding the 2 sheets together. This sheet is cut in half, the 2 halves are stacked, and the process is repeated several times. Compared to other SPD processes, ARB has the benefit that it does not require specialized equipment or tooling, only a conventional rolling mill. However, the surfaces to be joined must be well-cleaned before rolling to ensure good bonding.
Repetitive corrugation and straightening
Repetitive corrugation and straightening (RCS) is a severe plastic deformation technique used to process sheet metals. In RCS, a sheet is pressed between two corrugated dies followed by pressing between two flat dies. RCS has gained wide popularity for producing fine-grained sheet metals. Efforts to improve this technique led to the introduction of Repetitive Corrugation and Straightening by Rolling (RCSR), a novel SPD method. The applicability of this new method has been demonstrated in various materials.
Asymmetric rolling
In asymmetric rolling (ASR), a rolling mill is modified such that one roll has a higher velocity than the other. This is typically done with either independent speed control or by using rolls of different size. This creates a region in which the frictional forces on the top and bottom of the sheet being rolled are opposite, creating shear stresses throughout the material in addition to the normal compressive stress from rolling. Unlike other SPD processes, ASR does not maintain the same net shape, but the effect on the microstructure of the material is similar.
Mechanical alloying
Mechanical alloying/milling (MA/MM) performed in a high-energy ball mill such as a shaker mill or planetary mill will also induce severe plastic deformation in metals. During milling, particles are fractured and cold welded together, resulting in large deformations. The end product is generally a powder that must then be consolidated in some way (often using other SPD processes), but some alloys have the ability to consolidate in-situ during milling. Mechanical alloying also allows powders of different metals to be alloyed together during processing.
Surface treatments
More recently, the principles behind SPD have been used to develop surface treatments that create a nanocrystalline layer on the surface of a material. In the surface mechanical attrition treatment (SMAT), an ultrasonic horn is connected to an ultrasonic (20 kHz) transducer, with small balls on top of the horn. The workpiece is mounted a small distance above the horn. The high frequency results in a large number of collisions between the balls and the surface, creating a strain rate on the order of 10^2–10^3 s^−1. The NC surface layer developed can be on the order of 50 μm thick. The process is similar to shot peening, but the kinetic energy of the balls is much higher in SMAT.
Ultrasonic nanocrystalline surface modification (UNSM) is another recently developed surface modification technique. In the UNSM process, not only a static load but also a dynamic load is exerted. The processing is conducted by striking a workpiece surface up to 20,000 or more times per second with a ball attached to the horn, at a density in the range of 1,000–100,000 shots per square millimetre. The strikes, which can be described as cold forging, introduce SPD and produce a NC surface layer by refining the coarse grains down to the nanometre scale without changing the chemical composition of the material, which confers high strength and high ductility. The UNSM technique not only improves the mechanical and tribological properties of a material, but also produces a corrugated structure with numerous desired dimples on the treated surface.
Applications
Most research into SPD has focused on grain refinement, which has obvious applications in the development of high-strength materials as a result of the Hall-Petch relation. Conventionally processed industrial metals typically have a grain size from 10–100 μm. Reducing the grain size from 10 μm to 1 μm can increase the yield strength of metals by more than 100%. Techniques that use bulk materials such as ECAE can provide reliable and relatively inexpensive ways of producing ultrafine grain materials compared to rapid solidification techniques such as melt spinning.
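The strengthening figure quoted above can be illustrated with the Hall-Petch relation, σy = σ0 + ky·d^(−1/2). The sketch below uses constants roughly representative of pure copper; the specific values of σ0 and ky are assumptions chosen for illustration, not data from the sources discussed here.

```python
# Illustrative Hall-Petch estimate; sigma_0 and k_y are assumed values
# roughly representative of pure copper, used only to show the trend.
sigma_0 = 25.0     # friction stress, MPa
k_y = 0.11         # Hall-Petch coefficient, MPa*m^0.5

def yield_strength(d):
    """sigma_y = sigma_0 + k_y / sqrt(d), with grain size d in metres."""
    return sigma_0 + k_y * d ** -0.5

coarse = yield_strength(10e-6)   # 10 um grains -> about 60 MPa
fine = yield_strength(1e-6)      # 1 um grains  -> about 135 MPa
print(f"{coarse:.0f} MPa -> {fine:.0f} MPa "
      f"({100 * (fine - coarse) / coarse:.0f}% increase)")
```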
However, other effects of SPD, such as texture modification also have potential industrial applications as properties such as the Lankford coefficient (important for deep drawing processes) and magnetic properties of electrical steel are highly dependent on texture.
Processes such as ECAE and HPT have also been used to consolidate metal powders and composites without the need for the high temperatures used in conventional consolidation processes such as hot isostatic pressing, allowing desirable characteristics such as nanocrystalline grain sizes or amorphous structures to be retained.
Some known commercial application of SPD processes are in the production of Sputtering targets by Honeywell and UFG titanium for medical implants.
Grain refinement mechanism
The presence of a high hydrostatic pressure, in combination with large shear strains, is essential for producing high densities of crystal lattice defects, particularly dislocations, which can result in a significant refining of the grains. Grain refinement in SPD processes occurs by a multi-step process:
Dislocations, which are initially distributed throughout the grains, rearrange and group together into dislocation "cells" to reduce the total strain energy.
As deformation continues and more dislocations are generated, misorientation develops between the cells, forming "subgrains"
The process repeats within the subgrains until the size becomes sufficiently small such that the subgrains can rotate
Additional deformation causes the subgrains to rotate into high-angle grain boundaries, typically with an equiaxed shape.
The mechanism by which the subgrains rotate is less understood. Wu et al. describe a process in which dislocation motion becomes restricted due to the small subgrain size and grain rotation becomes more energetically favorable. Mishra et al. propose a slightly different explanation, in which the rotation is aided by diffusion along the grain boundaries (which is much faster than through the bulk).
F.A. Mohamad has proposed a model for the minimum grain size achievable using mechanical milling. The model is based on the concept that the grain size is dependent on the rates at which dislocations are generated and annihilated. The full model is given by
On the left side of the equation: dmin is the minimum grain size and b is the Burgers vector.
A3 is a constant.
β=Qp−Qm/Q (Qp is the activation energy for pipe diffusion along dislocations, Qm is the activation energy for vacancy migration, and Q is the activation energy for self-diffusion), βQ represents the activation energy for recovery, R is the gas constant, and T is the processing temperature.
Dp0 is the temperature-independent component of the pipe diffusion coefficient, G is the shear modulus, ν0 is the dislocation velocity, k is the Boltzmann constant, γ is the stacking fault energy, and H is the hardness.
While the model was developed specifically for mechanical milling, it has also been successfully applied to other SPD processes. Frequently only a portion of the model is used (typically the term involving the stacking fault energy) as the other terms are often unknown and difficult to measure. This is still useful as it implies that all other things remaining equal, reducing the stacking fault energy, a property that is a function of the alloying elements, will allow for better grain refinement. A few studies, however, suggested that despite the significance of stacking fault energy on the grain refinement at the early stages of straining, the steady-state grain size at large strains is mainly controlled by the homologous temperature in pure metals and by the interaction of solute atoms and dislocations in single-phase alloys.
References
Deformation (mechanics)
Metal forming
Materials science | Severe plastic deformation | [
"Physics",
"Materials_science",
"Engineering"
] | 2,295 | [
"Deformation (mechanics)",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
18,543,448 | https://en.wikipedia.org/wiki/Universal%20approximation%20theorem | In the mathematical theory of artificial neural networks, universal approximation theorems are theorems of the following form: Given a family of neural networks, for each function f from a certain function space, there exists a sequence of neural networks φ1, φ2, … from the family, such that φn → f according to some criterion. That is, the family of neural networks is dense in the function space.
The most popular version states that feedforward networks with non-polynomial activation functions are dense in the space of continuous functions between two Euclidean spaces, with respect to the compact convergence topology.
Universal approximation theorems are existence theorems: They simply state that there exists such a sequence, and do not provide any way to actually find it. They also do not guarantee that any method, such as backpropagation, will actually find such a sequence. Any method for searching the space of neural networks, including backpropagation, might find a converging sequence, or not (i.e. the backpropagation might get stuck in a local optimum).
Universal approximation theorems are limit theorems: They simply state that for any function f and any criterion of closeness ε > 0, if there are enough neurons in a neural network, then there exists a neural network with that many neurons that does approximate f to within ε. There is no guarantee that any finite size, say, 10000 neurons, is enough.
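Although the theorems are pure existence statements, the approximation they describe is easy to observe numerically. The following sketch is only an illustration and is not taken from any of the cited proofs: it fixes random hidden-layer weights of a single-hidden-layer tanh network and fits only the output layer by least squares to a continuous target on a compact interval; the target function, the widths and the random-feature fitting procedure are arbitrary choices.

```python
# Numerical illustration of a single-hidden-layer network with a non-polynomial
# activation approximating a continuous function on a compact set.
# Hidden weights are random; only the output layer is fitted (least squares).
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.abs(np.sin(3 * x))        # continuous target on [-1, 1]
x = np.linspace(-1.0, 1.0, 400)

for width in (5, 20, 100):
    w = rng.normal(scale=4.0, size=width)          # hidden weights (fixed)
    b = rng.uniform(-4.0, 4.0, size=width)         # hidden biases (fixed)
    hidden = np.tanh(np.outer(x, w) + b)           # shape (400, width)
    coeffs, *_ = np.linalg.lstsq(hidden, f(x), rcond=None)
    err = np.max(np.abs(hidden @ coeffs - f(x)))
    print(f"width {width:4d}: max error on grid = {err:.3f}")
# The maximum error typically shrinks as the width grows.
```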
Setup
Artificial neural networks are combinations of multiple simple mathematical functions that implement more complicated functions from (typically) real-valued vectors to real-valued vectors. The spaces of multivariate functions that can be implemented by a network are determined by the structure of the network, the set of simple functions, and its multiplicative parameters. A great deal of theoretical work has gone into characterizing these function spaces.
Most universal approximation theorems are in one of two classes. The first quantifies the approximation capabilities of neural networks with an arbitrary number of artificial neurons ("arbitrary width" case) and the second focuses on the case with an arbitrary number of hidden layers, each containing a limited number of artificial neurons ("arbitrary depth" case). In addition to these two classes, there are also universal approximation theorems for neural networks with bounded number of hidden layers and a limited number of neurons in each layer ("bounded depth and bounded width" case).
History
Arbitrary width
The first examples were the arbitrary width case. George Cybenko in 1989 proved it for sigmoid activation functions. Kurt Hornik, Maxwell Stinchcombe, and Halbert White showed in 1989 that multilayer feed-forward networks with as few as one hidden layer are universal approximators. Hornik also showed in 1991 that it is not the specific choice of the activation function but rather the multilayer feed-forward architecture itself that gives neural networks the potential of being universal approximators. Moshe Leshno et al in 1993 and later Allan Pinkus in 1999 showed that the universal approximation property is equivalent to having a nonpolynomial activation function.
Arbitrary depth
The arbitrary depth case was also studied by a number of authors such as Gustaf Gripenberg in 2003, Dmitry Yarotsky, Zhou Lu et al in 2017, and Boris Hanin and Mark Sellke in 2018, who focused on neural networks with ReLU activation function. In 2020, Patrick Kidger and Terry Lyons extended those results to neural networks with general activation functions such as tanh, GeLU, or Swish.
One special case of arbitrary depth is that each composition component comes from a finite set of mappings. In 2024, Cai constructed a finite set of mappings, named a vocabulary, such that any continuous function can be approximated by composing a sequence from the vocabulary. This is similar to the concept of compositionality in linguistics, which is the idea that a finite vocabulary of basic elements can be combined via grammar to express an infinite range of meanings.
Bounded depth and bounded width
The bounded depth and bounded width case was first studied by Maiorov and Pinkus in 1999. They showed that there exists an analytic sigmoidal activation function such that two hidden layer neural networks with bounded number of units in hidden layers are universal approximators.
In 2018, Guliyev and Ismailov constructed a smooth sigmoidal activation function providing the universal approximation property for two hidden layer feedforward neural networks with fewer units in the hidden layers. In 2018, they also constructed single hidden layer networks with bounded width that are still universal approximators for univariate functions. However, this does not apply to multivariable functions.
In 2022, Shen et al. obtained precise quantitative information on the depth and width required to approximate a target function by deep and wide ReLU neural networks.
Quantitative bounds
The question of minimal possible width for universality was first studied in 2021 by Park et al, who obtained the minimum width required for the universal approximation of Lp functions using feed-forward neural networks with ReLU as activation functions. Similar results that can be directly applied to residual neural networks were also obtained in the same year by Paulo Tabuada and Bahman Gharesifard using control-theoretic arguments. In 2023, Cai obtained the optimal minimum width bound for the universal approximation.
For the arbitrary depth case, Leonie Papon and Anastasis Kratsios derived explicit depth estimates depending on the regularity of the target function and of the activation function.
Kolmogorov network
The Kolmogorov–Arnold representation theorem is similar in spirit. Indeed, certain neural network families can directly apply the Kolmogorov–Arnold theorem to yield a universal approximation theorem. Robert Hecht-Nielsen showed that a three-layer neural network can approximate any continuous multivariate function. This was extended to the discontinuous case by Vugar Ismailov. In 2024, Ziming Liu and co-authors showed a practical application.
Variants
Discontinuous activation functions, noncompact domains, certifiable networks,
random neural networks, and alternative network architectures and topologies.
The universal approximation property of width-bounded networks has been studied as a dual of classical universal approximation results on depth-bounded networks. For input dimension dx and output dimension dy the minimum width required for the universal approximation of the Lp functions is exactly max{dx + 1, dy} (for a ReLU network). More generally this also holds if both ReLU and a threshold activation function are used.
Universal function approximation on graphs (or rather on graph isomorphism classes) by popular graph convolutional neural networks (GCNs or GNNs) can be made as discriminative as the Weisfeiler–Leman graph isomorphism test. In 2020, a universal approximation theorem result was established by Brüel-Gabrielsson, showing that graph representation with certain injective properties is sufficient for universal function approximation on bounded graphs and restricted universal function approximation on unbounded graphs, with an accompanying method (whose runtime scales with the numbers of nodes and edges of the graph) that performed at state of the art on a collection of benchmarks.
There are also a variety of results between non-Euclidean spaces and other commonly used architectures and, more generally, algorithmically generated sets of functions, such as the convolutional neural network (CNN) architecture, radial basis functions, or neural networks with specific properties.
Arbitrary-width case
A spate of papers in the 1980s–1990s, from George Cybenko and others, established several universal approximation theorems for arbitrary width and bounded depth. See for reviews. The following is the most often quoted:
Also, certain non-continuous activation functions can be used to approximate a sigmoid function, which then allows the above theorem to apply to those functions. For example, the step function works. In particular, this shows that a perceptron network with a single infinitely wide hidden layer can approximate arbitrary functions.
Such a function can also be approximated by a network of greater depth by using the same construction for the first layer and approximating the identity function with later layers.
The above proof has not specified how one might use a ramp function to approximate arbitrary functions in . A sketch of the proof is that one can first construct flat bump functions, intersect them to obtain spherical bump functions that approximate the Dirac delta function, then use those to approximate arbitrary functions in . The original proofs, such as the one by Cybenko, use methods from functional analysis, including the Hahn-Banach and Riesz–Markov–Kakutani representation theorems.
Notice also that the neural network is only required to approximate the target function within a compact set. The proof does not describe how the function would be extrapolated outside of that region.
The problem with polynomials may be removed by allowing the outputs of the hidden layers to be multiplied together (the "pi-sigma networks"), yielding the generalization:
Arbitrary-depth case
The "dual" versions of the theorem consider networks of bounded width and arbitrary depth. A variant of the universal approximation theorem was proved for the arbitrary depth case by Zhou Lu et al. in 2017. They showed that networks of width n + 4 with ReLU activation functions can approximate any Lebesgue-integrable function on n-dimensional input space with respect to L1 distance if network depth is allowed to grow. It was also shown that if the width was less than or equal to n, this general expressive power to approximate any Lebesgue integrable function was lost. In the same paper it was shown that ReLU networks with width n + 1 were sufficient to approximate any continuous function of n-dimensional input variables. The following refinement specifies the optimal minimum width for which such an approximation is possible and is due to.
Together, the central result of yields the following universal approximation theorem for networks with bounded width (see also for the first result of this kind).
Certain necessary conditions for the bounded width, arbitrary depth case have been established, but there is still a gap between the known sufficient and necessary conditions.
Bounded depth and bounded width case
The first result on approximation capabilities of neural networks with bounded number of layers, each containing a limited number of artificial neurons was obtained by Maiorov and Pinkus. Their remarkable result revealed that such networks can be universal approximators and for achieving this property two hidden layers are enough.
This is an existence result. It says that activation functions providing universal approximation property for bounded depth bounded width networks exist. Using certain algorithmic and computer programming techniques, Guliyev and Ismailov efficiently constructed such activation functions depending on a numerical parameter. The developed algorithm allows one to compute the activation functions at any point of the real axis instantly. For the algorithm and the corresponding computer code see. The theoretical result can be formulated as follows.
Here "σ is λ-strictly increasing on some set X" means that there exists a strictly increasing function u on X such that |σ(x) − u(x)| ≤ λ for all x in X. Clearly, a λ-increasing function behaves like a usual increasing function as λ gets small.
In the "depth-width" terminology, the above theorem says that for certain activation functions depth- width- networks are universal approximators for univariate functions and depth- width- networks are universal approximators for -variable functions ().
See also
Kolmogorov–Arnold representation theorem
Representer theorem
No free lunch theorem
Stone–Weierstrass theorem
Fourier series
References
Theorems in analysis
Artificial neural networks
Network architecture
Networks | Universal approximation theorem | [
"Mathematics",
"Engineering"
] | 2,317 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Network architecture",
"Computer networks engineering",
"Mathematical problems",
"Mathematical theorems"
] |
18,545,328 | https://en.wikipedia.org/wiki/Aescin | Aescin or escin is a mixture of saponins with anti-inflammatory, vasoconstrictor and vasoprotective effects found in Aesculus hippocastanum (the horse chestnut). Aescin is the main active component in horse chestnut, and is responsible for most of its medicinal properties. The main active compound of aescin is β-aescin, although the mixture also contains various other components including α-aescin, protoescigenin, barringtogenol, cryptoescin and benzopyrones.
Evidence suggests that aescin, especially pure β-aescin, is safe and effective for the short-term treatment of chronic venous insufficiency; however, more high-quality randomized controlled trials are required to confirm its effectiveness. Horse chestnut extract may be as effective and well tolerated as the use of compression stockings.
Mechanism of action
Aescin appears to produce effects through a wide range of mechanisms. It induces endothelial nitric oxide synthesis by making endothelial cells more permeable to calcium ions, and also induces release of prostaglandin F2α. Other possible mechanisms include serotonin antagonism and histamine antagonism and reduced catabolism of tissue mucopolysaccharides.
References
External links
Information on horse chestnut extract from Memorial Sloan-Kettering Cancer Center
- alpha-Aescin
Saponins
Triterpene glycosides
Acetate esters | Aescin | [
"Chemistry"
] | 316 | [
"Biomolecules by chemical classification",
"Natural products",
"Saponins"
] |
18,545,396 | https://en.wikipedia.org/wiki/Neo-bulk%20cargo | In the ocean shipping trade, neo-bulk cargo is a type of cargo that is a subcategory of general / break-bulk cargo, that exists alongside the other main categories of bulk cargo and containerized cargo. Gerhardt Muller, erstwhile professor at the United States Merchant Marine Academy and Manager of Regional Intermodal Planning of the Port Authority of New York and New Jersey, promotes it from a subcategory to being a third main category of cargo in its own right, alongside containerized and bulk cargo.
Description
It comprises goods that are prepackaged, counted as they are loaded and unloaded (as opposed to bulk cargo where individual items are not counted), not stored in containers, and transferred as units at port. Types of neo-bulk cargo goods include heavy machinery, lumber, bundled steel / steel coils, large units of scrap metal, banana bunches, waste paper bundles, and motor vehicles. The category has only become recognized as a distinct cargo category in its own right in recent decades.
Ocean vessels that are designed to carry specific forms of neo-bulk cargo, such as dedicated Roll-on/roll-off car-carrying ships, are called neo-bulk carriers. They are specially designed for the individual types of neo-bulk cargoes that they carry, although car-carriers can sometimes double-up to carry different types of cargo on a return journey. In 2000, the largest neo-bulk car carrier company in the world was Wallenius Wilhelmsen, with a fleet of 20 carrier vessels, and a total haulage that year of 1.5 million vehicles. Other special designs of neo-bulk carriers include log-carriers that are designed to tip their load over the side of the vessel into the water, relying upon the fact that logs will float, or specialist carriers for newsprint or livestock.
References
Reference bibliography
Maritime transport | Neo-bulk cargo | [
"Physics"
] | 380 | [
"Physical systems",
"Transport",
"Transport stubs"
] |
18,548,164 | https://en.wikipedia.org/wiki/Government%20Operational%20Research%20Service | In the United Kingdom, the Government Operational Research Service (GORS) supports and champions Operational Research across government. GORS currently supports policy-making, strategy and operations in many different departments and agencies across the United Kingdom and employs over 1000 analysts, ranging from sandwich students to members of the Senior Civil Service.
References
Operations research
Research institutes in the United Kingdom
Government of the United Kingdom
Government research | Government Operational Research Service | [
"Mathematics"
] | 79 | [
"Applied mathematics",
"Operations research"
] |
18,550,182 | https://en.wikipedia.org/wiki/Biomarker%20Insights | Biomarker Insights is a peer-reviewed open access academic journal focusing on biomarkers and their clinical applications. The journal aims to be a venue for rapid communications in the field. The journal was established in 2006 and was originally published by Libertas Academica. SAGE Publications became the publisher in September 2016. The editor in chief is Karen Pulford.
Indexing
The journal is indexed in:
References
External links
Biochemistry journals
Academic journals established in 2006
English-language journals
SAGE Publishing academic journals
Open access journals | Biomarker Insights | [
"Chemistry"
] | 103 | [
"Biochemistry stubs",
"Biochemistry journals",
"Biochemistry literature",
"Biochemistry journal stubs"
] |
18,550,252 | https://en.wikipedia.org/wiki/Evolutionary%20Bioinformatics | Evolutionary Bioinformatics is a peer-reviewed open access scientific journal focusing on computational biology in the study of evolution. The journal was established in 2005 by Allen Rodrigo and is currently edited by Dennis Wall (Stanford University). It was originally published by Libertas Academica, but Sage Publishing became the publisher in September 2016.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.6.
References
External links
Academic journals established in 2005
Bioinformatics and computational biology journals
Evolutionary biology journals
Open access journals
SAGE Publishing academic journals
English-language journals | Evolutionary Bioinformatics | [
"Biology"
] | 132 | [
"Bioinformatics",
"Bioinformatics and computational biology journals"
] |
23,617,720 | https://en.wikipedia.org/wiki/Pest%20resistance%20management%20plans | To protect the continued use of biopesticides, the United States Environmental Protection Agency is requiring companies developing transgenic crops to submit and implement pest resistance management plans as a requirement of product registration.
If they are exposed to a toxin excessively, most insect populations can develop resistance, making pest control products less effective. With new biopesticide technologies comes the concern that pests will rapidly develop resistance to natural insecticides, because plant pesticides tend to produce the pesticidal active ingredient throughout a growing season, increasing the selection pressure upon both the target pests and any other susceptible insects feeding on the transformed crop. A resistance management plan is intended to sustain the useful life of transgenic technology as well as the utility of the toxin for organic farmers. This is part of the reason the EPA enacts “risk assessments that evaluate the potential for harm to humans, wildlife, fish, and plants, including endangered species and non-target organisms. Contamination of surface water or ground water from leaching, runoff, and spray drift” on farms across the country.
See also
Bioengineering
Pesticide resistance
References
Agriculture in the United States
Biopesticides
Regulation of genetically modified organisms
Genetic engineering and agriculture | Pest resistance management plans | [
"Engineering",
"Biology"
] | 243 | [
"Genetic engineering and agriculture",
"Regulation of genetically modified organisms",
"Genetic engineering",
"Regulation of biotechnologies"
] |
23,618,338 | https://en.wikipedia.org/wiki/C8H16 | The molecular formula C8H16 (molar mass: 112.21 g/mol, exact mass: 112.1252 u) may refer to:
Cyclooctane
Methylcycloheptane
Dimethylcyclohexanes
Octenes
1-Octene
Molecular formulas | C8H16 | [
"Physics",
"Chemistry"
] | 75 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,619,112 | https://en.wikipedia.org/wiki/Antimetric%20electrical%20network | An antimetric electrical network is an electrical network that exhibits anti-symmetrical electrical properties. The term is often encountered in filter theory, but it applies to general electrical network analysis. Antimetric is the diametrical opposite of symmetric; it does not merely mean "asymmetric" (i.e., "lacking symmetry"). It is possible for networks to be symmetric or antimetric in their electrical properties without being physically or topologically symmetric or antimetric.
Definition
References to symmetry and antimetry of a network usually refer to the input impedances of a two-port network when correctly terminated. A symmetric network will have two equal input impedances, Zi1 and Zi2. For an antimetric network, the two impedances must be the dual of each other with respect to some nominal impedance R0. That is,

Zi1 Zi2 = R0²

or equivalently

Zi1/R0 = R0/Zi2
It is necessary for antimetry that the terminating impedances are also the dual of each other, but in many practical cases the two terminating impedances are resistors and are both equal to the nominal impedance R0. Hence, they are both symmetric and antimetric at the same time.
Physical and electrical antimetry
Symmetric and antimetric networks are often also topologically symmetric and antimetric, respectively. The physical arrangement of their components and values are symmetric or antimetric as in the ladder example above. However, it is not a necessary condition for electrical antimetry. For example, if the example networks of figure 1 have an additional identical T-section added to the left-hand side as shown in figure 2, then the networks remain topologically symmetric and antimetric. However, the networks resulting from the application of Bartlett's bisection theorem to the first T-section in each network, as shown in figure 3, are neither physically symmetric nor antimetric but retain their electrically symmetric (in the first case) and antimetric (in the second case) properties.
Two-port parameters
The conditions for symmetry and antimetry can be stated in terms of two-port parameters. For a two-port network described by normalized impedance parameters (z-parameters),
if the network is symmetric, and
if the network is antimetric. Passive networks of the kind illustrated in this article are also reciprocal, which requires that
and results in a normalized z-parameter matrix of,
for symmetric networks and
for antimetric networks.
For a two-port network described by scattering parameters (S-parameters),

S11 = S22

if the network is symmetric, and

S11 = −S22

if the network is antimetric. The condition for reciprocity is,

S12 = S21

resulting in an S-parameter matrix of,

[S] = ( S11  S12 ; S12  S11 )

for symmetric networks and

[S] = ( S11  S12 ; S12  −S11 )

for antimetric networks.
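These conditions are easy to check numerically for measured or simulated parameters. The sketch below classifies a reciprocal two-port from its S-parameters; the example matrix is hypothetical and chosen only so that S11 = −S22.

```python
# Classify a two-port as symmetric or antimetric from its S-parameters.
# The example S-matrix is hypothetical, chosen to satisfy S11 = -S22 with S12 = S21.
import numpy as np

def classify(S, tol=1e-9):
    s11, s12, s21, s22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    if abs(s12 - s21) > tol:
        return "non-reciprocal"
    if abs(s11 - s22) <= tol:
        return "symmetric"
    if abs(s11 + s22) <= tol:
        return "antimetric"
    return "neither"

print(classify(np.array([[0.3, 0.9], [0.9, -0.3]])))   # -> antimetric
```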
Applications
Some circuit designs naturally output antimetric networks. For instance, a low-pass Butterworth filter implemented as a ladder network with an even number of elements will be antimetric. Similarly, a bandpass Butterworth with an even number of resonators will be antimetric, as will a Butterworth mechanical filter with an even number of mechanical resonators.
Glossary notes
References
Linear filters
Filter theory
Analog circuits
Electronic design | Antimetric electrical network | [
"Engineering"
] | 628 | [
"Telecommunications engineering",
"Electronic design",
"Analog circuits",
"Filter theory",
"Electronic engineering",
"Design"
] |
23,619,441 | https://en.wikipedia.org/wiki/W.%20E.%20S.%20Turner | William Ernest Stephen Turner (22 September 1881 – 27 October 1963) was a British chemist and pioneer of scientific glass technology.
Biography
Turner was born in Wednesbury, Staffordshire on 22 September 1881. He went to King Edward VI Grammar School, Five Ways, Birmingham, and achieved a BSc (1902) and MSc (1904) in chemistry at the University of Birmingham.
He married Mary Isobell Marshall (died 1939) and they had 4 children.
In 1904, he joined the University College of Sheffield as a lecturer, and, in 1915, established the Department of Glass Manufacture, becoming in 1916 the Department of Glass Technology. He remained as its head until his retirement in 1945.
In 1943, he married Helen Nairn Munro, an artist noted for her glass engraving and a teacher of glass decoration at the Edinburgh College of Art. She was provided with a blue dress and shoes in glass fibre cloth (which was then an unusual industrial material). This has been selected as one of the items in the BBC's A History of the World in 100 Objects. The same year, he established from his extensive collection a museum of historical and modern glass, which became the Turner Museum of Glass; the wedding dress is on display there.
He died on 27 October 1963.
Work
Publications
From 1904 to 1914, he published 21 papers on physical chemistry, mainly on molecular weights in solution. However, the bulk of his work from 1917 to 1954 was on the chemistry and technology of glass. Following his retirement, he produced an extensive series on the history of glass technology and on glass in archeology. Apart from this, in 1909, he wrote a series of articles in the Sheffield Daily Telegraph about the scientist in industry, in which cooperation with universities was urged.
Research
His early career was strictly academic, largely dealing with the associations of molecules in the liquid state. However, as his articles in the local newspaper showed, he was interested in the application of science to practical industrial problems, and this became the main theme of his work. The beginning of the First World War cut off metallurgical supplies from Germany and Austria, and Turner proposed that the University should help British industry. The work in metallurgy led to enquiries about glass, and in 1915 Turner produced a 'Report on the glass industry of Yorkshire', noting that this was largely unscientific and rule of thumb in nature. He thereby persuaded the University to set up a Department of Glass Manufacture in 1915 for research and teaching where he remained for the rest of his career, becoming internationally known. The main thrust of his research was on a fundamental understanding of the relationship between the chemical composition and the working properties of glasses.
In 1916, he founded the Society of Glass Technology, becoming its first secretary. It published a Journal, which he edited until 1951. He was also involved in the formation of the International Commission on Glass.
Teaching
Turner initially taught physical chemistry, and in 1905 started specific courses for metallurgists. This involvement led him to become President of the Sheffield Society of Applied Metallurgy in 1914. In 1915, the Department of Glass Manufacture began an outreach programme, providing short courses to industry in Mexborough, Barnsley, Castleford and Knottingley in addition to Saturday classes in Sheffield. These were extended to glass making centres in Derby, Alloa, Glasgow and London. From 1917, full-time day students entered for what became a Bachelor of Technical Science degree. During the Second World War, Turner and other staff of the department provided technical lectures to industries such as those making glass electronic vacuum tubes.
Honours
He was appointed an Officer of the Order of the British Empire in the 1919 New Year Honours for application of science to the glass industry, and in 1938 was appointed a Fellow of the Royal Society. He was the only person outside Germany to receive the Otto Schott Medal.
References
External links
Fellows of the Royal Society
Officers of the Order of the British Empire
1881 births
1963 deaths
English chemists
Glass engineering and science
Glass makers
Alumni of the University of Birmingham | W. E. S. Turner | [
"Materials_science",
"Engineering"
] | 812 | [
"Glass engineering and science",
"Materials science"
] |
3,517,607 | https://en.wikipedia.org/wiki/Sorbitol-MacConkey%20agar | Sorbitol-MacConkey agar is a variant of traditional MacConkey agar used in the detection of E. coli O157:H7. Traditionally, MacConkey agar has been used to distinguish those bacteria that ferment lactose from those that do not. This is important because gut bacteria, such as Escherichia coli, can typically ferment lactose, while important gut pathogens, such as Salmonella enterica and most shigellas are unable to ferment lactose. Shigella sonnei can ferment lactose, but only after prolonged incubation, so it is referred to as a late-lactose fermenter.
During fermentation of the sugar, acid is formed and the pH of the medium drops, changing the color of the pH indicator. Different formulations use different indicators; neutral red is often used. For example, lactose fermenters turn a deep red when this pH indicator is used. Those bacteria unable to ferment lactose, often referred to as nonlactose fermenters, or NLFs for short, use the peptone in the medium. This releases ammonia, which raises the pH of the medium. Although some authors refer to NLFs as being colourless, in reality they turn neutral red a buffish color.
E. coli O157:H7 differs from most other strains of E. coli in being unable to ferment sorbitol. In sorbitol-MacConkey agar, lactose is replaced by sorbitol. Non-pathogenic strains of E. coli ferment sorbitol to produce acid; E. coli O157:H7 cannot ferment sorbitol, so this strain uses peptone to grow. This raises the pH of the medium, allowing the pathogenic strain to be differentiated from other non-pathogenic E. coli strains through the action of the pH indicator in the medium.
References
Microbiological media | Sorbitol-MacConkey agar | [
"Biology"
] | 417 | [
"Microbiological media",
"Microbiology equipment"
] |
3,518,436 | https://en.wikipedia.org/wiki/Transistor%20model | Transistors are simple devices with complicated behavior. In order to ensure the reliable operation of circuits employing transistors, it is necessary to scientifically model the physical phenomena observed in their operation using transistor models. There exists a variety of different models that range in complexity and in purpose. Transistor models divide into two major groups: models for device design and models for circuit design.
Models for device design
The modern transistor has an internal structure that exploits complex physical mechanisms. Device design requires a detailed understanding of how device manufacturing processes such as ion implantation, impurity diffusion, oxide growth, annealing, and etching affect device behavior. Process models simulate the manufacturing steps and provide a microscopic description of device "geometry" to the device simulator. "Geometry" does not mean readily identified geometrical features such as a planar or wrap-around gate structure, or raised or recessed forms of source and drain (see Figure 1 for a memory device with some unusual modeling challenges related to charging the floating gate by an avalanche process). It also refers to details inside the structure, such as the doping profiles after completion of device processing.
With this information about what the device looks like, the device simulator models the physical processes taking place in the device to determine its electrical behavior in a variety of circumstances: DC current–voltage behavior, transient behavior (both large-signal and small-signal), dependence on device layout (long and narrow versus short and wide, or interdigitated versus rectangular, or isolated versus proximate to other devices). These simulations tell the device designer whether the device process will produce devices with the electrical behavior needed by the circuit designer, and is used to inform the process designer about any necessary process improvements. Once the process gets close to manufacture, the predicted device characteristics are compared with measurement on test devices to check that the process and device models are working adequately.
Although long ago the device behavior modeled in this way was very simple (mainly drift plus diffusion in simple geometries), today many more processes must be modeled at a microscopic level; for example, leakage currents in junctions and oxides, complex transport of carriers including velocity saturation and ballistic transport, quantum mechanical effects, use of multiple materials (for example, Si-SiGe devices, and stacks of different dielectrics) and even the statistical effects due to the probabilistic nature of ion placement and carrier transport inside the device. Several times a year the technology changes and simulations have to be repeated. The models may require change to reflect new physical effects, or to provide greater accuracy. The maintenance and improvement of these models is a business in itself.
These models are very computer intensive, involving detailed spatial and temporal solutions of coupled partial differential equations on three-dimensional grids inside the device.
Such models are slow to run and provide detail not needed for circuit design. Therefore, faster transistor models oriented toward circuit parameters are used for circuit design.
Models for circuit design
Transistor models are used for almost all modern electronic design work. Analog circuit simulators such as SPICE use models to predict the behavior of a design. Most design work is related to integrated circuit designs which have a very large tooling cost, primarily for the photomasks used to create the devices, and there is a large economic incentive to get the design working without any iterations. Complete and accurate models allow a large percentage of designs to work the first time.
Modern circuits are usually very complex. The performance of such circuits is difficult to predict without accurate computer models, including but not limited to models of the devices used. The device models include effects of transistor layout: width, length, interdigitation, proximity to other devices; transient and DC current–voltage characteristics; parasitic device capacitance, resistance, and inductance; time delays; and temperature effects; to name a few items.
Large-signal nonlinear models
Nonlinear, or large signal transistor models fall into three main types:
Physical models
These are models based upon device physics, based upon approximate modeling of physical phenomena within a transistor. Parameters within these models are based upon physical properties such as oxide thicknesses, substrate doping concentrations, carrier mobility, etc. In the past these models were used extensively, but the complexity of modern devices makes them inadequate for quantitative design. Nonetheless, they find a place in hand analysis (that is, at the conceptual stage of circuit design), for example, for simplified estimates of signal-swing limitations.
Empirical models
This type of model is entirely based upon curve fitting, using whatever functions and parameter values most adequately fit measured data to enable simulation of transistor operation. Unlike a physical model, the parameters in an empirical model need have no fundamental basis, and will depend on the fitting procedure used to find them. The fitting procedure is key to success of these models if they are to be used to extrapolate to designs lying outside the range of data to which the models were originally fitted. Such extrapolation is a hope of such models, but is not fully realized so far.
Small-signal linear models
Small-signal or linear models are used to evaluate stability, gain, noise and bandwidth, both in the conceptual stages of circuit design (to decide between alternative design ideas before computer simulation is warranted) and using computers. A small-signal model is generated by taking derivatives of the current–voltage curves about a bias point or Q-point. As long as the signal is small relative to the nonlinearity of the device, the derivatives do not vary significantly, and can be treated as standard linear circuit elements.
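As a concrete illustration of linearising about a Q-point, the sketch below differentiates a simple exponential collector-current law numerically and compares the result with the analytic transconductance gm = IC/VT for that law. The current law and the numbers are illustrative assumptions, not a complete Ebers–Moll or SPICE model.

```python
# Small-signal transconductance extracted from a large-signal model by taking the
# derivative about a bias (Q) point. The exponential law and constants are illustrative.
import math

I_S = 1e-15        # saturation current (A), assumed
V_T = 0.02585      # thermal voltage near 300 K (V)

def ic(vbe):
    """Idealized large-signal collector current."""
    return I_S * math.exp(vbe / V_T)

vbe_q = 0.65                                           # bias point (V)
dv = 1e-6
gm_numeric = (ic(vbe_q + dv) - ic(vbe_q - dv)) / (2 * dv)
gm_analytic = ic(vbe_q) / V_T                          # gm = IC/VT for this law
print(gm_numeric, gm_analytic)                         # the two agree closely
```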
An advantage of small signal models is they can be solved directly, while large signal nonlinear models are generally solved iteratively, with possible convergence or stability issues. By simplification to a linear model, the whole apparatus for solving linear equations becomes available, for example, simultaneous equations, determinants, and matrix theory (often studied as part of linear algebra), especially Cramer's rule. Another advantage is that a linear model is easier to think about, and helps to organize thought.
Small-signal parameters
A transistor's parameters represent its electrical properties. Engineers employ transistor parameters in production-line testing and in circuit design. A group of a transistor's parameters sufficient to predict circuit gain, input impedance, and output impedance are components in its small-signal model.
A number of different two-port network parameter sets may be used to model a transistor. These include:
Transmission parameters (T-parameters),
Hybrid-parameters (h-parameters),
Impedance parameters (z-parameters),
Admittance parameters (y-parameters), and
Scattering parameters (S-parameters).
Scattering parameters, or S parameters, can be measured for a transistor at a given bias point with a vector network analyzer. S parameters can be converted to another parameter set using standard matrix algebra operations.
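As an example of such a conversion, the sketch below maps S-parameters to z-parameters for the common case of an equal, real reference impedance Z0 at both ports, using Z = Z0(I + S)(I − S)^−1. The sample S-matrix and Z0 = 50 Ω are illustrative values, not measured data.

```python
# Convert two-port S-parameters to impedance (z) parameters for equal real
# reference impedance Z0 at both ports: Z = Z0 (I + S)(I - S)^-1.
import numpy as np

def s_to_z(S, z0=50.0):
    I = np.eye(2)
    return z0 * (I + S) @ np.linalg.inv(I - S)

S = np.array([[0.1 + 0.2j, 0.05],      # hypothetical S-parameters at one bias
              [2.0, 0.3 - 0.1j]])      # and frequency point
print(s_to_z(S))
```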
Popular models
Gummel–Poon model
Ebers–Moll model
Hybrid-pi model
H-parameter model
See also
Safe operating area
Electronic design automation
Electronic circuit simulation
Semiconductor device modeling
References
External links
Agilent EEsof EDA, IC-CAP Parameter Extraction and Device Modeling Software http://eesof.tm.agilent.com/products/iccap_main.html
Electronic engineering
Transistor modeling | Transistor model | [
"Technology",
"Engineering"
] | 1,493 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering"
] |
3,521,038 | https://en.wikipedia.org/wiki/Isothermal%E2%80%93isobaric%20ensemble | The isothermal–isobaric ensemble (constant temperature and constant pressure ensemble) is a statistical mechanical ensemble that maintains constant temperature and constant pressure applied. It is also called the NpT-ensemble, where the number of particles N is also kept as a constant. This ensemble plays an important role in chemistry as chemical reactions are usually carried out under constant pressure condition. The NPT ensemble is also useful for measuring the equation of state of model systems whose virial expansion for pressure cannot be evaluated, or systems near first-order phase transitions.
In the ensemble, the probability of a microstate i is p(i) = Z⁻¹ exp(−β(E(i) + pV(i))), where Z is the partition function, E(i) is the internal energy of the system in microstate i, and V(i) is the volume of the system in microstate i.
The probability of a macrostate is p = Z⁻¹ exp(−β(E + pV − TS)) = Z⁻¹ exp(−βG), where G is the Gibbs free energy.
Derivation of key properties
The partition function for the NpT-ensemble can be derived from statistical mechanics by beginning with a system of N identical atoms described by a Hamiltonian of the form Σᵢ pᵢ²/2m + U(r^N) and contained within a box of volume V = L³. This system is described by the partition function of the canonical ensemble in 3 dimensions:

Z^sys(N, V, T) = 1/(Λ^{3N} N!) ∫₀^L … ∫₀^L dr^N exp(−βU(r^N)),

where Λ = √(h²β/(2πm)), the thermal de Broglie wavelength (β = 1/(k_B T) and k_B is the Boltzmann constant), and the factor 1/N! (which accounts for indistinguishability of particles) both ensure normalization of entropy in the quasi-classical limit. It is convenient to adopt a new set of coordinates defined by L sᵢ = rᵢ such that the partition function becomes

Z^sys(N, V, T) = V^N/(Λ^{3N} N!) ∫₀¹ … ∫₀¹ ds^N exp(−βU(s^N)).
If this system is then brought into contact with a bath of volume V₀ at constant temperature and pressure containing an ideal gas with total particle number M such that M − N ≫ N, the partition function of the whole system is simply the product of the partition functions of the subsystems:

Z^{sys+bath}(N, V, T) = V^N (V₀ − V)^{M−N}/(Λ^{3M} N! (M − N)!) ∫ ds^N exp(−βU(s^N)).

The integral over the bath coordinates is simply 1. In the limit that V₀ → ∞, M → ∞ while (M − N)/V₀ stays constant, a change in volume of the system under study will not change the pressure p of the whole system. Taking V/V₀ → 0 allows for the approximation (V₀ − V)^{M−N} = V₀^{M−N}(1 − V/V₀)^{M−N} ≈ V₀^{M−N} exp(−(M − N)V/V₀). For an ideal gas, (M − N)/V₀ = ρ = βP gives a relationship between density and pressure. Substituting this into the above expression for the partition function, multiplying by a factor βP (see below for justification for this step), and integrating over the volume V then gives

Δ^{sys+bath}(N, P, T) = βP V₀^{M−N}/(Λ^{3M} N! (M − N)!) ∫ dV V^N exp(−βPV) ∫ ds^N exp(−βU(s^N)).

The partition function for the bath is simply Z^{bath} = V₀^{M−N}/(Λ^{3(M−N)} (M − N)!). Separating this term out of the overall expression gives the partition function for the NpT-ensemble:

Δ(N, P, T) = βP/(Λ^{3N} N!) ∫ dV V^N exp(−βPV) ∫ ds^N exp(−βU(s^N)).
Using the above definition of Z^sys(N, V, T), the partition function can be rewritten as

Δ(N, P, T) = βP ∫ dV exp(−βPV) Z^sys(N, V, T),

which can be written more generally as a weighted sum over the partition function for the canonical ensemble

Δ(N, P, T) = ∫ Z(N, V, T) exp(−βPV) C dV.

The quantity C is simply some constant with units of inverse volume, which is necessary to make the integral dimensionless. In this case, C = βP, but in general it can take on multiple values. The ambiguity in its choice stems from the fact that volume is not a quantity that can be counted (unlike e.g. the number of particles), and so there is no “natural metric” for the final volume integration performed in the above derivation. This problem has been addressed in multiple ways by various authors, leading to different values for C with the same units of inverse volume. The differences vanish (i.e. the choice of C becomes arbitrary) in the thermodynamic limit, where the number of particles goes to infinity.
The NpT-ensemble can also be viewed as a special case of the Gibbs canonical ensemble, in which the macrostates of the system are defined according to external temperature T and external forces acting on the system J. Consider such a system containing N particles. The Hamiltonian of the system is then given by ℋ − J·x, where ℋ is the system's Hamiltonian in the absence of external forces and x are the conjugate variables of J. The microstates μ of the system then occur with probability defined by

p(μ, x) = exp[−βℋ(μ) + βJ·x]/𝒵,

where the normalization factor 𝒵 is defined by

𝒵(N, J, T) = Σ_{μ, x} exp[βJ·x − βℋ(μ)].

This distribution is called the generalized Boltzmann distribution by some authors.
The NpT-ensemble can be found by taking J = −P and x = V. Then the normalization factor becomes

𝒵(N, P, T) = Σ_{μ, V} exp[−βPV − β(Σᵢ pᵢ²/2m + U(r^N))],

where the Hamiltonian has been written in terms of the particle momenta p^N and positions r^N. This sum can be taken to an integral over both V and the microstates μ. The measure for the latter integral is the standard measure of phase space for identical particles: dΓ_N = 1/(h^{3N} N!) ∏ᵢ d³pᵢ d³rᵢ. The integral over the momentum term is a Gaussian integral, and can be evaluated explicitly as

∫ ∏ᵢ (d³pᵢ/h³) exp[−β Σᵢ pᵢ²/2m] = 1/Λ^{3N}.

Inserting this result into 𝒵(N, P, T) gives a familiar expression:

𝒵(N, P, T) = 1/(Λ^{3N} N!) ∫ dV exp(−βPV) ∫ dr^N exp(−βU(r^N)).

This is almost the partition function for the NpT-ensemble, but it has units of volume, an unavoidable consequence of taking the above sum over volumes into an integral. Restoring the constant C yields the proper result for Δ(N, P, T).
From the preceding analysis it is clear that the characteristic state function of this ensemble is the Gibbs free energy,

G(N, P, T) = −k_B T ln Δ(N, P, T).

This thermodynamic potential is related to the Helmholtz free energy (logarithm of the canonical partition function), F = −k_B T ln Z(N, V, T), in the following way:

G = F + PV.
Applications
Constant-pressure simulations are useful for determining the equation of state of a pure system. Monte Carlo simulations using the NpT-ensemble are particularly useful for determining the equation of state of fluids at pressures of around 1 atm, where they can achieve accurate results with much less computational time than other ensembles.
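A central ingredient of such simulations is the acceptance rule for trial volume changes. The sketch below shows the standard Metropolis criterion for an NpT volume move with the volume sampled uniformly (as described, for example, in Frenkel and Smit); the configurational energies before and after rescaling are assumed to be supplied by the caller, and all parameters are placeholders.

```python
# Metropolis acceptance rule for a trial volume move in constant-NPT Monte Carlo,
# with the new volume sampled uniformly in V. U_old/U_new are the configurational
# energies before and after rescaling the particle coordinates.
import math
import random

def accept_volume_move(U_old, U_new, V_old, V_new, N, P, beta, rng=random):
    arg = (-beta * ((U_new - U_old) + P * (V_new - V_old))
           + N * math.log(V_new / V_old))
    return rng.random() < math.exp(min(arg, 0.0))   # accept with prob. min(1, e^arg)
```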
Zero-pressure NpT-ensemble simulations provide a quick way of estimating vapor–liquid coexistence curves in mixed-phase systems.
NpT-ensemble Monte Carlo simulations have been applied to study the excess properties and equations of state of various models of fluid mixtures.
The NpT-ensemble is also useful in molecular dynamics simulations, e.g. to model the behavior of water at ambient conditions.
References
Statistical ensembles | Isothermal–isobaric ensemble | [
"Physics"
] | 1,087 | [
"Statistical ensembles",
"Statistical mechanics"
] |
3,521,050 | https://en.wikipedia.org/wiki/XOR%20gate | XOR gate (sometimes EOR, or EXOR and pronounced as Exclusive OR) is a digital logic gate that gives a true (1 or HIGH) output when the number of true inputs is odd. An XOR gate implements an exclusive or () from mathematical logic; that is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (0/LOW) or both are true, a false output results. XOR represents the inequality function, i.e., the output is true if the inputs are not alike otherwise the output is false. A way to remember XOR is "must have one or the other but not both".
An XOR gate may serve as a "programmable inverter" in which one input determines whether to invert the other input, or to simply pass it along with no change. Hence it functions as an inverter (a NOT gate) which may be activated or deactivated by a switch.
XOR can also be viewed as addition modulo 2. As a result, XOR gates are used to implement binary addition in computers. A half adder consists of an XOR gate and an AND gate. The gate is also used in subtractors and comparators.
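A half adder can be sketched directly from this observation; the snippet below is a behavioural illustration in software, not a gate-level netlist.

```python
# Half adder: XOR produces the sum bit (addition modulo 2), AND produces the carry.
def half_adder(a, b):
    return a ^ b, a & b        # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```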
The algebraic expressions A·B′ + A′·B, (A + B)·(A′ + B′), (A + B)·(A·B)′, and A ⊕ B all represent the XOR gate with inputs A and B. The behavior of XOR is summarized in the truth table shown on the right.
Symbols
There are three schematic symbols for XOR gates: the traditional ANSI and DIN symbols and the IEC symbol. In some cases, the DIN symbol is used with ⊕ instead of ≢. For more information see Logic Gate Symbols.
The "=1" on the IEC symbol indicates that the output is activated by only one active input.
The logic symbols ⊕, Jpq, and ⊻ can be used to denote an XOR operation in algebraic expressions.
C-like languages use the caret symbol ^ to denote bitwise XOR. (Note that the caret does not denote logical conjunction (AND) in these languages, despite the similarity of symbol.)
Implementation
The XOR gate is most commonly implemented using MOSFET circuits. Some of those implementations include:
AND-OR-Invert
XOR gates can be implemented using AND-OR-Invert (AOI) or OR-AND-Invert (OAI) logic.
CMOS
The complementary metal–oxide–semiconductor (CMOS) implementations of the XOR gate corresponding to the AOI logic above are shown below.
On the left, the nMOS and pMOS transistors are arranged so that the input pairs and activate the 2 pMOS transistors of the top left or the 2 pMOS transistors of the top right respectively, connecting Vdd to the output for a logic high. The remaining input pairs and activate each one of the two nMOS paths in the bottom to Vss for a logic low.
If inverted inputs (for example from a flip-flop) are available, this gate can be used directly. Otherwise, two additional inverters with two transistors each are needed to generate the complements of A and B, bringing the total number of transistors to twelve.
The AOI implementation without inverted input has been used, for example, in the Intel 386 CPU.
Transmission gates
The XOR gate can also be implemented through the use of transmission gates with pass transistor logic.
This implementation uses two transmission gates and two inverters (not shown in the diagram) to generate the complements of A and B, for a total of eight transistors, four less than in the previous design.
The XOR function is implemented by passing through to the output the inverted value of A when B is high, and passing the value of A when B is at a logic low. When both inputs are low, the transmission gate at the bottom is off and the one at the top is on and lets A through; since A is low, the output is low. When both are high, only the one at the bottom is active and lets the inverted value of A through; since A is high, the output will again be low. Similarly, if B stays high but A is low, the output is the inverted value of A, which is high as expected, and if B is low but A is high, the value of A passes through and the output is high, completing the truth table for the XOR gate.
The trade-off with the previous implementation is that since transmission gates are not ideal switches, there is resistance associated with them, so depending on the signal strength of the input, cascading them may degrade the output levels.
Optimized pass-gate-logic wiring
The previous transmission gate implementation can be further optimized from eight to six transistors by implementing the functionality of the inverter that generates and the bottom pass-gate with just two transistors arranged like an inverter but with the source of the pMOS connected to instead of Vdd and the source of the nMOS connected to instead of GND.
The two leftmost transistors mentioned above, perform an optimized conditional inversion of A when B is at a logic high using pass transistor logic to reduce the transistor count and when B is at a logic low, their output is at a high impedance state. The two in the middle are a transmission gate that drives the output to the value of A when B is at a logic low and the two rightmost transistors form an inverter needed to generate used by the transmission gate and the pass transistor logic circuit.
As with the previous implementation, the direct connection of the inputs to the outputs through the pass gate transistors or through the two leftmost transistors should be taken into account, especially when cascading them.
XOR with AND and NOR
An XOR gate can be built from one AND gate and two NOR gates: the outputs of an AND gate and a NOR gate, both fed with A and B, are combined by a second NOR gate, whose output is A XOR B. Replacing the second NOR with a normal OR gate will create an XNOR gate.
Alternatives
If a specific type of gate is not available, a circuit that implements the same function can be constructed from other available gates. A circuit implementing an XOR function can be trivially constructed from an XNOR gate followed by a NOT gate. If we consider the expression A·B′ + A′·B, we can construct an XOR gate circuit directly using AND, OR and NOT gates. However, this approach requires five gates of three different kinds.
As an alternative, if different gates are available we can apply Boolean algebra to transform A·B′ + A′·B as stated above into (A + B)·(A′ + B′), and apply De Morgan's law to the last term to get (A + B)·(A·B)′, which can be implemented using only four gates as shown on the right. Intuitively, XOR is equivalent to OR except for the case when both A and B are high, so ANDing the OR of the inputs with a NAND, which gives a low only when both A and B are high, is equivalent to the XOR.
An XOR gate circuit can be made from four NAND gates. In fact, both NAND and NOR gates are so-called "universal gates" and any logical function can be constructed from either NAND logic or NOR logic alone. If the four NAND gates are replaced by NOR gates, this results in an XNOR gate, which can be converted to an XOR gate by inverting the output or one of the inputs (e.g. with a fifth NOR gate).
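The four-NAND construction can be checked exhaustively. The sketch below (Python, with intermediate node names chosen only for illustration) wires up the circuit using a single NAND primitive and confirms that it matches XOR for all inputs.

def nand(a, b):
    return 1 - (a & b)                         # NAND of two bits

def xor_from_nand(a, b):
    m = nand(a, b)                             # shared first NAND
    return nand(nand(a, m), nand(b, m))        # three more NANDs complete the XOR

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nand(a, b) == a ^ b
print("four-NAND XOR matches a ^ b for all inputs")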
An alternative arrangement is of five NOR gates in a topology that emphasizes the construction of the function from , noting from de Morgan's Law that a NOR gate is an inverted-input AND gate. Another alternative arrangement is of five NAND gates in a topology that emphasizes the construction of the function from , noting from de Morgan's Law that a NAND gate is an inverted-input OR gate.
For the NAND constructions, the upper arrangement requires fewer gates. For the NOR constructions, the lower arrangement offers the advantage of a shorter propagation delay (the time delay between an input changing and the output changing).
Standard chip packages
XOR chips are readily available. The most common standard chip codes are:
4070: CMOS quad dual input XOR gates.
4030: CMOS quad dual input XOR gates.
7486: TTL quad dual input XOR gates.
More than two inputs
Literal interpretation of the name "exclusive or", or observation of the IEC rectangular symbol, raises the question of correct behaviour with additional inputs. If a logic gate were to accept three or more inputs and produce a true output if exactly one of those inputs were true, then it would in effect be a one-hot detector (and indeed this is the case for only two inputs). However, it is rarely implemented this way in practice.
It is most common to regard subsequent inputs as being applied through a cascade of binary exclusive-or operations: the first two signals are fed into an XOR gate, then the output of that gate is fed into a second XOR gate together with the third signal, and so on for any remaining signals. The result is a circuit that outputs a 1 when the number of 1s at its inputs is odd, and a 0 when the number of incoming 1s is even. This makes it practically useful as a parity generator or a modulo-2 adder.
For example, the 74LVC1G386 microchip is advertised as a three-input logic gate, and implements a parity generator.
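The cascade just described is simply a fold of the two-input XOR over the inputs. A short sketch in Python (illustrative only) checks that the result is 1 exactly when the number of 1s among the inputs is odd, i.e. that the chain acts as a parity generator.

from functools import reduce
from itertools import product
from operator import xor

def cascaded_xor(bits):
    # Feed the bits through a chain of two-input XOR gates (a parity generator).
    return reduce(xor, bits, 0)

for bits in product((0, 1), repeat=4):
    assert cascaded_xor(bits) == sum(bits) % 2   # 1 iff an odd number of inputs are 1
print("cascaded XOR = parity for all 4-bit inputs")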
Applications
XOR gates and AND gates are the two most-used structures in VLSI applications.
Addition
The XOR logic gate can be used as a one-bit adder that adds any two bits together to output one bit. For example, if we add 1 plus 1 in binary, we expect a two-bit answer, 10 (i.e. 2 in decimal). While the trailing sum bit in this output is produced by XOR, the preceding carry bit is calculated with an AND gate. This is the main principle in half adders. Slightly larger full adder circuits may be chained together in order to add longer binary numbers.
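The chaining described above can be sketched directly (Python; the bit ordering, word width and helper names are illustrative assumptions): two half adders and an OR form a full adder, and full adders chained through their carries add multi-bit numbers.

def half_adder(a, b):
    return a ^ b, a & b                      # (sum, carry)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, cin)
    return s, c1 | c2                        # (sum, carry out)

def ripple_add(x, y, width=8):
    carry, out = 0, 0
    for i in range(width):                   # chain full adders from LSB to MSB
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out

assert ripple_add(1, 1) == 2                 # 1 + 1 = 10 in binary
assert all(ripple_add(x, y) == x + y for x in range(32) for y in range(32))
print(ripple_add(1, 1))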
In certain situations, the inputs to an OR gate (for example, in a full-adder) or to an XOR gate can never be both 1's. As this is the only combination for which the OR and XOR gate outputs differ, an OR gate may be replaced by an XOR gate (or vice versa) without altering the resulting logic. This is convenient if the circuit is being implemented using simple integrated circuit chips which contain only one gate type per chip.
Pseudo-random number generator
Pseudo-random number (PRN) generators, specifically linear-feedback shift registers (LFSR), are defined in terms of the exclusive-or operation. Hence, a suitable setup of XOR gates can model a linear-feedback shift register, in order to generate random numbers.
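The role of XOR in an LFSR can be shown with a short sketch in Python (the 16-bit width, seed and tap positions are a commonly quoted maximal-length example, used here only for illustration): the feedback bit is the XOR of the tapped register bits.

def lfsr_stream(seed=0xACE1, n=8):
    # 16-bit Fibonacci LFSR: the feedback bit is the XOR of the tapped bits.
    state = seed
    out = []
    for _ in range(n):
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)   # shift right, feed the XOR back in at the top
        out.append(state)
    return out

print([hex(s) for s in lfsr_stream()])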
Phase detectors
XOR gates may be used in the simplest phase detectors.
Buffer or invert a signal
An XOR gate may be used to easily change between buffering or inverting a signal. For example, XOR gates can be added to the output of a seven-segment display decoder circuit to allow a user to choose between active-low or active-high output.
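This selectable inversion is just XOR with a control bit, as in the one-line sketch below (Python, illustrative only).

def buffer_or_invert(signal_bit, invert):
    # invert = 0 passes the signal through; invert = 1 produces its complement.
    return signal_bit ^ invert

print([buffer_or_invert(s, 0) for s in (0, 1)], [buffer_or_invert(s, 1) for s in (0, 1)])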
Correlation and sequence detection
XOR gates produce a 0 when both inputs match. When searching for a specific bit pattern or PRN sequence in a very long data sequence, a series of XOR gates can be used to compare a string of bits from the data sequence against the target sequence in parallel. The number of 0 outputs can then be counted to determine how well the data sequence matches the target sequence. Correlators are used in many communications devices such as CDMA receivers and decoders for error correction and channel codes. In a CDMA receiver, correlators are used to extract the polarity of a specific PRN sequence out of a combined collection of PRN sequences.
A correlator looking for 11010 in the data sequence 1110100101 would compare the incoming data bits against the target sequence at every possible offset while counting the number of matches (zeros):
1110100101 (data)
11010 (target)
00111 (XOR) 2 zero bits
1110100101
11010
00000 5 zero bits
1110100101
11010
01110 2 zero bits
1110100101
11010
10011 2 zero bits
1110100101
11010
01000 4 zero bits
1110100101
11010
11111 0 zero bits
Matches by offset: offset 0 → 2, offset 1 → 5, offset 2 → 2, offset 3 → 2, offset 4 → 4, offset 5 → 0.
In this example, the best match occurs when the target sequence is offset by 1 bit and all five bits match. When offset by 5 bits, the sequence exactly matches its inverse. By looking at the difference between the number of ones and zeros that come out of the bank of XOR gates, it is easy to see where the sequence occurs and whether or not it is inverted. Longer sequences are easier to detect than short sequences.
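The bit-by-bit comparison in the example above is easy to reproduce. The sketch below (Python, illustrative only) slides the target across the data, XORs the aligned bits, and counts the zero (matching) bits at each offset, giving the same 2, 5, 2, 2, 4, 0 profile.

data   = "1110100101"
target = "11010"

for offset in range(len(data) - len(target) + 1):
    window = data[offset:offset + len(target)]
    xor_bits = ''.join(str(int(d) ^ int(t)) for d, t in zip(window, target))
    matches = xor_bits.count('0')   # zero bits mark positions where data and target agree
    print(f"offset {offset}: {xor_bits}  {matches} matching bits")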
Analytical representation
An analytical representation of the XOR gate is f(a, b) = a + b − 2ab.
An alternative analytical representation is f(a, b) = |a − b|.
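Both expressions can be checked over the four input combinations with a quick sketch in Python:

for a in (0, 1):
    for b in (0, 1):
        assert a + b - 2 * a * b == a ^ b     # polynomial form
        assert abs(a - b) == a ^ b            # absolute-difference form
print("both analytical forms agree with XOR")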
See also
Exclusive or
AND gate
OR gate
Inverter (NOT gate)
NAND gate
NOR gate
XNOR gate
IMPLY gate
Boolean algebra
Logic gate
References
Logic gates | XOR gate | [
"Mathematics",
"Engineering"
] | 2,712 | [
"Boolean algebra",
"Digital electronics",
"Mathematical logic",
"Fields of abstract algebra",
"Electronic engineering"
] |
3,522,161 | https://en.wikipedia.org/wiki/Askaryan%20radiation | The Askaryan radiation also known as Askaryan effect is the phenomenon whereby a particle traveling faster than the phase velocity of light in a dense dielectric (such as salt, ice or the lunar regolith) produces a shower of secondary charged particles which contains a charge anisotropy and emits a cone of coherent radiation in the radio or microwave part of the electromagnetic spectrum. The signal is a result of the Cherenkov radiation from individual particles in the shower. Wavelengths greater than the extent of the shower interfere constructively and thus create a radio or microwave signal which is strongest at the Cherenkov angle. The effect is named after Gurgen Askaryan, a Soviet-Armenian physicist who postulated it in 1962.
The radiation was first observed experimentally in 2000, 38 years after its theoretical prediction. So far the effect has been observed in silica sand, rock salt, ice, and Earth's atmosphere.
The effect is of primary interest in using bulk matter to detect ultra-high energy neutrinos. The Antarctic Impulse Transient Antenna (ANITA) experiment uses antennas attached to a balloon flying over Antarctica to detect the Askaryan radiation produced by showers of particles when cosmic neutrinos interact in the ice. Several experiments have also used the Moon as a neutrino detector based on detection of the Askaryan radiation.
See also
Cherenkov radiation
References
External links
RADHEP-2000 Write-ups
Physical phenomena
Particle physics | Askaryan radiation | [
"Physics"
] | 297 | [
"Physical phenomena",
"Particle physics"
] |
3,523,316 | https://en.wikipedia.org/wiki/Distributed%20parameter%20system | In control theory, a distributed-parameter system (as opposed to a lumped-parameter system) is a system whose state space is infinite-dimensional. Such systems are therefore also known as infinite-dimensional systems. Typical examples are systems described by partial differential equations or by delay differential equations.
Linear time-invariant distributed-parameter systems
Abstract evolution equations
Discrete-time
With U, X and Y Hilbert spaces and A ∈ L(X), B ∈ L(U, X), C ∈ L(X, Y) and D ∈ L(U, Y) the following difference equations determine a discrete-time linear time-invariant system:
x(k + 1) = Ax(k) + Bu(k),
y(k) = Cx(k) + Du(k),
with x (the state) a sequence with values in X, u (the input or control) a sequence with values in U and y (the output) a sequence with values in Y.
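Although the cases of interest here are infinite-dimensional, the structure of these difference equations is the same as in the familiar finite-dimensional (matrix) case, which can be sketched in a few lines (Python with NumPy; the 2-state example matrices are arbitrary illustrations, not taken from the text).

import numpy as np

# A finite-dimensional stand-in for the operators A, B, C, D.
A = np.array([[0.5, 1.0], [0.0, 0.3]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def simulate(u_seq, x0=np.zeros((2, 1))):
    # Iterate x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k).
    x, ys = x0, []
    for u in u_seq:
        u = np.atleast_2d(u)
        ys.append((C @ x + D @ u).item())
        x = A @ x + B @ u
    return ys

print(simulate([1.0, 0.0, 0.0, 0.0, 0.0]))   # impulse response samples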
Continuous-time
The continuous-time case is similar to the discrete-time case but now one considers differential equations instead of difference equations:
ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t).
An added complication now however is that to include interesting physical examples such as partial differential equations and delay differential equations into this abstract framework, one is forced to consider unbounded operators. Usually A is assumed to generate a strongly continuous semigroup on the state space X. Assuming B, C and D to be bounded operators then already allows for the inclusion of many interesting physical examples, but the inclusion of many other interesting physical examples forces unboundedness of B and C as well.
Example: a partial differential equation
The partial differential equation with and given by
fits into the abstract evolution equation framework described above as follows. The input space U and the output space Y are both chosen to be the set of complex numbers. The state space X is chosen to be L2(0, 1). The operator A is defined as
It can be shown that A generates a strongly continuous semigroup on X. The bounded operators B, C and D are defined as
Example: a delay differential equation
The delay differential equation
fits into the abstract evolution equation framework described above as follows. The input space U and the output space Y are both chosen to be the set of complex numbers. The state space X is chosen to be the product of the complex numbers with L2(−τ, 0). The operator A is defined as
It can be shown that A generates a strongly continuous semigroup on X. The bounded operators B, C and D are defined as
Transfer functions
As in the finite-dimensional case the transfer function is defined through the Laplace transform (continuous-time) or Z-transform (discrete-time). Whereas in the finite-dimensional case the transfer function is a proper rational function, the infinite-dimensionality of the state space leads to irrational functions (which are however still holomorphic).
Discrete-time
In discrete-time the transfer function is given in terms of the state-space parameters by the power series D + Σ_{k≥1} C A^(k−1) B z^k, and it is holomorphic in a disc centered at the origin. In case 1/z belongs to the resolvent set of A (which is the case on a possibly smaller disc centered at the origin) the transfer function equals D + zC(I − zA)^(−1)B. An interesting fact is that any function that is holomorphic in zero is the transfer function of some discrete-time system.
Continuous-time
If A generates a strongly continuous semigroup and B, C and D are bounded operators, then the transfer function is given in terms of the state space parameters by D + C(sI − A)^(−1)B for s with real part larger than the exponential growth bound of the semigroup generated by A. In more general situations this formula as it stands may not even make sense, but an appropriate generalization of this formula still holds.
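In the finite-dimensional case this formula is easy to evaluate numerically, as in the sketch below (Python with NumPy; the matrices are arbitrary illustrative values).

import numpy as np

A = np.array([[-1.0, 0.0], [1.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])

def transfer_function(s):
    # G(s) = C (sI - A)^(-1) B + D, valid to the right of the growth bound of the semigroup.
    n = A.shape[0]
    return (C @ np.linalg.solve(s * np.eye(n) - A, B) + D).item()

print(transfer_function(1.0))   # evaluate at s = 1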
To obtain an easy expression for the transfer function it is often better to take the Laplace transform in the given differential equation than to use the state space formulas as illustrated below on the examples given above.
Transfer function for the partial differential equation example
Setting the initial condition equal to zero and denoting Laplace transforms with respect to t by capital letters we obtain from the partial differential equation given above
This is an inhomogeneous linear differential equation with as the variable, s as a parameter and initial condition zero. The solution is . Substituting this in the equation for Y and integrating gives so that the transfer function is .
Transfer function for the delay differential equation example
Proceeding similarly as for the partial differential equation example, the transfer function for the delay equation example is .
Controllability
In the infinite-dimensional case there are several non-equivalent definitions of controllability which for the finite-dimensional case collapse to the one usual notion of controllability. The three most important controllability concepts are:
Exact controllability,
Approximate controllability,
Null controllability.
Controllability in discrete-time
An important role is played by the maps which map the set of all U valued sequences into X and are given by . The interpretation is that is the state that is reached by applying the input sequence u when the initial condition is zero. The system is called
exactly controllable in time n if the range of equals X,
approximately controllable in time n if the range of is dense in X,
null controllable in time n if the range of includes the range of An.
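In the finite-dimensional case, where the three notions coincide, they reduce to the usual rank test. The sketch below (Python with NumPy, with an arbitrary 2-state example) builds the map taking (u(0), ..., u(n−1)) to Σ A^(n−1−k) B u(k) column by column and checks whether its range is all of X.

import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def reachability_matrix(A, B, n):
    # Columns A^(n-1-k) B for k = 0..n-1: the state reached from x(0) = 0
    # by the input sequence u is this matrix applied to (u(0), ..., u(n-1)).
    blocks = [np.linalg.matrix_power(A, n - 1 - k) @ B for k in range(n)]
    return np.hstack(blocks)

Phi2 = reachability_matrix(A, B, 2)
print(Phi2)
print("exactly controllable in 2 steps:", np.linalg.matrix_rank(Phi2) == A.shape[0])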
Controllability in continuous-time
In controllability of continuous-time systems the map given by plays the role that plays in discrete-time. However, the space of control functions on which this operator acts now influences the definition. The usual choice is L2(0, ∞;U), the space of (equivalence classes of) U-valued square integrable functions on the interval (0, ∞), but other choices such as L1(0, ∞;U) are possible. The different controllability notions can be defined once the domain of is chosen. The system is called
exactly controllable in time t if the range of equals X,
approximately controllable in time t if the range of is dense in X,
null controllable in time t if the range of includes the range of .
Observability
As in the finite-dimensional case, observability is the dual notion of controllability. In the infinite-dimensional case there are several different notions of observability which in the finite-dimensional case coincide. The three most important ones are:
Exact observability (also known as continuous observability),
Approximate observability,
Final state observability.
Observability in discrete-time
An important role is played by the maps which map X into the space of all Y valued sequences and are given by if k ≤ n and zero if k > n. The interpretation is that is the truncated output with initial condition x and control zero. The system is called
exactly observable in time n if there exists a kn > 0 such that for all x ∈ X,
approximately observable in time n if is injective,
final state observable in time n if there exists a kn > 0 such that for all x ∈ X.
Observability in continuous-time
In observability of continuous-time systems the map given by for s∈[0,t] and zero for s>t plays the role that plays in discrete-time. However, the space of functions to which this operator maps now influences the definition. The usual choice is L2(0, ∞, Y), the space of (equivalence classes of) Y-valued square integrable functions on the interval (0,∞), but other choices such as L1(0, ∞, Y) are possible. The different observability notions can be defined once the co-domain of is chosen. The system is called
exactly observable in time t if there exists a kt > 0 such that for all x ∈ X,
approximately observable in time t if is injective,
final state observable in time t if there exists a kt > 0 such that for all x ∈ X.
Duality between controllability and observability
As in the finite-dimensional case, controllability and observability are dual concepts (at least when for the domain of and the co-domain of the usual L2 choice is made). The correspondence under duality of the different concepts is:
Exact controllability ↔ Exact observability,
Approximate controllability ↔ Approximate observability,
Null controllability ↔ Final state observability.
See also
Control theory
State space (controls)
Notes
References | Distributed parameter system | [
"Mathematics"
] | 1,734 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
22,258,202 | https://en.wikipedia.org/wiki/Launch%20status%20check | A launch status check, also known as a "go/no go poll" and several other terms, occurs at the beginning of an American spaceflight mission in which flight controllers monitoring various systems are queried for operation and readiness status before a launch can proceed. For Space Shuttle missions, in the firing room at the Launch Control Center, the NASA Test Director (NTD) performed this check via a voice communications link with other NASA personnel. The NTD was the leader of the shuttle test team responsible for directing and integrating all flight crew, orbiter, external tank/solid rocket booster and ground support testing in the shuttle launch countdown. The NTD was also responsible for the safety of all personnel inside the pad after external tank loading, including the flight crew, and received about 10 go/no go reports. He reported to the Launch Director, who received about 5 additional go/no go reports. The Launch Director declares whether a mission is go for launch.
Checklist of firing room positions
Space Shuttle
OTC – Orbiter Test Conductor Prime
TBC – Tank/Booster Test Conductor and Tank/Booster Test Conductor Prime
PTC – Payload Test Conductor
LPS – Launch Processing System Test Conductors
Houston Flight – Flight Director at the Christopher C. Kraft Jr. Mission Control Center in Houston, TX
MILA – Merritt Island Spaceflight Tracking & Data Network Stations
STM – Support Test Manager
Safety Console – Safety Console Coordinator
SPE – Shuttle Project Engineer
LRD – Landing and Recovery Director
SRO – Superintendent of Range Operations
CDR – Mission Commander (Crew)
Apollo Program
In the Apollo program, the MCC launch status check was initiated by the Flight Director, or FLIGHT. The following "preflight check" order was used before the launch of Apollo 13:
BOOSTER – Booster Systems Engineer (monitored the Saturn V in pre-launch and ascent)
RETRO – Retrofire Officer (responsible for abort procedures and Trans-Earth Injection, or TEI, retrofire burns)
FIDO – Flight Dynamics Officer (responsible for the flight path of the space vehicle)
GUIDANCE – Guidance Officer (monitored onboard navigational systems and onboard guidance computer software)
SURGEON – Flight Surgeon (directs all operational medical activities)
EECOM – Electrical, Environmental, and Consumables Management (monitored cryogenic levels, and cabin cooling/pressure systems; electrical distribution systems)
GNC – Guidance, Navigation, and Control Systems Engineer (responsible for the reaction control system, and CSM main engine)
TELMU – Telemetry, Electrical, and EVA Mobility Unit (lunar spacesuit) Officer
CONTROL – Flight Controller
PROCEDURES – Procedures, or Organization and Procedures Officer (enforced mission policy and rules)
INCO – Integrated Communications Officer
FAO – Flight Activities Officer (checklists, procedures, etc.)
NETWORK – Network (supervised ground station communications)
RECOVERY – Recovery Supervisor (coordinated capsule recovery)
CAPCOM – Capsule Communicator (communicated with the astronauts)
Other/Uncrewed spaceflight
Varies depending on the type of mission and model of craft, here is one example:
LCDR - Launch Conductor
Talker - Person responsible for directing countdown steps as delegated by the Launch Conductor
Timer- Countdown Clock Operator and person who calls out the T- time
QAM - Quality Assurance Monitor
SSC - Second Stage Console
SSP - Second Stage Propulsion
FSC - First Stage Console
Prop 1 - Propulsion 1st Stage #1
Prop 2 - Propulsion 1st Stage #2
TSC - Third Stage Console
MCE - Missile Chief Engineer
PTO - Propulsion Telemetry Observer
TM-1 - Telemetry Monitor 1st Stage
TM-2 - Telemetry Monitor 2nd Stage
LWO - Launch Weather Officer
AFLC - Air Force Launch Conductor
LD - Launch Director
See also
Spaceflight
Space launch
Launch vehicle
Mission control center
Launch Control Center
Spacecraft
List of human spaceflights
List of launch vehicles
Timeline of spaceflight
Space exploration
Space logistics
Spacecraft propulsion
References
External links
Video recordings
Endeavour Go For Launch (STS-123)
STS-122 Go/No Go Poll (STS-122)
STS-115 Atlantis long countdown to launch (launch status check at 3:03) (STS-115) (video private as of 4/7/24)
Space Shuttle STS-114 Launch Final Poll (STS-114)
Go For Launch Part 1 of 2 (2 examples launch director's poll) (video private as of 2/23/24)
Go For Launch Part 2 of 2 (example final readiness poll) (video private as of 2/23/24)
Delta II - ICESat-2 (example Final Readiness Poll)
Text transcripts
Space Shuttle Launch Countdown (NASA Transcript from 1995)
Spaceflight concepts
Spaceflight
Rocketry | Launch status check | [
"Astronomy",
"Engineering"
] | 930 | [
"Rocketry",
"Spaceflight",
"Outer space",
"Aerospace engineering"
] |
22,259,308 | https://en.wikipedia.org/wiki/MineCam | The MineCam is a remote exploration camera built by I.A.Recordings. It is used for mine shaft exploration and other similar environments. It was originally conceptualized in 1988, and has since gone through several design revisions. The name MineCam is a pun on MiniCam, an early hand-held broadcast camera built by CBS Laboratories.
History
Peter Eggleston of I.A.Recordings first had the idea for what became "MineCam" in 1988. He had been visiting some metal mines in Wales with the Shropshire Caving and Mining Club and spent several hours setting up a single rope technique rig to descend a remote shaft, only to find that there were no ways off at the bottom. This was the motivation to build a miniature camera which would allow enthusiasts to explore hard to reach, unsafe, or impossible to reach areas.
The remote exploration of mines prior to 1988 had been done commercially for several years by pipeline camera firms using equipment that needed to be housed in a vehicle and powered by a generator. Many old mine shafts are remote from roads though, so Peter's final goal was a small lightweight battery-powered kit which could be carried on foot. The first two versions of MineCam did not achieve this, but tested various approaches with the video technology available at the time.
Versions
MineCam 1
MineCam 1 used a monochrome vidicon camera in a waterproof housing made out of a 10 cm plastic sewer pipe and fittings, with an acrylic window. This was successfully tested in the deep end of a swimming pool. The camera was insensitive - it needed a 150 W lamp, which required a 240 V supply, but so did the camera. The cable was 100 m of video co-axial and power, taped together at 2 m intervals and numbered to give a crude depth measurement. The camera and lamp were heavy, so an old 6 mm static climbing rope was used to support it. The monitor was a 10 cm portable TV.
MineCam 1 worked, but the monochrome image was sometimes difficult to interpret. It was time to try colour, and this was not yet available from the commercial shaft inspection firms.
MineCam 2
MineCam 2 used parts from a disused Sony 'Handycam.' The colour CCD chip was removed and put in a round tobacco tin connected by a short cable to the rest of the electronics in a small Eddystone die-cast box. The 'Eddy box' contained extra hardware to convert the Y/C (700 kHz) direct colour-under output to composite PAL, and provide various unusual power supply voltages. Because of the colour stripe filter and the early technology, this CCD was only as sensitive as the monochrome vidicon.
Both MineCam 1 and 2 needed to be lowered twice to investigate a shaft. First, they were lowered looking vertically down, and notes made of the depth and heading of any interesting features. The camera was then hauled to the surface and re-rigged to look horizontally at the interesting items found earlier. A remote tilt mechanism was needed as it would halve the time and effort.
To indicate heading, MineCams 1 and 2 used an ordinary spherical fluid-mounted car compass on an aluminium arm about 30 cm from the camera. A supplementary lens (from an old pair of spectacles) glued to the acrylic window brought the compass into focus at the corner of the frame.
MineCam 3
Mark 3 was radically different. A high quality colour camera became available, a Pulnix TMC-X not much bigger than a Mini-Maglite. It was also much more sensitive, so the light could be smaller and a remote tilt mechanism became practicable. A coded control signal was needed that would go down the video cable to save using extra wires, so a model radio control system was adapted, giving 2 proportional channels for tilt and eventual pan movements. The 27 MHz carrier was easily combined and split from the baseband video. The first tilt motor was a standard model control servo with 180 degree rotation. The camera head and light were mounted in an open-framework cage about 25 cm long, rotated on a horizontal axis by the servo. Despite taking care, motor gears were frequently stripped when the cage hit obstacles and the servo had to be replaced regularly! The light source needed to pan with the camera, so had to be small enough to fit in the cage. A 12 V 50 W quartz-halogen lamp with 5 cm diameter integral dichroic reflector was fitted, powered from a small switch-mode PSU. Although the lighting power had been reduced, it still required a high voltage supply down the cable to reduce voltage drop, so 240 V was still used.
The cage rotated in an aluminium yoke attached to the bottom of a waterproof plastic box containing the rest of the electronics, including a power supply for the camera.
To be able to use the system in locations remote from mains power and not accessible by a vehicle carrying a generator, I.A.Recordings obtained a 150 W inverter and 12V lead-acid battery from an outdoor leisure shop.
MineCam 3 was the first to use an electronic compass for heading display. Tandy (Radio Shack) had developed a device using a flux gate sensor to give x & y components of the Earth's magnetic field which drove the orthogonal windings of a 360 degree mechanical indicator, for use as a car compass. I.A.Recordings discarded the indicator and used the x & y voltages to control the position of a flashing spot added to the video picture. This gave a compass-circle type display on screen. If the x (east-west) signal was inverted, when looking vertically down, the spot appeared to be fixed above a point on the ground as the camera rotated.
Electromagnetic interference from the switch mode power supply and harmonics of its square-wave power waveform appeared in the video signal as noise. This was reduced by careful layout, screening and the use of a single-point ground return.
The miniature camera was waterproofed by sliding it into a square-section 25 mm wide aluminium tube (only slightly bigger than the camera), with a glass window glued in one end, and a cable gland at the other. The waterproof container sold for this camera was too large and heavy for the MineCam, but the aluminium tube is not as watertight. It has an IP rating of between 66 and 67.
MineCam 4
For MineCam 4, the final major development was a pan motor. It was difficult to arrange a system which allowed 360 degree rotation of the tilt yoke whilst maintaining connections for video, tilt servo, lighting power, camera power and flux-gate sensor. An RS Components geared DC servo motor, driven from the remote control receiver, was used together with a thrust bearing and an Oldham coupler. The motor's powerful magnet interfered with the compass sensor, so a sheet of mu-metal donated by a helpful local firm was formed into a screen round the whole motor and gearbox assembly. The tilt servo did not seem to cause the same problem, as it is smaller and rotates with the sensor on the yoke. Wherever possible, all hardware is plastic or non-ferrous metal.
Other improvements in MineCam 4 include a high-torque tilt servo, a microphone with balanced amplifier, a laser diode module beside the camera to produce a spot of light on the subject for range and size estimation and experimentation with bat detection and gas sensors.
The control box contains a power supply, the servo control transmitter and a video equaliser. The cable is now 200 m of thin multi-core containing one coaxial and 5 single wires. As it is not load-bearing, a rope still has to be used, and 9 mm static climbing rope was found to be better than anything thinner, including wire rope, at preventing twist. The rope often becomes tangled with the cable, so I.A.Recordings is still looking for a weight-bearing cable. The 180 degree tilt range allows the underside of shaft caps to be inspected. Compared with commercial versions which were still monochrome and used a mirror to switch the image from vertical to horizontal, the MineCam was much more flexible, and the goal of having a complete system that could be carried by individuals on foot had been reached. The picture was recorded on U-Matic, then Hi-8 and now on mini-DV tape.
Rigging
To get the camera out over the centre of larger shafts, I.A.Recordings use either a scaffold pole with a pulley on the end, or for shafts that have run-in so the crater on the surface is several metres wide, they have developed a "Tyrolean traverse" or "Blondin" arrangement. A large diameter pulley (a Sinclair C5 wheel) is mounted on a trolley which is winched along a wire rope slung across the shaft and kept in tension. When the pulley is centred, the traverse rope is locked-off and the camera lowering rope can be released.
The 50 W lamp & reflector is a convenient size and is available in a variety of beam-widths, but the idea of the dichroic reflector preventing heat being reflected forward is actually a disadvantage for the MineCam. The lamp head has to be enclosed to protect it and to screen the power supply interference, but without careful internal design of reflectors and baffles, the housing can get very hot.
MineCam 4 has proved reliable and useful and gives a high-quality colour picture good enough to use in video productions. It is featured for example in the I.A.Recordings video "Snailbeach".
References
Notes
I.A.Recordings website article on MineCam
External links
BBC Radio Shropshire article on I.A.Recordings describes MineCam
British Film Institute page listing an example of a MineCam recording
Shropshire Caving & Mining Club article on one MineCam exploration
Cave surveying
Mineral exploration
Robots
Video hardware | MineCam | [
"Physics",
"Technology",
"Engineering"
] | 2,022 | [
"Machines",
"Robots",
"Physical systems",
"Electronic engineering",
"Video hardware"
] |
1,852,572 | https://en.wikipedia.org/wiki/Marangoni%20effect | The Marangoni effect (also called the Gibbs–Marangoni effect) is the mass transfer along an interface between two phases due to a gradient of the surface tension. In the case of temperature dependence, this phenomenon may be called thermo-capillary convection or Bénard–Marangoni convection.
History
This phenomenon was first identified in the so-called "tears of wine" by physicist James Thomson (Lord Kelvin's brother) in 1855. The general effect is named after Italian physicist Carlo Marangoni, who studied it for his doctoral dissertation at the University of Pavia and published his results in 1865. A complete theoretical treatment of the subject was given by J. Willard Gibbs in his work On the Equilibrium of Heterogeneous Substances (1875–1878).
Mechanism
Since a liquid with a high surface tension pulls more strongly on the surrounding liquid than one with a low surface tension, the presence of a gradient in surface tension will naturally cause the liquid to flow away from regions of low surface tension. The surface tension gradient can be caused by a concentration gradient or by a temperature gradient (surface tension is a function of temperature).
In simple cases, the speed of the flow is u ≈ Δγ/μ, where Δγ is the difference in surface tension and μ is the viscosity of the liquid. Water at room temperature has a surface tension of around 0.07 N/m and a viscosity of approximately 10⁻³ Pa⋅s. So even variations of a few percent in the surface tension of water can generate Marangoni flows of almost 1 m/s. Thus Marangoni flows are common and easily observed.
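To get a feeling for the magnitude of the simple estimate above, the short sketch below (Python; the few-percent surface-tension changes are illustrative values, not taken from the text) evaluates u ≈ Δγ/μ with the properties of water quoted above.

# Order-of-magnitude estimate of a Marangoni flow speed in water, u ~ dgamma/mu.
mu = 1.0e-3          # viscosity of water, Pa.s (value quoted in the text)
gamma = 0.07         # surface tension of water, N/m (value quoted in the text)

for percent in (1, 2, 5):                      # assumed few-percent variations
    dgamma = (percent / 100) * gamma
    print(f"{percent}% change in surface tension -> u ~ {dgamma / mu:.2f} m/s")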
For the case of a small drop of surfactant dropped onto the surface of water, Roché and coworkers performed quantitative experiments and developed a simple model that was in approximate agreement with the experiments. This described the expansion in the radius r of a patch of the surface covered in surfactant, due to an outward Marangoni flow at a speed u. They found that the speed of expansion of the surfactant-covered patch of the water surface occurred at a speed of approximately
u ≈ [(γw − γs)² / (ρμr)]^(1/3)
for γw the surface tension of water, γs the (lower) surface tension of the surfactant-covered water surface, μ the viscosity of water, and ρ the mass density of water. For a reduction in surface tension of the order of tens of percent of that of water, and with the values of ρ and μ for water, this gives speeds that decrease as the surfactant-covered region grows, but that are of order of cm/s to mm/s.
The equation is obtained by making a couple of simple approximations. The first is to equate the stress at the surface due to the concentration gradient of surfactant (which drives the Marangoni flow) with the viscous stresses (that oppose flow). The Marangoni stress is of order Δγ/r, i.e., the gradient in the surface tension due to the gradient in the surfactant concentration (from high in the centre of the expanding patch, to zero far from the patch), with Δγ = γw − γs. The viscous shear stress is simply the viscosity μ times the gradient in shear velocity, approximately u/δ, for δ the depth into the water of the flow due to the spreading patch. Roché and coworkers assume that the momentum (which is directed radially) diffuses down into the liquid during spreading, and so when the patch has reached a radius r, δ ≈ (νr/u)^(1/2), for ν = μ/ρ the kinematic viscosity, which is the diffusion constant for momentum in a fluid. Equating the two stresses,
Δγ/r ≈ μu/δ = μu^(3/2)/(νr)^(1/2),
where we approximated the velocity gradient by u/δ. Taking the 2/3 power of both sides gives the expression above.
The Marangoni number, a dimensionless value, can be used to characterize the relative effects of surface tension and viscous forces.
Tears of wine
As an example, wine may exhibit a visible effect called "tears of wine". The effect is a consequence of the fact that alcohol has a lower surface tension and higher volatility than water. The water/alcohol solution rises up the surface of the glass lowering the surface energy of the glass. Alcohol evaporates from the film leaving behind liquid with a higher surface tension (more water, less alcohol). This region with a lower concentration of alcohol (greater surface tension) pulls on the surrounding fluid more strongly than the regions with a higher alcohol concentration (lower in the glass). The result is the liquid is pulled up until its own weight exceeds the force of the effect, and the liquid drips back down the vessel's walls. This can also be easily demonstrated by spreading a thin film of water on a smooth surface and then allowing a drop of alcohol to fall on the center of the film. The liquid will rush out of the region where the drop of alcohol fell.
Significance to transport phenomena
Under earth conditions, the effect of gravity causing natural convection in a system with a temperature gradient along a fluid/fluid interface is usually much stronger than the Marangoni effect. Many experiments (ESA MASER 1-3) have been conducted under microgravity conditions aboard sounding rockets to observe the Marangoni effect without the influence of gravity. Research on heat pipes performed on the International Space Station revealed that whilst heat pipes exposed to a temperature gradient on Earth cause the inner fluid to evaporate at one end and migrate along the pipe, thus drying the hot end, in space (where the effects of gravity can be ignored) the opposite happens and the hot end of the pipe is flooded with liquid. This is due to the Marangoni effect, together with capillary action. The fluid is drawn to the hot end of the tube by capillary action. But the bulk of the liquid still ends up as a droplet a short distance away from the hottest part of the tube, explained by Marangoni flow. The temperature gradients in axial and radial directions makes the fluid flow away from the hot end and the walls of the tube, towards the center axis. The liquid forms a droplet with a small contact area with the tube walls, a thin film circulating liquid between the cooler droplet and the liquid at the hot end.
The effect of the Marangoni effect on heat transfer in the presence of gas bubbles on the heating surface (e.g., in subcooled nucleate boiling) has long been ignored, but it is currently a topic of ongoing research interest because of its potential fundamental importance to the understanding of heat transfer in boiling.
Examples and application
A familiar example is in soap films: the Marangoni effect stabilizes soap films. Another instance of the Marangoni effect appears in the behavior of convection cells, the so-called Bénard cells.
One important application of the Marangoni effect is the use for drying silicon wafers after a wet processing step during the manufacture of integrated circuits. Liquid spots left on the wafer surface can cause oxidation that damages components on the wafer. To avoid spotting, an alcohol vapor (IPA) or other organic compound in gas, vapor, or aerosol form is blown through a nozzle over the wet wafer surface (or at the meniscus formed between the cleaning liquid and wafer as the wafer is lifted from an immersion bath), and the subsequent Marangoni effect causes a surface-tension gradient in the liquid allowing gravity to more easily pull the liquid completely off the wafer surface, effectively leaving a dry wafer surface.
A similar phenomenon has been creatively utilized to self-assemble nanoparticles into ordered arrays and to grow ordered nanotubes. An alcohol containing nanoparticles is spread on the substrate, followed by blowing humid air over the substrate. The alcohol is evaporated under the flow. Simultaneously, water condenses and forms microdroplets on the substrate. Meanwhile, the nanoparticles in alcohol are transferred into the microdroplets and finally form numerous coffee rings on the substrate after drying.
Another application is the manipulation of particles taking advantage of the relevance of the surface tension effects at small scales. A controlled thermo-capillary convection is created by locally heating the air–water interface using an infrared laser. Then, this flow is used to control floating objects in both position and orientation and can prompt the self-assembly of floating objects, profiting from the Cheerios effect.
The Marangoni effect is also important to the fields of welding, crystal growth and electron beam melting of metals.
See also
Plateau–Rayleigh instability — an instability in a stream of liquid
Diffusioosmosis - the Marangoni effect is flow at a fluid/fluid interface due to a gradient in the interfacial free energy, the analog at a fluid/solid interface is diffusioosmosis
References
External links
Motoring Oil Drops Physical Review Focus February 22, 2005
Thin Film Physics, ISS astronaut Don Pettit demonstrate. YouTube-movie.
Fluid mechanics
Convection
Physical phenomena
Articles containing video clips | Marangoni effect | [
"Physics",
"Chemistry",
"Engineering"
] | 1,822 | [
"Transport phenomena",
"Physical phenomena",
"Convection",
"Civil engineering",
"Thermodynamics",
"Fluid mechanics"
] |
1,853,037 | https://en.wikipedia.org/wiki/Robot%20welding | Robot welding is the use of mechanized programmable tools (robots), which completely automate a welding process by both performing the weld and handling the part. Processes such as gas metal arc welding, while often automated, are not necessarily equivalent to robot welding, since a human operator sometimes prepares the materials to be welded. Robot welding is commonly used for resistance spot welding and arc welding in high production applications, such as the automotive industry.
History
Robot welding is a relatively new application of robotics, even though robots were first introduced into U.S. industry during the 1960s. The use of robots in welding did not take off until the 1980s, when the automotive industry began using robots extensively for spot welding. Since then, both the number of robots used in industry and the number of their applications has grown greatly. In 2005, more than 120,000 robots were in use in North American industry, about half of them for welding. Growth is primarily limited by high equipment costs, and the resulting restriction to high-production applications.
Robot arc welding has begun growing quickly just recently, and already it commands about 20 percent of industrial robot applications. The major components of arc welding robots are the manipulator or the mechanical unit and the controller, which acts as the robot's "brain". The manipulator is what makes the robot move, and the design of these systems can be categorized into several common types, such as SCARA and cartesian coordinate robot, which use different coordinate systems to direct the arms of the machine.
The robot may weld at a pre-programmed position, be guided by machine vision, or by a combination of the two methods. However, the many benefits of robotic welding have proven to make it a technology that helps many original equipment manufacturers increase accuracy, repeatability, and throughput. One welding robot can do the work of several human welders. For example, in arc welding, which produces hot sparks and smoke, a human welder can keep his torch on the work for roughly thirty percent of the time; for robots, the percentage is about 90.
The technology of signature image processing has been developed since the late 1990s for analyzing electrical data in real time collected from automated, robotic welding, thus enabling the optimization of welds.
Advantages
Advantages of robot welding include:
Increased productivity
Decreased risk of injury
Lower production costs
Reduced cost of labor
Consistent quality
Reduced waste
Disadvantages
Disadvantages of robot welding include:
Lost jobs and wages
High cost of machinery and installation
Cost of specialized training
Limited functionality
Delayed quality control
References
External links
Robotic equipment
Robotic welding video
ABB Robotics
ABB arc welding equipment
ABB spot welding equipment
FANUC welding robots
FANUC arc welding robots
FANUC spot welding robots
ABICOR BINZEL Through-arm Robot Welding Torches
Robotic Friction Stir Welding video
Novarc Technologies Spool Welding Robot In Action
AutoMetrics Manufacturing Technologies Inc
Welding
Welding | Robot welding | [
"Engineering"
] | 578 | [
"Welding",
"Mechanical engineering"
] |
1,853,344 | https://en.wikipedia.org/wiki/Floor%20area%20ratio | Floor area ratio (FAR) is the ratio of a building's total floor area (gross floor area) to the size of the piece of land upon which it is built. It is often used as one of the regulations in city planning along with the building-to-land ratio. The terms can also refer to limits imposed on such a ratio through zoning. FAR includes all floor areas but is indifferent to their spatial distribution on the lot whereas the building coverage ratio (or lot coverage) measures building footprint on the lot but is indifferent to building height.
Written as a formula, FAR = gross floor area ÷ area of the lot.
Lower maximum-allowed floor area ratios are linked to lower land values and lower housing density. Stringent limits on floor area ratios lead to less housing supply, and higher rents.
Terminology
Floor Area ratio is sometimes called floor space ratio (FSR), floor space index (FSI), site ratio or plot ratio.
The difference between FAR and FSI is that the first is a ratio, while the latter is an index.
Index numbers are values expressed as a percentage of a single base figure. Thus an FAR of 1.5 is translated as an FSI of 150%.
Regional variation
The terms most commonly used for this measurement vary from one country or region to the next.
In Australia floor space ratio (FSR) is used in New South Wales and plot ratio in Western Australia.
In France coefficient d'occupation des sols (COS) is used.
In Brazil, Coeficiente de Aproveitamento (CA) is used.
In Germany Geschossflächenzahl (GFZ) is used. Not to be confused with Grundflächenzahl (GRZ), which is the Site Coverage Ratio.
In India floor space index (FSI) and floor area ratio (FAR) are both used.
In the United Kingdom and Hong Kong both plot ratio and site ratio are used.
In Singapore the terms plot ratio and gross plot ratio (GPR) are more commonly used.
In the United States and Canada, floor space ratio (FSR) and floor area ratio (FAR) are both used.
Use ratios are used as a measure of the density of the site being developed. High FAR indicates a dense construction. The ratio is generated by dividing the building area by the parcel area, using the same units.
History
One of the purposes of the 1916 zoning ordinance of New York City was to prevent tall buildings from obstructing too much light and air. The 1916 zoning ordinance sought to control building size by regulating height and setback requirements for towers. In 1961, a revision to the zoning ordinance introduced the concept of floor area ratio (FAR). Buildings built before 1961 often have FARs that would be unachievable today, such as the Empire State Building which has an FAR of 25 - meaning that it earns considerably greater rent than a newer building on the same land could hope for.
Purpose and use
The floor area ratio (FAR) can be used in zoning to limit urban density. While it directly limits building density, indirectly it also limits the number of people that a building can hold, without controlling a building's external shape.
For example, if a lot must adhere to a 0.1 FAR, then the total area of all floors in all buildings on the lot must be no more than one-tenth the area of the parcel itself. In other words, if the lot was 10,000 sq. ft, then the total floor area of all floors in all buildings must not exceed 1,000 sq. ft.
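The arithmetic in this example is simple enough to script; the sketch below (Python, with hypothetical lot and building sizes) computes the allowable total floor area from a lot area and a maximum FAR, and the FAR of a proposed building.

def max_floor_area(lot_area, far_limit):
    # Total floor area allowed on the lot under the given FAR limit.
    return lot_area * far_limit

def floor_area_ratio(total_floor_area, lot_area):
    return total_floor_area / lot_area

print(max_floor_area(10_000, 0.1))              # 1,000 sq ft, as in the example above
print(floor_area_ratio(2_500_000, 100_000))     # hypothetical dense tower: FAR 25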
An architect can plan for either a single-story building consuming the entire allowable area in one floor, or a multi-story building that rises higher above the plane of the land, but which must consequently result in a smaller footprint than would a single-story building of the same total floor area. By combining the horizontal and vertical limits into a single figure, some flexibility is permitted in building design, while achieving a hard limit on at least one measure of overall size. One advantage to fixing this parameter, as opposed to others such as height, width, or length, is that floor area correlates well with other considerations relevant to zoning regulation, such as total parking that would be required for an office building, total number of units that might be available for residential use, total load on municipal services, etc. The amounts of these things tend to be constant for a given total floor area, regardless of how that area is distributed horizontally and vertically. Thus, many jurisdictions have found it unnecessary to include hard height limitations when using floor area ratio calculations.
Common exclusions to the total calculation of square footage for the purpose of floor area ratio (FAR) include unoccupied areas such as mechanical equipment floors, basements exclusively used for parking, stair towers, elevator shafts, and parking garages.
India
In India FAR and FSI are both used. FAR regulations vary from city to city and generally range from 1.3 to 3.25. In Mumbai 1.33 is the norm, but higher FSI is allowed along the Metro rail line and in slum areas like Dharavi. In Bangalore, 40 feet streets allow only an FAR of 1.75 but 100 feet streets allow 3.25 FAR.
New York City
In New York City, FAR (floor area ratio) is one of the regulations that determine the density and character of a neighborhood. The applicable FAR depends on the zoning district, and special districts can modify the general zoning regulation or introduce exemptions for a property. Other important regulations that architects and professional designers must be aware of are the height and setback and open space requirements. In many cases the calculated FAR would allow a larger building, but these other provisions of the NYC Zoning Resolution constrain the design so that the maximum allowed FAR cannot be reached.
Impact on land value
FAR has a major impact on the value of the land. Higher allowable FAR yields higher land value.
A 2022 study found that lower maximum-allowed FAR in New York City led to lower land value and lower density.
Criticism
Andres Duany et al. (2000) note:
Abdicating to floor area ratios (market forces) is the opposite of aiming a community toward something more than the sum of its parts.
FAR, a poor predictor of physical form, should not be used when the objective is to conserve and enhance neighborhood character; whereas traditional design standards (height, lot coverage and setbacks or build-to lines) enable anyone to make reasonably accurate predictions, recognize violations, and feel secure in their investment decisions.
If FAR is carelessly combined with traditional setbacks, assembled lots have a considerable advantage over individual lots, which has a negative effect on fine-grained cities and the diversity of ownership.
FAR does not consider factors affecting the environment, such as a new building's greenhouse gas emissions, energy consumption, and repercussions on local ecosystems.
Clarifying Duany's second criticism in reference to "lot coverage": If localities seek to regulate density through floor area ratio, the logical consequence is to encourage expansive one-story buildings with less green space, as single-story construction is less expensive than multi-story construction on a per square foot basis. On the other hand, if density is regulated by building coverage ratio (a.k.a. lot coverage or site coverage) then green space can be preserved and multi-story construction becomes financially advantageous. This outcome is demonstrated in the illustration comparing FAR to BCR.
Footnotes
References
Meriam, Dwight (2004). The Complete Guide to Zoning. McGraw-Hill.
Birch, Eugenie L. (2009). "The Urban and Regional Planning Reader". Routledge.
External links
An explanation of the floor area ratio by J.H. Crawford
Complete information on FSI or floor area ratio by Civil Site
Urban studies and planning terminology
Real property law
Engineering ratios | Floor area ratio | [
"Mathematics",
"Engineering"
] | 1,621 | [
"Quantity",
"Metrics",
"Engineering ratios"
] |
1,853,642 | https://en.wikipedia.org/wiki/Water%20damage | Water damage describes various possible losses caused by water intruding where it will enable attack of a material or system by destructive processes such as rotting of wood, mold growth, bacteria growth, rusting of steel, swelling of composite woods, de-laminating of materials such as plywood, short-circuiting of electrical devices, etc.
The damage may be imperceptibly slow and minor such as water spots that could eventually mar a surface, or it may be instantaneous and catastrophic such as burst pipes and flooding. However fast it occurs, water damage is a major contributor to loss of property.
An insurance policy may or may not cover the costs associated with water damage and the process of water damage restoration. While a common cause of residential water damage is often the failure of a sump pump, many homeowner's insurance policies do not cover the associated costs without an addendum which adds to the monthly premium of the policy. Often the verbiage of this addendum is similar to "Sewer and Drain Coverage".
In the United States, those individuals who are affected by wide-scale flooding may have the ability to apply for government and FEMA grants through the Individual Assistance program. On a larger level, businesses, cities, and communities can apply to the FEMA Public Assistance program for funds to assist after a large flood. For example, the city of Fond du Lac Wisconsin received $1.2 million FEMA grant after flooding in June 2008. The program allows the city to purchase the water damaged properties, demolish the structures, and turn the former land into public green space.
Causes
Water damage can originate by different sources such as a broken dishwasher hose, a washing machine overflow, a dishwasher leakage, broken/leaking pipes, flood waters, groundwater seepage, building envelope failures (leaking roof, windows, doors, siding, etc.) and clogged toilets. According to the Environmental Protection Agency, 13.7% of all water used in the home today can be attributed to plumbing leaks. On average that is approximately 10,000 gallons of water per year wasted by leaks for each US home. A tiny, 1/8-inch crack in a pipe can release up to 250 gallons of water a day. According to Claims Magazine in August 2000, broken water pipes ranked second to hurricanes in terms of both the number of homes damaged and the amount of claims (on average $50,000 per insurance claim) costs in the US. Experts suggest that homeowners inspect and replace worn pipe fittings and hose connections to all household appliances that use water at least once a year. This includes washing machines, dishwashers, kitchen sinks, and bathroom lavatories, refrigerator icemakers, water softeners, and humidifiers. A few US companies offer whole-house leak protection systems utilizing flow-based technologies. A number of insurance companies offer policyholders reduced rates for installing a whole-house leak protection system.
As far as insurance coverage is concerned, damage caused by surface water intrusion to the dwelling is considered flood damage and is normally excluded from coverage under traditional homeowners' insurance. Surface water is water that enters the dwelling from the surface of the ground because of inundation or insufficient drainage and causes loss to the dwelling. Coverage for surface water intrusion to the dwelling would usually require a separate flood insurance policy.
Categories
There are three basic categories of water damage, based on the level of contamination.
Category 1 Water - Refers to a source of water that does not pose a substantial threat to humans, classified as "clean water". Examples are broken water supply lines, tub or sink overflows, or appliance malfunctions that involve water supply lines.
Category 2 Water - Refers to a source of water that contains a significant degree of chemical, biological or physical contaminants and causes discomfort or sickness when consumed or even through exposure. Known as "grey water", this type carries microorganisms and nutrients for microorganisms. Examples are toilet bowls with urine (no feces), sump pump failures, seepage due to hydrostatic failure, and water discharge from dishwashers or washing machines.
Category 3 Water - Known as "black water", this is grossly unsanitary water containing unsanitary agents, harmful bacteria and fungi, causing severe discomfort or sickness. Category 3 covers contaminated water sources that affect the indoor environment, including water from sewage, seawater, rising water from rivers or streams, storm surge, ground surface water, or standing water. Category 2 water (grey water) that is not promptly removed from the structure or that has remained stagnant may be reclassified as Category 3 water. Toilet backflows that originate from beyond the toilet trap are considered black water contamination regardless of visible content or color.
Classes
Class of water damage is determined by the probable rate of evaporation based on the type of materials affected, or wet, in the room or space that was flooded. Determining the class of water damage is an important first step, and will determine the amount and type of equipment utilized to dry-down the structure.
Class 1 - Slow Rate of Evaporation. Affects only a portion of a room. Materials have a low permeance/porosity. Minimal moisture is absorbed by the materials. (The IICRC S500 2016 update adds that Class 1 is indicated when less than 5% of the total square footage of a room (ceiling + walls + floor) is affected.)
Class 2 - Fast Rate of Evaporation. Water affects the entire room of carpet and cushion. Water may have wicked up the walls, but not more than 24 inches. (The IICRC S500 2016 update adds that Class 2 is indicated when 5% to 40% of the total square footage of a room (ceiling + walls + floor) is affected.)
Class 3 - Fastest Rate of Evaporation. Water generally comes from overhead, affecting the entire area: walls, ceilings, insulation, carpet, cushion, etc. (The IICRC S500 2016 update adds that Class 3 is indicated when more than 40% of the total square footage of a room (ceiling + walls + floor) is affected.)
Class 4 - Specialty Drying Situations. Involves materials with a very low permeance/porosity, such as hardwood floors, concrete, crawlspaces, gypcrete, plaster, etc. Drying generally requires very low specific humidity to accomplish drying.
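A minimal sketch of how the percentage thresholds quoted above translate into a class assignment; the function name and the shortcut for Class 4 are illustrative, not part of the IICRC standard itself.

```python
def water_damage_class(affected_area, room_area, specialty_materials=False):
    """Illustrative classification following the percentage thresholds
    quoted above (IICRC S500, 2016): <5% -> Class 1, 5-40% -> Class 2,
    >40% -> Class 3.  Class 4 is flagged separately for low-permeance
    materials (hardwood, concrete, plaster, ...)."""
    if specialty_materials:
        return 4
    pct = 100.0 * affected_area / room_area
    if pct < 5:
        return 1
    elif pct <= 40:
        return 2
    else:
        return 3

# Room with 120 m^2 of total surface (ceiling + walls + floor),
# 30 m^2 of which is wet -> 25% -> Class 2.
print(water_damage_class(affected_area=30.0, room_area=120.0))
```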
Restoration
Water damage restoration can be performed by property management teams, building maintenance personnel, or by the homeowners themselves; however, contacting a certified professional water damage restoration specialist is often regarded as the safest way to restore water damaged property. Certified professional water damage restoration specialists utilize psychrometrics to monitor the drying process.
Standards and regulation
While there are currently no government regulations in the United States dictating procedures, two certifying bodies, the Institute of Inspection Cleaning and Restoration Certification (IICRC) and the RIA, do recommend standards of care. The current IICRC standard is ANSI/IICRC S500-2021. It is the collaborative work of the IICRC, SCRT, IEI, IAQA, and NADCA.
Fire and Water Restoration companies are regulated by the appropriate state's Department of Consumer Affairs - usually the state contractors license board. In California, all Fire and Water Restoration companies must register with the California Contractors State License Board. Presently, the California Contractors State License Board has no specific classification for "water and fire damage restoration."
Procedures
Water damage restoration is often prefaced by a loss assessment and evaluation of affected materials. The damaged area is inspected with water sensing equipment such as probes and other infrared tools in order to determine the source of the damage and possible extent of areas affected. Emergency mitigation services are the first order of business. Controlling the source of water, removal of non-salvageable materials, water extraction and pre-cleaning of impacted materials are all part of the mitigation process. Restoration services would then be rendered to the property in order to dry the structure, stabilize building materials, sanitize any affected or cross-contaminated areas, and deodorize all affected areas and materials. After the labor is completed, water damage equipment including air movers, air scrubbers, dehumidifiers, wood floor drying systems, and sub-floor drying equipment is left in the residence. The goal of the drying process is to stabilize the moisture content of impacted materials below 15%, the generally accepted threshold for microbial amplification. Industry standards state that drying vendors should return at regular time intervals, preferably every twenty-four hours, to monitor the equipment, temperature, humidity, and moisture content of the affected walls and contents.[6]
In conclusion, key aspects of water damage restoration include fast action, adequate equipment, moisture measurements, and structural drying. Dehumidification is especially crucial for structural components affected by water damage, such as wooden beams, flooring, and drywall.
See also
Indoor mold
References
Moisture protection
Building engineering | Water damage | [
"Engineering"
] | 1,849 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
1,853,790 | https://en.wikipedia.org/wiki/Suicide%20bridge | A suicide bridge is a bridge used frequently by people to end their lives, most typically by jumping off and into the water or ground below. A fall from the height of a tall bridge into water may be fatal, although some people have survived jumps from high bridges such as the Golden Gate Bridge. However, death or significant injury from the fall itself is far from certain; numerous studies report minimally injured persons who died from drowning.
To reach such locations, those with the intention of ending their lives must often walk long distances to reach the point where they finally decide to jump. For example, some individuals have traveled over the San Francisco–Oakland Bay Bridge by car in order to jump from the Golden Gate Bridge.
Prevention
Suicide prevention advocates believe that suicide by bridge is more likely to be impulsive than other means, and that barriers can have a significant effect on reducing the incidence of suicides by bridge. One study showed that installing barriers on the Duke Ellington Bridge in Washington, D.C.—which has a high incidence of suicide—did not cause an increase of suicides at the nearby Taft Bridge. A similar result was seen when barriers were erected on the popular suicide bridge the Clifton Suspension Bridge, in the United Kingdom. Families affected and groups that help the mentally ill have lobbied governments to erect similar barriers. One such barrier is the Luminous Veil on the Prince Edward Viaduct in Toronto, Canada, once considered North America's second deadliest bridge, with over 400 jumps on record.
Special telephones with connections to crisis hotlines are sometimes installed on bridges.
Bridges
Australia
The Sydney Harbour Bridge, the Mooney Mooney Bridge on the Central Coast (New South Wales), and the Westgate Bridge in Melbourne, Australia and the Story Bridge in Brisbane are considered suicide bridges.
Sydney Harbour Bridge has a suicide prevention barrier. In February 2009, following the murder of a four-year-old girl who was thrown off the bridge by her father, the first stage of a temporary suicide barrier was erected on Westgate Bridge, constructed of concrete crash barriers topped with a welded mesh fence. The permanent barrier has now been completed throughout the span of the bridge. The barriers cost AU$20 million and have been reported to have reduced suicide rates on the Westgate by 85%.
Suicide prevention barriers were installed on the Story Bridge in 2013; a three-metre-high barrier runs the full length of both sides of the bridge.
Canada
There are a number of suicide bridges in the Metro Vancouver area, the most frequented being the Lions Gate Bridge, which saw 324 suicidal incidents, including 78 jumps from 2006 to 2017.
The High Level Bridge in Edmonton, Alberta, is considered a suicide bridge. It is unknown how many deaths have occurred at the bridge, but there have been at least 25 in total, with 10 being from 2012–2013. There have also been many failed attempts at the bridge. A suicide prevention barrier has been installed along with signage and support phone lines.
The Jacques Cartier Bridge in Montreal, Quebec, is considered a suicide bridge. In 2004, a suicide prevention barrier was installed. Until then the bridge saw an average of 10 suicides a year.
The Prince Edward Viaduct, commonly referred to as the Bloor Viaduct, in Toronto, Ontario, was considered a suicide bridge. With nearly 500 suicides by 2003, the Viaduct was ranked as the second most fatal standing structure in North America, after the Golden Gate Bridge in San Francisco. Suicides dropped to zero after a barrier was completed in 2003.
The Lethbridge Viaduct in Lethbridge, Alberta, also known as the High Level Bridge, is considered a suicide bridge. It is unknown how many deaths have occurred at the bridge since its opening in 1909. Suicide prevention signage has been installed at the entrance to the bridge, however no further prevention program is in development.
The Angus L. Macdonald Bridge in Halifax, Nova Scotia, has been used for suicide attempts. As of 2010, safety barriers have been installed the full length of the pedestrian walkway.
The Reversing Falls Bridge in Saint John, New Brunswick has often been used for suicide attempts. Efforts have been made by the city to install barriers, but it has struggled to secure provincial funds to do so.
The Burgoyne Bridge in St. Catharines, Ontario, has had several suicides. In 2020, stainless steel netting was installed as a suicide prevention measure.
Czech Republic
About 300 people have jumped to their death from the Nusle Bridge, in Prague, Czech Republic. Barriers almost 3 metres high were erected here in 1997 with the aim of preventing further jumps. In 2007, the fencing was topped with polished metal to make it impossible to climb.
A bridge in Kladno has also been described as a suicide bridge and a "second Nusle". Between 2013 and 2018, 23 suicides were attempted there. Because of its relatively low height above the ground, attempts are not always successful; however, the bridge is easy to access and there is no suicide barrier.
New Zealand
The Auckland Harbour Bridge and Grafton Bridge in Auckland have been known for suicides and suicide attempts, with multiple attempts to install suicide prevention barriers in recent decades.
South Africa
88 people have jumped to their death from the Van Stadens Bridge, near Port Elizabeth, Eastern Cape, South Africa. A barrier has since been installed.
South Korea
A frequently used suicide bridge in Seoul is the Mapo Bridge, locally known as "Suicide Bridge" and "The Bridge of Death". South Korean authorities have tried to counter this by calling the bridge "The Bridge of Life" and posting reassuring messages on the ledges.
United Kingdom
The Clifton Suspension Bridge in Bristol was designed by Isambard Kingdom Brunel and opened in 1864. Since then, it has gained a reputation as a suicide bridge, with over 500 deaths from jumping. It has plaques that advertise the telephone number of Samaritans. In 1998, the bridge was fitted with suicide barriers, which halved the suicide rate in the years following. The bridge spans the River Avon. CCTV is also installed on the bridge.
A notable suicide bridge in London is the Hornsey Lane Bridge, which passes over Archway Road and connects the Highgate and Crouch End areas. The bridge provides views of notable landmarks such as St. Paul's Cathedral, The Gherkin and The Shard. It was the venue for mental illness campaign group Mad Pride's inaugural vigil in 2000 and was the subject of Johnny Burke's 2006 film The Bridge. When, at the end of 2010, three men in three weeks died by suicide from jumping from the bridge, a campaign was set up by local residents for better anti-suicide measures to be put in place. In October 2015 Islington Council and Haringey Council approved Transport for London's plans for the construction of a safety fence. In summer 2019, Haringey Council installed additional measures to prevent suicide from the bridge in the form of a 3m high fence.
At the Humber Bridge in Hull, more than 200 incidents of people jumping or falling from the bridge have taken place since its opening in 1981. Between 1990 and February 2001 the Humber Rescue Team was called 64 times to deal with people falling or jumping off the bridge.
Overtoun Bridge near Dumbarton in West Dunbartonshire has been publicised due to reports of dogs jumping or falling from the bridge.
United States
The Golden Gate Bridge in San Francisco has the second highest number of suicides in the world (after the Nanjing Yangtze River Bridge) with around 1,600 bodies having been recovered as of 2012, and the assumption of many more unconfirmed deaths. In 2004, documentary filmmaker Eric Steel set off controversy by revealing that he had tricked the bridge committee into allowing him to film the Golden Gate for months and had captured 23 suicides on film for his documentary The Bridge (2006). In March 2005, San Francisco supervisor Tom Ammiano proposed funding a study on erecting a suicide barrier on the bridge. In June 2014, a suicide barrier was approved for the Golden Gate Bridge. Barrier construction began in 2017 and was expected to be completed by 2021.
In Seattle, Washington, more than 230 people have died by suicide from the George Washington Memorial Bridge, making it the second deadliest suicide bridge in the United States. In a span of a decade ending in January 2007, nearly 50 people jumped to their deaths, nine in 2006. At a cost of $5,000,000, a suicide barrier was completed on February 16, 2011.
The San Diego-Coronado Bridge is the third-deadliest suicide bridge in the United States, followed by the Sunshine Skyway Bridge in St. Petersburg, Florida.
The Cold Spring Canyon Arch Bridge along State Route 154 in Santa Barbara County, California has seen 55 suicide jumps since opening in 1964, including 7 in 2009. A proposal to install a barrier on this bridge in 2005 led to the completion of a safety barrier/fence in March 2012.
Colorado Street Bridge in Pasadena, California, has also seen barriers erected.
During the mid-20th century in Philadelphia, Pennsylvania, the Wissahickon Memorial Bridge had a policeman stationed after it opened because of the numerous suicides taking place.
In recent years, the Eads Bridge, connecting St. Louis, Missouri and East St. Louis, Illinois, has seen several suicides, approximately 18 since its re-opening.
Cornell University has had a number of suicides by jumping from the bridges over the gorges on campus from the 1970s to 2010. Between 1991 and 1994, five students died by suicide in the gorges.
New River Gorge Bridge in Fayetteville, West Virginia
Suicide Bridge Road is located just off Maryland Route 14 near the town of Secretary, Maryland.
The Chesapeake Bay Bridge in Maryland.
The George Washington Bridge in New York City.
The Natchez Trace Parkway Bridge in Williamson County, Tennessee
The All-America Bridge in Akron, Ohio.
The Washington Avenue Bridge in Minneapolis, Minnesota
See also
Copycat suicide
List of suicide sites
Lover's Leap
Aokigahara suicide forest
References
External links
(A series of articles about suicides on the Golden Gate Bridge.)
(A bridge suicide jump survivor invents a prevention device.)
(Detailed documentation of Skyway Bridge suicides in Florida.)
Bridges
Suicide by jumping | Suicide bridge | [
"Engineering"
] | 2,078 | [
"Structural engineering",
"Bridges"
] |
1,853,791 | https://en.wikipedia.org/wiki/Hydraulic%20conductivity | In science and engineering, hydraulic conductivity (K, in SI units of meters per second) is a property of porous materials, soils and rocks, that describes the ease with which a fluid (usually water) can move through the pore space, or fracture network. It depends on the intrinsic permeability (k, unit: m²) of the material, the degree of saturation, and on the density and viscosity of the fluid. Saturated hydraulic conductivity, Ksat, describes water movement through saturated media.
By definition, hydraulic conductivity is the ratio of volume flux to hydraulic gradient yielding a quantitative measure of a saturated soil's ability to transmit water when subjected to a hydraulic gradient.
Methods of determination
There are two broad approaches for determining hydraulic conductivity:
In the empirical approach the hydraulic conductivity is correlated to soil properties like pore-size and particle-size (grain-size) distributions, and soil texture.
In the experimental approach the hydraulic conductivity is determined from hydraulic experiments that are interpreted using Darcy's law.
The experimental approach is broadly classified into:
Laboratory tests using soil samples subjected to hydraulic experiments
Field tests (on site, in situ) that are differentiated into:
small-scale field tests, using observations of the water level in cavities in the soil
large-scale field tests, like pumping tests in wells or by observing the functioning of existing horizontal drainage systems.
The small-scale field tests are further subdivided into:
infiltration tests in cavities above the water table
slug tests in cavities below the water table
The methods of determining hydraulic conductivity and other hydraulic properties are investigated by numerous researchers and include additional empirical approaches.
Estimation by empirical approach
Estimation from grain size
Allen Hazen derived an empirical formula for approximating hydraulic conductivity from grain-size analyses:

K = C (d10)²

where
C is Hazen's empirical coefficient, which takes a value between 0.0 and 1.5 (depending on the literature), with an average value of 1.0. A.F. Salarashayeri & M. Siosemarde indicate C is usually between 1.0 and 1.5, with d10 in mm and K in cm/s.
d10 is the diameter of the 10-percentile grain size of the material.
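As a rough numerical check of the Hazen relation above, the following sketch evaluates K from d10; the function name and example values are illustrative, and the mm and cm/s unit convention follows the text above.

```python
def hazen_conductivity(d10_mm, c=1.0):
    """Estimate hydraulic conductivity K (cm/s) from the 10-percentile
    grain diameter d10 (mm) using Hazen's empirical formula K = C * d10^2."""
    return c * d10_mm ** 2

# A medium sand with d10 = 0.2 mm and C = 1.0 gives K ~ 0.04 cm/s.
print(hazen_conductivity(0.2))
```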
Pedotransfer function
A pedotransfer function (PTF) is a specialized empirical estimation method, used primarily in the soil sciences, but increasingly used in hydrogeology. There are many different PTF methods, however, they all attempt to determine soil properties, such as hydraulic conductivity, given several measured soil properties, such as soil particle size, and bulk density.
Determination by experimental approach
There are relatively simple and inexpensive laboratory tests that may be run to determine the hydraulic conductivity of a soil: constant-head method and falling-head method.
Laboratory methods
Constant-head method
The constant-head method is typically used on granular soil.
This procedure allows water to move through the soil under a steady state head condition while the volume of water flowing through the soil specimen is measured over a period of time.
By knowing the volume V of water measured in a time t, over a specimen of length L and cross-sectional area A, as well as the head h, the hydraulic conductivity (K) can be derived by simply rearranging Darcy's law:

K = (V L) / (A t h)
Proof:
Darcy's law states that the volumetric flow Q depends on the pressure differential ΔP between the two sides of the sample, the permeability k and the dynamic viscosity μ as:

Q = k A ΔP / (μ L)

In a constant head experiment, the head h (difference between two heights) defines an excess water mass, ρAh, where ρ is the density of water.
This mass weighs down on the side it is on, creating a pressure differential of ΔP = ρ g h, where g is the gravitational acceleration.
Plugging this directly into the above gives

Q = k ρ g A h / (μ L)

If the hydraulic conductivity is defined to be related to the hydraulic permeability as

K = k ρ g / μ,
this gives the result.
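A minimal sketch of the constant-head calculation K = VL/(Ath); the variable names and example numbers are illustrative.

```python
def constant_head_k(volume, time, length, area, head):
    """Hydraulic conductivity from a constant-head test, K = V*L / (A*t*h).
    Any consistent unit system works; SI units here give K in m/s."""
    return (volume * length) / (area * time * head)

# Example: 2.0e-4 m^3 collected in 300 s through a 0.1 m long sample of
# 8.0e-3 m^2 cross-section under a 0.25 m head difference.
print(constant_head_k(volume=2.0e-4, time=300.0, length=0.1,
                      area=8.0e-3, head=0.25))
```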
Falling-head method
In the falling-head method, the soil sample is first saturated under a specific head condition.
The water is then allowed to flow through the soil without adding any water, so the pressure head declines as water passes through the specimen.
The advantage to the falling-head method is that it can be used for both fine-grained and coarse-grained soils.
If the head drops from h1 to h2 in a time t, then the hydraulic conductivity is equal to

K = (a L) / (A t) · ln(h1 / h2)

where a is the cross-sectional area of the standpipe and A and L are the cross-sectional area and length of the soil specimen.
Proof: As above, Darcy's law reads

Q = K A h / L

The decrease in volume in the standpipe is related to the falling head by dV = −a dh.
Plugging this relationship into the above, and taking the limit as the time step goes to zero, the differential equation

dh/dt = −(K A) / (a L) · h

has the solution

h(t) = h1 exp(−K A t / (a L))

Plugging in h(t) = h2 and rearranging gives the result.
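A corresponding sketch for the falling-head result, using the standpipe area a introduced above; the example values are illustrative.

```python
import math

def falling_head_k(a, A, L, t, h1, h2):
    """Hydraulic conductivity from a falling-head test,
    K = (a*L / (A*t)) * ln(h1/h2), where a is the standpipe cross-section,
    A and L the sample cross-section and length, and the head falls
    from h1 to h2 during time t."""
    return (a * L) / (A * t) * math.log(h1 / h2)

# Example in SI units: the head falls from 1.0 m to 0.35 m in 3600 s.
print(falling_head_k(a=1.0e-4, A=8.0e-3, L=0.1, t=3600.0, h1=1.0, h2=0.35))
```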
In-situ (field) methods
Compared to laboratory methods, field methods give the most reliable information about the permeability of soil with minimal disturbance. In laboratory methods, the degree of disturbance affects the reliability of the measured value of the permeability of the soil.
Pumping Test
The pumping test is the most reliable method to calculate the coefficient of permeability of a soil. This test is further classified into pumping-in tests and pumping-out tests.
Augerhole method
There are also in-situ methods for measuring the hydraulic conductivity in the field.
When the water table is shallow, the augerhole method, a slug test, can be used for determining the hydraulic conductivity below the water table.
The method was developed by Hooghoudt (1934) in The Netherlands and introduced in the US by Van Bavel and Kirkham (1948).
The method uses the following steps:
an augerhole is perforated into the soil to below the water table
water is bailed out from the augerhole
the rate of rise of the water level in the hole is recorded
the K-value is calculated from the observed rate of rise of the water level, (h1 − h2)/t, multiplied by a factor F that depends on the geometry of the hole,
where:
K is the horizontal saturated hydraulic conductivity (m/day)
h is the depth of the water level in the hole relative to the water table in the soil (cm):
h1 at time t1
h2 at time t2
t is the time (in seconds) since the first measurement of h as h1
F is the geometry factor, determined by:
r, the radius of the cylindrical hole (cm)
h', the average depth of the water level in the hole relative to the water table in the soil (cm), found as h' = (h1 + h2)/2
D, the depth of the bottom of the hole relative to the water table in the soil (cm).
The picture shows a large variation of K-values measured with the augerhole method in an area of 100 ha. The ratio between the highest and lowest values is 25. The cumulative frequency distribution is lognormal and was made with the CumFreq program.
Related magnitudes
Transmissivity
The transmissivity is a measure of how much water can be transmitted horizontally, such as to a pumping well.
Transmissivity should not be confused with the similar word transmittance used in optics, meaning the fraction of incident light that passes through a sample.
An aquifer may consist of soil layers. The transmissivity of a horizontal flow for the i-th soil layer with a saturated thickness di and horizontal hydraulic conductivity Ki is:

Ti = Ki di

Transmissivity is directly proportional to the horizontal hydraulic conductivity Ki and the thickness di. Expressing Ki in m/day and di in m, the transmissivity is found in units m²/day.
The total transmissivity Tt of the aquifer is the sum of every layer's transmissivity:

Tt = Σ Ti

The apparent horizontal hydraulic conductivity KA of the aquifer is:

KA = Tt / Dt

where Dt, the total thickness of the aquifer, is the sum of each layer's individual thickness:

Dt = Σ di
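A minimal sketch of how layer transmissivities combine, using the formulas above with made-up layer values.

```python
# Layered-aquifer transmissivity: Ti = Ki * di, Tt = sum(Ti),
# apparent horizontal K = Tt / total thickness.
layers = [
    {"K": 10.0, "d": 5.0},   # horizontal conductivity (m/day), thickness (m)
    {"K": 0.5,  "d": 2.0},
    {"K": 25.0, "d": 8.0},
]

transmissivities = [layer["K"] * layer["d"] for layer in layers]   # m^2/day
T_total = sum(transmissivities)
D_total = sum(layer["d"] for layer in layers)
K_apparent = T_total / D_total

print(f"Tt = {T_total:.1f} m^2/day, apparent horizontal K = {K_apparent:.2f} m/day")
```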
The transmissivity of an aquifer can be determined from pumping tests.

Influence of the water table

When a soil layer is above the water table, it is not saturated and does not contribute to the transmissivity. When the soil layer is entirely below the water table, its saturated thickness corresponds to the thickness of the soil layer itself. When the water table is inside a soil layer, the saturated thickness corresponds to the distance of the water table to the bottom of the layer. As the water table may behave dynamically, this thickness may change from place to place or from time to time, so that the transmissivity may vary accordingly.
In a semi-confined aquifer, the water table is found within a soil layer with a negligibly small transmissivity, so that changes of the total transmissivity (Tt) resulting from changes in the level of the water table are negligibly small.
When pumping water from an unconfined aquifer, where the water table is inside a soil layer with a significant transmissivity, the water table may be drawn down whereby the transmissivity reduces and the flow of water to the well diminishes.
Resistance
The resistance to vertical flow (Ri) of the i-th soil layer with a saturated thickness di and vertical hydraulic conductivity Kvi is:

Ri = di / Kvi

Expressing Kvi in m/day and di in m, the resistance (Ri) is expressed in days.
The total resistance (Rt) of the aquifer is the sum of each layer's resistance:

Rt = Σ Ri = Σ di / Kvi

The apparent vertical hydraulic conductivity (KvA) of the aquifer is:

KvA = Dt / Rt

where Dt is the total thickness of the aquifer:

Dt = Σ di
The resistance plays a role in aquifers where a sequence of layers occurs with varying horizontal permeability so that horizontal flow is found mainly in the layers with high horizontal permeability while the layers with low horizontal permeability transmit the water mainly in a vertical sense.
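A matching sketch for the vertical-flow resistance of a layered aquifer, again with made-up values; note how a single thin, poorly conductive layer dominates the total resistance.

```python
# Vertical-flow resistance: Ri = di / Kvi, Rt = sum(Ri),
# apparent vertical K = total thickness / Rt.
layers = [
    {"Kv": 1.0,  "d": 5.0},   # vertical conductivity (m/day), thickness (m)
    {"Kv": 0.01, "d": 2.0},   # a thin clayey layer dominates the resistance
    {"Kv": 2.5,  "d": 8.0},
]

resistances = [layer["d"] / layer["Kv"] for layer in layers]   # days
R_total = sum(resistances)
D_total = sum(layer["d"] for layer in layers)
Kv_apparent = D_total / R_total

print(f"Rt = {R_total:.0f} days, apparent vertical K = {Kv_apparent:.3f} m/day")
```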
Anisotropy
When the horizontal and vertical hydraulic conductivity (Kh and Kv) of the soil layer differ considerably, the layer is said to be anisotropic with respect to hydraulic conductivity.
When the apparent horizontal and vertical hydraulic conductivity (KA and KvA) differ considerably, the aquifer is said to be anisotropic with respect to hydraulic conductivity.
An aquifer is called semi-confined when a saturated layer with a relatively small horizontal hydraulic conductivity (the semi-confining layer or aquitard) overlies a layer with a relatively high horizontal hydraulic conductivity so that the flow of groundwater in the first layer is mainly vertical and in the second layer mainly horizontal.
The resistance of a semi-confining top layer of an aquifer can be determined from pumping tests.
When calculating flow to drains or to a well field in an aquifer with the aim to control the water table, the anisotropy is to be taken into account, otherwise the result may be erroneous.
Relative properties
Because of their high porosity and permeability, sand and gravel aquifers have higher hydraulic conductivity than clay or unfractured granite aquifers. Sand or gravel aquifers would thus be easier to extract water from (e.g., using a pumping well) because of their high transmissivity, compared to clay or unfractured bedrock aquifers.
Hydraulic conductivity has units with dimensions of length per time (e.g., m/s, ft/day and (gal/day)/ft2 ); transmissivity then has units with dimensions of length squared per time. The following table gives some typical ranges (illustrating the many orders of magnitude which are likely) for K values.
Hydraulic conductivity (K) is one of the most complex and important of the properties of aquifers in hydrogeology as the values found in nature:
range over many orders of magnitude (the distribution is often considered to be lognormal),
vary a large amount through space (sometimes considered to be randomly spatially distributed, or stochastic in nature),
are directional (in general K is a symmetric second-rank tensor; e.g., vertical K values can be several orders of magnitude smaller than horizontal K values),
are scale dependent (testing a m³ of aquifer will generally produce different results than a similar test on only a cm³ sample of the same aquifer),
must be determined indirectly through field pumping tests, laboratory column flow tests or inverse computer simulation, (sometimes also from grain size analyses), and
are very dependent (in a non-linear way) on the water content, which makes solving the unsaturated flow equation difficult. In fact, the variably saturated K for a single material varies over a wider range than the saturated K values for all types of materials (see chart below for an illustrative range of the latter).
Ranges of values for natural materials
Table of saturated hydraulic conductivity (K) values found in nature
Values are for typical fresh groundwater conditions — using standard values of viscosity and specific gravity for water at 20 °C and 1 atm.
See the similar table derived from the same source for intrinsic permeability values.
Source: modified from Bear, 1972
See also
Aquifer test
Hydraulic analogy
Pedotransfer function – for estimating hydraulic conductivities given soil properties
References
External links
Hydraulic conductivity calculator
Hydrology
Hydraulic engineering
Soil mechanics
Soil physics | Hydraulic conductivity | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering",
"Environmental_science"
] | 2,648 | [
"Physical phenomena",
"Hydrology",
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Soil mechanics",
"Soil physics",
"Physical properties",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering"
] |
1,854,270 | https://en.wikipedia.org/wiki/Rainout%20%28radioactivity%29 | A rainout is the process of precipitation causing the removal of radioactive particles from the atmosphere onto the ground, creating nuclear fallout by rain. The rainclouds of the rainout are often formed by the particles of a nuclear explosion itself and because of this, the decontamination of rainout is more difficult than a "dry" fallout.
In atmospheric science, rainout also refers to the removal of soluble species—not necessarily radioactive—from the atmosphere by precipitation.
Factors affecting rainout
A rainout could occur in the vicinity of ground zero or the contamination could be carried aloft before deposition depending on the current atmospheric conditions and how the explosion occurred. The explosion, or burst, can be air, surface, subsurface, or seawater. An air burst will produce less fallout than a comparable explosion near the ground due to less particulate being contaminated. Detonations at the surface will tend to produce more fallout material. In case of water surface bursts, the particles tend to be rather lighter and smaller, producing less local fallout but extending over a greater area. The particles contain mostly sea salts with some water; these can have a cloud seeding effect causing local rainout and areas of high local fallout. Fallout from a seawater burst is difficult to remove once it has soaked into porous surfaces because the fission products are present as metallic ions which become chemically bonded to many surfaces. For subsurface bursts, there is an additional phenomenon present called "base surge". The base surge is a cloud that rolls outward from the bottom of the subsiding column, which is caused by an excessive density of dust or water droplets in the air. This surge is made up of small solid particles, but it still behaves like a fluid. A soil earth medium favors base surge formation in an underground burst. Although the base surge typically contains only about 10% of the total bomb debris in a subsurface burst, it can create larger radiation doses than fallout near the detonation, because it arrives sooner than fallout, before much radioactive decay has occurred. For underwater bursts, the visible surge is, in effect, a cloud of liquid (usually water) droplets with the property of flowing almost as if it were a homogeneous fluid. After the water evaporates, an invisible base surge of small radioactive particles may persist.
Meteorologically, snow and rain will accelerate local fallout. Under special meteorological conditions, such as a local rain shower that originates above the radioactive cloud, limited areas of heavy contamination just downwind of a nuclear blast may be formed. Rain on an area contaminated by a surface burst changes the pattern of radioactive intensities by washing off higher elevations, buildings, equipment, and vegetation. This reduces intensities in some areas and possibly increases intensities in drainage systems; on low ground; and in flat, poorly drained areas.
References
Radioactivity | Rainout (radioactivity) | [
"Physics",
"Chemistry"
] | 581 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Radioactivity",
"Nuclear physics"
] |
1,854,369 | https://en.wikipedia.org/wiki/Hydraulic%20head | Hydraulic head or piezometric head is a specific measurement of liquid pressure above a vertical datum.
It is usually measured as a liquid surface elevation, expressed in units of length, at the entrance (or bottom) of a piezometer. In an aquifer, it can be calculated from the depth to water in a piezometric well (a specialized water well), and given information of the piezometer's elevation and screen depth. Hydraulic head can similarly be measured in a column of water using a standpipe piezometer by measuring the height of the water surface in the tube relative to a common datum. The hydraulic head can be used to determine a hydraulic gradient between two or more points.
Definition
In fluid dynamics, head is a concept that relates the energy in an incompressible fluid to the height of an equivalent static column of that fluid. From Bernoulli's principle, the total energy at a given point in a fluid is the kinetic energy associated with the speed of flow of the fluid, plus energy from static pressure in the fluid, plus energy from the height of the fluid relative to an arbitrary datum. Head is expressed in units of distance such as meters or feet. The force per unit volume on a fluid in a gravitational field is equal to ρg where ρ is the density of the fluid, and g is the gravitational acceleration. On Earth, additional height of fresh water adds a static pressure of about 9.8 kPa per meter (0.098 bar/m) or 0.433 psi per foot of water column height.
The static head of a pump is the maximum height (pressure) it can deliver. The capability of the pump at a certain RPM can be read from its Q-H curve (flow vs. height).
Head is useful in specifying centrifugal pumps because their pumping characteristics tend to be independent of the fluid's density.
There are generally four types of head:
Velocity head is due to the bulk motion (kinetic energy) of a fluid: hv = v²/(2g). Note that ρv²/2 is equal to the dynamic pressure for irrotational flow.
Elevation head is due to the fluid's weight, the gravitational force acting on a column of fluid. The elevation head is simply the elevation (z) of the fluid above an arbitrarily designated zero point.
Pressure head is due to the static pressure, the internal molecular motion of a fluid that exerts a force on its container. It is equal to the pressure divided by the force per unit volume of the fluid in a gravitational field: ψ = P/(ρg).
Resistance head (or friction head or head loss) is due to the frictional forces acting against a fluid's motion by the container. For a continuous medium, this is described by Darcy's law, which relates the volume flow rate (q) to the gradient of the hydraulic head through the hydraulic conductivity K: q = −K ∇h; in a piped system head losses are instead described by the Hagen–Poiseuille equation and Bernoulli's equation.
Components
After free falling through a height h in a vacuum from an initial velocity of 0, a mass will have reached a speed

v = √(2 g h)

where g is the acceleration due to gravity. Rearranged as a head:

hv = v² / (2 g)

The term v²/(2g) is called the velocity head, expressed as a length measurement. In a flowing fluid, it represents the energy of the fluid due to its bulk motion.
The total hydraulic head of a fluid is composed of pressure head and elevation head. The pressure head is the equivalent gauge pressure of a column of water at the base of the piezometer, and the elevation head is the relative potential energy in terms of an elevation. The head equation, a simplified form of the Bernoulli principle for incompressible fluids, can be expressed as:

h = ψ + z

where
h is the hydraulic head (Length in m or ft), also known as the piezometric head,
ψ is the pressure head, in terms of the elevation difference of the water column relative to the piezometer bottom (Length in m or ft), and
z is the elevation at the piezometer bottom (Length in m or ft)
In an example with a 400 m deep piezometer, with an elevation of 1000 m, and a depth to water of 100 m: z = 600 m, ψ = 300 m, and h = 900 m.
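The worked example above can be reproduced in a couple of lines; the numbers follow the 400 m piezometer example, and the function name is illustrative.

```python
def hydraulic_head(pressure_head, elevation_head):
    """Total (piezometric) head h = psi + z."""
    return pressure_head + elevation_head

# The worked example above: surface elevation 1000 m, piezometer depth 400 m,
# depth to water 100 m.
z = 1000.0 - 400.0    # elevation head: surface elevation minus piezometer depth
psi = 400.0 - 100.0   # pressure head: piezometer depth minus depth to water
print(hydraulic_head(psi, z))   # 900.0 m
```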
The pressure head can be expressed as:

ψ = P / γ = P / (ρ g)

where
P is the gauge pressure (Force per unit area, often Pa or psi),
γ is the unit weight of the liquid (Force per unit volume, typically N·m−3 or lbf/ft3),
ρ is the density of the liquid (Mass per unit volume, frequently kg·m−3), and
g is the gravitational acceleration (velocity change per unit time, often m·s−2)
Fresh water head
The pressure head is dependent on the density of water, which can vary depending on both the temperature and chemical composition (salinity, in particular). This means that the hydraulic head calculation is dependent on the density of the water within the piezometer. If one or more hydraulic head measurements are to be compared, they need to be standardized, usually to their fresh water head, which can be calculated as:

hfw = z + ψ (ρ / ρfw)

where
hfw is the fresh water head (Length, measured in m or ft), and
ρfw is the density of fresh water (Mass per unit volume, typically in kg·m−3)
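A minimal sketch of the fresh-water-head conversion given above; the seawater density and head values are illustrative.

```python
def fresh_water_head(psi, z, rho, rho_fresh=1000.0):
    """Convert a head measured in water of density rho (kg/m^3) to the
    equivalent fresh-water head: h_fw = z + psi * (rho / rho_fresh)."""
    return z + psi * (rho / rho_fresh)

# A 300 m pressure head of seawater (~1025 kg/m^3) at z = 600 m is
# equivalent to a slightly larger fresh-water head.
print(fresh_water_head(psi=300.0, z=600.0, rho=1025.0))   # 907.5 m
```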
Hydraulic gradient
The hydraulic gradient is a vector gradient between two or more hydraulic head measurements over the length of the flow path. For groundwater, it is also called the Darcy slope, since it determines the quantity of a Darcy flux or discharge. It also has applications in open-channel flow where it is also known as stream gradient and can be used to determine whether a reach is gaining or losing energy. A dimensionless hydraulic gradient can be calculated between two points with known head values as:

i = dh / dl

where
i is the hydraulic gradient (dimensionless),
dh is the difference between two hydraulic heads (length, usually in m or ft), and
dl is the flow path length between the two piezometers (length, usually in m or ft)
The hydraulic gradient can be expressed in vector notation, using the del operator. This requires a hydraulic head field, which can be practically obtained only from numerical models, such as MODFLOW for groundwater or standard step or HEC-RAS for open channels. In Cartesian coordinates, this can be expressed as:

∇h = (∂h/∂x, ∂h/∂y, ∂h/∂z)
This vector describes the direction of the groundwater flow, where negative values indicate flow along the dimension, and zero indicates 'no flow'. As with any other example in physics, energy must flow from high to low, which is why the flow is in the negative gradient. This vector can be used in conjunction with Darcy's law and a tensor of hydraulic conductivity to determine the flux of water in three dimensions.
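A minimal sketch of the scalar gradient calculation between two piezometers, combined with Darcy's law; the well spacing, heads and conductivity are illustrative values.

```python
def hydraulic_gradient(h1, h2, flow_path_length):
    """Dimensionless hydraulic gradient between two piezometers:
    i = (h2 - h1) / dl."""
    return (h2 - h1) / flow_path_length

# Two wells 500 m apart with heads of 900 m and 898.5 m.
i = hydraulic_gradient(900.0, 898.5, 500.0)
print(i)        # -0.003: head decreases along the flow path

# With Darcy's law, q = -K * i, a conductivity of 5 m/day gives a
# specific discharge of 0.015 m/day toward the lower head.
K = 5.0
print(-K * i)
```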
In groundwater
The distribution of hydraulic head through an aquifer determines where groundwater will flow. In a hydrostatic example (first figure), where the hydraulic head is constant, there is no flow. However, if there is a difference in hydraulic head from the top to bottom due to draining from the bottom (second figure), the water will flow downward, due to the difference in head, also called the hydraulic gradient.
Atmospheric pressure
Even though it is convention to use gauge pressure in the calculation of hydraulic head, it is more correct to use absolute pressure (gauge pressure + atmospheric pressure), since this is truly what drives groundwater flow. Often detailed observations of barometric pressure are not available at each well through time, so this is often disregarded (contributing to large errors at locations where hydraulic gradients are low or the angle between wells is acute.)
The effects of changes in atmospheric pressure upon water levels observed in wells has been known for many years. The effect is a direct one, an increase in atmospheric pressure is an increase in load on the water in the aquifer, which increases the depth to water (lowers the water level elevation). Pascal first qualitatively observed these effects in the 17th century, and they were more rigorously described by the soil physicist Edgar Buckingham (working for the United States Department of Agriculture (USDA)) using air flow models in 1907.
Head loss
In any real moving fluid, energy is dissipated due to friction; turbulence dissipates even more energy for high Reynolds number flows. This dissipation, called head loss, is divided into two main categories, "major losses" associated with energy loss per length of pipe, and "minor losses" associated with bends, fittings, valves, etc. The most common equation used to calculate major head losses is the Darcy–Weisbach equation. Older, more empirical approaches are the Hazen–Williams equation and the Prony equation.
For relatively short pipe systems, with a relatively large number of bends and fittings, minor losses can easily exceed major losses. In design, minor losses are usually estimated from tables using coefficients or a simpler and less accurate reduction of minor losses to equivalent length of pipe, a method often used for shortcut calculations of pneumatic conveying lines pressure drop.
See also
Borda–Carnot equation
Dynamic pressure
Minor losses in pipe flow
Total dynamic head
Stage (hydrology)
Head (hydrology)
Hydraulic accumulator
Notes
References
Bear, J. 1972. Dynamics of Fluids in Porous Media, Dover. .
for other references which discuss hydraulic head in the context of hydrogeology, see that page's further reading section
Aquifers
Water
Hydrology
Fluid dynamics
Water wells
Pressure | Hydraulic head | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,924 | [
"Scalar physical quantities",
"Hydrology",
"Mechanical quantities",
"Water",
"Physical quantities",
"Chemical engineering",
"Pressure",
"Water wells",
"Aquifers",
"Environmental engineering",
"Piping",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
1,854,440 | https://en.wikipedia.org/wiki/Drawdown%20%28hydrology%29 | In hydrology, there are two similar but distinct definitions in use for the word drawdown:
In subsurface hydrogeology, drawdown is the reduction in hydraulic head observed at a well in an aquifer, typically due to pumping a well as part of an aquifer test or well test.
In surface water hydrology and civil engineering, drawdown refers to the lowering of the surface elevation of a body of water, the water table, the piezometric surface, or the water surface of a well, as a result of the withdrawal of water.
In either case, drawdown is the change in hydraulic head or water level relative to the initial spatial and temporal conditions of the system. Drawdown is often represented in cross-sectional diagrams of aquifers. A record of hydraulic head, or rate of flow (discharge), versus time is more generally called a hydrograph (in both groundwater and surface water). The main contributor to groundwater drawdown since the 1960s is over-exploitation of groundwater resources.
Drawdown occurs in response to:
pumping from the bore
interference from a neighbouring pumping bore
local, intensive groundwater pumping
regional seasonal decline due to discharge in excess of recharge
Terminology
Aquifer is an underground layer of permeable rock or sand that holds or transmits groundwater below the water table and yields a significant supply of water to a well.
Aquifer test (or a pumping test) is a field experiment in which a well is pumped at a controlled rate and the aquifer's response (drawdown) is measured in one or more observation wells.
Cone of depression is a conically-shaped depression that is produced in a water table as a result of pumping water from a well at a given rate.
Groundwater is water located beneath the earth's surface in pores and fractures of soil and rocks.
Hydraulic head (or piezometric head) is a specific measurement of the potential of water above a vertical datum. It is the height of the free surface of water above a given point beneath the surface.
Pumping level is the level of water in the well during pumping.
Specific capacity is the well yield per unit of drawdown.
Static level is the level of water in the well when no water is being removed from the well by pumping.
Water table is the upper level of the zone of saturation, an underground surface in which the soil or rock is permanently saturated with water.
Well yield is the volume of water per unit time that is produced by the well from pumping.
Methods for measuring drawdown
Transducers are used to measure water levels in groundwater wells, rivers, streams, tanks, open channels and lift stations.
Acoustic well sounders or echometers are a simple, cost effective, and minimally intrusive tool used to measure subsurface pressures and levels.
Electric sounders are a practical and cost-effective method used to measure well water levels. This method uses a weight attached to a stranded insulated wire and an ammeter to indicate a closed circuit. Current supplied from a small battery flows through the circuit when the tip of the wire is in contact with the surface of the water.
Air line method is a convenient and nonintrusive method used to measure water levels that is often used for the repeated testing of wells over 300 feet deep. This method obtains water table depth using a pressure gauge and water displacement.
Wetted tape method is a commonly-used method for measuring water levels up to roughly 90 feet deep. This method uses a lead weight attached to a steel measuring tape.
Ecological impacts of groundwater drawdown
Groundwater drawdown due to excessive water extraction can have adverse ecological impacts. Groundwater environments often have high biodiversity, however, drawdown alters the amount and types of nutrients released to surrounding organisms. In addition, nearby wetlands, fisheries, terrestrial and aquatic habitats may be altered with a reduction in the water available to these ecosystems, sometimes altering species ecophysiology.
Extracting groundwater at a rate that is faster than it can be naturally replenished is often referred to as overdrafting. Overdrafting may decrease the amount of groundwater that naturally feeds surrounding water bodies, including wetlands, lakes, rivers and streams. Additionally, when a cone of depression is formed around a pumping well due to groundwater extraction, nearby groundwater sources may flow toward the well to replenish the cone, taking water from local streams and lakes. This may result in poor water quality in these local water bodies as baseflow water contribution is reduced, which could result in perennial streams becoming more intermittent, and intermittent streams becoming more ephemeral. Finally, drawdown from groundwater extraction may lead to an increased sensitivity of the ecosystem to climate change and may be a contributing factor to sea-level rise and land subsidence.
Related
Subsidence
References
Hydrology
Aquifers
Water
Water wells
Hydraulic engineering | Drawdown (hydrology) | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 984 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Water wells",
"Civil engineering",
"Aquifers",
"Environmental engineering",
"Water",
"Hydraulic engineering"
] |
1,854,663 | https://en.wikipedia.org/wiki/GLIMMER | In bioinformatics, GLIMMER (Gene Locator and Interpolated Markov ModelER) is used to find genes in prokaryotic DNA. "It is effective at finding genes in bacteria, archaea, viruses, typically finding 98-99% of all relatively long protein coding genes". GLIMMER was the first system that used the interpolated Markov model to identify coding regions. The GLIMMER software is open source and is maintained by Steven Salzberg, Art Delcher, and their colleagues at the Center for Computational Biology at Johns Hopkins University. The original GLIMMER algorithms and software were designed by Art Delcher, Simon Kasif and Steven Salzberg and applied to bacterial genome annotation in collaboration with Owen White.
Versions
GLIMMER 1.0
The first version of GLIMMER, i.e., GLIMMER 1.0, was released in 1998 and was published in the paper Microbial gene identification using interpolated Markov models. Interpolated Markov models were used to identify microbial genes in GLIMMER 1.0. GLIMMER considers local compositional sequence dependencies, which makes GLIMMER more flexible and more powerful when compared to a fixed-order Markov model.
A comparison was made between the interpolated Markov model used by GLIMMER and a fifth-order Markov model in the paper Microbial gene identification using interpolated Markov models. "The GLIMMER algorithm found 1680 genes out of 1717 annotated genes in Haemophilus influenzae, whereas the fifth-order Markov model found 1574 genes. GLIMMER found 209 additional genes which were not included in the 1717 annotated genes, whereas the fifth-order Markov model found 104 genes."
GLIMMER 2.0
Second Version of GLIMMER i.e., GLIMMER 2.0 was released in 1999 and it was published in the paper Improved microbial identification with GLIMMER. This paper provides significant technical improvements such as using interpolated context model instead of interpolated Markov model and resolving overlapping genes which improves the accuracy of GLIMMER.
Interpolated context models are used instead of interpolated Markov models, which gives the flexibility to select any bases as context. In an interpolated Markov model, the probability distribution of a base is determined from the immediately preceding bases. If an immediately preceding base is irrelevant to the amino acid translation, an interpolated Markov model still considers that base to determine the probability of the given base, whereas the interpolated context model used in GLIMMER 2.0 can ignore irrelevant bases. False positive predictions were increased in GLIMMER 2.0 to reduce the number of false negative predictions. Overlapping genes are also resolved in GLIMMER 2.0.
Various comparisons between GLIMMER 1.0 and GLIMMER 2.0 were made in the paper Improved microbial identification with GLIMMER which shows improvement in the later version. "Sensitivity of GLIMMER 1.0 ranges from 98.4 to 99.7% with an average of 99.1% where as GLIMMER 2.0 has a sensitivity range from 98.6 to 99.8% with an average of 99.3%. GLIMMER 2.0 is very effective in finding genes of high density. The parasite Trypanosoma brucei, responsible for causing African sleeping sickness is being identified by GLIMMER 2.0"
GLIMMER 3.0
Third version of GLIMMER, "GLIMMER 3.0" was released in 2007 and it was published in the paper Identifying bacterial genes and endosymbiont DNA with Glimmer. This paper describes several major changes made to the GLIMMER system including improved methods to identify coding regions and start codon. Scoring of ORF in GLIMMER 3.0 is done in reverse order i.e., starting from stop codon and moves back towards the start codon. Reverse scanning helps in identifying the coding portion of the gene more accurately which is contained in the context window of IMM. GLIMMER 3.0 also improves the generated training set data by comparing the long-ORF with universal amino acid distribution of widely disparate bacterial genomes."GLIMMER 3.0 has an average long-ORF output of 57% for various organisms where as GLIMMER 2.0 has an average long-ORF output of 39%."
GLIMMER 3.0 reduces the rate of false positive predictions which were increased in GLIMMER 2.0 to reduce the number of false negative predictions. "GLIMMER 3.0 has a start-site prediction accuracy of 99.5% for 3'5' matches where as GLIMMER 2.0 has 99.1% for 3'5' matches. GLIMMER 3.0 uses a new algorithm for scanning coding regions, a new start site detection module, and architecture which integrates all gene predictions across an entire genome."
Minimum description length
Theoretical and Biological Foundation
The GLIMMER project helped introduce and popularize the use of variable length models in Computational Biology and Bioinformatics that subsequently have been applied to numerous problems such as protein classification and others. Variable length modeling was originally pioneered by information theorists and subsequently ingeniously applied and popularized in data compression (e.g. Ziv-Lempel compression). Prediction and compression are intimately linked using Minimum Description Length Principles. The basic idea is to create a dictionary of frequent words (motifs in biological sequences). The intuition is that the frequently occurring motifs are likely to be most predictive and informative. In GLIMMER the interpolated model is a mixture model of the probabilities of these relatively common motifs. Similarly to the development of HMMs in Computational Biology, the authors of GLIMMER were conceptually influenced by the previous application of another variant of interpolated Markov models to speech recognition by researchers such as Fred Jelinek (IBM) and Eric Ristad (Princeton). The learning algorithm in GLIMMER is different from these earlier approaches.
Access
GLIMMER can be downloaded from The Glimmer home page (requires a C++ compiler).
Alternatively, an online version is hosted by NCBI.
How it works
GLIMMER primarily searches for long ORFs. An open reading frame might overlap with another open reading frame, which is resolved using the technique described in the subsection below. Using these long ORFs, which follow a certain amino acid distribution, GLIMMER generates the training set data.
Using these training data, GLIMMER trains all six Markov models of coding DNA, from order zero up to order eight, and also trains the model for noncoding DNA.
GLIMMER calculates the probabilities from the data. Based on the number of observations, GLIMMER determines whether to use the fixed-order Markov model or the interpolated Markov model.
If the number of observations is greater than 400, GLIMMER uses the fixed-order Markov model to obtain the probabilities.
If the number of observations is less than 400, GLIMMER uses the interpolated Markov model, which is briefly explained in the next subsection.
GLIMMER obtains a score for every long ORF generated, using all six coding DNA models and also the non-coding DNA model.
If the score obtained in the previous step is greater than a certain threshold then GLIMMER predicts it to be a gene.
The steps explained above describe the basic functionality of GLIMMER. Various improvements have been made to GLIMMER, and some of them are described in the following subsections.
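The model-selection rule in the steps above (the 400-observation threshold) can be sketched as follows; the function names, data structures and counts are illustrative and are not taken from the GLIMMER source code.

```python
def context_probabilities(counts):
    """Relative frequencies of A, C, G, T following a given context."""
    total = sum(counts.values())
    return {base: n / total for base, n in counts.items()}

def choose_model(context_counts, threshold=400):
    """Sketch of the decision described above: with >= `threshold`
    observations of a context in the training data, the fixed-order
    estimate is used directly; otherwise the score falls back on
    interpolation with shorter contexts (handled elsewhere)."""
    n = sum(context_counts.values())
    if n >= threshold:
        return "fixed-order", context_probabilities(context_counts)
    return "interpolated", None

# A context seen 1,200 times is trusted on its own ...
print(choose_model({"A": 500, "C": 250, "G": 300, "T": 150}))
# ... while a rare context defers to shorter-context models.
print(choose_model({"A": 12, "C": 3, "G": 5, "T": 4}))
```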
The GLIMMER system
GLIMMER system consists of two programs. First program called build-imm, which takes an input set of sequences and outputs the interpolated Markov model as follows.
The probability for each base, i.e., A, C, G, T, for all k-mers for 0 ≤ k ≤ 8 is computed. Then, for each k-mer, GLIMMER computes a weight. The probability of a new sequence S of length n under the model M is computed as:

P(S | M) = ∏ (x = 1 to n) IMM_8(S_x)

where S_x is the oligomer ending at position x and IMM_8(S_x), the 8th-order interpolated Markov model score, is computed recursively as

IMM_k(S_x) = λ_k(S_{x−1}) · P_k(S_x) + [1 − λ_k(S_{x−1})] · IMM_{k−1}(S_x)

"where λ_k(S_{x−1}) is the weight of the k-mer ending at position x−1 in the sequence S and P_k(S_x) is the estimate obtained from the training data of the probability of the base located at position x in the k-th-order model."
The probability P_k(S_x) of the base at position x given the k previous bases is estimated from the observed frequency of that base following the same k-base context in the training data.
"The value of λ_k(S_{x−1}) associated with P_k(S_x) can be regarded as a measure of confidence in the accuracy of this value as an estimate of the true probability. GLIMMER uses two criteria to determine λ. The first of these is simple frequency occurrence: if the number of occurrences of the context string in the training data exceeds a specific threshold value, then λ is set to 1.0. The current default value for the threshold is 400, which gives 95% confidence. When there are insufficient sample occurrences of a context string, build-imm employs additional criteria to determine the λ value. For a given context string of length i, build-imm compares the observed frequencies of each of the four possible following bases (A, C, G, T) with the previously calculated interpolated Markov model probabilities using the next shorter context. Using a χ² test, build-imm determines how likely it is that the four observed frequencies are consistent with the IMM values from the next shorter context."
The second program, called glimmer, then uses this IMM to identify putative genes in an entire genome. GLIMMER identifies all open reading frames that score higher than the threshold and checks for overlapping genes. Resolving overlapping genes is explained in the next subsection.
The equations and the explanation of the terms used above are taken from the paper 'Microbial gene identification using interpolated Markov models'.
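A minimal sketch of the interpolation recursion described above is given here, assuming the fixed-order probabilities and the λ weights have already been estimated from the training data; the function names and data structures are illustrative, not taken from build-imm.

```python
import math

# Illustrative sketch of the interpolated Markov model (IMM) recursion.
# `prob(k, context, base)` is assumed to return the fixed k-th-order probability
# of `base` following `context`, and `weight(k, context)` the weight lambda_k.

def imm(k, context, base, prob, weight):
    """Interpolated probability of `base` given `context`, mixing orders 0..k."""
    if k == 0:
        return prob(0, "", base)
    lam = weight(k, context[-k:])  # confidence in the k-th-order estimate
    return lam * prob(k, context[-k:], base) + (1.0 - lam) * imm(k - 1, context, base, prob, weight)

def sequence_log_score(seq, prob, weight, k_max=8):
    """Sum of log IMM probabilities over the sequence (GLIMMER uses k_max = 8)."""
    total = 0.0
    for x, base in enumerate(seq):
        context = seq[max(0, x - k_max):x]
        total += math.log(imm(min(k_max, x), context, base, prob, weight))
    return total
```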
Resolving overlapping genes
In GLIMMER 1.0, when two genes A and B overlap, the overlap region is scored. If A is longer than B, and if A scores higher on the overlap region, and if moving B's start site will not resolve the overlap, then B is rejected.
GLIMMER 2.0 provided a better solution for resolving overlaps. In GLIMMER 2.0, when two potential genes A and B overlap, the overlap region is scored. Suppose gene A scores higher; four different cases, depending on the relative orientation of the two genes, are then considered.
In the first case, moving the start sites cannot remove the overlap. If A is significantly longer than B, then B is rejected; otherwise both A and B are called genes, with the overlap flagged as doubtful.
In the second case, moving the start of B can resolve the overlap, and A and B can then be called non-overlapping genes; but if B is significantly shorter than A, B is rejected instead.
In the third case, moving the start of A can resolve the overlap. A is moved only if the overlap is a small fraction of A; otherwise B is rejected.
In the fourth case, both A and B can be moved. The start of B is first moved until the overlap region scores higher for B; then the start of A is moved until it scores higher for A; then B again, and so on, until either the overlap is eliminated or no further moves can be made.
The above cases are taken from the paper 'Identifying bacterial genes and endosymbiont DNA with Glimmer'.
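The iterative rule in the fourth case can be illustrated with a small toy model, in which genes are reduced to intervals on a shared axis and per-base scores to dictionaries; these representations, the step size and the helper names are assumptions made for the sketch, not details of Glimmer.

```python
# Toy illustration (not Glimmer code) of alternately moving start sites until
# the overlap disappears or no further moves can be made.

def overlap(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

def region_score(scores, region):
    lo, hi = region
    return sum(scores.get(i, 0.0) for i in range(lo, hi))

def resolve(a, b, a_scores, b_scores, step=3):
    """Move B's start until B outscores A on the overlap, then A's start, and so on."""
    while True:
        ov = overlap(a, b)
        if ov is None:
            return a, b                                   # overlap eliminated
        if region_score(b_scores, ov) <= region_score(a_scores, ov) and b[0] + step < b[1]:
            b = (b[0] + step, b[1])                       # shorten B from its start
        elif a[0] + step < a[1]:
            a = (a[0] + step, a[1])                       # then shorten A from its start
        else:
            return a, b                                   # no further moves can be made
```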
Ribosome binding sites
A ribosome binding site (RBS) signal can be used to find the true start-site position. GLIMMER results were originally passed as input to the RBSfinder program to predict ribosome binding sites; GLIMMER 3.0 integrates this function into the gene-prediction step itself.
The ELPH software (reported in the paper to be highly effective at identifying RBSs) is used to identify ribosome binding sites. ELPH uses a Gibbs sampling algorithm to identify a shared motif in any set of sequences. The shared motif sequences and their length are given as input to ELPH, which then computes a position weight matrix (PWM) that GLIMMER 3 uses to score any potential RBS found by RBSfinder. This process is used when a substantial number of training genes is available. If there are too few training genes, GLIMMER 3 can bootstrap itself by generating a set of gene predictions that are used as input to ELPH; ELPH then computes a PWM, and this PWM can be applied again to the same set of genes to obtain more accurate start-site results. The process can be repeated for several iterations to obtain more consistent PWM and gene-prediction results.
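A position weight matrix of the kind ELPH produces can be used to score candidate sites as in the sketch below; the PWM representation and the sliding-window search are illustrative assumptions, not the actual ELPH or GLIMMER 3 implementation.

```python
# Illustrative PWM scoring of candidate ribosome binding sites.
# `pwm` is assumed to be a list of per-position dictionaries mapping a base to a
# log-odds weight, e.g. pwm[0]["A"] is the weight of A at the first motif position.

def pwm_score(window, pwm):
    """Sum of per-position weights for a window the same length as the motif."""
    return sum(column.get(base, float("-inf")) for column, base in zip(pwm, window))

def best_rbs(upstream, pwm):
    """Slide the motif over an upstream region; return (best offset, best score)."""
    width = len(pwm)
    best = (None, float("-inf"))
    for i in range(len(upstream) - width + 1):
        score = pwm_score(upstream[i:i + width], pwm)
        if score > best[1]:
            best = (i, score)
    return best
```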
Performance
Glimmer supports genome annotation efforts on a wide range of bacterial, archaeal, and viral species. In a large-scale reannotation effort at the DNA Data Bank of Japan (DDBJ, which mirrors GenBank), Kosuge et al. (2006) examined the gene-finding methods used for 183 genomes. They reported that of these projects, Glimmer was the gene finder for 49%, followed by GeneMark with 12%, with other algorithms used in 3% or fewer of the projects. (They also reported that 33% of genomes used "other" programs, which in many cases meant that they could not identify the method. Excluding those cases, Glimmer was used for 73% of the genomes for which the methods could be unambiguously identified.) Glimmer was used by the DDBJ to re-annotate all bacterial genomes in the International Nucleotide Sequence Databases. It is also being used by this group to annotate viruses. Glimmer is part of the bacterial annotation pipeline at the National Center for Biotechnology Information (NCBI), which also maintains a web server for Glimmer, as do sites in Germany and Canada.
According to Google Scholar, as of early 2011 the original Glimmer article (Salzberg et al., 1998) has been cited 581 times, and the Glimmer 2.0 article (Delcher et al., 1999) has been cited 950 times.
References
External links
The Glimmer home page at CCB, Johns Hopkins University, from which the software can be downloaded.
Bioinformatics software
Markov models | GLIMMER | [
"Biology"
] | 2,939 | [
"Bioinformatics",
"Bioinformatics software"
] |
1,855,478 | https://en.wikipedia.org/wiki/Azide-alkyne%20Huisgen%20cycloaddition | The azide-alkyne Huisgen cycloaddition is a 1,3-dipolar cycloaddition between an azide and a terminal or internal alkyne to give a 1,2,3-triazole. Rolf Huisgen was the first to understand the scope of this organic reaction. American chemist Karl Barry Sharpless has referred to copper-catalyzed version of this cycloaddition as "the cream of the crop" of click chemistry and "the premier example of a click reaction".
In the reaction above, azide 2 reacts neatly with alkyne 1 to afford the triazole product as a mixture of the 1,4-adduct (3a) and the 1,5-adduct (3b) at 98 °C in 18 hours.
The standard 1,3-cycloaddition between an azide 1,3-dipole and an alkene as dipolarophile has largely been ignored due to lack of reactivity as a result of electron-poor olefins and elimination side reactions. Some success has been found with non-metal-catalyzed cycloadditions, such as the reactions using dipolarophiles that are electron-poor olefins or alkynes.
Although azides are not the most reactive 1,3-dipole available for reaction, they are preferred for their relative lack of side reactions and stability in typical synthetic conditions.
Copper catalysis
A notable variant of the Huisgen 1,3-dipolar cycloaddition is the copper(I) catalyzed variant, no longer a true concerted cycloaddition, in which organic azides and terminal alkynes are united to afford 1,4-regioisomers of 1,2,3-triazoles as sole products (substitution at positions 1' and 4' as shown above). The copper(I)-catalyzed variant was first reported in 2002 in independent publications by Morten Meldal at the Carlsberg Laboratory in Denmark and Valery Fokin and K. Barry Sharpless at the Scripps Research Institute.
While the copper(I)-catalyzed variant gives rise to a triazole from a terminal alkyne and an azide, formally it is not a 1,3-dipolar cycloaddition and thus should not be termed a Huisgen cycloaddition. This reaction is better termed the Copper(I)-catalyzed Azide-Alkyne Cycloaddition (CuAAC).
While the reaction can be performed using commercial sources of copper(I) such as cuprous bromide or iodide, the reaction works much better using a mixture of copper(II) (e.g. copper(II) sulfate) and a reducing agent (e.g. sodium ascorbate) to produce Cu(I) in situ. As Cu(I) is unstable in aqueous solvents, stabilizing ligands are effective for improving the reaction outcome, especially if tris(benzyltriazolylmethyl)amine (TBTA) is used. The reaction can be run in a variety of solvents, and mixtures of water and a variety of (partially) miscible organic solvents including alcohols, DMSO, DMF, tBuOH and acetone. Owing to the powerful coordinating ability of nitriles towards Cu(I), it is best to avoid acetonitrile as the solvent. The starting reagents need not be completely soluble for the reaction to be successful. In many cases, the product can simply be filtered from the solution as the only purification step required.
NH-1,2,3-triazoles are also prepared from alkynes in a sequence called the Banert cascade.
The utility of the Cu(I)-catalyzed click reaction has also been demonstrated in the polymerization reaction of a bis-azide and a bis-alkyne with copper(I) and TBTA to a conjugated fluorene based polymer. The degree of polymerization easily exceeds 50. With a stopper molecule such as phenyl azide, well-defined phenyl end-groups are obtained.
The copper-mediated azide-alkyne cycloaddition is receiving widespread use in material and surface sciences. Most variations in coupling polymers with other polymers or small molecules have been explored. A current shortcoming is that the terminal alkyne appears to participate in free-radical polymerizations. This requires protection of the terminal alkyne with a trimethylsilyl protecting group and subsequent deprotection after the radical reaction is completed. Similarly, the use of organic solvents, copper(I) and inert atmospheres to carry out the cycloaddition with many polymers makes the "click" label inappropriate for such reactions. An aqueous protocol for performing the cycloaddition with free-radical polymers is highly desirable.
The CuAAC click reaction also effectively couples polystyrene and bovine serum albumin (BSA). The result is an amphiphilic biohybrid. BSA contains a thiol group at Cys-34 which is functionalized with an alkyne group. In water the biohybrid micelles with a diameter of 30 to 70 nanometer form aggregates.
Copper catalysts
The use of a Cu catalyst in water was an improvement over the same reaction first popularized by Rolf Huisgen in the 1970s, which he ran at elevated temperatures. The traditional reaction is slow and thus requires high temperatures. However, the azides and alkynes are both kinetically stable.
As mentioned above, copper-catalysed click reactions work essentially on terminal alkynes. The Cu species undergo metal insertion reaction into the terminal alkynes. The Cu(I) species may either be introduced as preformed complexes, or are otherwise generated in the reaction pot itself by one of the following ways:
A Cu2+ compound is added to the reaction in presence of a reducing agent (e.g. sodium ascorbate) which reduces the Cu from the (+2) to the (+1) oxidation state. The advantage of generating the Cu(I) species in this manner is it eliminates the need of a base in the reaction. Also the presence of reducing agent makes up for any oxygen which may have gotten into the system. Oxygen oxidises the Cu(I) to Cu(II) which impedes the reaction and results in low yields. One of the more commonly used Cu compounds is CuSO4.
Oxidation of Cu(0) metal
Halides of copper may be used where solubility is an issue. However, the iodide and bromide Cu salts require either the presence of amines or higher temperatures.
Commonly used solvents are polar aprotic solvents such as THF, DMSO, acetonitrile, DMF as well as in non-polar aprotic solvents such as toluene. Neat solvents or a mixture of solvents may be used.
DIPEA (N,N-Diisopropylethylamine) and Et3N (triethylamine) are commonly used bases.
Mechanism
A mechanism for the reaction has been suggested based on density functional theory calculations. Copper is a 1st row transition metal. It has the electronic configuration [Ar] 3d10 4s1. The copper (I) species generated in situ forms a pi complex with the triple bond of a terminal alkyne. In the presence of a base, the terminal hydrogen, being the most acidic, is deprotonated first to give a Cu acetylide intermediate. Studies have shown that the reaction is second order with respect to Cu. It has been suggested that the transition state involves two copper atoms. One copper atom is bonded to the acetylide while the other Cu atom serves to activate the azide. The metal center coordinates with the electrons on the nitrogen atom. The azide and the acetylide are not coordinated to the same Cu atom in this case. The ligands employed are labile and are weakly coordinating. The azide displaces one ligand to generate a copper-azide-acetylide complex. At this point cyclization takes place. This is followed by protonation; the source of proton being the hydrogen which was pulled off from the terminal acetylene by the base. The product is formed by dissociation and the catalyst ligand complex is regenerated for further reaction cycles.
The reaction is assisted by the copper, which, when coordinated with the acetylide lowers the pKa of the alkyne C-H by up to 9.8 units. Thus under certain conditions, the reaction may be carried out even in the absence of a base.
In the uncatalysed reaction the alkyne remains a poor electrophile. Thus high energy barriers lead to slow reaction rates.
Ligand assistance
The ligands employed are usually labile i.e. they can be displaced easily. Though the ligand plays no direct role in the reaction the presence of a ligand has its advantages.
The ligand protects the Cu ion from interactions leading to degradation and formation of side products and also prevents the oxidation of the Cu(I) species to the Cu(II). Furthermore, the ligand functions as a proton acceptor thus eliminating the need of a base.
Ruthenium catalysis
The ruthenium-catalysed 1,3-dipolar azide-alkyne cycloaddition (RuAAC) gives the 1,5-triazole.
Unlike CuAAC in which only terminal alkynes reacted, in RuAAC both terminal and internal alkynes can participate in the reaction. This suggests that ruthenium acetylides are not involved in the catalytic cycle.
The proposed mechanism suggests that in the first step, the spectator ligands undergo displacement reaction to produce an activated complex which is converted, through oxidative coupling of an alkyne and an azide to the ruthenium containing metallacycle (Ruthenacycle). The new C-N bond is formed between the more electronegative and less sterically demanding carbon of the alkyne and the terminal nitrogen of the azide. The metallacycle intermediate then undergoes reductive elimination releasing the aromatic triazole product and regenerating the catalyst or the activated complex for further reaction cycles.
Cp*RuCl(PPh3)2, Cp*Ru(COD) and Cp*[RuCl4] are commonly used ruthenium catalysts. Catalysts containing cyclopentadienyl (Cp) group are also used. However, better results are observed with the pentamethylcyclopentadienyl(Cp*) version. This may be due to the sterically demanding Cp* group which facilitates the displacement of the spectator ligands.
Silver catalysis
Recently, a general Ag(I)-catalyzed azide–alkyne cycloaddition reaction (Ag-AAC), leading to 1,4-triazoles, has been reported. Mechanistic features are similar to the generally accepted mechanism of the copper(I)-catalyzed process. Silver(I) salts alone are not sufficient to promote the cycloaddition; however, a ligated Ag(I) source has proven to be exceptional for the AgAAC reaction.
Curiously, pre-formed silver acetylides do not react with azides; however, silver acetylides do react with azides under catalysis with copper(I).
References
Cycloadditions
Name reactions
| Azide-alkyne Huisgen cycloaddition | [
"Chemistry"
] | 2,441 | [
"Name reactions",
"Ring forming reactions",
"Organic reactions"
] |
1,855,722 | https://en.wikipedia.org/wiki/Acousto-optic%20modulator | An acousto-optic modulator (AOM), also called a Bragg cell or an acousto-optic deflector (AOD), uses the acousto-optic effect to diffract and shift the frequency of light using sound waves (usually at radio-frequency). They are used in lasers for Q-switching, telecommunications for signal modulation, and in spectroscopy for frequency control. A piezoelectric transducer is attached to a material such as glass. An oscillating electric signal drives the transducer to vibrate, which creates sound waves in the material. These can be thought of as moving periodic planes of expansion and compression that change the index of refraction. Incoming light scatters (see Brillouin scattering) off the resulting periodic index modulation and interference occurs similar to Bragg diffraction. The interaction can be thought of as a three-wave mixing process resulting in sum-frequency generation or difference-frequency generation between phonons and photons.
Principles of operation
A typical AOM operates under the Bragg condition, where the incident light arrives at the Bragg angle measured from the plane perpendicular to the sound wave's propagation direction.
Diffraction
When the incident light beam is at the Bragg angle, a diffraction pattern emerges in which a diffracted beam of order m occurs at each angle θ that satisfies:
sin θ = m·λ / (2Λ)
Here, m = …, −2, −1, 0, +1, +2, … is the order of diffraction, λ is the wavelength of light in vacuum, and Λ is the wavelength of the sound. Note that the m = 0 order travels in the same direction as the incident beam.
Diffraction from a sinusoidal modulation in a thin crystal mostly results in the m = −1, 0 and +1 diffraction orders. Cascaded diffraction in medium-thickness crystals leads to higher orders of diffraction. In thick crystals with weak modulation, only phasematched orders are diffracted; this is called Bragg diffraction. The angular deflection can range from 1 to 5000 beam widths (the number of resolvable spots). Consequently, the deflection is typically limited to tens of milliradians.
The angular separation between adjacent orders for Bragg diffraction is twice the Bragg angle, i.e.
Δθ = λ / Λ.
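The relations above are easy to evaluate numerically; in the sketch below the optical wavelength, drive frequency and acoustic velocity are example values chosen for illustration, not the specification of any particular modulator.

```python
import math

wavelength = 633e-9         # optical wavelength in vacuum (m), HeNe line
acoustic_freq = 80e6        # RF drive frequency (Hz), assumed
acoustic_velocity = 4200.0  # acoustic velocity in the crystal (m/s), assumed

Lambda = acoustic_velocity / acoustic_freq            # acoustic wavelength
bragg_angle = math.asin(wavelength / (2 * Lambda))    # Bragg angle

for m in (-1, 0, +1):
    theta = math.asin(m * wavelength / (2 * Lambda))  # angle of order m
    print(f"order {m:+d}: {math.degrees(theta):+.3f} degrees")

# Adjacent orders are separated by twice the Bragg angle.
print(f"separation between adjacent orders: {math.degrees(2 * bragg_angle):.3f} degrees")
```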
Intensity
The amount of light diffracted by the sound wave depends on the intensity of the sound. Hence, the intensity of the sound can be used to modulate the intensity of the light in the diffracted beam. Typically, the intensity that is diffracted into the m = 0 order can be varied between 15% and 99% of the input light intensity. Likewise, the intensity of the m = 1 order can be varied between 0% and 80%.
An expression for the diffraction efficiency into the m = 1 order is
η = sin²(Δφ/2),
where the external phase excursion Δφ is proportional to √P_RF / λ, with P_RF the RF drive power applied to the transducer (the proportionality constant depends on the acousto-optic figure of merit and the transducer geometry).
To obtain the same efficiency at a different optical wavelength, the RF power in the AOM has to be proportional to the square of the optical wavelength. Note that this formula also tells us that, when we start at a high RF power, the phase excursion might already be beyond the first peak of the sine-squared function; in that case, as we increase the power we would settle at the second peak at a very high RF power, overdriving the AOM and potentially damaging the crystal or other components. To avoid this problem, one should always start with a very low RF power and slowly increase it to settle at the first peak.
Note that there are two configurations that satisfy the Bragg condition: if the component of the incident beam's wavevector along the sound wave's propagation direction runs against the sound wave, the Bragg diffraction/scattering process gives maximum efficiency into the m = +1 order, which has a positive frequency shift; however, if the incident beam runs along the sound wave, the maximum diffraction efficiency into the m = −1 order is achieved, which has a negative frequency shift.
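The sine-squared dependence of the efficiency on drive power can be illustrated numerically; the proportionality constant between the phase excursion and the square root of RF power used below is arbitrary and chosen only for the example.

```python
import math

K = 2.0e-6           # assumed proportionality constant (rad * m / sqrt(W)), illustrative only
WAVELENGTH = 633e-9  # optical wavelength (m)

def efficiency(p_rf):
    """eta = sin^2(dphi/2), with dphi assumed proportional to sqrt(P_RF)/wavelength."""
    dphi = K * math.sqrt(p_rf) / WAVELENGTH
    return math.sin(dphi / 2.0) ** 2

# Increase the RF power in small steps and stop at the first local maximum --
# the safe operating point recommended in the text above.
p, step = 0.001, 0.001
while efficiency(p + step) > efficiency(p):
    p += step
print(f"first efficiency peak near {p:.2f} W (eta = {efficiency(p):.3f})")
```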
Frequency
One difference from Bragg diffraction is that the light is scattering from moving planes. A consequence of this is that the frequency of the diffracted beam in order m will be Doppler-shifted by an amount equal to m times the frequency of the sound wave F.
This frequency shift can be also understood by the fact that energy and momentum (of the photons and phonons) are conserved in the scattering process. A typical frequency shift varies from 27 MHz, for a less-expensive AOM, to 1 GHz, for a state-of-the-art commercial device. In some AOMs, two acoustic waves travel in opposite directions in the material, creating a standing wave. In this case the spectrum of the diffracted beam contains multiple frequency shifts, in any case integer multiples of the frequency of the sound wave.
Phase
In addition, the phase of the diffracted beam will also be shifted by the phase of the sound wave. The phase can be changed by an arbitrary amount.
Polarization
Collinear transverse acoustic waves or perpendicular longitudinal waves can change the polarization. The acoustic waves induce a birefringent phase-shift, much like in a Pockels cell. The acousto-optic tunable filter, especially the dazzler, which can generate variable pulse shapes, is based on this principle.
Mode-locking
Acousto-optic modulators are much faster than typical mechanical devices such as tiltable mirrors. The time it takes an AOM to shift the exiting beam is roughly limited to the transit time of the sound wave across the beam (typically 5 to 100 ns). This is fast enough to create active modelocking in an ultrafast laser. When faster control is necessary, electro-optic modulators are used. However, these require very high voltages (e.g. 1–10 kV), whereas AOMs offer more deflection range, simple design, and low power consumption (less than 3 W).
Applications
Q-switching
Regenerative amplifiers
Cavity dumping
Modelocking
Laser Doppler vibrometer
Film scanner
Confocal microscopy
Synthetic array heterodyne detection
Hyperspectral Imaging
See also
Acousto-optics
Acousto-optic deflector
Acousto-optical spectrometer
Electro-optic modulator
Jeffree cell
Liquid crystal tunable filter
Photoelasticity
Pockels effect
References
External links
Olympus Microscopy Resource Center
Optical devices | Acousto-optic modulator | [
"Materials_science",
"Engineering"
] | 1,278 | [
"Glass engineering and science",
"Optical devices"
] |
1,855,974 | https://en.wikipedia.org/wiki/Tautological%20one-form | In mathematics, the tautological one-form is a special 1-form defined on the cotangent bundle T*Q of a manifold Q. In physics, it is used to create a correspondence between the velocity of a point in a mechanical system and its momentum, thus providing a bridge between Lagrangian mechanics and Hamiltonian mechanics (on the manifold T*Q).
The exterior derivative of this form defines a symplectic form giving T*Q the structure of a symplectic manifold. The tautological one-form plays an important role in relating the formalism of Hamiltonian mechanics and Lagrangian mechanics. The tautological one-form is sometimes also called the Liouville one-form, the Poincaré one-form, the canonical one-form, or the symplectic potential. A similar object is the canonical vector field on the tangent bundle.
Definition in coordinates
To define the tautological one-form, select a coordinate chart U on Q and a canonical coordinate system (q¹, …, qⁿ, p₁, …, pₙ) on T*U. Pick an arbitrary point m ∈ T*Q. By definition of the cotangent bundle, m = (q, p), where q ∈ Q and p is a covector in the cotangent space at q. The tautological one-form is given by
θ_m = Σᵢ₌₁ⁿ pᵢ dqⁱ,
with n = dim Q and p₁, …, pₙ being the coordinate representation of the covector p.
Any coordinates on T*Q that preserve this definition, up to a total differential (exact form), may be called canonical coordinates; transformations between different canonical coordinate systems are known as canonical transformations.
The canonical symplectic form, also known as the Poincaré two-form, is given by
ω = −dθ = Σᵢ dqⁱ ∧ dpᵢ.
The extension of this concept to general fibre bundles is known as the solder form. By convention, one uses the phrase "canonical form" whenever the form has a unique, canonical definition, and one uses the term "solder form", whenever an arbitrary choice has to be made. In algebraic geometry and complex geometry the term "canonical" is discouraged, due to confusion with the canonical class, and the term "tautological" is preferred, as in tautological bundle.
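The coordinate formulas above can be checked symbolically in a small example; the sketch below (for dim Q = 2) computes the antisymmetrized coefficient matrix of dθ and compares it with the coefficient matrix of Σ dqⁱ ∧ dpᵢ. It is only an illustration of the identity ω = −dθ, using plain sympy rather than any dedicated differential-geometry package.

```python
import sympy as sp

# Coordinates on T*Q for dim Q = 2, ordered as (q1, q2, p1, p2).
q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
coords = [q1, q2, p1, p2]

# Coefficients of theta = p1*dq1 + p2*dq2 in the basis (dq1, dq2, dp1, dp2).
theta = [p1, p2, sp.Integer(0), sp.Integer(0)]

# Antisymmetrized coefficients (d theta)_{ab} = d_a theta_b - d_b theta_a.
d_theta = sp.Matrix(4, 4, lambda a, b: sp.diff(theta[b], coords[a]) - sp.diff(theta[a], coords[b]))

# Coefficient matrix of omega = dq1 ^ dp1 + dq2 ^ dp2 in the same convention.
omega = sp.zeros(4, 4)
omega[0, 2] = omega[1, 3] = 1
omega[2, 0] = omega[3, 1] = -1

print(d_theta == -omega)   # True: omega = -d(theta) in these coordinates
```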
Coordinate-free definition
The tautological 1-form can also be defined rather abstractly as a form on phase space. Let Q be a manifold and M = T*Q be its cotangent bundle or phase space. Let
π : M → Q
be the canonical fiber bundle projection, and let
dπ : TM → TQ
be the induced tangent map. Let m be a point on M. Since M is the cotangent bundle, we can understand m to be a linear map on the tangent space at q = π(m):
m : T_q Q → ℝ.
That is, m lies in the fiber over q. The tautological one-form θ_m at the point m is then defined to be
θ_m = m ∘ dπ_m.
It is a linear map
θ_m : T_m(T*Q) → ℝ,
and so
θ : T*Q → T*(T*Q).
Symplectic potential
The symplectic potential is generally defined a bit more freely, and also only defined locally: it is any one-form θ such that ω = −dθ; in effect, symplectic potentials differ from the canonical 1-form by a closed form.
Properties
The tautological one-form is the unique one-form that "cancels" pullback. That is, let
β be a 1-form on Q; that is, β is a section of T*Q. For an arbitrary 1-form σ on T*Q, the pullback of σ by β is, by definition, β*σ := (dβ)*σ. Here, dβ : TQ → T(T*Q) is the pushforward of β. Like β, β*σ is a 1-form on Q. The tautological one-form θ is the only form with the property that β*θ = β, for every 1-form β on Q.
So, by the commutation between pull-back and exterior derivative, β*ω = −β*(dθ) = −d(β*θ) = −dβ.
Action
If H is a Hamiltonian on the cotangent bundle and X_H is its Hamiltonian vector field, then the corresponding action S is given by
S = θ(X_H).
In more prosaic terms, the Hamiltonian flow represents the classical trajectory of a mechanical system obeying the Hamilton–Jacobi equations of motion. The Hamiltonian flow is the integral of the Hamiltonian vector field, and so one writes, using traditional notation for action-angle variables:
S(E) = Σᵢ ∮ pᵢ dqⁱ,
with the integral understood to be taken over the manifold defined by holding the energy E constant: H = E = const.
On Riemannian and Pseudo-Riemannian Manifolds
If the manifold Q has a Riemannian or pseudo-Riemannian metric g, then corresponding definitions can be made in terms of generalized coordinates. Specifically, if we take the metric to be a map
g : TQ → T*Q,
then define
Θ = g*θ
and
Ω = −dΘ = g*ω.
In generalized coordinates (q¹, …, qⁿ, q̇¹, …, q̇ⁿ) on TQ one has
Θ = Σᵢⱼ g_ij q̇ⁱ dqʲ
and
Ω = Σᵢⱼ g_ij dqⁱ ∧ dq̇ʲ + Σᵢⱼₖ (∂g_ij/∂qᵏ) q̇ⁱ dqʲ ∧ dqᵏ.
The metric allows one to define a unit-radius sphere bundle in T*Q. The canonical one-form restricted to this sphere forms a contact structure; the contact structure may be used to generate the geodesic flow for this metric.
References
Ralph Abraham and Jerrold E. Marsden, Foundations of Mechanics, Benjamin-Cummings, London, 1978. See section 3.2.
Symplectic geometry
Hamiltonian mechanics
Lagrangian mechanics | Tautological one-form | [
"Physics",
"Mathematics"
] | 890 | [
"Theoretical physics",
"Lagrangian mechanics",
"Classical mechanics",
"Hamiltonian mechanics",
"Dynamical systems"
] |
20,776,409 | https://en.wikipedia.org/wiki/George%20Streisinger | George Streisinger (December 27, 1927 – August 11, 1984) was an American molecular biologist and co-founder of the Institute of Molecular Biology at the University of Oregon. He was the first person to clone a vertebrate, cloning zebrafish in his University of Oregon laboratory. He also pioneered work in the genetics of the T-even bacterial viruses. In 1972, along with William Franklin Dove he was awarded a Guggenheim Fellowship award, and in 1975 he was selected as a member of the National Academy of Sciences, making him the second Oregonian to receive the distinction. The University of Oregon's Institute of Molecular Biology named their main building "Streisinger Hall" in his honor.
Personal History
George Streisinger was born in Budapest, Hungary, on December 27, 1927. Because they were Jewish, in 1937 his family left Budapest for New York to escape Nazi persecution. Streisinger attended New York public schools and graduated from the Bronx High School of Science in 1944. He obtained a B.S. degree from Cornell University in 1950, and a Ph.D. from the University of Illinois in 1953. He completed postdoctoral studies at the California Institute of Technology from 1953 to 1956. He married Lotte Sielman in 1949. Streisinger accepted a post at the University of Oregon Institute of Molecular Biology in Eugene in 1960. Streisinger was well known as an innovative professor in and out of the classroom, conscripting a dance class to illustrate protein synthesis, and he often requested to teach beginning and non-major biology students. He was very politically active, organizing grass-roots resistance to the Vietnam War and legislative opposition to John Kennedy's civil defense program. He testified successfully in support of banning mutagenic herbicides in Douglas fir reforestation, and led and won a battle to exclude secret war department research from the University of Oregon campus.
His wife, Lotte, was a noted artist and community activist, and the founder of the Eugene Saturday Market, the inspiration for the Portland Oregon Saturday Market.
Research
Following his graduation from Cornell, George undertook graduate studies in the genetics of T-even coliphage with Salvador Luria in the Bacteriology Department of the University of Illinois. His studies revealed phenotypic mixing, in which a phage with the host-range genotype of one phage type was found in a particle that was phenotypically dissimilar. When published in 1956, these studies had a profound impact on the study of viral biology.
During his postdoc at Caltech, with Jean Weigle, he undertook further studies on T2 × T4 hybrids, which led to the discovery of DNA modification (by glucosylation).
At the University of Oregon, Streisinger pioneered the study of zebrafish in his lab. Zebrafish can be genetically modified easily, and researchers can modify them to mimic the traits of certain diseases. In analyzing these created diseases, scientists seek solutions to diseases which affect humans. Over 9,000 researchers in 1,551 labs throughout 31 countries study zebrafish, and many of them received their initial training at the University of Oregon.
References
Cornell University alumni
Jewish American scientists
American people of Hungarian-Jewish descent
1927 births
1984 deaths
American molecular biologists
Cloning
20th-century American Jews | George Streisinger | [
"Engineering",
"Biology"
] | 670 | [
"Cloning",
"Genetic engineering"
] |
20,781,979 | https://en.wikipedia.org/wiki/Andronov%E2%80%93Pontryagin%20criterion | The Andronov–Pontryagin criterion is a necessary and sufficient condition for the stability of dynamical systems in the plane. It was derived by Aleksandr Andronov and Lev Pontryagin in 1937.
Statement
A dynamical system
ẋ = v(x), x ∈ ℝ²,
where v is a C¹ vector field on the plane, is orbitally topologically stable if and only if the following two conditions hold:
All equilibrium points and periodic orbits are hyperbolic.
There are no saddle connections.
The same statement holds if the vector field is defined on the unit disk and is transversal to the boundary.
Clarifications
Orbital topological stability of a dynamical system means that for any sufficiently small perturbation (in the C1-metric), there exists a homeomorphism close to the identity map which transforms the orbits of the original dynamical system to the orbits of the perturbed system (cf structural stability).
The first condition of the theorem is known as global hyperbolicity. A zero of a vector field v, i.e. a point x0 where v(x0)=0, is said to be hyperbolic if none of the eigenvalues of the linearization of v at x0 is purely imaginary. A periodic orbit of a flow is said to be hyperbolic if none of the eigenvalues of the Poincaré return map at a point on the orbit has absolute value one.
Finally, saddle connection refers to a situation where an orbit from one saddle point enters the same or another saddle point, i.e. the unstable and stable separatrices are connected (cf homoclinic orbit and heteroclinic orbit).
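As a concrete illustration of the hyperbolicity condition on equilibrium points, the sketch below checks the eigenvalues of the linearization for a damped pendulum vector field; the example system, damping value and tolerance are chosen for illustration only.

```python
import numpy as np

c = 0.5  # damping coefficient (illustrative)

def jacobian(x, y):
    """Linearization of the planar field v(x, y) = (y, -sin(x) - c*y)."""
    return np.array([[0.0, 1.0],
                     [-np.cos(x), -c]])

def is_hyperbolic(x, y, tol=1e-12):
    """A zero of v is hyperbolic if no eigenvalue of the linearization is purely
    imaginary -- equivalently, every eigenvalue has nonzero real part."""
    eigenvalues = np.linalg.eigvals(jacobian(x, y))
    return all(abs(ev.real) > tol for ev in eigenvalues)

# Equilibria of the pendulum field lie at (k*pi, 0).
for x0 in (0.0, np.pi):
    print(f"equilibrium ({x0:.2f}, 0): hyperbolic = {is_hyperbolic(x0, 0.0)}")
```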
See also
Peixoto's theorem
References
Dynamical systems | Andronov–Pontryagin criterion | [
"Physics",
"Mathematics"
] | 360 | [
"Mechanics",
"Dynamical systems"
] |
20,783,558 | https://en.wikipedia.org/wiki/Water%20politics%20in%20the%20Jordan%20River%20basin | Water politics in the Jordan River basin refers to political issues of water within the Jordan River drainage basin, including competing claims and water usage, and issues of riparian rights of surface water along transnational rivers, as well as the availability and usage of ground water. Water resources in the region are scarce, and these issues directly affect the five political subdivisions (Israel, the West Bank, Lebanon, Syria and Jordan) located within and bordering the basin, which were created since the collapse, during World War I, of the former single controlling entity, the Ottoman Empire. Because of the scarcity of water and a unique political context, issues of both supply and usage outside the physical limits of the basin have been included historically.
The Jordan river basin and its water are central issues of both the Arab–Israeli conflict (including Israeli–Palestinian conflict), as well as the more recent Syrian civil war. The Jordan River is long and, over most of its distance, flows at elevations below sea level. Its waters originate from the high precipitation areas in and near the Anti-Lebanon Mountains in the north, and flow through the Sea of Galilee and Jordan River Valley ending in the Dead Sea at an elevation of minus 400 metres, in the south.
Geography of Jordan basin
Downstream of the Sea of Galilee, where the main tributaries enter the Jordan Valley from the east, the valley bottom widens to about . This area is characterized by higher alluvial or beach terraces paralleling the river; this area is known as the Ghor (or Ghawr). These terraces are locally incised by side wadis or rivers forming a maze of ravines, alternating with sharp crests and rises, with towers, pinnacles and a badlands morphology.
At a lower elevation is the active Jordan River floodplain, the zhor (or Zur), with a wildly meandering course, which accounts for the excessive length of the river in comparison to the straight-line distance to reach the Dead Sea. Small dams were built along the river within the Zhor, turning the former thickets of reeds, tamarisk, willows, and white poplars into irrigated fields. After flowing through the Zur, the Jordan drains into the Dead Sea across a broad, gently sloping delta.
In the upper Jordan river basin, upstream of the Sea of Galilee, the tributaries include:
The Hasbani (), Snir (), which flows from Lebanon.
The Banias (), Hermon (), arising from a spring at Banias near the foot of Mount Hermon.
The Dan (), Leddan (), whose source is also at the base of Mount Hermon.
Berdara (), or Braghith (), The Iyon or Ayoun (), a smaller stream which also flows from Lebanon.
The lower Jordan River tributaries include:
The Jalud in the Beth Shean valley
The Yarmouk River, which originates on the south-eastern slopes of Mount Hermon and the Hauran Plateau, forms the southern limit of the Golan Heights and flows into the Jordan River below the Sea of Galilee. It also defines portions of the border between Jordan and Syria, as well as a shorter portion between Jordan and Israel.
The Zarqa River, the Biblical Jabbok
Jabesh (Wadi Yabis) named after Jabesh-Gilead
Hydrology of the Jordan River
The riparian rights to the Jordan River are shared by 4 different countries: Lebanon, Syria, Jordan, Israel as well as the Palestinian territories; although Israel as the occupying authority has refused to give up any of the water resources to the Palestinian National Authority. The Jordan River originates near the borders of three countries, Israel, Lebanon, and Syria, with most of the water derived from the Anti-Lebanon Mountains and Mount Hermon to the north and east. Three spring-fed headwater rivers converge to form the Jordan River in the north:
The Hasbani River, which rises in south Lebanon, with an average annual flow of 138 million cubic metres,
The Dan River, in Israel, averaging 245 million cubic metres per year, and
The Banias River flowing from the Golan Heights, averaging 121 million cubic metres per year.
These streams converge six kilometres inside Israel and flow south to the Sea of Galilee, wholly within Israel.
Water quality is variable in the river basin. The three tributaries of the upper Jordan have a low salinity of about 20 ppm. The salinity of water in Lake Tiberias ranges from 240 ppm in the upper end of the lake (marginal for irrigation water), to 350 ppm (too high for sensitive citrus fruits) where it discharges back into the Jordan River. The salt comes from the saline subterranean springs. These springs pass through the beds of ancient seas and then flow into Lake Tiberias, as well as the groundwater sources that feed into the lower Jordan. Downstream of Tiberias, the salinity of the tributary Yarmouk River is also satisfactory, at 100 ppm, but the lower Jordan river becomes progressively more saline as it flows south. It reaches twenty-five percent salinity (250,000 ppm) where it flows in the Dead Sea, which is about seven times saltier than the ocean.
As a resource for freshwater the Jordan River drainage system is vital for most of the population of Palestine, Israel and Jordan, and to a lesser extent in Lebanon and Syria who are able to use water from other national sources. (Although Syrian riparian rights to the Euphrates has been severely restricted by Turkey's dam building programme, a series of 21 dams and 17 hydroelectric stations built on the Euphrates and Tigris rivers, in the 1980s, 90s and projected to be completed in 2010, in order to provide irrigation water and hydroelectricity to the arid area of southeastern Turkey.) The CIA analysis in the 1980s placed the Middle East on the list of possible conflict zones because of water issues. Twenty per cent of the region’s population lack access to adequate potable water and 35% of the population lack appropriate sanitation.
Sharing water resources involves the issue of water use, water rights, and distribution of amounts. The Palestinian National Authority wished to expand and develop the agricultural sector in the West Bank to decrease their dependency on the Israeli labour market, while Israel have prevented an increase in the irrigation of the West Bank. Jordan also wishes to expand its agricultural sector so as to be able to achieve food security.
On 21 May 1997 the UN General Assembly adopted a Convention on the Law of Non-navigational Uses of International Watercourses.
The articles establish two principles for the use of international watercourses (other than navigation): "equitable and reasonable utilization". and "the 'due diligence' obligation not to cause significant harm." Equitable and reasonable utilization requires taking into account all relevant factors and circumstances, including:
(a) Geographic, hydrographic, hydrological, climatic, ecological and other factors of a natural character;
(b) The social and economic needs of the watercourse States concerned;
(c) The population dependent on the watercourse in each watercourse State;
(d) The effects of the use or uses of the watercourses in one watercourse State on other watercourse States;
(e) Existing and potential uses of the watercourse;
(f) Conservation, protection, development and economy of use of the water resources of the watercourse and the costs of measures taken to that effect;
(g) The availability of alternatives, of comparable value, to a particular planned or existing use.
Historical timeline
Ottoman and Mandatory periods
Studies of regional water resources and their development, in modern terms, date from the early 1900s during the period of Ottoman rule; they also follow in light of a significant engineering milestone and resource development achievement. Based largely on geographic, engineering and economic considerations many of these plans included common components, but political considerations and international events would soon follow.
After the First World War, the Jordan River Basin began to be seen as a problem of quantitative allocations. In the late 1930s and mid-1940s, Transjordan and the World Zionist Organization commissioned mutually exclusive competing water resource studies. The Transjordanian study, performed by Michael G. Ionides, concluded that the available water resources are not sufficient to sustain a Jewish state which would be the destination for Jewish immigration. The Zionist study, by the American engineer Walter Clay Lowdermilk, concluded that by diverting water from the Jordan basin to support agriculture and residential development in the Negev, a Jewish state supporting 4 million new immigrants would be sustainable.
The period between 1922 and the 1940s saw a series of policy attempts related to sharing water in the Jordan River Basin.
Post-Mandatory period
At the end of the 1948 Arab Israeli War with the signing of the General Armistice Agreements in 1949, both Israel and Jordan embarked on implementing their competing initiatives to utilize the water resources in the areas under their control.
The first "Master Plan for Irrigation in Israel" was drafted in 1950 and approved by a Board of Consultants (of the USA) on 8 March 1956. The main features of the Master Plan was the construction of the Israeli National Water Carrier (NWC), a project for the integration of all major regional projects into the Israeli national grid. Tahal – Water Planning for Israel Ltd., an Israeli public corporate body, was established in 1952, being largely responsible for planning of water development, drainage, etc., at the national level within Israel, including the NWC project which was commissioned in 1965.
In 1952, the Bunger plan was issued by Jordan in collaboration with UNRWA and the US Technical Cooperation Agency's Point IV program, aiming to provide water to 100,000 resettled Palestinian refugees, to be relocated into northern Jordan. The plan included construction of a major Maqarin dam over the Yarmouk river to store some 500 million cubic meters of water and serve Jordan and Syria, allowing Jordan to avoid storing water in the mostly Israeli-controlled Lake Tiberias. The Maqarin dam was also designated to provide electricity, while a smaller dam at Adasiya was to divert Yarmouk-originated water to the Jordanian East Ghor Canal, intended to irrigate Jordanian areas east of the Jordan river. A plan was also issued concerning the West Ghor Canal, envisioning a siphon to irrigate the West Bank as well. In March 1953, Jordan and UNRWA signed a preliminary agreement to implement the Bunger plan. Shortly afterwards, in June 1953, Jordan and Syria signed a complementary treaty in this regard. Despite the expected objection of Israel, Jordan moved ahead with the plan and in July 1953 allocated funding for the project in collaboration with UNRWA and the US Government, pending later agreement with Israel. The Israeli government protested to the US over the Maqarin dam plan, arguing that it did not take into account Israel's rights to the Yarmouk waters downstream. While Israel convinced the US to pause the project until the issue was resolved, it also expressed its willingness to discuss it with the Arab governments.
In 1953, Israel began construction of a water carrier to take water from the Sea of Galilee to the populated center and agricultural south of the country, while Jordan concluded an agreement with Syria, known as the Bunger plan, to dam the Yarmouk River near Maqarin, and utilize its waters to irrigate Jordanian territory, before they could flow to the Sea of Galilee. Military clashes ensued, and US President Dwight Eisenhower dispatched ambassador Johnston to the region to work out a plan that would regulate water usage.
Further policy attempts related to sharing water in the Jordan River Basin were made between 1951 and 1955.
Between 1955 and the beginning of the Oslo Process, there was little attempt at policy making in regards to shared bodies of water.
Six-Day War and aftermath
On 10 June 1967, the last day of the Six-Day War, Golani Brigade forces quickly invaded the village of Banias where a caliphate era Syrian fort stood. Eshkol's priority on the Syrian front was control of the water sources.
Regional stagnation (1980s)
In 1980, Syria unilaterally started a programme of dam building along the Yarmouk.
The southern slopes of Mount Hermon (Jebel esh-Sheikh) as well as the Western Golan Heights, were unilaterally annexed by Israel in 1981.
In 1988, the Syrian-Jordanian agreement on development of the Yarmouk was blocked when Israel, as a riparian right holder, refused to ratify the plan and the World Bank withheld funding. Israel augmented its Johnston plan allocation of 25,000,000 m³/yr by a further 45,000,000–75,000,000 m³/yr.
Jordanian-Israeli peace deal and aftermath
The water agreement formed a part of the broader political treaty which was signed between Israel and Jordan in 1994, and the articles relating to water in this agreement did not correspond to Jordan's rights to water as they had originally been claimed. The nature and significance of the wider 1994 treaty meant that the water aspect was forced to cede importance and priority in negotiations, giving way to areas such as borders and security in terms of armed force, which were perceived by decision-makers as being the most integral issues to the settlement. The main points of the water-sharing provisions of the Jordan–Israel peace treaty are described below.
Jordan being a country that borders on the Jordan has riparian rights to water from the Jordan basin and upper Jordan tributaries. Due to the water diversion projects the flow to the river Jordan has been reduced from 1,300 million–1,500 million cubic metres to 250 million–300 million cubic metres. Where the water quality has been further reduced as the flow of the river Jordan is made of run-off from agricultural irrigation and saline springs.
Problems can be seen to have emerged in 1999, when the treaty’s limitations were revealed by events concerning water shortages in the Jordan basin. A reduced supply of water to Israel due to drought meant that, in turn, Israel which is responsible for providing water to Jordan, decreased its water provisions to the country, provoking a diplomatic disagreement between the two and bringing the water component of the treaty back into question.
Israel's complaints that the Jordanian–Syrian dam has reduced the flow from the tributaries to the river Jordan appear to go unheeded due to the conflict of interest between Israel and her neighbours.
Syrian Civil War and its effect on Jordan basin
The dramatic drought, which hit Levant between 1998 and 2012, was identified by scientists to be the most severe in 900 years. The dramatic effect of the drought on southern Syria is proposed as one of the factors which led to the eruption of the Syrian Civil War.
Historically, prior to the eruption of the Syrian War in 2011, the Syrian government had developed a series of 21 dams in the Yarmouk drainage basin to divert water into large reservoirs used for irrigation of agricultural land. Jordan had built a large dam of its own on the Yarmouk, the Al-Wehda Dam, in order to exploit the water for its own agriculture. However, prior to the Syrian War, the amount of water collected by the Jordanian dam had fallen as Syria dammed the river upstream. While the Yarmouk flows into the Jordan River, most of its water has been used in Syria and Jordan before reaching the river. Since the civil war broke out, hundreds of thousands of refugees have fled the area of southern Syria, many of whom were farmers. Most fled to refugee camps in Jordan. As a result, much more water now flows in the Yarmouk River, and thus greater quantities of water are reaching the parts of the river that flow through Jordan, and later into Israel as well.
Jordan basin
Banias
The Syria-Lebanon-Palestine boundary was a product of the post-World War I Anglo-French partition of Ottoman Syria. British forces had advanced to a position at Tel Hazor against Turkish troops in 1918 and wished to incorporate all the sources of the Jordan River within the British controlled Palestine. Due to the French inability to establish administrative control, the frontier between Syria and Palestine was fluid. Following the Paris Peace Conference of 1919, and the unratified and later annulled Treaty of Sèvres, stemming from the San Remo conference, the 1920 boundary extended the British controlled area to north of the Sykes Picot line, a straight line between the mid point of the Sea of Galilee and Nahariya. In 1920 the French managed to assert authority over the Arab nationalist movement and after the Battle of Maysalun, King Faisal was deposed. The international boundary between Palestine and Syria was finally agreed by Great Britain and France in 1923 in conjunction with the Treaty of Lausanne, after Britain had been given a League of Nations mandate for Palestine in 1922. Banyas (on the Quneitra/Tyre road) was within the French Mandate of Syria. The border was set 750 metres south of the spring.
In 1941 Australian forces occupied Banyas in the advance to the Litani during the Syria-Lebanon Campaign; Free French and Indian forces also invaded Syria in the Battle of Kissoué. Banias's fate in this period was left in a state of limbo since Syria had come under British military control. After the cessation of World War II hostilities, and at the time Syria was granted Independence (April 1946), the former mandate powers, France and Britain, bilaterally signed an agreement to pass control of Banias to the British mandate of Palestine. This was done against the expressed wishes of the Syrian government who declared France's signature to be invalid. While Syria maintained its claim on Banias in this period, it was administered from Jerusalem.
Following the 1948 Arab–Israeli War and the signing of the General Armistice Agreements in 1949, the DMZs included in the Armistice with Syria of July 1949 were "not to be interpreted as having any relation whatsoever to ultimate territorial arrangements." Israel claimed sovereignty over the demilitarised zones (DMZs) on the basis that "it was always part of the British Mandated Territory of Palestine." Moshe Dayan and Yosef Tekoah adopted a policy of Israeli control of the DMZ and water sources at the expense of Israel's international image. The Banias spring remained under Syrian control, while the Banias River flowed through the contested Demilitarized Zone (DMZ) and into Israel.
Hasbani
The Hasbani River derives most of its discharge from two springs in Lebanon the Wazzani and the Haqzbieh, the latter being a group of springs on the uppermost Hasbani. The Hasbani runs for in Lebanon before crossing the border and joining with the Banias and Dan Rivers at a point in northern Israel, to form the River Jordan. For about four kilometres downstream of Ghajar, the Hasbani forms the border between Lebanon and northern Israel.
The Wazzani's and the Haqzbieh's combined discharge averages 138 million m³ per year. About 20% of the Hasbani flow emerges from the Wazzani Spring at Ghajar, close to the Lebanese Israeli border, about 3 kilometres west of the base of Mount Hermon. The contribution of the spring is very important, because it is the only continuous year-round flow in the river in either Lebanon or Israel.
Utilization of water resources in the area, including the Hasbani, has been a source of conflict and was one of the factors leading to the 1967 Six-Day War. The Hasbani was included in the Jordan Valley Unified Water Plan, proposed in 1955 by special US envoy Eric Johnston. Under the plan, Lebanon was allocated usage of 35 million cubic metres annually from it. The plan was rejected by the Arab League.
In 2001 the Lebanese government installed a small pumping station with a 10 cm bore to extract water to supply Ghajar village. In March 2002 Lebanon also diverted part of the Hasbani to supply Wazzani village, an action that Ariel Sharon said was a "casus belli" that could lead to war.
Dan
The Dan River is the largest tributary of the Jordan river, whose source is located at the base of Mount Hermon. Until the 1967 Six-Day War, the Dan River was the only source of the river Jordan wholly within Israeli territory. Its flow provides up to 238 million cubic metres of water annually to the Hulah Valley. In 1966 this was a cause of dispute between water planners and conservationists, with the latter prevailing after three years of court adjudication and appeals. The result was a conservation project of about at the source of the river called the Tel Dan Reserve.
Huleh marshes
In 1951 tensions in the area were raised when, in the Lake Huleh area (10 km from Banias), Israel initiated a project to drain the marsh land and bring it into cultivation. The project caused a conflict of interests between the Israeli government and the Palestinian Arab villages in the area and drew Syrian complaints to the United Nations. On 30 March, in a meeting chaired by David Ben-Gurion, the Israeli government decided to assert Israeli sovereignty over the DMZs; consequently 800 inhabitants of the villages were forcibly evacuated from the DMZ. From 1951 Israel refused to attend the meetings of the Israel/Syria Mixed Armistice Commission. This refusal on the part of Israel not only constituted a flagrant violation of the General Armistice Agreement, but also contributed to an increase of tension in the area. The Security Council itself strongly condemned the attitude of Israel, in its resolution of 18 May 1951, as being "inconsistent with the objectives and intent of the Armistice Agreement".
Under UN auspices and with encouragement from the Eisenhower administration 9 meetings took place between 15 and 27 January 1953, to regularise administration of the 3 DMZs. At the eighth meeting Syria offered to adjust the armistice lines, and cede to Israel's 70% of the DMZ, in exchange for a return to the pre 1946 international border in the Jordan basin area, with Banias water resources returning uncontested to Syrian sovereignty. On 26 April, the Israeli cabinet met to consider the Syrian suggestions; with head of Israel’s Water Planning Authority, Simha Blass, in attendance. Blass noted that while the land to be ceded to Syria was not suitable for cultivation, the Syrian map did not suit Israel’s water development plan. Blass explained that the movement of the international boundary in the area of Banias would affect Israel’s water rights. The Israeli cabinet rejected the Syrian proposals but decided to continue the negotiations by making changes to the accord and placing conditions on the Syrian proposals. The Israeli conditions took into account Blass’s position over water rights and Syria rejected the Israeli counteroffer.
On 4 June 1953 Jordan and Syria concluded a bilateral plan to store surface water at Maqarin (completed in 2006 as Al-Wehda Dam), so as to be able to use the water resources of the Yarmouk river in the Yarmouk-Jordan valley plan, funded through the Technical Cooperation Agency of the United States of America, the UNRWA and Jordan.
Part of the Hula marshes were re-flooded in 1994 due to the negative effects from the original drainage plan.
Regional projects
Israeli National Water Carrier project
In September 1953, Israel unilaterally started a water diversion project within the Jordan River basin to divert water from the Jordan River at Jacob's Ford (B'not Yacov) to help irrigate the coastal Sharon Plain and eventually the Negev desert. The diversion project consisted of a nine-mile (14 km) channel midway between the Huleh Marshes and Lake Galilee (Lake Tiberias) in the central DMZ to be rapidly constructed. Syria claimed that it would dry up of Syrian land. The UNTSO Chief of Staff Major General Vagn Bennike of Denmark noted that the project was denying water to two Palestinian water mills, was drying up Palestinian farm land and was a substantial military benefit to Israel against Syria. The US cut off aid to Israel. The Israeli response was to increase work. UN Security Council Resolution 100 "deemed it desirable" for Israel to suspend work started on 2 September "pending urgent examination of the question by the Council". Israel finally backed off by moving the intake out of the DMZ and for the next three years the US kept its economic sanctions by threatening to end aid channelled to Israel by the Foreign Operations Administration and insisting on tying the aid with Israel's behaviour. The Security Council ultimately rejected Syrian claims that the work was a violation of the Armistice Agreements and drainage works were resumed and the work was completed in 1957. This caused shelling from Syria and friction with the Eisenhower Administration; the diversion was moved to the southwest to Eshed Kinrot into the Israeli National Water Carrier project, designed by Tahal and constructed by Mekorot.
Jordan Valley Unified Water Plan
In 1955 US ambassador Eric Johnston negotiated the Jordan Valley Unified Water Plan. The plan was for the unified development of the Jordan Valley water resources, based on an earlier plan commissioned by the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA). Modeled upon the Tennessee Valley Authority development plan, it was approved by technical water committees of all the regional riparian countries – Israel, Jordan, Lebanon and Syria. The plan was formally rejected by the Arab Higher Committee, but Nasser, the Egyptian president, assured the Americans that the Arabs would not exceed the water quotas prescribed by the Johnston plan. Jordan undertook to abide by its allocations under the plan. The plan initially went unratified by Israel, but after the US linked the Johnston plan to aid, Israel too agreed to accept the allocation provisions.
Except for the above withdrawals, the waters of the Yarmouk River were to be available for the unconditional use of the Kingdom of Jordan, and the waters of the Jordan River for the unconditional use of Israel.
The East Ghor canal formed part of a larger project – the Greater Yarmouk project – which envisioned two storage dams on the Yarmouk and a West Ghor Canal on the West Bank of the Jordan. These projects were never built, due to Israel's occupation of the West Bank of the Jordan River during the Six-Day War. After the Six-Day War, the PLO operated from bases within Jordan and launched several attacks on Israeli settlements in the Jordan valley, including attacks on water facilities. Israel responded with raids in Jordan, in an attempt to force King Hussein of Jordan to rein in the PLO. The canal was the target of at least 4 of these raids, and was virtually knocked out of commission. The United States intervened to resolve the conflict, and the canal was repaired after Hussein undertook to stop PLO activity in the area.
Headwater Diversion Plan
The first summit of Arab heads of state was convened in Cairo between 13 and 17 January 1964, called by Nasser, the Egyptian president, to discuss a common policy to confront Israel's national water carrier project, which was nearing completion. The second Arab League summit conference voted on a plan which would have circumvented and frustrated it. The Arab and North African states chose to divert the Jordan headwaters rather than use direct military intervention. The heads of state of the Arab League considered two options:
The diversion of the Hasbani to the Litani combined with the diversion of the Banias to the Yarmouk,
The diversion of both the Hasbani and the Banias to the Yarmouk.
The plan the Arab League selected was for the Hasbani and Banias waters to be diverted to Mukhaiba and stored there. The scheme was only marginally feasible, being technically difficult and expensive, and Arab political considerations were cited to justify it. In January 1964 an Arab League summit meeting convened in Cairo and decided:
The establishment of Israel is the basic threat that the Arab nation in its entirety has agreed to forestall. And Since the existence of Israel is a danger that threatens the Arab nation, the diversion of the Jordan waters by it multiplies the dangers to Arab existence. Accordingly, the Arab states have to prepare the plans necessary for dealing with the political, economic and social aspects, so that if necessary results are not achieved, collective Arab military preparations, when they are not completed, will constitute the ultimate practical means for the final liquidation of Israel.
After the second Arab summit conference in Cairo of January 1964 (with the backing of all 13 Arab League members), Syria, in a joint project with Lebanon and Jordan, started the development of the water resources of the Banias for a canal along the slopes of the Golan toward the Yarmouk River, while Lebanon was to construct a canal from the Hasbani River to the Banias and so complete the scheme. The project was to divert 20 to 30 million cubic metres of water from the Jordan River tributaries to Syria and Jordan for the development of those two countries. Syrian construction of the Banias-to-Yarmouk canal got under way in 1965. Once completed, the diversion of the flow would have transported the water into a dam at Mukhaiba for use by Jordan and Syria before the waters of the Banias Stream entered Israel and the Sea of Galilee. Lebanon also started a canal to divert the waters of the Hasbani, whose source is in Lebanon, into the Banias. The Hasbani and Banias diversion works would have had the effect of reducing the capacity of Israel's carrier by about 35% and Israel's overall water supply by about 11%. Israel declared that it would regard such diversion as an infringement of its sovereign rights. The project was financed through contributions by Saudi Arabia and Egypt. This led to military intervention from Israel, first with tank and artillery fire and then, as the Syrians shifted the works further southwards, with airstrikes.
Notes
Further reading
Spiegel, Steven L. (1985). The Other Arab-Israeli Conflict: Making America's Middle East Policy, from Truman to Reagan. University of Chicago Press.
External links
Historical Developmental Plans of the Jordan River Basin
UN Document Flow rates of the Jordan River and its tributaries 1953, with estimation of costs for the "Jordan Valley Unified Water Plan".
Jordan River
Jordan River Politics
Politics of the Arab–Israeli conflict
Politics of the Israeli–Palestinian conflict | Water politics in the Jordan River basin | [
"Engineering"
] | 6,198 | [
"Irrigation projects"
] |
2,572,375 | https://en.wikipedia.org/wiki/Current%E2%80%93voltage%20characteristic | A current–voltage characteristic or I–V curve (current–voltage curve) is a relationship, typically represented as a chart or graph, between the electric current through a circuit, device, or material, and the corresponding voltage, or potential difference, across it.
In electronics
In electronics, the relationship between the direct current (DC) through an electronic device and the DC voltage across its terminals is called a current–voltage characteristic of the device. Electronic engineers use these charts to determine basic parameters of a device and to model its behavior in an electrical circuit. These characteristics are also known as I–V curves, referring to the standard symbols for current and voltage.
In electronic components with more than two terminals, such as vacuum tubes and transistors, the current–voltage relationship at one pair of terminals may depend on the current or voltage on a third terminal. This is usually displayed on a more complex current–voltage graph with multiple curves, each one representing the current–voltage relationship at a different value of current or voltage on the third terminal.
For example, the diagram at right shows a family of I–V curves for a MOSFET as a function of drain voltage, with overvoltage (VGS − Vth) as a parameter.
The simplest I–V curve is that of a resistor, which according to Ohm's law exhibits a linear relationship between the applied voltage and the resulting electric current; the current is proportional to the voltage, so the I–V curve is a straight line through the origin with positive slope. The reciprocal of the slope is equal to the resistance.
The I–V curve of an electrical component can be measured with an instrument called a curve tracer. The transconductance and Early voltage of a transistor are examples of parameters traditionally measured from the device's I–V curve.
Types of I–V curves
The shape of an electrical component's characteristic curve reveals much about its operating properties. I–V curves of different devices can be grouped into categories:
Active vs passive: Devices which have I–V curves which are limited to the first and third quadrants of the I–V plane, passing through the origin, are passive components (loads), that consume electric power from the circuit. Examples are resistors and electric motors. Conventional current always flows through these devices in the direction of the electric field, from the positive voltage terminal to the negative, so the charges lose potential energy in the device, which is converted to heat or some other form of energy.
In contrast, devices with I–V curves which pass through the second or fourth quadrants are active components, power sources, which can produce electric power. Examples are batteries and generators. When it is operating in the second or fourth quadrant, current is forced to flow through the device from the negative to the positive voltage terminal, against the opposing force of the electric field, so the electric charges are gaining potential energy. Thus the device is converting some other form of energy into electric energy.
Linear vs nonlinear: A straight line through the origin represents a linear circuit element, while a curved line represents a nonlinear element. For example, resistors, capacitors, and inductors are linear, while diodes and transistors are nonlinear. An I–V curve which is a straight line through the origin with positive slope represents a linear or ohmic resistor, the most common type of resistance encountered in circuits. It obeys Ohm's law; the current is proportional to the applied voltage over a wide range. Its resistance, equal to the reciprocal of the slope of the line, is constant. A curved I–V line represents a nonlinear resistance, such as a diode. In this type the resistance varies with the applied voltage or current. (A numerical sketch contrasting these two cases appears after this list.)
Negative resistance vs positive resistance: If the I–V curve has a positive slope (increasing to the right) throughout, it represents a positive resistance. An I–V curve that is nonmonotonic (having peaks and valleys) represents a device which has negative resistance. Regions of the curve which have a negative slope (declining to the right) represent operating regions where the device has negative differential resistance, while regions of positive slope represent positive differential resistance. Negative resistance devices can be used to make amplifiers and oscillators. Tunnel diodes and Gunn diodes are examples of components that have negative resistance.
Hysteresis vs single-valued: Devices which have hysteresis; that is, in which the current–voltage relation depends not only on the present applied input but also on the past history of inputs, have I–V curves consisting of families of closed loops. Each branch of the loop is marked with a direction represented by an arrow. Examples of devices with hysteresis include iron-core inductors and transformers, thyristors such as SCRs and DIACs, and gas-discharge tubes such as neon lights.
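To make the linear-versus-nonlinear distinction above concrete, here is a minimal Python sketch that tabulates I–V points for an ohmic resistor (Ohm's law) and for a diode described by the standard Shockley diode equation; the component values and ideality factor are illustrative assumptions, not values taken from this article.

```python
import numpy as np

def resistor_current(v, resistance_ohm=1_000.0):
    """Linear (ohmic) element: I = V / R, a straight line through the origin."""
    return v / resistance_ohm

def diode_current(v, i_sat=1e-12, n=1.0, v_thermal=0.02585):
    """Nonlinear element: Shockley diode equation I = I_S * (exp(V / (n*V_T)) - 1)."""
    return i_sat * (np.exp(v / (n * v_thermal)) - 1.0)

print("V (V)    I_resistor (A)   I_diode (A)")
for v in np.linspace(-0.2, 0.7, 10):
    print(f"{v:6.3f}   {resistor_current(v):.3e}        {diode_current(v):.3e}")

# The resistor column scales in direct proportion to V (constant slope 1/R),
# while the diode column stays near -I_S in reverse bias and grows roughly
# exponentially once the forward voltage exceeds about 0.5 V.
```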
In electrophysiology
While I–V curves are applicable to any electrical system, they find wide use in the field of biological electricity, particularly in the sub-field of electrophysiology. In this case, the voltage refers to the voltage across a biological membrane, a membrane potential, and the current is the flow of charged ions through channels in this membrane. The current is determined by the conductances of these channels.
In the case of ionic current across biological membranes, currents are measured from inside to outside. That is, positive currents, known as "outward current", correspond to positively charged ions crossing the cell membrane from the inside to the outside, or to negatively charged ions crossing from the outside to the inside. Similarly, currents with a negative value, referred to as "inward current", correspond to positively charged ions crossing the cell membrane from the outside to the inside, or to negatively charged ions crossing from inside to outside.
The figure to the right shows an I–V curve that is more relevant to the currents in excitable biological membranes (such as a neuronal axon).
The blue line shows the I–V relationship for the potassium ion. It is linear, indicating no voltage-dependent gating of the potassium ion channel. The yellow line shows the I–V relationship for the sodium ion. It is not linear, indicating that the sodium ion channel is voltage-dependent. The green line indicates the I–V relationship derived from summing the sodium and potassium currents. This approximates the actual membrane potential and current relationship of a cell containing both types of channel.
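As a rough, illustrative sketch of how such membrane I–V curves arise from channel conductances (the conductances, reversal potentials, and gating curve below are assumed values, not data from the figure), each ionic current can be modeled as I = g(V)·(V − E_rev), with a constant conductance for the non-gated potassium channel and a voltage-dependent conductance for the sodium channel:

```python
import numpy as np

E_K, E_NA = -80.0, 60.0        # assumed reversal potentials (mV)
G_K, G_NA_MAX = 1.0, 5.0       # assumed conductances (arbitrary units)

def i_potassium(v_mv):
    """Non-gated (ohmic) channel: current is linear in the driving force."""
    return G_K * (v_mv - E_K)

def i_sodium(v_mv):
    """Voltage-gated channel: conductance rises sigmoidally with depolarization."""
    activation = 1.0 / (1.0 + np.exp(-(v_mv + 30.0) / 8.0))  # assumed gating curve
    return G_NA_MAX * activation * (v_mv - E_NA)

for v in range(-100, 61, 20):
    i_k, i_na = i_potassium(v), i_sodium(v)
    print(f"V={v:5d} mV   I_K={i_k:8.1f}   I_Na={i_na:8.1f}   I_total={i_k + i_na:8.1f}")
```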
See also
Maximum power point tracking
Voltammetry
References
Electrical resistance and conductance
Electronic engineering
MOSFETs | Current–voltage characteristic | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 1,340 | [
"Computer engineering",
"Physical quantities",
"Quantity",
"Electronic engineering",
"Electrical engineering",
"Wikipedia categories named after physical quantities",
"Electrical resistance and conductance"
] |
2,573,213 | https://en.wikipedia.org/wiki/Integrable%20system | In mathematics, integrability is a property of certain dynamical systems. While there are several distinct formal definitions, informally speaking, an integrable system is a dynamical system with sufficiently many conserved quantities, or first integrals, that its motion is confined to a submanifold
of much smaller dimensionality than that of its phase space.
Three features are often referred to as characterizing integrable systems:
the existence of a maximal set of conserved quantities (the usual defining property of complete integrability)
the existence of algebraic invariants, having a basis in algebraic geometry (a property known sometimes as algebraic integrability)
the explicit determination of solutions in an explicit functional form (not an intrinsic property, but something often referred to as solvability)
Integrable systems may be seen as very different in qualitative character from more generic dynamical systems,
which are more typically chaotic systems. The latter generally have no conserved quantities, and are asymptotically intractable, since an arbitrarily small perturbation in initial conditions may lead to arbitrarily large deviations in their trajectories over a sufficiently large time.
Many systems studied in physics are completely integrable, in particular, in the Hamiltonian sense, the key example being multi-dimensional harmonic oscillators. Another standard example is planetary motion about either one fixed center (e.g., the sun) or two. Other elementary examples include the motion of a rigid body about its center of mass (the Euler top) and the motion of an axially symmetric rigid body about a point in its axis of symmetry (the Lagrange top).
In the late 1960s, it was realized that there are completely integrable systems in physics having an infinite number of degrees of freedom, such as some models of shallow water waves (Korteweg–de Vries equation), the Kerr effect in optical fibres, described by the nonlinear Schrödinger equation, and certain integrable many-body systems, such as the Toda lattice. The modern theory of integrable systems was revived with the numerical discovery of solitons by Martin Kruskal and Norman Zabusky in 1965, which led to the inverse scattering transform method in 1967.
In the special case of Hamiltonian systems, if there are enough independent Poisson commuting first integrals for the flow parameters to be able to serve as a coordinate system on the invariant level sets (the leaves of the Lagrangian foliation), and if the flows are complete and the energy level set is compact, this implies the Liouville–Arnold theorem; i.e., the existence of action-angle variables. General dynamical systems have no such conserved quantities; in the case of autonomous Hamiltonian systems, the energy is generally the only one, and on the energy level sets, the flows are typically chaotic.
A key ingredient in characterizing integrable systems is the Frobenius theorem, which states that a system is Frobenius integrable (i.e., is generated by an integrable distribution) if, locally, it has a foliation by maximal integral manifolds. But integrability, in the sense of dynamical systems, is a global property, not a local one, since it requires that the foliation be a regular one, with the leaves embedded submanifolds.
Integrability does not necessarily imply that generic solutions can be explicitly expressed in terms of some known set of special functions; it is an intrinsic property of the geometry and topology of the system, and the nature of the dynamics.
General dynamical systems
In the context of differentiable dynamical systems, the notion of integrability refers to the existence of invariant, regular foliations; i.e., ones whose leaves are embedded submanifolds of the smallest possible dimension that are invariant under the flow. There is thus a variable notion of the degree of integrability, depending on the dimension of the leaves of the invariant foliation. This concept has a refinement in the case of Hamiltonian systems, known as complete integrability in the sense of Liouville (see below), which is what is most frequently referred to in this context.
An extension of the notion of integrability is also applicable to discrete systems such as lattices. This definition can be adapted to describe evolution equations that either are systems of differential equations or finite difference equations.
The distinction between integrable and nonintegrable dynamical systems has the qualitative implication of regular motion vs. chaotic motion and hence is an intrinsic property, not just a matter of whether a system can be explicitly integrated in an exact form.
Hamiltonian systems and Liouville integrability
In the special setting of Hamiltonian systems, we have the notion of integrability in the Liouville sense. (See the Liouville–Arnold theorem.) Liouville integrability means that there exists a regular foliation of the phase space by invariant manifolds such that the Hamiltonian vector fields associated with the invariants of the foliation span the tangent distribution. Another way to state this is that there exists a maximal set of functionally independent Poisson commuting invariants (i.e., independent functions on the phase space whose Poisson brackets with the Hamiltonian of the system, and with each other, vanish).
In finite dimensions, if the phase space is symplectic (i.e., the center of the Poisson algebra consists only of constants), it must have even dimension 2n, and the maximal number of independent Poisson commuting invariants (including the Hamiltonian itself) is n. The leaves of the foliation are totally isotropic with respect to the symplectic form and such a maximal isotropic foliation is called Lagrangian. All autonomous Hamiltonian systems (i.e. those for which the Hamiltonian and Poisson brackets are not explicitly time-dependent) have at least one invariant; namely, the Hamiltonian itself, whose value along the flow is the energy. If the energy level sets are compact, the leaves of the Lagrangian foliation are tori, and the natural linear coordinates on these are called "angle" variables. The cycles of the canonical 1-form are called the action variables, and the resulting canonical coordinates are called action-angle variables (see below).
There is also a distinction between complete integrability, in the Liouville sense, and partial integrability, as well as a notion of superintegrability and maximal superintegrability. Essentially, these distinctions correspond to the dimensions of the leaves of the foliation. When the number of independent Poisson commuting invariants is less than maximal (but, in the case of autonomous systems, more than one), we say the system is partially integrable. When there exist further functionally independent invariants, beyond the maximal number that can be Poisson commuting, and hence the dimension of the leaves of the invariant foliation is less than n, we say the system is superintegrable. If there is a regular foliation with one-dimensional leaves (curves), this is called maximally superintegrable.
Action-angle variables
When a finite-dimensional Hamiltonian system is completely integrable in the Liouville sense,
and the energy level sets are compact, the flows are complete, and the leaves of the invariant foliation are tori. There then exist, as mentioned above, special sets of canonical coordinates on the phase space known as action-angle variables,
such that the invariant tori are the joint level sets of the action variables. These thus provide a complete set of invariants of the Hamiltonian flow (constants of motion), and the angle variables are the natural periodic coordinates on the tori. The motion on the invariant tori, expressed in terms of these canonical coordinates, is linear in the angle variables.
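As a minimal concrete illustration of action-angle variables (a sketch for the one-dimensional harmonic oscillator, not an example taken from this article; the mass and frequency are assumed values), the code below maps phase-space points (q, p) to (J, θ) and back, and shows that the flow is linear in the angle while the action stays constant:

```python
import numpy as np

M, OMEGA = 1.0, 2.0                         # assumed mass and angular frequency

def to_action_angle(q, p):
    """Map (q, p) of the 1D harmonic oscillator to action-angle variables (J, theta)."""
    energy = p**2 / (2 * M) + 0.5 * M * OMEGA**2 * q**2
    action = energy / OMEGA                 # J = E / omega for the oscillator
    theta = np.arctan2(M * OMEGA * q, p)    # angle coordinate on the invariant circle
    return action, theta

def from_action_angle(action, theta):
    """Inverse map: reconstruct (q, p) from (J, theta)."""
    q = np.sqrt(2 * action / (M * OMEGA)) * np.sin(theta)
    p = np.sqrt(2 * M * OMEGA * action) * np.cos(theta)
    return q, p

# J is a constant of motion and theta(t) = theta0 + omega * t on the invariant circle.
J, theta0 = to_action_angle(q=1.0, p=0.0)
for t in np.linspace(0.0, np.pi, 5):
    q, p = from_action_angle(J, theta0 + OMEGA * t)
    print(f"t={t:5.2f}   J={J:.4f}   q={q:+.4f}   p={p:+.4f}")
```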
The Hamilton–Jacobi approach
In canonical transformation theory, there is the Hamilton–Jacobi method, in which solutions to Hamilton's equations are sought by first finding a complete solution of the associated Hamilton–Jacobi equation. In classical terminology, this is described as determining a transformation to a canonical set of coordinates consisting of completely ignorable variables; i.e., those in which there is no dependence of the Hamiltonian on a complete set of canonical "position" coordinates, and hence the corresponding canonically conjugate momenta are all conserved quantities. In the case of compact energy level sets, this is the first step towards determining the action-angle variables. In the general theory of partial differential equations of Hamilton–Jacobi type, a complete solution (i.e. one that depends on n independent constants of integration, where n is the dimension of the configuration space), exists in very general cases, but only in the local sense. Therefore, the existence of a complete solution of the Hamilton–Jacobi equation is by no means a characterization of complete integrability in the Liouville sense. Most cases that can be "explicitly integrated" involve a complete separation of variables, in which the separation constants provide the complete set of integration constants that are required. Only when these constants can be reinterpreted, within the full phase space setting, as the values of a complete set of Poisson commuting functions restricted to the leaves of a Lagrangian foliation, can the system be regarded as completely integrable in the Liouville sense.
Solitons and inverse spectral methods
A resurgence of interest in classical integrable systems came with the discovery, in the late 1960s, that solitons, which are strongly stable, localized solutions of partial differential equations like the Korteweg–de Vries equation (which describes 1-dimensional non-dissipative fluid dynamics in shallow basins), could be understood by viewing these equations as infinite-dimensional integrable Hamiltonian systems. Their study leads to a very fruitful approach for "integrating" such systems, the inverse scattering transform and more general inverse spectral methods (often reducible to Riemann–Hilbert problems),
which generalize local linear methods like Fourier analysis to nonlocal linearization, through the solution of associated integral equations.
The basic idea of this method is to introduce a linear operator that is determined by the position in phase space and which evolves under the dynamics of the system in question in such a way that its "spectrum" (in a suitably generalized sense) is invariant under the evolution, cf. Lax pair. This provides, in certain cases, enough invariants, or "integrals of motion" to make the system completely integrable. In the case of systems having an infinite number of degrees of freedom, such as the KdV equation, this is not sufficient to make precise the property of Liouville integrability. However, for suitably defined boundary conditions, the spectral transform can, in fact, be interpreted as a transformation to completely ignorable coordinates, in which the conserved quantities form half of a doubly infinite set of canonical coordinates, and the flow linearizes in these. In some cases, this may even be seen as a transformation to action-angle variables, although typically only a finite number of the "position" variables are actually angle coordinates, and the rest are noncompact.
Hirota bilinear equations and τ-functions
Another viewpoint that arose in the modern theory of integrable systems originated in
a calculational approach pioneered by Ryogo Hirota, which involved replacing
the original nonlinear dynamical system with a bilinear system of constant coefficient
equations for an auxiliary quantity, which later came to be known as the
τ-function. These are now referred to as the Hirota equations. Although originally appearing just as a calculational device, without any clear relation
to the inverse scattering approach, or the Hamiltonian structure, this nevertheless gave a very direct method from which important classes of solutions such as solitons could be derived.
Subsequently, this was interpreted by Mikio Sato and his students, at first for the case of
integrable hierarchies of PDEs, such as the Kadomtsev–Petviashvili hierarchy, but then
for much more general classes of integrable hierarchies, as a sort of universal phase space approach, in which, typically, the commuting dynamics were viewed simply as determined by a fixed (finite or infinite) abelian group action on a (finite or infinite) Grassmann manifold.
The τ-function was viewed as the determinant
of a projection operator from elements of the group orbit to some origin within the Grassmannian,
and the Hirota equations as expressing the Plücker relations, characterizing the
Plücker embedding of the Grassmannian in the projectivization of a suitably
defined (infinite) exterior space, viewed as a fermionic Fock space.
Quantum integrable systems
There is also a notion of quantum integrable systems.
In the quantum setting, functions on phase space must be replaced by self-adjoint operators on a Hilbert space, and the notion of Poisson commuting functions replaced by commuting operators. The notion of conservation laws must be specialized to local conservation laws. Every Hamiltonian has an infinite set of conserved quantities given by projectors to its energy eigenstates. However, this does not imply any special dynamical structure.
To explain quantum integrability, it is helpful to consider the free particle setting. Here all dynamics are one-body reducible. A quantum system is said to be integrable if the dynamics are two-body reducible. The Yang–Baxter equation is a consequence of this reducibility and leads to trace identities which provide an infinite set of conserved quantities. All of these ideas are incorporated into the quantum inverse scattering method where the algebraic Bethe ansatz can be used to obtain explicit solutions. Examples of quantum integrable models are the Lieb–Liniger model, the Hubbard model and several variations on the Heisenberg model. Some other types of quantum integrability are known in explicitly time-dependent quantum problems, such as the driven Tavis-Cummings model.
Exactly solvable models
In physics, completely integrable systems, especially in the infinite-dimensional setting, are often referred to as exactly solvable models. This obscures the distinction between integrability, in the Hamiltonian sense, and the more general dynamical systems sense.
There are also exactly solvable models in statistical mechanics, which are more closely related to quantum integrable systems than classical ones. Two closely related methods: the Bethe ansatz approach, in its modern sense, based on the Yang–Baxter equations and the quantum inverse scattering method, provide quantum analogs of the inverse spectral methods. These are equally important in the study of solvable models in statistical mechanics.
An imprecise notion of "exact solvability" as meaning: "The solutions can be expressed explicitly in terms of some previously known functions" is also sometimes used, as though this were an intrinsic property of the system itself, rather than the purely calculational feature that we happen to have some "known" functions available, in terms of which the solutions may be expressed. This notion has no intrinsic meaning, since what is meant by "known" functions very often is defined precisely by the fact that they satisfy certain given equations, and the list of such "known functions" is constantly growing. Although such a characterization of "integrability" has no intrinsic validity, it often implies the sort of regularity that is to be expected in integrable systems.
List of some well-known integrable systems
Classical mechanical systems
Calogero–Moser–Sutherland model
Central force motion (exact solutions of classical central-force problems)
Geodesic motion on ellipsoids
Harmonic oscillator
Integrable Clebsch and Steklov systems in fluids
Lagrange, Euler, and Kovalevskaya tops
Neumann oscillator
Two center Newtonian gravitational motion
Integrable lattice models
Ablowitz–Ladik lattice
Toda lattice
Volterra lattice
Integrable systems in 1 + 1 dimensions
AKNS system
Benjamin–Ono equation
Boussinesq equation (water waves)
Camassa–Holm equation
Classical Heisenberg ferromagnet model (spin chain)
Degasperis–Procesi equation
Dym equation
Garnier integrable system
Kaup–Kupershmidt equation
Krichever–Novikov equation
Korteweg–de Vries equation
Landau–Lifshitz equation (continuous spin field)
Nonlinear Schrödinger equation
Nonlinear sigma models
Sine–Gordon equation
Thirring model
Three-wave equation
Integrable PDEs in 2 + 1 dimensions
Davey–Stewartson equation
Ishimori equation
Kadomtsev–Petviashvili equation
Novikov–Veselov equation
Integrable PDEs in 3 + 1 dimensions
The Belinski–Zakharov transform generates a Lax pair for the Einstein field equations; general solutions are termed gravitational solitons, of which the Schwarzschild metric, the Kerr metric and some gravitational wave solutions are examples.
Exactly solvable statistical lattice models
8-vertex model
Gaudin model
Ising model in 1- and 2-dimensions
Ice-type model of Lieb
Quantum Heisenberg model
See also
Hitchin system
Related areas
Mathematical physics
Soliton
Painleve transcendents
Statistical mechanics
Integrable algorithm
Some key contributors (since 1965)
References
Further reading
External links
"SIDE - Symmetries and Integrability of Difference Equations", a conference devoted to the study of integrable difference equations and related topics.
Notes
Dynamical systems
Hamiltonian mechanics
Partial differential equations | Integrable system | [
"Physics",
"Mathematics"
] | 3,685 | [
"Integrable systems",
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Mechanics",
"Dynamical systems"
] |
2,574,067 | https://en.wikipedia.org/wiki/Radiative%20process | In particle physics, a radiative process refers to one elementary particle emitting another and continuing to exist. This typically happens when a fermion emits a boson such as a gluon or photon.
See also
Bremsstrahlung
Radiation
Particle physics
References | Radiative process | [
"Physics",
"Chemistry"
] | 58 | [
"Transport phenomena",
"Physical phenomena",
"Waves",
"Radiation",
"Particle physics",
"Particle physics stubs"
] |
2,574,174 | https://en.wikipedia.org/wiki/Acoustic%20emission | Acoustic emission (AE) is the phenomenon of radiation of acoustic (elastic) waves in solids that occurs when a material undergoes irreversible changes in its internal structure, for example as a result of crack formation or plastic deformation due to aging, temperature gradients, or external mechanical forces.
In particular, AE occurs during the processes of mechanical loading of materials and structures accompanied by structural changes that generate local sources of elastic waves. This results in small surface displacements of a material produced by elastic or stress waves generated when the accumulated elastic energy in a material or on its surface is released rapidly.
The mechanism of emission of the primary elastic AE pulse (an AE act or event) may have various physical origins. The figure shows the mechanism of an AE act (event) during the nucleation of a microcrack due to the breakthrough of a dislocation pile-up (a dislocation is a linear defect in the crystal lattice of a material) across a boundary in metals with a body-centered cubic (bcc) lattice under mechanical loading, as well as time diagrams of the stream of AE acts (events) (1) and the stream of recorded AE signals (2).
The AE method makes it possible to study the kinetics of processes at the earliest stages of microdeformation, dislocation nucleation and accumulation of microcracks. Roughly speaking, each crack seems to "scream" about its growth. This makes it possible to diagnose the moment of crack origin itself by the accompanying AE. In addition, for each crack that has already arisen, there is a certain critical size, depending on the properties of the material. Up to this size, the crack grows very slowly (sometimes for decades) through a huge number of small discrete jumps accompanied by AE radiation. After the crack reaches its critical size, catastrophic destruction occurs, because its further growth proceeds at a speed close to half the speed of sound in the material of the structure. By recording the emissions with special, highly sensitive equipment and measuring, in the simplest case, the intensity dNa/dt (the number of AE acts per unit time) as well as the total number of AE acts (events) Na, it is possible to estimate the crack growth rate and crack length experimentally and to predict the proximity of failure from the AE data.
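As a simplified illustration of this kind of counting (a sketch on synthetic data with an assumed sampling rate and threshold, not a description of any particular AE instrument), the code below counts threshold crossings in a recorded waveform and reports the event rate dNa/dt and the cumulative count Na over successive time windows; an accelerating rate in the later windows is the kind of signature used to warn that a crack may be approaching its critical size.

```python
import numpy as np

def count_ae_events(signal, threshold):
    """Count AE 'acts' as upward crossings of an amplitude threshold."""
    above = np.abs(signal) > threshold
    starts = np.flatnonzero(above[1:] & ~above[:-1])   # rising edges only
    return len(starts)

rng = np.random.default_rng(0)
FS = 1_000_000                              # assumed sampling rate: 1 MHz
signal = 0.01 * rng.standard_normal(FS)     # one second of background noise
# Inject synthetic AE bursts that become more frequent, mimicking crack growth.
for t0 in [0.10, 0.40, 0.60, 0.75, 0.85, 0.92, 0.97]:
    i = int(t0 * FS)
    signal[i:i + 200] += 0.2 * np.exp(-np.arange(200) / 50.0)

window_s, total = 0.25, 0
for w in range(4):
    seg = signal[int(w * window_s * FS):int((w + 1) * window_s * FS)]
    n = count_ae_events(seg, threshold=0.05)
    total += n
    print(f"window {w}: dNa/dt = {n / window_s:4.0f} events/s, cumulative Na = {total}")
```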
The waves generated by sources of AE are of practical interest in structural health monitoring (SHM), quality control, system feedback, process monitoring, and other fields. In SHM applications, AE is typically used to detect, locate, and characterise damage.
Phenomena
Acoustic emission consists of transient elastic waves within a material, caused by the rapid release of localized stress energy. An event source is the phenomenon which releases elastic energy into the material, which then propagates as an elastic wave. Acoustic emissions can be detected in frequency ranges under 1 kHz, and have been reported at frequencies up to 100 MHz, but most of the released energy is within the 1 kHz to 1 MHz range. Rapid stress-releasing events generate a spectrum of stress waves starting at 0 Hz, and typically falling off at several MHz.
The three major applications of AE techniques are: 1) source location – determine the locations where an event source occurred; 2) material mechanical performance – evaluate and characterize materials and structures; and 3) health monitoring – monitor the safe operation of a structure, for example, bridges, pressure containers, pipelines, etc.
More recent research has focused on using AE to not only locate but also to characterise the source mechanisms such as crack growth, friction, delamination, matrix cracking, etc. This would give AE the ability to tell the end user what source mechanism is present and allow them to determine whether structural repairs are necessary.
Employing proper signal processing and analysis allows for the possibility to gain a deeper understanding of the elastic wave signals and their relation to processes occurring within structures.
A significant expansion of the capabilities and an increase in the reliability of the AE diagnostic method are provided by the use of statistical methods for analyzing random event streams (for example, the random Poisson stream model).
The frequency domain representation of a signal obtained through Fast Fourier transform (FFT) provides information about the signal's magnitude and frequency content.
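For instance, a magnitude spectrum of a recorded AE burst can be computed with a standard FFT routine; the sketch below uses NumPy on a synthetic burst with an assumed sampling rate, so the numbers are illustrative only.

```python
import numpy as np

FS = 2_000_000                                   # assumed sampling rate: 2 MHz
t = np.arange(0, 0.001, 1 / FS)                  # a 1 ms record
# Synthetic AE burst: a decaying 150 kHz oscillation plus background noise.
burst = np.exp(-t / 1e-4) * np.sin(2 * np.pi * 150e3 * t)
signal = burst + 0.01 * np.random.default_rng(1).standard_normal(t.size)

spectrum = np.fft.rfft(signal * np.hanning(t.size))   # window the record, then FFT
freqs = np.fft.rfftfreq(t.size, d=1 / FS)
magnitude = np.abs(spectrum)

print(f"Dominant frequency of the burst: {freqs[np.argmax(magnitude)] / 1e3:.0f} kHz")
```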
AE can be related to an irreversible release of energy. It can also be generated from sources not involving material failure, including friction, cavitation, and impact.
Uses
The application of acoustic emission to nondestructive testing of materials typically takes place between 20 kHz and 1 MHz. Unlike conventional ultrasonic testing, AE tools are designed for monitoring acoustic emissions produced by the material during failure or stress, and not on the material's effect on externally generated waves. Part failure can be documented during unattended monitoring. The monitoring of the level of AE activity during multiple load cycles forms the basis for many AE safety inspection methods, that allow the parts undergoing inspection to remain in service.
The technique is used, for example, to study the formation of cracks during the welding process, as opposed to locating them after the weld has been formed with the more familiar ultrasonic testing technique.
In materials under active stress, such as some components of an airplane during flight, transducers mounted in an area can detect the formation of a crack at the moment it begins propagating. A group of transducers can be used to record signals and then locate the precise area of their origin by measuring the time for the sound to reach different transducers.
Long-term continuous monitoring for acoustic emissions is valuable for detecting cracks forming in pressure vessels and pipelines transporting liquids under high pressures. Standards for the use of acoustic emission for nondestructive testing of pressure vessels have been developed by the ASME, ISO, and the European Community.
This technique is used for estimation of corrosion in reinforced concrete structures.
Currently, the AE method is actively used in the tasks of monitoring and diagnostics of objects of nuclear power engineering, aviation, rocket and space technology, railway transport, historical artifacts (for example, the Tsar Bell in the Moscow Kremlin), as well as other products and objects of responsible purpose.
AE sensing can potentially be utilised to monitor the state of health of lithium-ion batteries, particularly in the detection and characterisation of parasitic mechano-electrochemical events, such as electrode electrochemical grinding, phase transitions, and gas evolution. The piezoelectric sensor is employed to receive acoustic signals released by battery materials during operation.
In addition to nondestructive testing, acoustic emission monitoring has applications in process monitoring. Applications where acoustic emission monitoring has successfully been used include detecting anomalies in fluidized beds and end points in batch granulation.
See also
Accelerometer
Seismometer
References
External links and further reading
History of the Latin American Working Group on Acoustic Emission
Wolfgang Sachse, Kusuo Yamaguchi, James Roget, AEWG (Association) books.google.co.uk 100 page entries from search criteria: AE within this text ()
Development of China Acoustic Emission Research Institute
Acoustics
Materials science
Nondestructive testing | Acoustic emission | [
"Physics",
"Materials_science",
"Engineering"
] | 1,439 | [
"Applied and interdisciplinary physics",
"Classical mechanics",
"Acoustics",
"Materials science",
"Nondestructive testing",
"Materials testing",
"nan"
] |
2,574,259 | https://en.wikipedia.org/wiki/Aggregated%20diamond%20nanorod | Aggregated diamond nanorods, or ADNRs, are a nanocrystalline form of diamond, also known as nanodiamond or hyperdiamond.
Discovery
Nanodiamond or hyperdiamond was produced by compression of graphite in 2003 by a group of researchers in Japan and in the same work, published in Nature, it was shown to be much harder than bulk diamond. Later, it was also produced by compression of fullerene and confirmed to be the hardest and least compressible known material, with an isothermal bulk modulus of 491 gigapascals (GPa), while a conventional diamond has a modulus of 442–446 GPa; these results were inferred from X-ray diffraction data, which also indicated that ADNRs are 0.3% denser than regular diamond. The same group later described ADNRs as "having a hardness and Young's modulus comparable to that of natural diamond, but with 'superior wear resistance'".
Hardness
A <111> surface (normal to the largest diagonal of a cube) of pure diamond has a hardness value of 167±6 GPa when scratched with a nanodiamond tip, while the nanodiamond sample itself has a value of 310 GPa when tested with a nanodiamond tip. However, the test only works properly with a tip made of harder material than the sample being tested due to cracking. This means that the true value for nanodiamond is likely lower than 310 GPa. Due to its hardness, a hyperdiamond could possibly exceed 10 on the Mohs scale of mineral hardness.
Synthesis
ADNRs (hyperdiamonds/nanodiamonds) are produced by compressing fullerite powder—a solid form of allotropic carbon fullerene—by either of two somewhat similar methods. One uses a diamond anvil cell and applied pressure ~37 GPa without heating the cell. In another method, fullerite is compressed to lower pressures (2–20 GPa) and then heated to a temperature in the range of . Extreme hardness of what now appears likely to have been nanodiamonds was reported by researchers in the 1990s. The material is a series of interconnected diamond nanorods, with diameters of between 5 and 20 nanometres and lengths of around 1 micrometre each.
Nanodiamond aggregates ca. 1 mm in size also form in nature, from graphite upon meteoritic impact, such as that of the Popigai impact structure in Siberia, Russia.
See also
References
External links
The invention of aggregated diamond nanorods at Physorg.com
Nanomaterials
Allotropes of carbon
Superhard materials | Aggregated diamond nanorod | [
"Physics",
"Chemistry",
"Materials_science"
] | 554 | [
"Allotropes of carbon",
"Allotropes",
"Materials",
"Superhard materials",
"Nanotechnology",
"Nanomaterials",
"Matter"
] |
2,574,377 | https://en.wikipedia.org/wiki/Diffusion-weighted%20magnetic%20resonance%20imaging | Diffusion-weighted magnetic resonance imaging (DWI or DW-MRI) is the use of specific MRI sequences as well as software that generates images from the resulting data that uses the diffusion of water molecules to generate contrast in MR images. It allows the mapping of the diffusion process of molecules, mainly water, in biological tissues, in vivo and non-invasively. Molecular diffusion in tissues is not random, but reflects interactions with many obstacles, such as macromolecules, fibers, and membranes. Water molecule diffusion patterns can therefore reveal microscopic details about tissue architecture, either normal or in a diseased state. A special kind of DWI, diffusion tensor imaging (DTI), has been used extensively to map white matter tractography in the brain.
Introduction
In diffusion weighted imaging (DWI), the intensity of each image element (voxel) reflects the best estimate of the rate of water diffusion at that location. Because the mobility of water is driven by thermal agitation and highly dependent on its cellular environment, the hypothesis behind DWI is that findings may indicate (early) pathologic change. For instance, DWI is more sensitive to early changes after a stroke than more traditional MRI measurements such as T1 or T2 relaxation rates. A variant of diffusion weighted imaging, diffusion spectrum imaging (DSI), was used in deriving the Connectome data sets; DSI is a variant of diffusion-weighted imaging that is sensitive to intra-voxel heterogeneities in diffusion directions caused by crossing fiber tracts and thus allows more accurate mapping of axonal trajectories than other diffusion imaging approaches.
Diffusion-weighted images are very useful to diagnose vascular strokes in the brain. It is also used more and more in the staging of non-small-cell lung cancer, where it is a serious candidate to replace positron emission tomography as the 'gold standard' for this type of disease. Diffusion tensor imaging is being developed for studying the diseases of the white matter of the brain as well as for studies of other body tissues (see below). DWI is most applicable when the tissue of interest is dominated by isotropic water movement e.g. grey matter in the cerebral cortex and major brain nuclei, or in the body—where the diffusion rate appears to be the same when measured along any axis. However, DWI also remains sensitive to T1 and T2 relaxation. To entangle diffusion and relaxation effects on image contrast, one may obtain quantitative images of the diffusion coefficient, or more exactly the apparent diffusion coefficient (ADC). The ADC concept was introduced to take into account the fact that the diffusion process is complex in biological tissues and reflects several different mechanisms.
Diffusion tensor imaging (DTI) is important when a tissue—such as the neural axons of white matter in the brain or muscle fibers in the heart—has an internal fibrous structure analogous to the anisotropy of some crystals. Water will then diffuse more rapidly in the direction aligned with the internal structure (axial diffusion), and more slowly as it moves perpendicular to the preferred direction (radial diffusion). This also means that the measured rate of diffusion will differ depending on the direction from which an observer is looking.
Diffusion Basis Spectrum Imaging (DBSI) further separates DTI signals into discrete anisotropic diffusion tensors and a spectrum of isotropic diffusion tensors to better differentiate sub-voxel cellular structures. For example, anisotropic diffusion tensors correlate to axonal fibers, while low isotropic diffusion tensors correlate to cells and high isotropic diffusion tensors correlate to larger structures (such as the lumen or brain ventricles). DBSI has been shown to differentiate some types of brain tumors and multiple sclerosis with higher specificity and sensitivity than conventional DTI. DBSI has also been useful in determining microstructure properties of the brain.
Traditionally, in diffusion-weighted imaging (DWI), three gradient directions are applied, sufficient to estimate the trace of the diffusion tensor or 'average diffusivity', a putative measure of edema. Clinically, trace-weighted images have proven to be very useful to diagnose vascular strokes in the brain, by early detection (within a couple of minutes) of the hypoxic edema.
More extended DTI scans derive neural tract directional information from the data using 3D or multidimensional vector algorithms based on six or more gradient directions, sufficient to compute the diffusion tensor. The diffusion tensor model is a rather simple model of the diffusion process, assuming homogeneity and linearity of the diffusion within each image voxel. From the diffusion tensor, diffusion anisotropy measures such as the fractional anisotropy (FA), can be computed. Moreover, the principal direction of the diffusion tensor can be used to infer the white-matter connectivity of the brain (i.e. tractography; trying to see which part of the brain is connected to which other part).
Recently, more advanced models of the diffusion process have been proposed that aim to overcome the weaknesses of the diffusion tensor model. Amongst others, these include q-space imaging and generalized diffusion tensor imaging.
Mechanism
Diffusion imaging is an MRI method that produces in vivo magnetic resonance images of biological tissues sensitized with the local characteristics of molecular diffusion, generally water (but other moieties can also be investigated using MR spectroscopic approaches).
MRI can be made sensitive to the motion of molecules. Regular MRI acquisition utilizes the behavior of protons in water to generate contrast between clinically relevant features of a particular subject. The versatile nature of MRI is due to this capability of producing contrast related to the structure of tissues at the microscopic level. In a typical T1-weighted image, water molecules in a sample are excited with the imposition of a strong magnetic field. This causes many of the protons in water molecules to precess simultaneously, producing signals in MRI. In T2-weighted images, contrast is produced by measuring the loss of coherence or synchrony between the water protons. When water is in an environment where it can freely tumble, relaxation tends to take longer. In certain clinical situations, this can generate contrast between an area of pathology and the surrounding healthy tissue.
To sensitize MRI images to diffusion, the magnetic field strength (B1) is varied linearly by a pulsed field gradient. Since precession is proportional to the magnet strength, the protons begin to precess at different rates, resulting in dispersion of the phase and signal loss. Another gradient pulse is applied in the same magnitude but with opposite direction to refocus or rephase the spins. The refocusing will not be perfect for protons that have moved during the time interval between the pulses, and the signal measured by the MRI machine is reduced. This "field gradient pulse" method was initially devised for NMR by Stejskal and Tanner who derived the reduction in signal due to the application of the pulse gradient related to the amount of diffusion that is occurring through the following equation:
$$ \frac{S}{S_0} = e^{-\gamma^2 G^2 \delta^2 \left(\Delta - \delta/3\right) D} $$
where S0 is the signal intensity without the diffusion weighting, S is the signal with the gradient, γ is the gyromagnetic ratio, G is the strength of the gradient pulse, δ is the duration of the pulse, Δ is the time between the two pulses, and finally, D is the diffusion coefficient.
In order to localize this signal attenuation to get images of diffusion one has to combine the pulsed magnetic field gradient pulses used for MRI (aimed at localization of the signal, but those gradient pulses are too weak to produce a diffusion related attenuation) with additional "motion-probing" gradient pulses, according to the Stejskal and Tanner method. This combination is not trivial, as cross-terms arise between all gradient pulses. The equation set by Stejskal and Tanner then becomes inaccurate and the signal attenuation must be calculated, either analytically or numerically, integrating all gradient pulses present in the MRI sequence and their interactions. The result quickly becomes very complex given the many pulses present in the MRI sequence, and as a simplification, Le Bihan suggested gathering all the gradient terms in a "b factor" (which depends only on the acquisition parameters) so that the signal attenuation simply becomes:
$$ S = S_0 \, e^{-bD} $$
Also, the diffusion coefficient, D, is replaced by an apparent diffusion coefficient, ADC, to indicate that the diffusion process is not free in tissues, but hindered and modulated by many mechanisms (restriction in closed spaces, tortuosity around obstacles, etc.) and that other sources of IntraVoxel Incoherent Motion (IVIM) such as blood flow in small vessels or cerebrospinal fluid in ventricles also contribute to the signal attenuation.
At the end, images are "weighted" by the diffusion process: In those diffusion-weighted images (DWI) the signal is more attenuated the faster the diffusion and the larger the b factor is. However, those diffusion-weighted images are still also sensitive to T1 and T2 relaxivity contrast, which can sometimes be confusing. It is possible to calculate "pure" diffusion maps (or more exactly ADC maps where the ADC is the sole source of contrast) by collecting images with at least two different values, b0 and b1, of the b factor according to:
$$ \mathrm{ADC} = \frac{\ln\left(S(b_0)/S(b_1)\right)}{b_1 - b_0} $$
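As a minimal numerical sketch of this two-point ADC calculation (the array values and b-values below are illustrative assumptions, not data from the article), the code computes a voxel-wise ADC map from two diffusion-weighted images:

```python
import numpy as np

def adc_map(s_b0, s_b1, b0=0.0, b1=1000.0, eps=1e-12):
    """Voxel-wise apparent diffusion coefficient from two DWI acquisitions.

    s_b0, s_b1: signal images acquired with b-values b0 and b1 (s/mm^2).
    Returns the ADC in mm^2/s, assuming mono-exponential signal decay.
    """
    ratio = np.clip(s_b0, eps, None) / np.clip(s_b1, eps, None)
    return np.log(ratio) / (b1 - b0)

# Tiny synthetic example: free water (~3.0e-3 mm^2/s) vs. restricted tissue (~0.7e-3).
s0 = np.array([[1000.0, 1000.0]])
s1 = s0 * np.exp(-1000.0 * np.array([[3.0e-3, 0.7e-3]]))
print(adc_map(s0, s1))   # recovers approximately [[0.003, 0.0007]]
```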
Although this ADC concept has been extremely successful, especially for clinical applications, it has been challenged recently, as new, more comprehensive models of diffusion in biological tissues have been introduced. Those models have been made necessary, as diffusion in tissues is not free. In this condition, the ADC seems to depend on the choice of b values (the ADC seems to decrease when using larger b values), as the plot of ln(S/S0) is not linear with the b factor, as expected from the above equations. This deviation from a free diffusion behavior is what makes diffusion MRI so successful, as the ADC is very sensitive to changes in tissue microstructure. On the other hand, modeling diffusion in tissues is becoming very complex. Among the most popular models are the biexponential model, which assumes the presence of two water pools in slow or intermediate exchange, and the cumulant-expansion (also called kurtosis) model,
which does not necessarily require the presence of two pools.
Diffusion model
Given the concentration φ and flux J, Fick's first law gives a relationship between the flux and the concentration gradient:
$$ \mathbf{J} = -D\,\nabla\phi $$
where D is the diffusion coefficient. Then, given conservation of mass, the continuity equation relates the time derivative of the concentration with the divergence of the flux:
$$ \frac{\partial \phi}{\partial t} = -\nabla \cdot \mathbf{J} $$
Putting the two together, we get the diffusion equation:
$$ \frac{\partial \phi}{\partial t} = \nabla \cdot \left( D\,\nabla\phi \right) $$
Magnetization dynamics
With no diffusion present, the change in nuclear magnetization over time is given by the classical Bloch equation
$$ \frac{d\mathbf{M}}{dt} = \gamma\,\mathbf{M}\times\mathbf{B} - \frac{M_x\hat{x} + M_y\hat{y}}{T_2} - \frac{(M_z - M_0)\hat{z}}{T_1} $$
which has terms for precession, T2 relaxation, and T1 relaxation.
In 1956, H.C. Torrey mathematically showed how the Bloch equations for magnetization would change with the addition of diffusion. Torrey modified Bloch's original description of transverse magnetization to include diffusion terms and the application of a spatially varying gradient. Since the magnetization is a vector, there are 3 diffusion equations, one for each dimension. The Bloch-Torrey equation is:
$$ \frac{d\mathbf{M}}{dt} = \gamma\,\mathbf{M}\times\mathbf{B} - \frac{M_x\hat{x} + M_y\hat{y}}{T_2} - \frac{(M_z - M_0)\hat{z}}{T_1} + \nabla\cdot\mathbf{D}\,\nabla\mathbf{M} $$
where D is now the diffusion tensor.
For the simplest case, where the diffusion is isotropic, the diffusion tensor is a multiple of the identity:
$$ \mathbf{D} = D\,\mathbf{I} $$
then the Bloch-Torrey equation will have the solution
$$ M_{xy}(t) = M_{xy}(0)\, e^{-t/T_2}\, e^{-D\,\gamma^{2}\int_{0}^{t}\left(\int_{0}^{t'}G(t'')\,dt''\right)^{2}dt'} $$
The exponential term will be referred to as the attenuation A. Anisotropic diffusion will have a similar solution for the diffusion tensor, except that what will be measured is the apparent diffusion coefficient (ADC). In general, the attenuation is:
$$ A = e^{-\sum_{i,j} b_{ij} D_{ij}} $$
where the b_{ij} terms incorporate the gradient fields G_x, G_y, and G_z.
Grayscale
The standard grayscale of DWI images is to represent increased diffusion restriction as brighter.
ADC image
An apparent diffusion coefficient (ADC) image, or an ADC map, is an MRI image that more specifically shows diffusion than conventional DWI, by eliminating the T2 weighting that is otherwise inherent to conventional DWI. ADC imaging does so by acquiring multiple conventional DWI images with different amounts of DWI weighting, and the change in signal is proportional to the rate of diffusion. In contrast to DWI images, the standard grayscale of ADC images is to represent a smaller magnitude of diffusion as darker.
Cerebral infarction leads to diffusion restriction, and the difference between images with various DWI weighting will therefore be minor, leading to an ADC image with low signal in the infarcted area. A decreased ADC may be detected minutes after a cerebral infarction. The high signal of infarcted tissue on conventional DWI is a result of its partial T2 weighting.
Diffusion tensor imaging
Diffusion tensor imaging (DTI) is a magnetic resonance imaging technique that enables the measurement of the restricted diffusion of water in tissue in order to produce neural tract images instead of using this data solely for the purpose of assigning contrast or colors to pixels in a cross-sectional image. It also provides useful structural information about muscle—including heart muscle—as well as other tissues such as the prostate.
In DTI, each voxel has one or more pairs of parameters: a rate of diffusion and a preferred direction of diffusion—described in terms of three-dimensional space—for which that parameter is valid. The properties of each voxel of a single DTI image are usually calculated by vector or tensor math from six or more different diffusion weighted acquisitions, each obtained with a different orientation of the diffusion sensitizing gradients. In some methods, hundreds of measurements—each making up a complete image—are made to generate a single resulting calculated image data set. The higher information content of a DTI voxel makes it extremely sensitive to subtle pathology in the brain. In addition the directional information can be exploited at a higher level of structure to select and follow neural tracts through the brain—a process called tractography.
A more precise statement of the image acquisition process is that the image-intensities at each position are attenuated, depending on the strength (b-value) and direction of the so-called magnetic diffusion gradient, as well as on the local microstructure in which the water molecules diffuse. The more attenuated the image is at a given position, the greater diffusion there is in the direction of the diffusion gradient. In order to measure the tissue's complete diffusion profile, one needs to repeat the MR scans, applying different directions (and possibly strengths) of the diffusion gradient for each scan.
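As an illustrative sketch of how such measurements can be turned into a tensor estimate (a common linear least-squares approach; the gradient directions, b-values, and signals below are made up rather than specified by this article), one can solve ln(S/S0) = −b gᵀDg for the six unique tensor elements:

```python
import numpy as np

def fit_diffusion_tensor(signals, s0, bvals, bvecs):
    """Least-squares fit of the six unique elements of D from DWI signals.

    Model: ln(S_k / S0) = -b_k * g_k^T D g_k for unit gradient directions g_k.
    Returns D as a symmetric 3x3 matrix.
    """
    y = np.log(np.asarray(signals) / s0)              # one value per acquisition
    rows = []
    for b, (gx, gy, gz) in zip(bvals, bvecs):
        rows.append(-b * np.array([gx*gx, gy*gy, gz*gz, 2*gx*gy, 2*gx*gz, 2*gy*gz]))
    dxx, dyy, dzz, dxy, dxz, dyz = np.linalg.lstsq(np.array(rows), y, rcond=None)[0]
    return np.array([[dxx, dxy, dxz],
                     [dxy, dyy, dyz],
                     [dxz, dyz, dzz]])

# Synthetic test: an anisotropic tensor (fiber along x) and six assumed directions.
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])            # mm^2/s
bvecs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvals = np.full(len(bvecs), 1000.0)
signals = [1000.0 * np.exp(-b * g @ D_true @ g) for b, g in zip(bvals, bvecs)]
print(np.round(fit_diffusion_tensor(signals, 1000.0, bvals, bvecs), 6))
```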
Mathematical foundation—tensors
Diffusion MRI relies on the mathematics and physical interpretations of the geometric quantities known as tensors. Only a special case of the general mathematical notion is relevant to imaging, which is based on the concept of a symmetric matrix. Diffusion itself is tensorial, but in many cases the objective is not really about trying to study brain diffusion per se, but rather just trying to take advantage of diffusion anisotropy in white matter for the purpose of finding the orientation of the axons and the magnitude or degree of anisotropy. Tensors have a real, physical existence in a material or tissue so that they do not move when the coordinate system used to describe them is rotated. There are numerous different possible representations of a tensor (of rank 2), but among these, this discussion focuses on the ellipsoid because of its physical relevance to diffusion and because of its historical significance in the development of diffusion anisotropy imaging in MRI.
The following matrix displays the components of the diffusion tensor:
$$ \mathbf{D} = \begin{bmatrix} D_{xx} & D_{xy} & D_{xz} \\ D_{yx} & D_{yy} & D_{yz} \\ D_{zx} & D_{zy} & D_{zz} \end{bmatrix} $$
The same matrix of numbers can have a simultaneous second use to describe the shape and orientation of an ellipse and the same matrix of numbers can be used simultaneously in a third way for matrix mathematics to sort out eigenvectors and eigenvalues as explained below.
Physical tensors
The idea of a tensor in physical science evolved from attempts to describe the quantity of physical properties. The first properties they were applied to were those that can be described by a single number, such as temperature. Properties that can be described this way are called scalars; these can be considered tensors of rank 0, or 0th-order tensors. Tensors can also be used to describe quantities that have directionality, such as mechanical force. These quantities require specification of both magnitude and direction, and are often represented with a vector. A three-dimensional vector can be described with three components: its projection on the x, y, and z axes. Vectors of this sort can be considered tensors of rank 1, or 1st-order tensors.
A tensor is often a physical or biophysical property that determines the relationship between two vectors. When a force is applied to an object, movement can result. If the movement is in a single direction, the transformation can be described using a vector—a tensor of rank 1. However, in a tissue, diffusion leads to movement of water molecules along trajectories that proceed along multiple directions over time, leading to a complex projection onto the Cartesian axes. This pattern is reproducible if the same conditions and forces are applied to the same tissue in the same way. If there is an internal anisotropic organization of the tissue that constrains diffusion, then this fact will be reflected in the pattern of diffusion. The relationship between the properties of driving force that generate diffusion of the water molecules and the resulting pattern of their movement in the tissue can be described by a tensor. The collection of molecular displacements of this physical property can be described with nine components—each one associated with a pair of axes xx, yy, zz, xy, yx, xz, zx, yz, zy. These can be written as a matrix similar to the one at the start of this section.
Diffusion from a point source in the anisotropic medium of white matter behaves in a similar fashion. The first pulse of the Stejskal–Tanner diffusion gradient effectively labels some water molecules and the second pulse effectively shows their displacement due to diffusion. Each gradient direction applied measures the movement along the direction of that gradient. Measurements from six or more gradient directions are combined to obtain all the values needed to fill in the matrix, assuming it is symmetric above and below the diagonal.
In 1848, Henri Hureau de Sénarmont applied a heated point to a polished crystal surface that had been coated with wax. In some materials that had "isotropic" structure, a ring of melt would spread across the surface in a circle. In anisotropic crystals the spread took the form of an ellipse. In three dimensions this spread is an ellipsoid. As Adolf Fick showed in the 1850s, diffusion exhibits many of the same patterns as those seen in the transfer of heat.
Mathematics of ellipsoids
At this point, it is helpful to consider the mathematics of ellipsoids. An ellipsoid can be described by the formula ax² + by² + cz² = 1. This equation describes a quadric surface. The relative values of a, b, and c determine if the quadric describes an ellipsoid or a hyperboloid.
As it turns out, three more components can be added as follows: ax² + by² + cz² + dxy + eyz + fzx = 1. Many combinations of a, b, c, d, e, and f still describe ellipsoids, but the additional components (d, e, f) describe the rotation of the ellipsoid relative to the orthogonal axes of the Cartesian coordinate system. These six variables can be represented by a matrix similar to the tensor matrix defined at the start of this section (since diffusion is symmetric, then we only need six instead of nine components—the components below the diagonal elements of the matrix are the same as the components above the diagonal). This is what is meant when it is stated that the components of a matrix of a second order tensor can be represented by an ellipsoid—if the diffusion values of the six terms of the quadric ellipsoid are placed into the matrix, this generates an ellipsoid angled off the orthogonal grid. Its shape will be more elongated if the relative anisotropy is high.
When the ellipsoid/tensor is represented by a matrix, we can apply a useful technique from standard matrix mathematics and linear algebra—that is to "diagonalize" the matrix. This has two important meanings in imaging. The idea is that there are two equivalent ellipsoids—of identical shape and size but with different orientation. The first one is the measured diffusion ellipsoid sitting at an angle determined by the axons, and the second one is perfectly aligned with the three Cartesian axes. The term "diagonalize" refers to the three components of the matrix along a diagonal from upper left to lower right (the diagonal components of the matrix at the start of this section). The variables a, b, and c are along the diagonal, but the variables d, e and f are "off diagonal". It then becomes possible to do a vector processing step in which we rewrite our matrix and replace it with a new matrix multiplied by three different vectors of unit length (length=1.0). The matrix is diagonalized because the off-diagonal components are all now zero. The rotation angles required to get to this equivalent position now appear in the three vectors and can be read out as the x, y, and z components of each of them. Those three vectors are called "eigenvectors" or characteristic vectors. They contain the orientation information of the original ellipsoid. The three axes of the ellipsoid are now directly along the main orthogonal axes of the coordinate system so we can easily infer their lengths. These lengths are the eigenvalues or characteristic values.
Diagonalization of a matrix is done by finding a second matrix that it can be multiplied with followed by multiplication by the inverse of the second matrix—wherein the result is a new matrix in which three diagonal (xx, yy, zz) components have numbers in them but the off-diagonal components (xy, yz, zx) are 0. The second matrix provides eigenvector information.
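As a minimal illustration of this diagonalization step (the tensor values below are made up for the example and do not come from a real acquisition), the eigendecomposition of a symmetric diffusion tensor can be computed with standard linear-algebra routines:

```python
import numpy as np

# Hypothetical symmetric diffusion tensor for one voxel (units of 10^-3 mm^2/s).
D = np.array([[1.70, 0.20, 0.10],
              [0.20, 0.50, 0.05],
              [0.10, 0.05, 0.40]])

# eigh is the routine for symmetric matrices; eigenvalues come back in
# ascending order along with the matching orthonormal eigenvectors.
evals, evecs = np.linalg.eigh(D)

# Reorder so that lambda1 >= lambda2 >= lambda3.
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

print("eigenvalues:", evals)                 # lengths of the ellipsoid axes
print("principal direction:", evecs[:, 0])   # estimated fibre orientation

# Rotating D into the eigenvector frame leaves only the diagonal entries,
# which is exactly what "diagonalizing" the matrix means.
print(np.round(evecs.T @ D @ evecs, 6))
```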
Measures of anisotropy and diffusivity
In present-day clinical neurology, various brain pathologies may be best detected by looking at particular measures of anisotropy and diffusivity. The underlying physical process of diffusion causes a group of water molecules to move out from a central point, and gradually reach the surface of an ellipsoid if the medium is anisotropic (it would be the surface of a sphere for an isotropic medium). The ellipsoid formalism functions also as a mathematical method of organizing tensor data. Measurement of an ellipsoid tensor further permits a retrospective analysis, to gather information about the process of diffusion in each voxel of the tissue.
In an isotropic medium such as cerebrospinal fluid, water molecules are moving due to diffusion and they move at equal rates in all directions. By knowing the detailed effects of diffusion gradients we can generate a formula that allows us to convert the signal attenuation of an MRI voxel into a numerical measure of diffusion—the diffusion coefficient D. When various barriers and restricting factors such as cell membranes and microtubules interfere with the free diffusion, we are measuring an "apparent diffusion coefficient", or ADC, because the measurement misses all the local effects and treats the attenuation as if all the movement rates were solely due to Brownian motion. The ADC in anisotropic tissue varies depending on the direction in which it is measured. Diffusion is fast along the length of (parallel to) an axon, and slower perpendicularly across it.
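As a sketch of that conversion, assuming the standard monoexponential signal model S = S0·exp(−b·ADC) and made-up signal values:

```python
import numpy as np

# Hypothetical voxel intensities: S0 without diffusion weighting,
# S with a diffusion gradient of b = 1000 s/mm^2 along one direction.
S0, S, b = 1200.0, 620.0, 1000.0

# Monoexponential attenuation model: S = S0 * exp(-b * ADC)
adc = -np.log(S / S0) / b
print(f"ADC = {adc:.2e} mm^2/s")  # about 6.6e-4 mm^2/s here
```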
Once we have measured the voxel from six or more directions and corrected for attenuations due to T2 and T1 effects, we can use information from our calculated ellipsoid tensor to describe what is happening in the voxel. If you consider an ellipsoid sitting at an angle in a Cartesian grid, you can consider the projection of that ellipsoid onto the three axes. The three projections can give you the ADC along each of the three axes, ADCx, ADCy, ADCz. This leads to the idea of describing the average diffusivity in the voxel, which will simply be
$$\mathrm{ADC}_i = \frac{\mathrm{ADC}_x + \mathrm{ADC}_y + \mathrm{ADC}_z}{3}.$$
We use the i subscript to signify that this is what the isotropic diffusion coefficient would be with the effects of anisotropy averaged out.
The ellipsoid itself has a principal long axis and then two more small axes that describe its width and depth. All three of these are perpendicular to each other and cross at the center point of the ellipsoid. We call the axes in this setting eigenvectors and the measures of their lengths eigenvalues. The lengths are symbolized by the Greek letter λ. The long one pointing along the axon direction will be λ1 and the two small axes will have lengths λ2 and λ3. In the setting of the DTI tensor ellipsoid, we can consider each of these as a measure of the diffusivity along each of the three primary axes of the ellipsoid. This is a little different from the ADC since that was a projection on the axis, while λ is an actual measurement of the ellipsoid we have calculated.
The diffusivity along the principal axis, λ1, is also called the longitudinal diffusivity or the axial diffusivity or even the parallel diffusivity λ∥. Historically, this is closest to what Richards originally measured with the vector length in 1991. The diffusivities in the two minor axes are often averaged to produce a measure of radial diffusivity
$$\lambda_\perp = \frac{\lambda_2 + \lambda_3}{2}.$$
This quantity is an assessment of the degree of restriction due to membranes and other effects and proves to be a sensitive measure of degenerative pathology in some neurological conditions. It can also be called the perpendicular diffusivity (λ⊥).
Another commonly used measure that summarizes the total diffusivity is the Trace—which is the sum of the three eigenvalues,
$$\mathrm{Tr}(\Lambda) = \lambda_1 + \lambda_2 + \lambda_3,$$
where Λ is a diagonal matrix with eigenvalues λ1, λ2 and λ3 on its diagonal.
If we divide this sum by three we have the mean diffusivity,
$$\mathrm{MD} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3},$$
which equals ADCi since
$$\mathrm{Tr}(\Lambda) = \mathrm{Tr}\!\left(\mathbf{R}^{\mathsf{T}}\,\mathbf{D}\,\mathbf{R}\right) = \mathrm{Tr}(\mathbf{D}) = \mathrm{ADC}_x + \mathrm{ADC}_y + \mathrm{ADC}_z,$$
where R is the matrix of eigenvectors and D is the diffusion tensor.
Aside from describing the amount of diffusion, it is often important to describe the relative degree of anisotropy in a voxel. At one extreme would be the sphere of isotropic diffusion and at the other extreme would be a cigar or pencil shaped very thin prolate spheroid. The simplest measure is obtained by dividing the longest axis of the ellipsoid by the shortest = (λ1/λ3). However, this proves to be very susceptible to measurement noise, so increasingly complex measures were developed to capture the measure while minimizing the noise. An important element of these calculations is the sum of squares of the diffusivity differences = (λ1 − λ2)² + (λ1 − λ3)² + (λ2 − λ3)². We use the square root of the sum of squares to obtain a sort of weighted average—dominated by the largest component. One objective is to keep the number near 0 if the voxel is spherical but near 1 if it is elongate. This leads to the fractional anisotropy or FA, which is the square root of the sum of squares (SRSS) of the diffusivity differences, divided by the SRSS of the diffusivities. When the second and third axes are small relative to the principal axis, the number in the numerator approaches √2 times the number in the denominator. We therefore multiply by √(1/2) so that FA has a maximum value of 1. The whole formula for FA looks like this:
$$\mathrm{FA} = \sqrt{\frac{1}{2}}\;\frac{\sqrt{(\lambda_1-\lambda_2)^2 + (\lambda_2-\lambda_3)^2 + (\lambda_3-\lambda_1)^2}}{\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}},$$
where λ1, λ2 and λ3 are the three eigenvalues of the diffusion tensor (the lengths of the three principal axes of the ellipsoid).
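A short numerical sketch of the mean diffusivity and FA defined above, using illustrative eigenvalues rather than measured ones:

```python
import numpy as np

def md_and_fa(l1, l2, l3):
    """Mean diffusivity and fractional anisotropy from the three eigenvalues."""
    md = (l1 + l2 + l3) / 3.0
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return md, np.sqrt(0.5) * num / den

print(md_and_fa(1.7, 0.3, 0.3))  # elongated ellipsoid: high FA (about 0.80)
print(md_and_fa(1.0, 1.0, 1.0))  # sphere: FA = 0
```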
The fractional anisotropy can also be separated into linear, planar, and spherical measures depending on the "shape" of the diffusion ellipsoid. For example, a "cigar" shaped prolate ellipsoid indicates a strongly linear anisotropy, a "flying saucer" or oblate spheroid represents diffusion in a plane, and a sphere is indicative of isotropic diffusion, equal in all directions. If the eigenvalues of the diffusion tensor are sorted such that λ1 ≥ λ2 ≥ λ3 ≥ 0, then the measures can be calculated as follows:
For the linear case, where λ1 ≫ λ2 ≈ λ3,
$$C_l = \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2 + \lambda_3}$$
For the planar case, where λ1 ≈ λ2 ≫ λ3,
$$C_p = \frac{2\,(\lambda_2 - \lambda_3)}{\lambda_1 + \lambda_2 + \lambda_3}$$
For the spherical case, where λ1 ≈ λ2 ≈ λ3,
$$C_s = \frac{3\,\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3}$$
Each measure lies between 0 and 1 and they sum to unity. An additional anisotropy measure can be used to describe the deviation from the spherical case:
$$C_a = C_l + C_p = 1 - C_s$$
There are other metrics of anisotropy used, including the relative anisotropy (RA):
$$\mathrm{RA} = \frac{\sqrt{(\lambda_1-\hat{\lambda})^2 + (\lambda_2-\hat{\lambda})^2 + (\lambda_3-\hat{\lambda})^2}}{\sqrt{3}\,\hat{\lambda}}, \qquad \hat{\lambda} = \frac{\lambda_1+\lambda_2+\lambda_3}{3},$$
and the volume ratio (VR):
$$\mathrm{VR} = \frac{\lambda_1\,\lambda_2\,\lambda_3}{\hat{\lambda}^3}.$$
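The following sketch collects these measures in one place, using the normalizations written above and assuming the eigenvalues are already sorted λ1 ≥ λ2 ≥ λ3 (the values are illustrative only):

```python
import numpy as np

def shape_and_anisotropy(l1, l2, l3):
    """Linear/planar/spherical coefficients plus RA and VR for sorted eigenvalues."""
    s = l1 + l2 + l3
    mean = s / 3.0
    cl = (l1 - l2) / s                 # linear coefficient
    cp = 2.0 * (l2 - l3) / s           # planar coefficient
    cs = 3.0 * l3 / s                  # spherical coefficient; cl + cp + cs = 1
    ra = np.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2) / (np.sqrt(3.0) * mean)
    vr = (l1 * l2 * l3) / mean ** 3
    return cl, cp, cs, ra, vr

print(shape_and_anisotropy(1.7, 0.3, 0.3))  # cigar-like ellipsoid: cl dominates
print(shape_and_anisotropy(1.0, 1.0, 0.2))  # saucer-like ellipsoid: cp dominates
```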
Applications
The most common application of conventional DWI (without DTI) is in acute brain ischemia. DWI directly visualizes the ischemic necrosis in cerebral infarction in the form of a cytotoxic edema, appearing as a high DWI signal within minutes of arterial occlusion. In combination with perfusion MRI, which maps both the infarcted core and the surrounding region of reduced perfusion, the salvageable penumbra can be estimated from the mismatch between the perfusion deficit and the DWI lesion.
Another application area of DWI is in oncology. Tumors are in many instances highly cellular, giving restricted diffusion of water, and therefore appear with a relatively high signal intensity in DWI. DWI is commonly used to detect and stage tumors, and also to monitor tumor response to treatment over time. DWI can also be collected to visualize the whole body using a technique called 'diffusion-weighted whole-body imaging with background body signal suppression' (DWIBS). Some more specialized diffusion MRI techniques such as diffusion kurtosis imaging (DKI) have also been shown to predict the response of cancer patients to chemotherapy treatment.
The principal application is in the imaging of white matter where the location, orientation, and anisotropy of the tracts can be measured. The architecture of the axons in parallel bundles, and their myelin sheaths, facilitate the diffusion of the water molecules preferentially along their main direction. Such preferentially oriented diffusion is called anisotropic diffusion.
The imaging of this property is an extension of diffusion MRI. If a series of diffusion gradients (i.e. magnetic field variations in the MRI magnet) are applied that can determine at least 3 directional vectors (use of 6 different gradients is the minimum and additional gradients improve the accuracy for "off-diagonal" information), it is possible to calculate, for each voxel, a tensor (i.e. a symmetric positive definite 3×3 matrix) that describes the 3-dimensional shape of diffusion. The fiber direction is indicated by the tensor's main eigenvector. This vector can be color-coded, yielding a cartography of the tracts' position and direction (red for left-right, blue for superior-inferior, and green for anterior-posterior). The brightness is weighted by the fractional anisotropy which is a scalar measure of the degree of anisotropy in a given voxel. Mean diffusivity (MD) or trace is a scalar measure of the total diffusion within a voxel. These measures are commonly used clinically to localize white matter lesions that do not show up on other forms of clinical MRI.
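A minimal sketch of the colour-coding just described; display conventions differ between software packages, so the axis-to-colour assignment below is only one common choice:

```python
import numpy as np

def direction_color(principal_evec, fa):
    """RGB triple from the principal eigenvector, weighted by FA.

    Assumed convention: red = left-right (x), green = anterior-posterior (y),
    blue = superior-inferior (z).
    """
    v = np.abs(np.asarray(principal_evec, dtype=float))  # sign of the direction is irrelevant
    v /= np.linalg.norm(v)                               # force unit length
    return np.clip(fa, 0.0, 1.0) * v                     # dim the colour in isotropic voxels

print(direction_color([0.95, 0.20, 0.24], fa=0.8))
```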
Applications in the brain:
Tract-specific localization of white matter lesions such as trauma, and assessment of the severity of diffuse traumatic brain injury. The localization of tumors in relation to the white matter tracts (infiltration, deflection) has been one of the most important initial applications. In surgical planning for some types of brain tumors, surgery is aided by knowing the proximity and relative position of the corticospinal tract and a tumor.
Diffusion tensor imaging data can be used to perform tractography within white matter. Fiber tracking algorithms can be used to track a fiber along its whole length (e.g. the corticospinal tract, through which motor information transits from the motor cortex to the spinal cord and the peripheral nerves). Tractography is a useful tool for measuring deficits in white matter, such as in aging. Its estimation of fiber orientation and strength is increasingly accurate, and it has widespread potential implications in the fields of cognitive neuroscience and neurobiology.
The use of DTI for the assessment of white matter in development, pathology and degeneration has been the focus of over 2,500 research publications since 2005. It promises to be very helpful in distinguishing Alzheimer's disease from other types of dementia. Applications in brain research include the investigation of neural networks in vivo, as well as in connectomics.
Applications for peripheral nerves:
Brachial plexus: DTI can differentiate normal nerves from traumatically injured nerve roots.
Cubital Tunnel Syndrome: metrics derived from DTI (FA and RD) can differentiate asymptomatic adults from those with compression of the ulnar nerve at the elbow.
Carpal Tunnel Syndrome: Metrics derived from DTI (lower FA and MD) differentiate healthy adults from those with carpal tunnel syndrome
Research
Early in the development of DTI based tractography, a number of researchers pointed out a flaw in the diffusion tensor model. The tensor analysis assumes that there is a single ellipsoid in each imaging voxel—as if all of the axons traveling through a voxel traveled in exactly the same direction. This is often true, but it can be estimated that in more than 30% of the voxels in a standard resolution brain image, there are at least two different neural tracts traveling in different directions that pass through each other. In the classic diffusion ellipsoid tensor model, the information from the crossing tract appears as noise or unexplained decreased anisotropy in a given voxel.
David Tuch was among the first to describe a solution to this problem. The idea is best understood by conceptually placing a kind of geodesic dome around each image voxel. This icosahedron provides a mathematical basis for passing a large number of evenly spaced gradient trajectories through the voxel—each coinciding with one of the apices of the icosahedron. We can then look into the voxel from a large number of different directions (typically 40 or more). We use "n-tuple" tessellations to add more evenly spaced apices to the original icosahedron (20 faces)—an idea that also had its precedents in paleomagnetism research several decades earlier. We want to know which direction lines turn up the maximum anisotropic diffusion measures. If there is a single tract, there will be only two maxima, pointing in opposite directions. If two tracts cross in the voxel, there will be two pairs of maxima, and so on. We can still use tensor mathematics to use the maxima to select groups of gradients to package into several different tensor ellipsoids in the same voxel, or use more complex higher-rank tensor analyses, or we can do a true "model free" analysis that picks the maxima, and then continue to do the tractography.
The Q-Ball method of tractography is an implementation in which David Tuch provides a mathematical alternative to the tensor model. Instead of forcing the diffusion anisotropy data into a group of tensors, the mathematics used deploys both probability distributions and some classic geometric tomography and vector mathematics developed nearly 100 years ago—the Funk Radon Transform.
Note that there is ongoing debate about the best way to preprocess DW-MRI. Several in-vivo studies have shown that the choice of software and of the correction functions applied (directed at correcting artefacts arising from e.g. motion and eddy currents) has a meaningful impact on the DTI parameter estimates from tissue. Consequently, this is the topic of a multinational study directed by the diffusion study group of the ISMRM.
Summary
For DTI, it is generally possible to use linear algebra, matrix mathematics and vector mathematics to process the analysis of the tensor data.
In some cases, the full set of tensor properties is of interest, but for tractography it is usually necessary to know only the magnitude and orientation of the primary axis or vector. This primary axis—the one with the greatest length—corresponds to the largest eigenvalue, and its orientation is encoded in the matched eigenvector. Only one axis is needed to accomplish tractography, as it is assumed that the axis with the largest eigenvalue is aligned with the main axon direction.
See also
Connectogram
Connectome
Tractography
Explanatory notes
References
External links
PNRC: About Diffusion MRI
White Matter Atlas
Imaging
Tensors
Neuroimaging
Magnetic resonance imaging | Diffusion-weighted magnetic resonance imaging | [
"Chemistry",
"Engineering"
] | 7,630 | [
"Tensors",
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
2,574,486 | https://en.wikipedia.org/wiki/Fubini%E2%80%93Study%20metric | In mathematics, the Fubini–Study metric (IPA: /fubini-ʃtuːdi/) is a Kähler metric on a complex projective space CPn endowed with a Hermitian form. This metric was originally described in 1904 and 1905 by Guido Fubini and Eduard Study.
A Hermitian form in (the vector space) Cn+1 defines a unitary subgroup U(n+1) in GL(n+1,C). A Fubini–Study metric is determined up to homothety (overall scaling) by invariance under such a U(n+1) action; thus it is homogeneous. Equipped with a Fubini–Study metric, CPn is a symmetric space. The particular normalization on the metric depends on the application. In Riemannian geometry, one uses a normalization so that the Fubini–Study metric simply relates to the standard metric on the (2n+1)-sphere. In algebraic geometry, one uses a normalization making CPn a Hodge manifold.
Construction
The Fubini–Study metric arises naturally in the quotient space construction of complex projective space.
Specifically, one may define CPn to be the space consisting of all complex lines in Cn+1, i.e., the quotient of Cn+1\{0} by the equivalence relation relating all complex multiples of each point together. This agrees with the quotient by the diagonal group action of the multiplicative group C* = C \ {0}:
$$\mathbf{CP}^n = \left(\mathbf{C}^{n+1}\setminus\{0\}\right)\big/\,\mathbf{C}^*.$$
This quotient realizes Cn+1\{0} as a complex line bundle over the base space CPn. (In fact this is the so-called tautological bundle over CPn.) A point of CPn is thus identified with an equivalence class of (n+1)-tuples [Z0,...,Zn] modulo nonzero complex rescaling; the Zi are called homogeneous coordinates of the point.
Furthermore, one may realize this quotient mapping in two steps: since multiplication by a nonzero complex scalar z = R eiθ can be uniquely thought of as the composition of a dilation by the modulus R followed by a counterclockwise rotation about the origin by an angle θ, the quotient mapping Cn+1 → CPn splits into two pieces.
$$\mathbf{C}^{n+1}\setminus\{0\} \;\xrightarrow{(a)}\; S^{2n+1} \;\xrightarrow{(b)}\; \mathbf{CP}^n,$$
where step (a) is a quotient by the dilation Z ~ RZ for R ∈ R+, the multiplicative group of positive real numbers, and step (b) is a quotient by the rotations Z ~ eiθZ.
The result of the quotient in (a) is the real hypersphere S2n+1 defined by the equation |Z|² = |Z0|² + ... + |Zn|² = 1. The quotient in (b) realizes CPn = S2n+1/S1, where S1 represents the group of rotations. This quotient is realized explicitly by the famous Hopf fibration S1 → S2n+1 → CPn, the fibers of which are among the great circles of S2n+1.
As a metric quotient
When a quotient is taken of a Riemannian manifold (or metric space in general), care must be taken to ensure that the quotient space is endowed with a metric that is well-defined. For instance, if a group G acts on a Riemannian manifold (X,g), then in order for the orbit space X/G to possess an induced metric, the metric g must be constant along G-orbits in the sense that for any element h ∈ G and pair of vector fields X, Y we must have g(Xh,Yh) = g(X,Y).
The standard Hermitian metric on Cn+1 is given in the standard basis by
$$ds^2 = d\mathbf{Z}\otimes d\bar{\mathbf{Z}} = dZ_0\otimes d\bar{Z}_0 + \cdots + dZ_n\otimes d\bar{Z}_n,$$
whose realification is the standard Euclidean metric on R2n+2. This metric is not invariant under the diagonal action of C*, so we are unable to directly push it down to CPn in the quotient. However, this metric is invariant under the diagonal action of S1 = U(1), the group of rotations. Therefore, step (b) in the above construction is possible once step (a) is accomplished.
The Fubini–Study metric is the metric induced on the quotient CPn = S2n+1/S1, where S2n+1 carries the so-called "round metric" endowed upon it by restriction of the standard Euclidean metric to the unit hypersphere.
In local affine coordinates
Corresponding to a point in CPn with homogeneous coordinates [Z0:...:Zn], there is a unique set of n coordinates (z1,...,zn) such that
$$[Z_0:Z_1:\dots:Z_n] \sim [1, z_1, \dots, z_n],$$
provided Z0 ≠ 0; specifically, zj = Zj/Z0. The zj form an affine coordinate system for CPn in the coordinate patch U0 = {Z0 ≠ 0}. One can develop an affine coordinate system in any of the coordinate patches Ui = {Zi ≠ 0} by dividing instead by Zi in the obvious manner. The n+1 coordinate patches Ui cover CPn, and it is possible to give the metric explicitly in terms of the affine coordinates (z1,...,zn) on U0. The coordinate derivatives ∂/∂z1, ..., ∂/∂zn define a frame of the holomorphic tangent bundle of CPn, in terms of which the Fubini–Study metric has Hermitian components
$$h_{i\bar{j}} = \frac{\left(1+|\mathbf{z}|^2\right)\delta_{i\bar{j}} - \bar{z}_i z_j}{\left(1+|\mathbf{z}|^2\right)^2},$$
where |z|² = |z1|² + ... + |zn|². That is, the Hermitian matrix of the Fubini–Study metric in this frame is
$$\bigl(h_{i\bar{j}}\bigr) = \frac{1}{\left(1+|\mathbf{z}|^2\right)^2}
\begin{bmatrix}
1+|\mathbf{z}|^2-|z_1|^2 & -\bar{z}_1 z_2 & \cdots & -\bar{z}_1 z_n \\
-\bar{z}_2 z_1 & 1+|\mathbf{z}|^2-|z_2|^2 & \cdots & -\bar{z}_2 z_n \\
\vdots & \vdots & \ddots & \vdots \\
-\bar{z}_n z_1 & -\bar{z}_n z_2 & \cdots & 1+|\mathbf{z}|^2-|z_n|^2
\end{bmatrix}$$
Note that each matrix element is unitary-invariant: the diagonal action z ↦ eiθz will leave this matrix unchanged.
Accordingly, the line element is given by
$$ds^2 = h_{i\bar{j}}\;dz^i\,d\bar{z}^{\bar{j}} = \frac{\left(1+|\mathbf{z}|^2\right)dz_i\,d\bar{z}_i - \bar{z}_i z_j\,dz_i\,d\bar{z}_j}{\left(1+|\mathbf{z}|^2\right)^2}.$$
In this last expression, the summation convention is used to sum over Latin indices i,j that range from 1 to n.
The metric can be derived from the following Kähler potential:
$$K = \ln\!\left(1 + z_i\bar{z}_i\right) = \ln\!\left(1+|\mathbf{z}|^2\right)$$
as
$$g_{i\bar{j}} = K_{i\bar{j}} = \frac{\partial^2 K}{\partial z^i\,\partial\bar{z}^j}.$$
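As a quick symbolic check in the simplest case n = 1 (treating z and its conjugate as formally independent variables, the usual device in Kähler geometry), differentiating this potential reproduces the metric component given earlier; the short computation below is only an illustration:

```python
import sympy as sp

z, zbar = sp.symbols('z zbar')

# Kähler potential of CP^1 in the affine chart
K = sp.log(1 + z * zbar)

# Metric component g_{1 1bar} = d^2 K / (dz dzbar)
g = sp.simplify(sp.diff(K, z, zbar))
print(g)   # (z*zbar + 1)**(-2), i.e. 1/(1 + |z|^2)^2
```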
Using homogeneous coordinates
An expression is also possible in the notation of homogeneous coordinates, commonly used to describe projective varieties of algebraic geometry: Z = [Z0:...:Zn]. Formally, subject to suitably interpreting the expressions involved, one has
Here the summation convention is used to sum over Greek indices α, β ranging from 0 to n, and in the last equality the standard notation for the skew part of a tensor is used:
$$Z_{[\alpha}\,dZ_{\beta]} = \tfrac{1}{2}\left(Z_\alpha\,dZ_\beta - Z_\beta\,dZ_\alpha\right).$$
Now, this expression for ds2 apparently defines a tensor on the total space of the tautological bundle Cn+1\{0}. It is to be understood properly as a tensor on CPn by pulling it back along a holomorphic section σ of the tautological bundle of CPn. It remains then to verify that the value of the pullback is independent of the choice of section: this can be done by a direct calculation.
The Kähler form of this metric is
$$\omega = \frac{i}{2}\,\partial\bar{\partial}\,\log|\mathbf{Z}|^2,$$
where ∂ and ∂̄ are the Dolbeault operators.
The pullback of this is clearly independent of the choice of holomorphic section. The quantity log|Z|2 is the Kähler potential (sometimes called the Kähler scalar) of CPn.
In bra-ket coordinate notation
In quantum mechanics, the Fubini–Study metric is also known as the Bures metric. However, the Bures metric is typically defined in the notation of mixed states, whereas the exposition below is written in terms of a pure state. The real part of the metric is (a quarter of) the Fisher information metric.
The Fubini–Study metric may be written using the bra–ket notation commonly used in quantum mechanics. To explicitly equate this notation to the homogeneous coordinates given above, let
$$|\psi\rangle = \sum_{k=0}^{n} Z_k\,|e_k\rangle = [Z_0:Z_1:\dots:Z_n],$$
where {|ek⟩} is a set of orthonormal basis vectors for Hilbert space, the Zk are complex numbers, and Zα = [Z0:Z1:...:Zn] is the standard notation for a point in the projective space CPn in homogeneous coordinates. Then, given two points |ψ⟩ = Zα and |φ⟩ = Wα in the space, the distance (length of a geodesic) between them is
$$\gamma(\psi,\varphi) = \arccos\sqrt{\frac{\langle\psi|\varphi\rangle\,\langle\varphi|\psi\rangle}{\langle\psi|\psi\rangle\,\langle\varphi|\varphi\rangle}},$$
or, equivalently, in projective variety notation,
$$\gamma(\psi,\varphi) = \arccos\sqrt{\frac{Z_\alpha\bar{W}^{\alpha}\;\bar{Z}_\beta W^{\beta}}{Z_\alpha\bar{Z}^{\alpha}\;W_\beta\bar{W}^{\beta}}}.$$
Here, a bar denotes complex conjugation (for example, Z̄α is the complex conjugate of Zα). The appearance of ⟨ψ|ψ⟩ and ⟨φ|φ⟩ in the denominator is a reminder that |ψ⟩ and likewise |φ⟩ were not normalized to unit length; thus the normalization is made explicit here. In Hilbert space, the metric can be interpreted as the angle between two vectors; thus it is occasionally called the quantum angle. The angle is real-valued, and runs from 0 to π/2.
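A small numerical sketch of this geodesic distance for two unnormalized state vectors (the vectors are arbitrary examples):

```python
import numpy as np

def fubini_study_distance(psi, phi):
    """Quantum angle between the rays spanned by two unnormalized vectors."""
    overlap = np.vdot(psi, phi)                 # <psi|phi>; vdot conjugates psi
    fidelity = abs(overlap) ** 2 / (np.vdot(psi, psi).real * np.vdot(phi, phi).real)
    return np.arccos(np.sqrt(np.clip(fidelity, 0.0, 1.0)))

psi = np.array([1.0 + 0j, 0.0 + 0j])
phi = np.array([1.0 + 0j, 1.0 + 0j])            # deliberately not normalized
print(fubini_study_distance(psi, phi))          # pi/4
print(fubini_study_distance(psi, 5j * psi))     # 0: same point of CP^n
```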
The infinitesimal form of this metric may be quickly obtained by taking , or equivalently, to obtain
In the context of quantum mechanics, CP1 is called the Bloch sphere; the Fubini–Study metric is the natural metric for the geometrization of quantum mechanics. Much of the peculiar behaviour of quantum mechanics, including quantum entanglement and the Berry phase effect, can be attributed to the peculiarities of the Fubini–Study metric.
The n = 1 case
When n = 1, there is a diffeomorphism S2 ≅ CP1 given by stereographic projection. This leads to the "special" Hopf fibration S1 → S3 → S2. When the Fubini–Study metric is written in coordinates on CP1, its restriction to the real tangent bundle yields an expression of the ordinary "round metric" of radius 1/2 (and Gaussian curvature 4) on S2.
Namely, if z = x + iy is the standard affine coordinate chart on the Riemann sphere CP1 and x = r cos θ, y = r sin θ are polar coordinates on C, then a routine computation shows
$$ds^2 = \frac{dx^2 + dy^2}{\left(1+r^2\right)^2} = \frac{1}{4}\left(d\varphi^2 + \sin^2\varphi\,d\theta^2\right) = \frac{1}{4}\,ds^2_{us},$$
where ds²us is the round metric on the unit 2-sphere. Here φ, θ are "mathematician's spherical coordinates" on S2 coming from the stereographic projection r tan(φ/2) = 1, tan θ = y/x. (Many physics references interchange the roles of φ and θ.)
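For completeness, here is one way to carry out that routine computation, using r = cot(φ/2) from the stereographic relation above, so that dr = −½ csc²(φ/2) dφ and 1 + r² = csc²(φ/2):
$$ds^2 = \frac{dr^2 + r^2\,d\theta^2}{\left(1+r^2\right)^2} = \frac{\tfrac{1}{4}\csc^4\tfrac{\varphi}{2}\,d\varphi^2 + \cot^2\tfrac{\varphi}{2}\,d\theta^2}{\csc^4\tfrac{\varphi}{2}} = \tfrac{1}{4}\,d\varphi^2 + \cos^2\tfrac{\varphi}{2}\,\sin^2\tfrac{\varphi}{2}\,d\theta^2 = \tfrac{1}{4}\left(d\varphi^2 + \sin^2\varphi\,d\theta^2\right).$$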
The Kähler form is
Choosing as vierbeins and , the Kähler form simplifies to
Applying the Hodge star to the Kähler form, one obtains
implying that K is harmonic.
The n = 2 case
The Fubini–Study metric on the complex projective plane CP2 has been proposed as a gravitational instanton, the gravitational analog of an instanton. The metric, the connection form and the curvature are readily computed, once suitable real 4D coordinates are established. Writing for real Cartesian coordinates, one then defines polar coordinate one-forms on the 4-sphere (the quaternionic projective line) as
The are the standard left-invariant one-form coordinate frame on the Lie group ; that is, they obey for and cyclic permutations.
The corresponding local affine coordinates are and then provide
with the usual abbreviations that and .
The line element, starting with the previously given expression, is given by
The vierbeins can be immediately read off from the last expression:
That is, in the vierbein coordinate system, using roman-letter subscripts, the metric tensor is Euclidean:
Given the vierbein, a spin connection can be computed; the Levi-Civita spin connection is the unique connection that is torsion-free and covariantly constant, namely, it is the one-form that satisfies the torsion-free condition
and is covariantly constant, which, for spin connections, means that it is antisymmetric in the vierbein indexes:
The above is readily solved; one obtains
The curvature 2-form is defined as
and is constant:
The Ricci tensor in vierbein indexes is given by
where the curvature 2-form was expanded as a four-component tensor:
The resulting Ricci tensor is constant
so that the resulting Einstein equation
can be solved with the cosmological constant .
The Weyl tensor for Fubini–Study metrics in general is given by
For the n = 2 case, the two-forms
are self-dual:
Curvature properties
In the n = 1 special case, the Fubini–Study metric has constant sectional curvature identically equal to 4, according to the equivalence with the 2-sphere's round metric (which given a radius R has sectional curvature 1/R²). However, for n > 1, the Fubini–Study metric does not have constant curvature. Its sectional curvature is instead given by the equation
$$K(\sigma) = 1 + 3\,\langle JX, Y\rangle^2,$$
where {X, Y} is an orthonormal basis of the 2-plane σ, the mapping J : TCPn → TCPn is the complex structure on CPn, and ⟨·,·⟩ is the Fubini–Study metric.
A consequence of this formula is that the sectional curvature satisfies 1 ≤ K(σ) ≤ 4 for all 2-planes σ. The maximum sectional curvature (4) is attained at a holomorphic 2-plane — one for which J(σ) ⊂ σ — while the minimum sectional curvature (1) is attained at a 2-plane for which J(σ) is orthogonal to σ. For this reason, the Fubini–Study metric is often said to have "constant holomorphic sectional curvature" equal to 4.
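The stated bounds follow directly from this formula together with the Cauchy–Schwarz inequality: since X and Y are orthonormal and J is an isometry,
$$0 \le \langle JX, Y\rangle^2 \le \|JX\|^2\,\|Y\|^2 = 1, \qquad\text{so}\qquad 1 \le K(\sigma) = 1 + 3\,\langle JX, Y\rangle^2 \le 4.$$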
This makes CPn a (non-strict) quarter pinched manifold; a celebrated theorem shows that a strictly quarter-pinched simply connected n-manifold must be homeomorphic to a sphere.
The Fubini–Study metric is also an Einstein metric in that it is proportional to its own Ricci tensor: there exists a constant λ such that for all i,j we have
$$\mathrm{Ric}_{ij} = \lambda\, g_{ij}.$$
This implies, among other things, that the Fubini–Study metric remains unchanged up to a scalar multiple under the Ricci flow. It also makes CPn indispensable to the theory of general relativity, where it serves as a nontrivial solution to the vacuum Einstein field equations with a cosmological constant.
The cosmological constant for CPn is given in terms of the dimension of the space:
Product metric
The common notions of separability apply for the Fubini–Study metric. More precisely, the metric is separable on the natural product of projective spaces, the Segre embedding. That is, if |ψ⟩ is a separable state, so that it can be written as |ψ⟩ = |ψA⟩ ⊗ |ψB⟩, then the metric is the sum of the metrics on the subspaces:
$$ds^2 = ds^2_A + ds^2_B,$$
where ds²A and ds²B are the metrics, respectively, on the subspaces A and B.
Connection and curvature
The fact that the metric can be derived from the Kähler potential means that the Christoffel symbols and the curvature tensors contain a lot of symmetries, and can be given a particularly simple form: The Christoffel symbols, in the local affine coordinates, are given by
$$\Gamma^{i}_{jk} = -\frac{\delta^{i}_{j}\,\bar{z}_{k} + \delta^{i}_{k}\,\bar{z}_{j}}{1+|\mathbf{z}|^2}.$$
The Riemann tensor is also particularly simple:
The Ricci tensor is
See also
Non-linear sigma model
Kaluza–Klein theory
Arakelov height
References
.
Projective geometry
Complex manifolds
Symplectic geometry
Structures on manifolds
Quantum mechanics | Fubini–Study metric | [
"Physics"
] | 2,994 | [
"Theoretical physics",
"Quantum mechanics"
] |
2,574,661 | https://en.wikipedia.org/wiki/Bergius%20process | The Bergius process is a method of production of liquid hydrocarbons for use as synthetic fuel by hydrogenation of high-volatile bituminous coal at high temperature and pressure. It was first developed by Friedrich Bergius in 1913. In 1931 Bergius was awarded the Nobel Prize in Chemistry for his development of high-pressure chemistry.
Process
The coal is finely ground and dried in a stream of hot gas. The dry product is mixed with heavy oil recycled from the process. A catalyst is typically added to the mixture. A number of catalysts have been developed over the years, including tungsten or molybdenum disulfide, tin or nickel oleate, and others. Alternatively, iron sulfide present in the coal may have sufficient catalytic activity for the process, which was the original Bergius process.
The mixture is pumped into a reactor. The reaction occurs at between 400 and 500 °C and 20 to 70 MPa hydrogen pressure. The reaction produces heavy oils, middle oils, gasoline, and gases. The overall reaction can be summarized as follows:
(where x = Degrees of Unsaturation)
The immediate product from the reactor must be stabilized by passing it over a conventional hydrotreating catalyst. The product stream is high in cycloalkanes and aromatics, low in alkanes (paraffins) and very low in alkenes (olefins). The different fractions can be passed to further processing (cracking, reforming) to output synthetic fuel of desirable quality. If passed through a process such as platforming, most of the cycloalkanes are converted to aromatics and the recovered hydrogen recycled to the process. The liquid product from Platforming will contain over 75% aromatics and has a Research Octane Number (RON) of over 105.
Overall, about 97% of input carbon fed directly to the process can be converted into synthetic fuel. However, any carbon used in generating hydrogen will be lost as carbon dioxide, so reducing the overall carbon efficiency of the process.
There is a residue of unreactive tarry compounds mixed with ash from the coal and catalyst. To minimise the loss of carbon in the residue stream, it is necessary to have a low-ash feed. Typically the coal should be <10% ash by weight. The hydrogen required for the process can be also produced from coal or the residue by steam reforming. A typical hydrogen demand is ~80 kg hydrogen per ton of dry, ash-free coal. Generally, this process is similar to hydrogenation. The output is at three levels: heavy oil, middle oil, gasoline. The middle oil is hydrogenated in order to get more gasoline and the heavy oil is mixed with the coal again and the process restarts. In this way, heavy oil and middle oil fractions are also reused in this process.
The most recent evolution of Bergius' work is the 2-stage hydroliquefaction plant at Wilsonville AL which operated during 1981-85. Here a coal extract was prepared under heat and hydrogen pressure using finely pulverized coal and recycle donor solvent. As the coal molecule is broken down, free radicals are formed which are immediately stabilized by absorption of H atoms from the donor solvent. Extract then passes to a catalytic ebullated-bed hydrocracker (H-Oil unit) fed by additional hydrogen, forming lower molecular weight hydrocarbons and splitting off sulfur, oxygen and nitrogen originally present in the coal. Part of the liquid product is hydrogenated donor solvent which is returned to Stage I. The balance of liquid product is fractionated by distillation yielding various boiling range products and an ashy residue. Ashy residue goes to a Kerr-McGee critical solvent deashing unit which yields additional liquid product and a high-ash material containing unreacted coal and heavy residuum, which in a commercial plant would be gasified to make the H2 needed to feed the process. Parameters can be adjusted to avoid directly gasifying any of the coal entering the plant. Alternative versions of the plant configuration could use L-C Fining and/or an antisolvent deashing unit. Typical species in the donor solvent are fused-ring aromatics (tetrahydronaphthalene and up) or the analogous heterocycles.
History
Friedrich Bergius developed the process during his habilitation. His technique for the high-pressure and high-temperature chemistry of carbon-containing substrates resulted in a patent in 1913. In this process liquid hydrocarbons used as synthetic fuel are produced by hydrogenation of lignite (brown coal). He developed the process well before the commonly known Fischer–Tropsch process. Karl Goldschmidt invited him to build an industrial plant at his factory, the Th. Goldschmidt AG (now known as Evonik Industries), in 1914. Production began only in 1919, after World War I ended, when the need for fuel was already declining. The technical problems, inflation and the constant criticism of Franz Joseph Emil Fischer (which changed to support after a personal demonstration of the process) made progress slow, and Bergius sold his patent to BASF, where Carl Bosch worked on it. Before World War II several plants were built with an annual capacity of 4 million tons of synthetic fuel. These plants were extensively used during World War II to supply Germany with fuel and lubricants.
Use
Coal hydrogenation is not used commercially any more.
The Bergius process was extensively used by Brabag, a cartel firm of Nazi Germany. Plants that used the process were targeted for bombing during the Oil Campaign of World War II. At present there are no plants operating the Bergius process or its derivatives commercially. The largest demonstration plant was the 200-ton-per-day plant at Bottrop, Germany, operated by Ruhrkohle, which ceased operation in 1993. There are reports of a Chinese company constructing a plant with a capacity of 4,000 tons per day. It was expected to become operational in 2007, but there has been no confirmation that this was achieved.
Towards the end of World War II the United States began heavily financing research into converting coal to gasoline, including money to build a series of pilot plants. The project was enormously helped by captured German technology. One plant using the Bergius process was built in Louisiana, Missouri and began operation about 1946. Located along the Mississippi river, this plant was producing gasoline in commercial quantities by 1948. The Louisiana process method produced automobile gasoline at a price slightly higher than, but comparable to, petroleum-based gasoline but of a higher quality. The facility was shut down in 1953 by the Eisenhower administration, allegedly after intense lobbying by the oil industry.
See also
Synthetic Liquid Fuels Program
Fischer–Tropsch process
Karrick process
Coal-water slurry fuel
References
External links
The Early Days of Coal Research, U.S. Department of Energy webpage
Coal
Catalysis
Synthetic fuel technologies
German inventions
1913 in science | Bergius process | [
"Chemistry"
] | 1,410 | [
"Catalysis",
"Chemical kinetics",
"Petroleum technology",
"Synthetic fuel technologies"
] |
2,574,694 | https://en.wikipedia.org/wiki/Cone%20beam%20reconstruction | In microtomography X-ray scanners, cone beam reconstruction is one of two common scanning methods, the other being Fan beam reconstruction.
Cone beam reconstruction uses a 2-dimensional approach for obtaining projection data. Instead of utilizing a single row of detectors, as fan beam methods do, a cone beam system uses a standard charge-coupled device camera, focused on a scintillator material. The scintillator converts X-ray radiation to visible light, which is picked up by the camera and recorded. The method has enjoyed widespread implementation in microtomography, and is also used in several larger-scale systems.
An X-ray source is positioned across from the detector, with the object being scanned in between. (This is essentially the same setup used for an ordinary X-ray fluoroscope).
Projections from different angles are obtained in one of two ways. In one method, the object being scanned is rotated. This has the advantage of simplicity in implementation; a rotating stage results in little complexity. The second method involves rotating the X-ray source and camera around the object, as is done in ordinary CT scanning and SPECT imaging. This adds complexity, size and cost to the system, but removes the need to rotate the object.
The method is referred to as cone-beam reconstruction because the X-rays are emitted from the source as a cone-shaped beam. In other words, it begins as a tight beam at the source, and expands as it moves away.
See also
Computed tomography
Industrial CT scanning
Tomographic reconstruction
References
Medical imaging
Medical physics
X-ray computed tomography | Cone beam reconstruction | [
"Physics"
] | 324 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
2,576,183 | https://en.wikipedia.org/wiki/Fluoride%20volatility | Fluoride volatility is the tendency of highly fluorinated molecules to vaporize at comparatively low temperatures. Heptafluorides, hexafluorides and pentafluorides have much lower boiling points than the lower-valence fluorides. Most difluorides and trifluorides have high boiling points, while most tetrafluorides and monofluorides fall in between. The term "fluoride volatility" is jargon used particularly in the context of separation of radionuclides.
Volatility and valence
Valences for the majority of elements are based on the highest known fluoride.
Roughly, fluoride volatility can be used to remove elements with a valence of 5 or greater: uranium, neptunium, plutonium, metalloids (tellurium, antimony), nonmetals (selenium), halogens (iodine, bromine), and the middle transition metals (niobium, molybdenum, technetium, ruthenium, and possibly rhodium). This fraction includes the actinides most easily reusable as nuclear fuel in a thermal reactor, and the two long-lived fission products best suited to disposal by transmutation, Tc-99 and I-129, as well as Se-79.
Noble gases (xenon, krypton) are volatile even without fluoridation, and will not condense except at much lower temperatures.
Left behind are alkali metals (caesium, rubidium), alkaline earth metals (strontium, barium), lanthanides, the remaining actinides (americium, curium), remaining transition metals (yttrium, zirconium, palladium, silver) and post-transition metals (tin, indium, cadmium). This fraction contains the fission products that are radiation hazards on a scale of decades (Cs-137, Sr-90, Sm-151), the four remaining long-lived fission products Cs-135, Zr-93, Pd-107, Sn-126 of which only the last emits strong radiation, most of the neutron poisons, and the higher actinides (americium, curium, californium) that are radiation hazards on a scale of hundreds or thousands of years and are difficult to work with because of gamma radiation but are fissionable in a fast reactor. Americium finds use in ionization smoke detectors while californium is used as a spontaneous fission based neutron source. Curium has only very limited uses outside nuclear reactors. Fissionable but non-fissile actinoids can be used or disposed of in a subcritical nuclear reactor using an external neutron source such as an Accelerator Driven System.
Reprocessing methods
Uranium oxides react with fluorine to form gaseous uranium hexafluoride, most of the plutonium reacts to form gaseous plutonium hexafluoride, a majority of fission products (especially electropositive elements: lanthanides, strontium, barium, yttrium, caesium) form nonvolatile fluorides. Few metals in the fission products (the transition metals niobium, ruthenium, technetium, molybdenum, and the halogen iodine) form volatile (boiling point <200 °C) fluorides that accompany the uranium and plutonium hexafluorides, together with inert gases. Distillation is then used to separate the uranium hexafluoride from the mixture.
The nonvolatile alkaline fission products and minor actinides fraction is most suitable for further processing with 'dry' electrochemical processing (pyrochemical) non-aqueous methods. The lanthanide fluorides are difficult to dissolve in the nitric acid used for aqueous reprocessing methods, such as PUREX, DIAMEX and SANEX, which use solvent extraction. Fluoride volatility is only one of several pyrochemical processes designed to reprocess used nuclear fuel.
The Řež nuclear research institute at Řež in the Czech Republic tested screw dosers that fed ground uranium oxide (simulating used fuel pellets) into a fluorinator where the particles were burned in fluorine gas to form uranium hexafluoride.
Hitachi has developed a technology, called FLUOREX, which combines fluoride volatility, to extract uranium, with more traditional solvent extraction (PUREX), to extract plutonium and other transuranics. The FLUOREX-based fuel cycle is intended for use with the Reduced moderation water reactor.
Some fluorides are water soluble while others aren't (see the solubility table) and can be separated in aqueous solution. However, all aqueous processes that take place without complete removal of tritium (a common product of ternary fission) prior to addition of water will contaminate the water with tritiated water, which is difficult to remove from water. Some elements which form soluble fluorides form insoluble chlorides. Addition of a suitable soluble chloride (e.g. sodium chloride) will salt out those cations. One example is silver(I) fluoride (water soluble), which forms a silver chloride precipitate upon addition of a soluble chloride.
AgF + NaCl → AgCl↓ + NaF
Some fluorides react aggressively with water and may form highly corrosive hydrogen fluoride. This needs to be taken into account if aqueous processes involving fluorides are to be used.
If desired, a series of further anion additions, similar to the classical cation separation scheme (Kationentrennungsgang), can be used to separate out different cations for disposal, further processing or use.
Table of relevant properties
See also
FLiNaK
Molten salt reactor
Notes
Missing top fluorides:
PrF4 (because it decomposes at 90 °C)
TbF4 (because it decomposes at 300 °C)
CeF4 (because it decomposes at 600 °C)
Without stable fluorides: Kr
References
External links
Study of Electrochemical Processes for Separation of the Actinides and Lanthanides in Molten Fluoride Media (PDF)
Low-pressure distillation of a portion of the fuel carrier salt from the Molten Salt Reactor Experiment (PDF)
Use of the Fluoride Volatility Process to Extract Technetium from Transmuted Spent Nuclear Fuel (PDF)
A Peer Review of the Strategy for Characterizing Transuranics and Technetium Contamination in Depleted Uranium Hexafluoride Tails Cylinders (PDF)
PHYSICAL CONSTANTS OF INORGANIC COMPOUNDS (PDF)
Nuclear reprocessing
Nuclear chemistry
Fluorides | Fluoride volatility | [
"Physics",
"Chemistry"
] | 1,436 | [
"Nuclear chemistry",
"Salts",
"nan",
"Nuclear physics",
"Fluorides"
] |
2,576,261 | https://en.wikipedia.org/wiki/Electron%20avalanche | An electron avalanche is a process in which a number of free electrons in a transmission medium are subjected to strong acceleration by an electric field and subsequently collide with other atoms of the medium, thereby ionizing them (impact ionization). This releases additional electrons which accelerate and collide with further atoms, releasing more electrons—a chain reaction. In a gas, this causes the affected region to become an electrically conductive plasma.
The avalanche effect was discovered by John Sealy Townsend in his work between 1897 and 1901, and is also known as the Townsend discharge.
Electron avalanches are essential to the dielectric breakdown process within gases. The process can culminate in corona discharges, streamers, leaders, or in a spark or continuous arc that completely bridges the gap between the electrical conductors that are applying the voltage. The process extends to huge sparks — streamers in lightning discharges propagate by formation of electron avalanches created in the high potential gradient ahead of the streamers' advancing tips. Once begun, avalanches are often intensified by the creation of photoelectrons as a result of ultraviolet radiation emitted by the excited medium's atoms in the aft-tip region.
The process can also be used to detect ionizing radiation by using the gas multiplication effect of the avalanche process. This is the ionisation mechanism of the Geiger–Müller tube and, to a limited extent, of the proportional counter and is also used in spark chambers and other wire chambers.
Analysis
A plasma begins with a rare natural 'background' ionization event of a neutral air molecule, perhaps as the result of photoexcitation or background radiation. If this event occurs within an area that has a high potential gradient, the positively charged ion will be strongly attracted toward, or repelled away from, an electrode depending on its polarity, whereas the electron will be accelerated in the opposite direction. Because of the huge mass difference, electrons are accelerated to a much higher velocity than ions.
High-velocity electrons often collide with neutral atoms inelastically, sometimes ionizing them. In a chain-reaction — or an 'electron avalanche' — additional electrons recently separated from their positive ions by the strong potential gradient, cause a large cloud of electrons and positive ions to be momentarily generated by just a single initial electron. However, free electrons are easily captured by neutral oxygen or water vapor molecules (so-called electronegative gases), forming negative ions. In air at STP, free electrons exist for only about 11 nanoseconds before being captured. Captured electrons are effectively removed from play — they can no longer contribute to the avalanche process. If electrons are being created at a rate greater than they are being lost to capture, their number rapidly multiplies, a process characterized by exponential growth. The degree of multiplication that this process can provide is huge, up to several million-fold depending on the situation. The multiplication factor M is given by
$$M = e^{\alpha\,(X_2 - X_1)},$$
where X1 and X2 are the positions that the multiplication is being measured between, and α is the ionization constant. In other words, one free electron at position X1 will result in M free electrons at position X2. Substituting the voltage gradients into this equation results in
$$M = \frac{1}{1 - \left(V/V_{BR}\right)^{n}},$$
where V is the applied voltage, VBR is the breakdown voltage and n is an empirically derived value between 2 and 6. As can be seen from this formula, the multiplication factor is very highly dependent on the applied voltage, and as the voltage nears the breakdown voltage of the material, the multiplication factor approaches infinity and the limiting factor becomes the availability of charge carriers.
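A small numerical sketch of that voltage dependence, using the empirical expression above with illustrative values for the breakdown voltage and exponent:

```python
def multiplication_factor(V, V_br=1000.0, n=4.0):
    """Avalanche gain M = 1 / (1 - (V/V_br)^n), valid only for V < V_br."""
    return 1.0 / (1.0 - (V / V_br) ** n)

for V in (500.0, 900.0, 990.0, 999.0):
    print(f"V = {V:6.1f} V  ->  M = {multiplication_factor(V):8.1f}")
```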
Avalanche sustenance requires a reservoir of charge to sustain the applied voltage, as well as a continual source of triggering events. A number of mechanisms can sustain this process, creating avalanche after avalanche, to create a corona current. A secondary source of plasma electrons is required as the electrons are always accelerated by the field in one direction, meaning that avalanches always proceed linearly toward or away from an electrode. The dominant mechanism for the creation of secondary electrons depends on the polarity of a plasma. In each case, the energy emitted as photons by the initial avalanche is used to ionise a nearby gas molecule creating another accelerable electron. What differs is the source of this electron. When one or more electron avalanches occur between two electrodes of sufficient size, complete avalanche breakdown can occur, culminating in an electrical spark that bridges the gap.
See also
Townsend discharge
Avalanche breakdown
Avalanche diode
Corona discharge
Multipactor
Geiger–Müller tube
Geiger counter
Spark chamber
Wire chamber
Runaway breakdown
Relativistic runaway electron avalanche
References
External links
Breakdown effects in semiconductors
Electrical breakdown | Electron avalanche | [
"Physics"
] | 942 | [
"Physical phenomena",
"Electrical phenomena",
"Electrical breakdown"
] |
2,576,438 | https://en.wikipedia.org/wiki/Aflatoxin%20total%20synthesis | Aflatoxin total synthesis concerns the total synthesis of a group of organic compounds called aflatoxins. These compounds occur naturally in several fungi. As with other chemical compound targets in organic chemistry, the organic synthesis of aflatoxins serves various purposes. Traditionally it served to prove the structure of a complex biocompound in addition to evidence obtained from spectroscopy. It also demonstrates new concepts in organic chemistry (reagents, reaction types) and opens the way to molecular derivatives not found in nature. And for practical purposes, a synthetic biocompound is a commercial alternative to isolating the compound from natural resources. Aflatoxins in particular add another dimension because it is suspected that they have been mass-produced in the past from biological sources as part of a biological weapons program.
The synthesis of racemic aflatoxin B1 was reported by Buechi et al. in 1967 and that of racemic aflatoxin B2 by Roberts et al. in 1968. The group of Barry Trost of Stanford University is responsible for the enantioselective total synthesis of (+)-aflatoxin B1 and B2a in 2003. In 2005 the group of E. J. Corey of Harvard University presented the enantioselective synthesis of aflatoxin B2.
Aflatoxin B2 synthesis
The total synthesis of Aflatoxin B2 is a multistep sequence that begins with a [2+3]cycloaddition between the quinone 1 and the 2,3-Dihydrofuran. This reaction is catalyzed by a CBS catalyst and is enantioselective. The next step is the orthoformylation of reaction product 2 in a Duff reaction. The hydroxyl group in 3 is esterified with triflic anhydride which adds a triflate protecting group. This step enables a Grignard reaction of the aldehyde group in 4 with methylmagnesiumbromide to the alcohol 5 which is then oxidized with the Dess-Martin periodinane to the ketone 6. A Baeyer-Villiger oxidation converts the ketone to an ester (7) and a reduction with Raney nickel converts the ester into an alcohol and removes the triflic acid group. In the final step the coumarin skeleton is added to 9 by a combined coupling reaction with zinc carbonate of the vinyl bromide in 8 and a transesterification step between the phenol group and the ethyl ester group.
References
Total synthesis
Aflatoxins | Aflatoxin total synthesis | [
"Chemistry"
] | 527 | [
"Total synthesis",
"Chemical synthesis"
] |
2,576,676 | https://en.wikipedia.org/wiki/Steelcase | Steelcase Inc. is an international manufacturer of furniture, casegoods, seating, and storage and partitioning systems for offices, hospitals, classrooms, and residential interiors. It is headquartered in Grand Rapids, Michigan, United States.
History
Originally known as The Metal Office Furniture Company, Steelcase was founded by Peter Martin Wege in 1912. Prior to starting the company, Wege had filed approximately 25 patents related to the sheet metal and fireproofing industries. The Metal Office Furniture Company's first products included fireproof metal safes and four-drawer metal filing cabinets.
In 1914, the company received its first product patent for "The Victor", a fireproof steel wastebasket. The Victor gained popularity due to its light weight—achieved through a patented process of bending flat steel at right angles to create boxes—and its ability to prevent fires at a time when smoking was common indoors, particularly in the workplace. In 1915, the company began manufacturing and distributing steel desks after designing and producing 200 for Boston's first skyscraper, the Custom House Tower. In 1937, the company collaborated with Frank Lloyd Wright on office furniture for the Johnson Wax Headquarters. The partnership lasted two years and resulted in some of the first modern workstations.
The name Steelcase was a result of an advertising campaign to promote metal office furniture over wood and was trademarked in 1921. The company officially changed its name to Steelcase, Inc. in 1954.
The company became an industry leader in the late 1960s due to the volume of its sales. Steelcase expanded into new markets during the 1970s, including Asia, Europe, and North Africa. In 1973, the company debuted the Series 9000 furniture line, a panel-based office system that became a best seller and the company's flagship brand. That same year, the company delivered the largest single furniture shipment to the then-new Sears Tower. The delivery included 43,565 pieces of furniture and furnished 44 floors.
During the 1980s and 1990s, Steelcase worked closely with architects and interior designers to develop products as well as the company's own workspace in Grand Rapids. The company's current headquarters were built in 1983 at 901 44th St. SE in Grand Rapids, Michigan. In 1989, Steelcase opened the pyramid-shaped Steelcase Inc. Corporate Development Center. The center contained ten research laboratories and workspaces meant to encourage interdisciplinary collaboration on product development. Steelcase vacated the Pyramid in 2010, and the building was sold to Switch in 2016. In 1996, Steelcase became the majority stakeholder in the design firm IDEO, and the firm's CEO, David M. Kelley, became Steelcase's vice president of technical discovery and innovation. Steelcase sold its shares back to IDEO's managers starting in 2007.
In 1996, Steelcase was found at fault in a patent infringement suit brought against them by Haworth, Inc., another furniture company. Steelcase was ordered to pay $211.5 million in damages and interest, thus ending a 17-year dispute with Haworth.
Steelcase became a publicly traded company in 1998 under the symbol SCS. During the 2000s, Steelcase reorganized its workforce and began integrating modern technologies in its products. In 2000, the company opened Steelcase University, a center for ongoing employee development and learning. Steelcase's wood furniture plant in Caledonia, MI earned LEED certification in 2001, becoming the first plant to receive the certification. In 2002, Steelcase partnered with IBM to create BlueSpace, a "smart office" prototype designed using new office technologies. In 2010, Steelcase and IDEO launched new models for higher education classrooms called LearnLabs.
In January 2016, the company recalled 12 models of Steelcase "Rocky" style swivel chairs manufactured between 2005 and 2015 due to a fall hazard.
Noteworthy products
Steelcase released Multiple 15 desks in 1946, which introduced standardized desk sizing and became a universal industry standard. Series 9000 was released in 1973 and became Steelcase's most popular line of office systems. The Leap chair, introduced in 1999, sold 5,000 units a week during its first year and became the company's most popular release. The ergonomic office chair was designed with eight adjustable areas for users to control, including chair height, armrest positioning, lumbar support, seat depth, and back positioning. The chair was developed over four years, cost $35 million to design, and resulted in 11 academic studies and 23 patents. The company released the Gesture chair in 2013, which is designed to support the way workers naturally sit.
Steelcase has also sought to make the industry more sustainable, building on a design path that dates back to the Metal Office Furniture Company in 1945. The idea started when Steelcase saw the need for furniture that could be personalized for custom-sized spaces, with the ability to fix a broken part if necessary. The series grew to over 200 compatible arrangements for tables and desks. The parts of this simple assembly were designed to be repaired, replaced, or recycled as many times as the user needs.
Brands
Subsidiaries include AMQ, Coalesse, Halcon, Orangebox, Smith System, and Viccarbe, as well as several other brands such as Steelcase Health and Education. The company established an office accessories brand called Details in 1990. In 1993, Steelcase launched Turnstone, a line of furniture designed for small businesses and home offices. Designtex, which produces interior textiles and upholstery, was acquired in 1998. Nurture was founded in 2006 to create products for the health care industry, including furniture and interiors for waiting rooms, offices, and clinics. The brand became Steelcase Health in 2014.
Steelcase merged three of its subsidiaries (Brayton International, Metro Furniture and Vecta) to form Coalesse in 2008. Coalesse products are meant for what the company calls "live/work" spaces, a result of the frequent overlap of home and office in modern working habits.
Company culture
In 1985, Steelcase purchased the Meyer May House designed by Frank Lloyd Wright and restored it, opening it to the public in 1987. A corporate art program has resulted in a collection including pieces by Pablo Picasso, Andy Warhol and Dale Chihuly.
The company employs a research group called WorkSpace Futures to study workplace trends. In 2010, Steelcase underwent a three-year project to update its Grand Rapids headquarters to promote employee productivity and employee well-being, including redesigning a cafeteria into an all-purpose work environment that provides food service and space for meetings, socializing, and independent work.
Steelcase's sustainability efforts have included reducing packaging, using regional facilities to reduce shipping distance, cutting greenhouse gas emissions and water consumption, and a goal to reduce its environmental footprint by 25 percent by 2020. As of 2012, Steelcase had reduced its waste by 80 percent, greenhouse gas emissions by 37 percent and water consumption by 54 percent since 2006. According to the company's WorkFutures group, the company also analyzes its supply chain and materials chemistry to determine product sustainability. As of 2014, the company led its industry in Cradle to Cradle-certified products. In 2016, Steelcase employees volunteered 38,913 hours and the Steelcase Foundation donated more than US$5.7 million.
Steelcase became Carbon Neutral on August 25, 2020, with the plan of becoming Carbon negative (eliminating more carbon than they produce) by 2030. As a company they have a focus on green chemistry and have stopped manufacturing with many chemicals like Polyvinyl chloride (PVC).
Awards
Company Awards
The company won the Editors' Choice award at the 2014 NeoCon product competition for "Quiet Spaces", a series of workspaces designed for introverts and a collaboration with Susan Cain, author of Quiet: The Power of Introverts in a World That Can't Stop Talking.
Steelcase was named The World's Most Admired Company by Forbes in 2018, 2019 and 2020. It also earned the 2020 Civic Award.
Design Awards
2014 Steelcase's SOTO II Worktools won a Silver Award in the Office Accessories category in the Editors' Choice awards.
2018 Best Large Showroom and Best of Competition at NeoCon
2019 Steelcase won the Red Dot Award for its SILQ chair design.
2021 Best of NeoCon Gold and Best of NeoCon Innovation Awards
References
External links
Furniture companies of the United States
Manufacturing companies based in Grand Rapids, Michigan
Industrial design
Manufacturing companies established in 1912
1912 establishments in Michigan
Companies listed on the New York Stock Exchange | Steelcase | [
"Engineering"
] | 1,732 | [
"Industrial design",
"Design engineering",
"Design"
] |
2,576,885 | https://en.wikipedia.org/wiki/Field-replaceable%20unit | A field-replaceable unit (FRU) is a printed circuit board, part, or assembly that can be quickly and easily removed from a computer or other piece of electronic equipment, and replaced by the user or a technician without having to send the entire product or system to a repair facility. FRUs allow a technician lacking in-depth product knowledge to isolate faults and replace faulty components. The granularity of FRUs in a system impacts total cost of ownership and support, including the costs of stocking spare parts, where spares are deployed to meet repair time goals, how diagnostic tools are designed and implemented, levels of training for field personnel, whether end-users can do their own FRU replacement, etc.
Other equipment
FRUs are not strictly confined to computers but are also part of many high-end, lower-volume consumer and commercial products. For example, in military aviation, electronic components of line-replaceable units, typically known as shop-replaceable units (SRUs), are repaired at field-service backshops, usually by a "remove and replace" repair procedure, with specialized repair performed at centralized depot or by the OEM.
History
Many vacuum tube computers had FRUs:
Pluggable units containing one or more vacuum tubes and various passive components
Most transistorized and integrated circuit-based computers had FRUs:
Computer modules, circuit boards containing discrete transistors and various passive components. Examples:
IBM SMS cards
DEC System Building Blocks cards
DEC Flip-Chip cards
Circuit boards containing monolithic ICs and/or hybrid ICs, such as IBM SLT cards.
Vacuum tubes themselves are usually FRUs.
For a short period starting in the late 1960s, some television set manufacturers made solid-state televisions with FRUs instead of a single board attached to the chassis. However, modern televisions put all the electronics on one large board to reduce manufacturing costs.
Trends
As the sophistication and complexity of multi-replaceable-unit electronics in both commercial and consumer industries have increased, many design and manufacturing organizations have expanded the use of the FRU storage device. Storage is no longer limited to simple identification of the FRU itself, but now also comprises back-up copies of critical system information such as system serial numbers, MAC addresses, and even security information. Some systems will fail to function at all unless each FRU in the system is validated at start-up. Today one cannot assume that the FRU storage device is only used to maintain the FRU ID of the part.
See also
Shop-replaceable unit
Line-replaceable unit
Notes
Electronic engineering
Maintenance | Field-replaceable unit | [
"Technology",
"Engineering"
] | 525 | [
"Computer engineering",
"Electronic engineering",
"Maintenance",
"Mechanical engineering",
"Electrical engineering"
] |
772,517 | https://en.wikipedia.org/wiki/Tsiolkovsky%20rocket%20equation | The classical rocket equation, or ideal rocket equation is a mathematical equation that describes the motion of vehicles that follow the basic principle of a rocket: a device that can apply acceleration to itself using thrust by expelling part of its mass with high velocity and can thereby move due to the conservation of momentum.
It is credited to Konstantin Tsiolkovsky, who independently derived it and published it in 1903, although it had been independently derived and published by William Moore in 1810, and later published in a separate book in 1813. Robert Goddard also developed it independently in 1912, and Hermann Oberth derived it independently about 1921.
The maximum change of velocity of the vehicle, Δv (with no external forces acting), is:
Δv = ve ln(m0 / mf) = Isp g0 ln(m0 / mf)
where:
ve = Isp g0 is the effective exhaust velocity;
Isp is the specific impulse in dimension of time;
g0 is standard gravity;
ln is the natural logarithm function;
m0 is the initial total mass, including propellant, a.k.a. wet mass;
mf is the final total mass without propellant, a.k.a. dry mass.
Given the effective exhaust velocity ve determined by the rocket motor's design, the desired delta-v Δv (e.g., orbital speed or escape velocity), and a given dry mass mf, the equation can be solved for the required wet mass m0:
m0 = mf e^(Δv / ve)
The required propellant mass is then
m0 − mf = mf (e^(Δv / ve) − 1)
The necessary wet mass grows exponentially with the desired delta-v.
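A minimal numerical sketch of these relations (the values and function names below are illustrative, not taken from the article):

```python
import math

def delta_v(v_e, m0, mf):
    """Ideal rocket equation: velocity change for effective exhaust
    velocity v_e, wet mass m0 and dry mass mf (consistent units)."""
    return v_e * math.log(m0 / mf)

def wet_mass(v_e, mf, dv):
    """Invert the rocket equation: wet mass needed for a desired delta-v."""
    return mf * math.exp(dv / v_e)

# Illustrative numbers: v_e = 4,500 m/s, dry mass 1,000 kg, target 9,700 m/s
v_e, mf, dv = 4500.0, 1000.0, 9700.0
m0 = wet_mass(v_e, mf, dv)
print(round(m0))                    # ~8,633 kg of wet mass
print(round(m0 - mf))               # ~7,633 kg of propellant
print(round(delta_v(v_e, m0, mf)))  # recovers 9,700 m/s
```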
History
The equation is named after Russian scientist Konstantin Tsiolkovsky who independently derived it and published it in his 1903 work.
The equation had been derived earlier by the British mathematician William Moore in 1810, and later published in a separate book in 1813.
American Robert Goddard independently developed the equation in 1912 when he began his research to improve rocket engines for possible space flight. German engineer Hermann Oberth independently derived the equation about 1920 as he studied the feasibility of space travel.
While the derivation of the rocket equation is a straightforward calculus exercise, Tsiolkovsky is honored as being the first to apply it to the question of whether rockets could achieve speeds necessary for space travel.
Experiment of the Boat by Tsiolkovsky
In order to understand the principle of rocket propulsion, Konstantin Tsiolkovsky proposed the famous thought experiment of "the boat". A person is in a boat away from the shore, without oars, and wants to reach the shore. They notice that the boat is loaded with a certain quantity of stones and have the idea of quickly and repeatedly throwing the stones, one after another, in the opposite direction. Effectively, the momentum carried away by the stones thrown in one direction gives the boat an equal momentum in the other direction (ignoring friction / drag).
Derivation
Most popular derivation
Consider the following system:
In the following derivation, "the rocket" is taken to mean "the rocket and all of its unexpended propellant".
Newton's second law of motion relates external forces (Fi) to the change in linear momentum of the whole system (including rocket and exhaust) as follows:
ΣFi = lim(Δt → 0) (P2 − P1) / Δt
where P1 is the momentum of the rocket at time t:
P1 = (m + Δm) V
and P2 is the momentum of the rocket and exhausted mass at time t + Δt:
P2 = m (V + ΔV) + Δm Ve
and where, with respect to the observer:
V is the velocity of the rocket at time t
V + ΔV is the velocity of the rocket at time t + Δt
Ve is the velocity of the mass added to the exhaust (and lost by the rocket) during time Δt
m + Δm is the mass of the rocket at time t
m is the mass of the rocket at time t + Δt
The velocity of the exhaust in the observer frame is related to the velocity of the exhaust in the rocket frame by:
thus,
Solving this yields:
If and are opposite, have the same direction as , are negligible (since ), and using (since ejecting a positive results in a decrease in rocket mass in time),
If there are no external forces then (conservation of linear momentum) and
Assuming that ve is constant (known as Tsiolkovsky's hypothesis), so it is not subject to integration, the above equation may be integrated as follows:
Δv = ∫ dV = −ve ∫(from m0 to mf) dm / m
This then yields
Δv = ve ln(m0 / mf)
or equivalently
mf = m0 e^(−Δv / ve)
or
m0 = mf e^(Δv / ve)
or
m0 − mf = mf (e^(Δv / ve) − 1)
where m0 is the initial total mass including propellant, mf the final mass, and ve the velocity of the rocket exhaust with respect to the rocket (the specific impulse, or, if measured in time, that multiplied by gravity-on-Earth acceleration). If ve is not constant, we might not have rocket equations that are as simple as the above forms. Much research on rocket dynamics has been based on Tsiolkovsky's assumption of a constant ve.
The value m0 − mf is the total working mass of propellant expended.
Δv (delta-v) is the integration over time of the magnitude of the acceleration produced by using the rocket engine (what would be the actual acceleration if external forces were absent). In free space, for the case of acceleration in the direction of the velocity, this is the increase of the speed. In the case of an acceleration in the opposite direction (deceleration) it is the decrease of the speed. Of course gravity and drag also accelerate the vehicle, and they can add to or subtract from the change in velocity experienced by the vehicle. Hence delta-v may not always be the actual change in speed or velocity of the vehicle.
Other derivations
Impulse-based
The equation can also be derived from the basic integral of acceleration in the form of force (thrust) over mass.
By representing the delta-v equation as the following:
where T is thrust, m0 is the initial (wet) mass and Δm is the initial mass minus the final (dry) mass,
and realising that the integral of a resultant force over time is total impulse, assuming thrust is the only force involved,
The integral is found to be:
Realising that impulse over the change in mass is equivalent to force over propellant mass flow rate (p), which is itself equivalent to exhaust velocity,
the integral can be equated to
Acceleration-based
Imagine a rocket at rest in space with no forces exerted on it (Newton's First Law of Motion). From the moment its engine is started (clock set to 0) the rocket expels gas mass at a constant mass flow rate R (kg/s) and at exhaust velocity relative to the rocket ve (m/s). This creates a constant force F propelling the rocket that is equal to R × ve. The rocket is subject to a constant force, but its total mass is decreasing steadily because it is expelling gas. According to Newton's Second Law of Motion, its acceleration at any time t is its propelling force F divided by its current mass m:
a(t) = dv/dt = F / m(t) = R ve / m(t)
Now, the mass of fuel the rocket initially has on board is equal to m0 – mf. For the constant mass flow rate R it will therefore take a time T = (m0 – mf)/R to burn all this fuel. Integrating both sides of the equation with respect to time from 0 to T (and noting that R = dm/dt allows a substitution on the right) obtains:
Δv = ve ln(m0 / mf)
Limit of finite mass "pellet" expulsion
The rocket equation can also be derived as the limiting case of the speed change for a rocket that expels its fuel in the form of pellets consecutively, as , with an effective exhaust speed such that the mechanical energy gained per unit fuel mass is given by .
In the rocket's center-of-mass frame, if a pellet of mass is ejected at speed and the remaining mass of the rocket is , the amount of energy converted to increase the rocket's and pellet's kinetic energy is
Using momentum conservation in the rocket's frame just prior to ejection, , from which we find
Let be the initial fuel mass fraction on board and the initial fueled-up mass of the rocket. Divide the total mass of fuel into discrete pellets each of mass . The remaining mass of the rocket after ejecting pellets is then . The overall speed change after ejecting pellets is the sum
Notice that for large the last term in the denominator and can be neglected to give
where and .
As this Riemann sum becomes the definite integral
since the final remaining mass of the rocket is .
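A quick numerical illustration of this limit (not from the article): instead of the energy-based pellet model above, the sketch below uses the simpler assumption that each pellet leaves at speed v_e relative to the rocket after the ejection, applying non-relativistic momentum conservation in the rocket's instantaneous rest frame; the resulting speed change approaches ve ln(m0/mf) as the number of pellets grows.

```python
import math

def pellet_delta_v(v_e, m0, mf, n):
    """Speed gained by ejecting the fuel as n equal pellets, each leaving
    at speed v_e relative to the rocket after the ejection (momentum
    balance in the rocket's instantaneous rest frame before each shot)."""
    dm = (m0 - mf) / n
    m, dv = m0, 0.0
    for _ in range(n):
        dv += v_e * dm / m   # m * delta_v = dm * v_e for one ejection
        m -= dm
    return dv

v_e, m0, mf = 3000.0, 10000.0, 4000.0
for n in (1, 10, 100, 10000):
    print(n, round(pellet_delta_v(v_e, m0, mf, n), 1))
print("limit", round(v_e * math.log(m0 / mf), 1))   # ~2748.9 m/s
```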
Special relativity
If special relativity is taken into account, the following equation can be derived for a relativistic rocket, with again standing for the rocket's final velocity (after expelling all its reaction mass and being reduced to a rest mass of ) in the inertial frame of reference where the rocket started at rest (with the rest mass including fuel being initially), and standing for the speed of light in vacuum:
Writing as allows this equation to be rearranged as
Then, using the identity (here "exp" denotes the exponential function; see also Natural logarithm as well as the "power" identity at logarithmic identities) and the identity (see Hyperbolic function), this is equivalent to
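The closed-form result of this rearrangement is the standard relativistic rocket equation, Δv = c tanh((ve/c) ln(m0/m1)), which reduces to the classical form when ve and Δv are small compared with c. A short numerical sketch (the mass ratio and exhaust speed are arbitrary example values):

```python
import math

C = 299_792_458.0   # speed of light, m/s

def relativistic_delta_v(v_e, m0, m1):
    """Final speed, starting from rest, of a relativistic rocket:
    delta_v = c * tanh((v_e / c) * ln(m0 / m1))."""
    return C * math.tanh((v_e / C) * math.log(m0 / m1))

# Exhaust at 0.1 c and a mass ratio of 10
print(round(relativistic_delta_v(0.1 * C, 10.0, 1.0) / C, 3))   # ~0.226 c
# The classical formula would give 0.1 * ln(10) ~ 0.230 c
```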
Terms of the equation
Delta-v
Delta-v (literally "change in velocity"), symbolised as Δv and pronounced delta-vee, as used in spacecraft flight dynamics, is a measure of the impulse that is needed to perform a maneuver such as launching from, or landing on a planet or moon, or an in-space orbital maneuver. It is a scalar that has the units of speed. As used in this context, it is not the same as the physical change in velocity of the vehicle.
Delta-v is produced by reaction engines, such as rocket engines, is proportional to the thrust per unit mass and burn time, and is used to determine the mass of propellant required for the given manoeuvre through the rocket equation.
For multiple manoeuvres, delta-v sums linearly.
For interplanetary missions delta-v is often plotted on a porkchop plot which displays the required mission delta-v as a function of launch date.
Mass fraction
In aerospace engineering, the propellant mass fraction is the portion of a vehicle's mass which does not reach the destination, usually used as a measure of the vehicle's performance. In other words, the propellant mass fraction is the ratio between the propellant mass and the initial mass of the vehicle. In a spacecraft, the destination is usually an orbit, while for aircraft it is their landing location. A higher mass fraction represents less weight in a design. Another related measure is the payload fraction, which is the fraction of initial weight that is payload.
Effective exhaust velocity
The effective exhaust velocity is often specified as a specific impulse and they are related to each other by:
ve = g0 Isp
where
Isp is the specific impulse in seconds,
ve is the specific impulse measured in m/s, which is the same as the effective exhaust velocity measured in m/s (or ft/s if g is in ft/s2),
g0 is the standard gravity, 9.80665 m/s2 (in Imperial units 32.174 ft/s2).
Applicability
The rocket equation captures the essentials of rocket flight physics in a single short equation. It also holds true for rocket-like reaction vehicles whenever the effective exhaust velocity is constant, and can be summed or integrated when the effective exhaust velocity varies. The rocket equation only accounts for the reaction force from the rocket engine; it does not include other forces that may act on a rocket, such as aerodynamic or gravitational forces. As such, when using it to calculate the propellant requirement for launch from (or powered descent to) a planet with an atmosphere, the effects of these forces must be included in the delta-V requirement (see Examples below). In what has been called "the tyranny of the rocket equation", there is a limit to the amount of payload that the rocket can carry, as higher amounts of propellant increase the overall weight, and thus also increase the fuel consumption. The equation does not apply to non-rocket systems such as aerobraking, gun launches, space elevators, launch loops, tether propulsion or light sails.
The rocket equation can be applied to orbital maneuvers in order to determine how much propellant is needed to change to a particular new orbit, or to find the new orbit as the result of a particular propellant burn. When applying to orbital maneuvers, one assumes an impulsive maneuver, in which the propellant is discharged and delta-v applied instantaneously. This assumption is relatively accurate for short-duration burns such as for mid-course corrections and orbital insertion maneuvers. As the burn duration increases, the result is less accurate due to the effect of gravity on the vehicle over the duration of the maneuver. For low-thrust, long duration propulsion, such as electric propulsion, more complicated analysis based on the propagation of the spacecraft's state vector and the integration of thrust are used to predict orbital motion.
Examples
Assume an exhaust velocity of 4,500 m/s and a Δv of 9,700 m/s (Earth to LEO, including the Δv needed to overcome gravity and aerodynamic drag).
Single-stage-to-orbit rocket: 1 − e^(−9,700/4,500) = 0.884, therefore 88.4% of the initial total mass has to be propellant. The remaining 11.6% is for the engines, the tank, and the payload.
Two-stage-to-orbit: suppose that the first stage should provide a Δv of 5,000 m/s; 1 − e^(−5,000/4,500) = 0.671, therefore 67.1% of the initial total mass has to be propellant to the first stage. The remaining mass is 32.9%. After disposing of the first stage, a mass remains equal to this 32.9%, minus the mass of the tank and engines of the first stage. Assume that this is 8% of the initial total mass, then 24.9% remains. The second stage should provide a Δv of 4,700 m/s; 1 − e^(−4,700/4,500) = 0.648, therefore 64.8% of the remaining mass has to be propellant, which is 16.2% of the original total mass, and 8.7% remains for the tank and engines of the second stage, the payload, and in the case of a space shuttle, also the orbiter. Thus together 16.7% of the original launch mass is available for all engines, the tanks, and payload.
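The percentages above can be reproduced with a few lines of arithmetic; the sketch below simply restates the example (the 8% first-stage hardware figure is the assumption made in the text):

```python
import math

def propellant_fraction(dv, v_e):
    """Fraction of the starting mass that must be propellant for a given delta-v."""
    return 1.0 - math.exp(-dv / v_e)

v_e = 4500.0

# Single stage to orbit, 9,700 m/s in one burn
print(round(propellant_fraction(9700.0, v_e), 3))       # 0.884

# Two stages: 5,000 m/s then 4,700 m/s
f1 = propellant_fraction(5000.0, v_e)                   # ~0.671
after_stage1 = 1.0 - f1 - 0.08                          # drop 8% stage-1 hardware -> ~0.249
f2 = propellant_fraction(4700.0, v_e)                   # ~0.648
stage2_propellant = after_stage1 * f2                   # ~0.16 of the launch mass
leftover = after_stage1 - stage2_propellant             # ~0.087 for hardware + payload
print(round(f1, 3), round(after_stage1, 3), round(f2, 3))
print(round(stage2_propellant, 3), round(leftover, 3))
```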
Stages
In the case of sequentially thrusting rocket stages, the equation applies for each stage, where for each stage the initial mass in the equation is the total mass of the rocket after discarding the previous stage, and the final mass in the equation is the total mass of the rocket just before discarding the stage concerned. For each stage the specific impulse may be different.
For example, if 80% of the mass of a rocket is the fuel of the first stage, and 10% is the dry mass of the first stage, and 10% is the remaining rocket, then
Δv = ve ln(100 / 20) = ve ln 5 = 1.61 ve.
With three similar, subsequently smaller stages with the same ve for each stage, this gives:
Δv = 3 ve ln 5 = 4.83 ve
and the payload is 10% × 10% × 10% = 0.1% of the initial mass.
A comparable SSTO rocket, also with a 0.1% payload, could have a mass of 11.1% for fuel tanks and engines, and 88.8% for fuel. This would give
Δv = ve ln(100 / 11.2) = 2.19 ve.
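The staging comparison can be checked numerically; working in units of the exhaust velocity, a sketch using the figures from the example:

```python
import math

# Each of three similar stages: 80% fuel, 10% dry stage mass, 10% passed on.
per_stage = math.log(100.0 / 20.0)       # ln 5 ~ 1.61 exhaust velocities
print(round(3 * per_stage, 2))           # ~4.83, with a 0.1% payload

# SSTO with the same 0.1% payload: 88.8% fuel, 11.1% tanks and engines,
# so the final mass is 11.2% of the launch mass.
print(round(math.log(100.0 / 11.2), 2))  # ~2.19 exhaust velocities
```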
If the motor of a new stage is ignited before the previous stage has been discarded and the simultaneously working motors have a different specific impulse (as is often the case with solid rocket boosters and a liquid-fuel stage), the situation is more complicated.
See also
Delta-v budget
Jeep problem
Mass ratio
Oberth effect - applying delta-v in a gravity well increases the final velocity
Relativistic rocket
Reversibility of orbits
Robert H. Goddard - added terms for gravity and drag in vertical flight
Spacecraft propulsion
Stigler’s law of eponymy
References
External links
How to derive the rocket equation
Relativity Calculator – Learn Tsiolkovsky's rocket equations
Tsiolkovsky's rocket equations plot and calculator in WolframAlpha
Astrodynamics
Eponymous equations of physics
Rocket Equation
Single-stage-to-orbit
Rocket propulsion | Tsiolkovsky rocket equation | [
"Physics",
"Engineering"
] | 3,211 | [
"Astrodynamics",
"Eponymous equations of physics",
"Equations of physics",
"Aerospace engineering"
] |
774,220 | https://en.wikipedia.org/wiki/Diamond%20anvil%20cell | A diamond anvil cell (DAC) is a high-pressure device used in geology, engineering, and materials science experiments. It permits the compression of a small (sub-millimeter-sized) piece of material to extreme pressures, typically up to around 100–200 gigapascals, although it is possible to achieve pressures up to 770 gigapascals (7,700,000 bars or 7.7 million atmospheres).
The device has been used to recreate the pressure existing deep inside planets to synthesize materials and phases not observed under normal ambient conditions. Notable examples include the non-molecular ice X, polymeric nitrogen and metallic phases of xenon, lonsdaleite, and potentially metallic hydrogen.
A DAC consists of two opposing diamonds with a sample compressed between the polished culets (tips). Pressure may be monitored using a reference material whose behavior under pressure is known. Common pressure standards include ruby fluorescence, and various structurally simple metals, such as copper or platinum. The uniaxial pressure supplied by the DAC may be transformed into uniform hydrostatic pressure using a pressure-transmitting medium, such as argon, xenon, hydrogen, helium, paraffin oil or a mixture of methanol and ethanol. The pressure-transmitting medium is enclosed by a gasket and the two diamond anvils. The sample can be viewed through the diamonds and illuminated by X-rays and visible light. In this way, X-ray diffraction and fluorescence; optical absorption and photoluminescence; Mössbauer, Raman and Brillouin scattering; positron annihilation and other signals can be measured from materials under high pressure. Magnetic and microwave fields can be applied externally to the cell allowing nuclear magnetic resonance, electron paramagnetic resonance and other magnetic measurements. Attaching electrodes to the sample allows electrical and magnetoelectrical measurements as well as heating up the sample to a few thousand degrees. Much higher temperatures (up to 7000 K) can be achieved with laser-induced heating, and cooling down to millikelvins has been demonstrated.
Principle
The operation of the diamond anvil cell relies on a simple principle:
P = F / A
where P is the pressure, F the applied force, and A the area. Typical culet sizes for diamond anvils are 100–250 micrometres (μm), such that a very high pressure is achieved by applying a moderate force on a sample with a small area, rather than applying a large force on a large area. Diamond is a very hard and virtually incompressible material, thus minimising the deformation and failure of the anvils that apply the force.
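A back-of-the-envelope illustration of this principle (the force and culet size below are arbitrary example values):

```python
import math

def culet_pressure(force_n, culet_diameter_m):
    """Pressure (Pa) produced by a force spread over a circular culet."""
    area = math.pi * (culet_diameter_m / 2.0) ** 2
    return force_n / area

# A 1 kN force on a 100 micrometre culet
p = culet_pressure(1.0e3, 100e-6)
print(round(p / 1e9), "GPa")   # ~127 GPa
```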
History
The study of materials at extreme conditions, high pressure and high temperature uses a wide array of techniques to achieve these conditions and probe the behavior of material while in the extreme environment. Percy Williams Bridgman, the great pioneer of high-pressure research during the first half of the 20th century, revolutionized the field of high pressures with his development of an opposed anvil device with small flat areas that were pressed one against the other with a lever-arm. The anvils were made of tungsten carbide (WC). This device could achieve pressure of a few gigapascals, and was used in electrical resistance and compressibility measurements.
The first diamond anvil cell was created in 1957-1958. The principles of the DAC are similar to the Bridgman anvils, but in order to achieve the highest possible pressures without breaking the anvils, they were made of the hardest known material: a single crystal diamond. The first prototypes were limited in their pressure range and there was not a reliable way to calibrate the pressure.
The diamond anvil cell became the most versatile pressure generating device that has a single characteristic that sets it apart from the other pressure devices – its optical transparency. This provided the early high pressure pioneers with the ability to directly observe the properties of a material while under pressure. With just the use of an optical microscope, phase boundaries, color changes and recrystallization could be seen immediately, while x-ray diffraction or spectroscopy required time to expose and develop photographic film. The potential for the diamond anvil cell was realized by Alvin Van Valkenburg while he was preparing a sample for IR spectroscopy and was checking the alignment of the diamond faces.
The diamond cell was created at the National Bureau of Standards (NBS) by Charles E. Weir, Ellis R. Lippincott, and Elmer N. Bunting. Within the group, each member focused on different applications of the diamond cell. Van Valkenburg focused on making visual observations, Weir on XRD, Lippincott on IR Spectroscopy. The group members were well experienced in each of their techniques before they began outside collaboration with university researchers such as William A. Bassett and Taro Takahashi at the University of Rochester.
During the first experiments using diamond anvils, the sample was placed on the flat tip of the diamond (the culet) and pressed between the diamond faces. As the diamond faces were pushed closer together, the sample would be pressed and extrude out from the center. Using a microscope to view the sample, it could be seen that a smooth pressure gradient existed across the sample with the outermost portions of the sample acting as a kind of gasket. The sample was not evenly distributed across the diamond culet but localized in the center due to the "cupping" of the diamond at higher pressures. This cupping phenomenon is the elastic stretching of the edges of the diamond culet, commonly referred to as the "shoulder height". Many diamonds were broken during the first stages of producing a new cell or any time an experiment is pushed to higher pressure. The NBS group was in a unique position where almost endless supplies of diamonds were available to them. Customs officials occasionally confiscated diamonds from people attempting to smuggle them into the country. Disposing of such valuable confiscated materials could be problematic given rules and regulations. One solution was simply to make such materials available to people at other government agencies if they could make a convincing case for their use. This became an unrivaled resource as other teams at the University of Chicago, Harvard University, and General Electric entered the high pressure field.
During the following decades DACs have been successively refined, the most important innovations being the use of gaskets and the ruby pressure calibration. The DAC evolved to be the most powerful lab device for generating static high pressure. The range of static pressure attainable today extends to 640 GPa, much higher than the estimated pressures at the Earth's center (~360 GPa).
Components
There are many different DAC designs but all have four main components:
Force-generating device
Relies on the operation of either a lever arm, tightening screws, or pneumatic or hydraulic pressure applied to a membrane. In all cases the force is uniaxial and is applied to the tables (bases) of the two anvils.
Two opposing diamond anvils
Made of high gem quality, flawless diamonds, usually with 16 facets, they typically weigh 1/8 to 1/3 carat (25 to 70 mg). The culet (tip) is ground and polished to a hexadecagonal surface parallel to the table. The culets of the two diamonds face one another, and must be perfectly parallel in order to produce uniform pressure and to prevent dangerous strains. Specially selected anvils are required for specific measurements – for example, low diamond absorption and luminescence is required in corresponding experiments.
Gasket
A gasket used in a diamond anvil cell experiment is a thin metal foil, typically 0.3 mm in thickness, which is placed in between the diamonds. Desirable materials for gaskets are strong, stiff metals such as rhenium or tungsten. Steel is frequently used as a cheaper alternative for low pressure experiments. The above-mentioned materials cannot be used in radial geometries where the x-ray beam must pass through the gasket. Since they are not transparent to X-rays, if X-ray illumination through the gasket is required, lighter materials such as beryllium, boron nitride, boron or diamond are used as a gasket. Gaskets are preindented by the diamonds and a hole is drilled in the center of the indentation to create the sample chamber.
Pressure-transmitting medium
The pressure transmitting medium is the compressible fluid that fills the sample chamber and transmits the applied force to the sample. Hydrostatic pressure is preferred for high-pressure experiments because variation in strain throughout the sample can lead to distorted observations of different behaviors. In some experiments stress and strain relationships are investigated and the effects of non-hydrostatic forces are desired. A good pressure medium will remain a soft, compressible fluid to high pressure.
The full range of techniques that are available has been summarized in a tree diagram by William Bassett. The ability to utilize any and all of these techniques hinges on being able to look through the diamonds which was first demonstrated by visual observations.
Measuring pressure
The two main pressure scales used in static high-pressure experiments are X-ray diffraction of a material with a known equation of state and measuring the shift in ruby fluorescence lines. The first began with NaCl, for which the compressibility has been determined by first principles in 1968. The major pitfall of this method of measuring pressure is that the use of X-rays is required. Many experiments do not require X-rays and this presents a major inconvenience to conduct both the intended experiment and a diffraction experiment. In 1971, the NBS high pressure group was set in pursuit of a spectroscopic method for determining pressure. It was found that the wavelength of ruby fluorescence emissions change with pressure; this was easily calibrated against the NaCl scale.
Once pressure could be generated and measured, it quickly became a competition over which cells could reach the highest pressures. The need for a reliable pressure scale became more important during this race. Shock-wave data for the compressibilities of Cu, Mo, Pd, and Ag were available at this time and could be used to define equations of state up to Mbar pressures; these scales were then used to report the successively higher pressures achieved.
Both methods are continually refined and in use today. However, the ruby method is less reliable at high temperature. Well defined equations of state are needed when adjusting temperature and pressure, two parameters that affect the lattice parameters of materials.
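As a sketch of how a ruby pressure scale is applied in practice, the pressure is computed from the measured shift of the ruby R1 fluorescence line. The constants below are the commonly quoted quasi-hydrostatic calibration values (an assumption here, not taken from this article; they should be checked against the primary calibration literature):

```python
def ruby_pressure_gpa(wavelength_nm, lambda0_nm=694.24, a=1904.0, b=7.665):
    """Pressure (GPa) from the shifted ruby R1 fluorescence wavelength.
    a, b and the ambient wavelength lambda0 are assumed calibration constants."""
    return (a / b) * ((wavelength_nm / lambda0_nm) ** b - 1.0)

# R1 line measured at 700 nm instead of 694.24 nm
print(round(ruby_pressure_gpa(700.0), 1))   # ~16 GPa
```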
Uses
Prior to the invention of the diamond anvil cell, static high-pressure apparatus required large hydraulic presses which weighed several tons and required large specialized laboratories. The simplicity and compactness of the DAC meant that it could be accommodated in a wide variety of experiments. Some contemporary DACs can easily fit into a cryostat for low-temperature measurements, and for use with a superconducting electromagnet. In addition to being hard, diamonds have the advantage of being transparent to a wide range of the electromagnetic spectrum from infrared to gamma rays, with the exception of the far ultraviolet and soft X-rays. This makes the DAC a perfect device for spectroscopic experiments and for crystallographic studies using hard X-rays.
A variant of the diamond anvil, the hydrothermal diamond anvil cell (HDAC) is used in experimental petrology/geochemistry for the study of aqueous fluids, silicate melts, immiscible liquids, mineral solubility and aqueous fluid speciation at geologic pressures and temperatures. The HDAC is sometimes used to examine aqueous complexes in solution using the synchrotron light source techniques XANES and EXAFS. The design of HDAC is very similar to that of DAC, but it is optimized for studying liquids.
Innovative uses
An innovative use of the diamond anvil cell is testing the sustainability and durability of life under high pressures, including the search for life on extrasolar planets. Testing portions of the theory of panspermia (a form of interstellar travel) is one application of DAC. When interstellar objects containing life-forms impact a planetary body, there is high pressure upon impact and the DAC can replicate this pressure to determine if the organisms could survive. Another reason the DAC is applicable for testing life on extrasolar planets is that planetary bodies that hold the potential for life may have incredibly high pressure on their surface.
In 2002, scientists at the Carnegie Institution of Washington examined the pressure limits of life processes. Suspensions of bacteria, specifically Escherichia coli and Shewanella oneidensis, were placed in the DAC, and the pressure was raised to 1.6 GPa, which is more than 16,000 times Earth's surface pressure (985 hPa). After 30 hours, only about 1% of the bacteria survived. The experimenters then added a dye to the solution. If the cells survived the squeezing and were capable of carrying out life processes, specifically breaking down formate, the dye would turn clear. 1.6 GPa is such great pressure that during the experiment the DAC turned the solution into ice-IV, a room-temperature ice. When the bacteria broke down the formate in the ice, liquid pockets would form because of the chemical reaction. The bacteria were also able to cling to the surface of the DAC with their tails.
Skeptics debated whether breaking down formate is enough to consider the bacteria living. Art Yayanos, an oceanographer at the Scripps Institution of Oceanography in La Jolla, California, believes an organism should only be considered living if it can reproduce. Subsequent results from independent research groups have shown the validity of the 2002 work. This is a significant step that reiterates the need for a new approach to the old problem of studying environmental extremes through experiments. There is practically no debate whether microbial life can survive pressures up to 600 MPa, which has been shown over the last decade or so to be valid through a number of scattered publications.
Similar tests were performed with a low-pressure (0.1–600 MPa) diamond anvil cell, which has better imaging quality and signal collection. The studied microbes, Saccharomyces cerevisiae (baker's yeast), continued to grow at pressures of 15–50 MPa, and died at 200 MPa.
Single crystal X-ray diffraction
Good single crystal X-ray diffraction experiments in diamond anvil cells require sample stage to rotate on the vertical axis, omega. Most diamond anvil cells do not feature a large opening that would allow the cell to be rotated to high angles, a 60 degrees opening is considered sufficient for most crystals but larger angles are possible. The first cell to be used for single crystal experiments was designed by a graduate student at the University of Rochester, Leo Merrill. The cell was triangular with beryllium seats that the diamonds were mounted on; the cell was pressurized with screws and guide pins holding everything in place.
High-temperature techniques
Heating in diamond-anvil cells is typically done by two means, external or internal heating. External heating is defined as heating the anvils and would include a number of resistive heaters that are placed around the diamonds or around the cell body. The complementary method does not change the temperature of the anvils and includes fine resistive heaters placed within the sample chamber and laser heating. The main advantage of resistive heating is the precise measurement of temperature with thermocouples, but the temperature range is limited by the properties of the diamond, which will oxidize in air at 700 °C. The use of an inert atmosphere can extend this range above 1000 °C. A tungsten ring-wire resistive heater inside a BX90 DAC filled with Ar gas was reported to reach 1400 °C. With laser heating the sample can reach temperature above 5000 °C, but the minimum temperature that can be measured when using a laser-heating system is ~1200 °C and the measurement is much less precise. Advances in resistive heating are closing the gap between the two techniques so that systems can be studied from room temperature to beyond 5700 °C with the combination of the two.
Laser heating
The development of laser heating began only 8 years after Charles Weir, of the National Bureau of Standards (NBS), made the first diamond anvil cell and Alvin Van Valkenburg, NBS, realized the potential of being able to see the sample while under pressure. William Bassett and his colleague Taro Takahashi focused a laser beam on the sample while under pressure. The first laser heating system used a single 7 joule pulsed ruby laser that heated the sample to 3000 °C while at 260 kilobars. This was sufficient to convert graphite to diamond. The major flaws within the first system related to control and temperature measurement.
Temperature measurement was initially done by Basset using an optical pyrometer to measure the intensity of the incandescent light from the sample. Colleagues at UC Berkeley were better able to utilize the black-body radiation and more accurately measure the temperature. The hot spot produced by the laser also created large thermal gradients in between the portions of sample that were hit by the focused laser and those that were not. The solution to this problem is ongoing but advances have been made with the introduction of a double-sided approach.
Double-sided heating
The use of two lasers to heat the sample reduces the axial temperature gradient, which allows for thicker samples to be heated more evenly. In order for a double-sided heating system to be successful it is essential that the two lasers are aligned so that they are both focused on the sample position. For in situ heating in diffraction experiments, the lasers need to be focused to the same point in space where the X-ray beam is focused.
Laser heating systems at synchrotron facilities
The European Synchrotron Radiation Facility (ESRF), as well as many other synchrotron facilities, such as the three major synchrotron user facilities in the United States, have beamlines equipped with laser heating systems. The respective beamlines with laser heating systems are at the ESRF ID27, ID18, and ID24; at the Advanced Photon Source (APS), 13-ID-D GSECARS and 16-ID-B HP-CAT; at the National Synchrotron Light Source, X17B3; and at the Advanced Light Source, 12.2.2. Laser heating has become a routine technique in high-pressure science but the reliability of temperature measurement is still controversial.
Temperature measurement
In the first experiments with laser heating, temperature came from a calibration of laser power made with known melting points of various materials. When using the pulsed ruby laser this was unreliable due to the short pulse. YAG lasers quickly became the standard, heating for a relatively long duration and allowing observation of the sample throughout the heating process. It was with the first use of YAG lasers that Bassett used an optical pyrometer to measure temperatures in the range of 1000 °C to 1600 °C. The first temperature measurements had a standard deviation of 30 °C from the brightness temperature, but due to the small sample size the uncertainty was estimated to be 50 °C, with the possibility that the true temperature of the sample was 200 °C higher than the brightness measurement. Spectrometry of the incandescent light became the next method of temperature measurement used in Bassett's group. The energy of the emitted radiation could be compared to known black-body radiation spectra to derive a temperature. Calibration of these systems is done with published melting points or melting points as measured by resistive heating.
Gas loading
Principle
The pressure transmitting medium is an important component in any high-pressure experiment. The medium fills the space within the sample 'chamber' and applies the pressure being transmitted to the medium onto the sample. In a good high-pressure experiment, the medium should maintain a homogeneous distribution of pressure on the sample. In other words, the medium must stay hydrostatic to ensure uniform compressibility of the sample. Once a pressure transmitting medium has lost its hydrostaticity, a pressure gradient forms in the chamber that increases with increasing pressure. This gradient can greatly affect the sample, compromising results. The medium must also be inert, so as not to interact with the sample, and stable under high pressures. For experiments with laser heating, the medium should have low thermal conductivity. If an optical technique is being employed, the medium should be optically transparent, and for x-ray diffraction, the medium should be a poor x-ray scatterer – so as not to contribute to the signal.
Some of the most commonly used pressure transmitting media have been sodium chloride, silicone oil, and a 4:1 methanol-ethanol mixture. Sodium chloride is easy to load and is used for high-temperature experiments because it acts as a good thermal insulator. The methanol-ethanol mixture displays good hydrostaticity to about 10 GPa and with the addition of a small amount of water can be extended to about 15 GPa.
For pressure experiments that exceed 10 GPa, noble gases are preferred. The extended hydrostaticity greatly reduces the pressure gradient in samples at high pressure. Noble gases, such as helium, neon, and argon are optically transparent, thermally insulating, have small X-ray scattering factors, and have good hydrostaticity at high pressures. Even after solidification, noble gases provide quasihydrostatic environments.
Argon is used for experiments involving laser heating because it is chemically insulating. Since it condenses at a temperature above that of liquid nitrogen, it can be loaded cryogenically. Helium and neon have low X-ray scattering factors and are thus used for collecting X-ray diffraction data. Helium and neon also have low shear moduli; minimizing strain on the sample. These two noble gases do not condense above that of liquid nitrogen and cannot be loaded cryogenically. Instead, a high-pressure gas loading system has been developed that employs a gas compression method.
Techniques
In order to load a gas as a sample or pressure transmitting medium, the gas must be in a dense state, as to not shrink the sample chamber once pressure is induced. To achieve a dense state, gases can be liquefied at low temperatures or compressed. Cryogenic loading is a technique that uses liquefied gas as a means of filling the sample chamber. The DAC is directly immersed into the cryogenic fluid that fills the sample chamber. However, there are disadvantages to cryogenic loading. With the low temperatures indicative of cryogenic loading, the sample is subjected to temperatures that could irreversibly change it. Also, the boiling liquid could displace the sample or trap an air bubble in the chamber. It is not possible to load gas mixtures using the cryogenic method due to the different boiling points of most gases. Gas compression technique densifies the gases at room temperature. With this method, most of the problems seen with cryogenic loading are fixed. Also, loading gas mixtures becomes a possibility. The technique uses a vessel or chamber in which the DAC is placed and is filled with gas. Gases are pressurized and pumped into the vessel with a compressor. Once the vessel is filled and the desired pressure is reached the DAC is closed with a clamp system run by motor driven screws.
Components
High-pressure vessel: Vessel in which the diamond anvil cell is loaded.
Clamp device seals the DAC; which is tightened by closure mechanism with motor driven screws.
PLC (programmable logic controller): Controls air flow to the compressor and all valves. The PLC ensures that valves are opened and closed in the correct sequence for accurate loading and safety.
Compressor: Responsible for compression of the gas. The compressor employs a dual-stage air-driven diaphragm design that creates pressure and avoids contamination. Able to achieve 207 MPa of pressure.
Valves: Valves open and close via the PLC to regulate which gases enter the high-pressure vessel.
Burst disks: Two burst disks in the system – one for the high-pressure system and one for the low-pressure system. These disks act as a pressure relief system that protects the system from over-pressurization.
Pressure transducers: A pressure sensor for the low- and high-pressure systems. Produces a 0–5 V output over their pressure range.
Pressure meters: Digital displays connected to each pressure transducer and the PLC system.
Vacuum pump and gauges: Cleans the system (by evacuation) before loading.
Optical system: Used for visual observation, allowing in situ observation of gasket deformation.
Ruby fluorescence system: Pressure in the sample chamber can be measured during loading using an online ruby fluorescence system. Not all systems have an online ruby fluorescence system for in situ measuring. However, being able to monitor the pressure within the chamber while the DAC is being sealed is advantageous – ensuring the desired pressure is reached (or not over-shot). Pressure is measured by the shift in the laser induced luminescence of rubies in the sample chamber.
See also
Anvil press
D-DIA
Fluid statics
High pressure
Material properties of diamond
Pressure experiment
References
External links
Materials science
Condensed matter physics
Geophysics
Physical chemistry
High pressure science | Diamond anvil cell | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 5,181 | [
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Geophysics",
"High pressure science",
"Physical chemistry",
"Matter"
] |
776,619 | https://en.wikipedia.org/wiki/Ocean%20gyre | In oceanography, a gyre () is any large system of ocean surface currents moving in a circular fashion driven by wind movements. Gyres are caused by the Coriolis effect; planetary vorticity, horizontal friction and vertical friction determine the circulatory patterns from the wind stress curl (torque).
Gyre can refer to any type of vortex in an atmosphere or a sea, even one that is human-created, but it is most commonly used in terrestrial oceanography to refer to the major ocean systems.
Gyre formation
The largest ocean gyres are wind-driven, meaning that their locations and dynamics are controlled by the prevailing global wind patterns: easterlies at the tropics and westerlies at the midlatitudes. These wind patterns result in a wind stress curl that drives Ekman pumping in the subtropics (resulting in downwelling) and Ekman suction in subpolar regions (resulting in upwelling). Ekman pumping results in an increased sea surface height at the center of the gyre and anticyclonic geostrophic currents in subtropical gyres. Ekman suction results in a depressed sea surface height and cyclonic geostrophic currents in subpolar gyres.
Wind-driven ocean gyres are asymmetrical, with stronger flows on their western boundary and weaker flows throughout their interior. The weak interior flow that is typical over most of the gyre is a result of the conservation of potential vorticity. In the shallow water equations (applicable for basin-scale flow as the horizontal length scale is much greater than the vertical length scale), potential vorticity is a function of the relative (local) vorticity ζ (zeta), the planetary vorticity f, and the depth H, and is conserved with respect to the material derivative:
D/Dt [ (ζ + f) / H ] = 0
In the case of the subtropical ocean gyre, Ekman pumping results in water piling up in the center of the gyre, compressing water parcels. This results in a decrease in the depth H, so by the conservation of potential vorticity the numerator ζ + f must also decrease. It can be further simplified by realizing that, in basin-scale ocean gyres, the relative vorticity ζ is small, meaning that local changes in vorticity cannot account for the decrease in ζ + f. Thus, the water parcel must change its planetary vorticity f accordingly. The only way to decrease the planetary vorticity is by moving the water parcel equatorward, so throughout the majority of subtropical gyres there is a weak equatorward flow. Harald Sverdrup quantified this phenomenon in his 1947 paper, "Wind Driven Currents in a Baroclinic Ocean", in which the (depth-integrated) Sverdrup balance is defined as:
β My = f ρ wE
Here, My is the meridional mass transport (positive north), β is the Rossby parameter, ρ is the water density, and wE is the vertical Ekman velocity due to wind stress curl (positive up). It can be clearly seen in this equation that for a negative Ekman velocity (e.g., Ekman pumping in subtropical gyres), meridional mass transport (Sverdrup transport) is negative (south, equatorward) in the northern hemisphere (f > 0). Conversely, for a positive Ekman velocity (e.g., Ekman suction in subpolar gyres), Sverdrup transport is positive (north, poleward) in the northern hemisphere.
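For a sense of scale, the Sverdrup balance can be evaluated with typical mid-latitude numbers (all values below are illustrative assumptions, not figures from the article):

```python
# Interior Sverdrup transport implied by a given Ekman pumping velocity.
rho  = 1025.0      # sea-water density, kg/m^3
f    = 1.0e-4      # Coriolis parameter near 45 degrees N, 1/s
beta = 1.6e-11     # Rossby parameter, 1/(m s)
w_e  = -3.0e-7     # Ekman pumping (downwelling), m/s

m_y = f * rho * w_e / beta          # meridional mass transport per unit width
print(round(m_y))                   # ~ -1,922 kg/(m s): negative = equatorward

# Summed across a 5,000 km wide basin, expressed as a volume transport:
print(round(m_y * 5.0e6 / rho / 1.0e6, 1), "Sv")   # ~ -9.4 Sv
```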
Western intensification
As the Sverdrup balance argues, subtropical ocean gyres have a weak equatorward flow and subpolar ocean gyres have a weak poleward flow over most of their area. However, there must be some return flow that goes against the Sverdrup transport in order to preserve mass balance. In this respect, the Sverdrup solution is incomplete, as it has no mechanism in which to predict this return flow. Contributions by both Henry Stommel and Walter Munk resolved this issue by showing that the return flow of gyres is done through an intensified western boundary current. Stommel's solution relies on a frictional bottom boundary layer which is not necessarily physical in a stratified ocean (currents do not always extend to the bottom).
Munk's solution instead relies on friction between the return flow and the sidewall of the basin. This allows for two cases: one with the return flow on the western boundary (western boundary current) and one with the return flow on the eastern boundary (eastern boundary current). A qualitative argument for the presence of western boundary current solutions over eastern boundary current solutions can be found again through the conservation of potential vorticity. Considering again the case of a subtropical northern hemisphere gyre, the return flow must be northward. In order to move northward (an increase in planetary vorticity f), there must be a source of positive relative vorticity to the system. The relative vorticity in the shallow-water system is:
ζ = ∂v/∂x − ∂u/∂y
Here v is again the meridional velocity and u is the zonal velocity. In the sense of a northward return flow, the zonal component is neglected and only the meridional velocity is important for relative vorticity. Thus, this solution requires that ∂v/∂x > 0 in order to increase the relative vorticity and have a valid northward return flow in the northern hemisphere subtropical gyre.
Due to friction at the boundary, the velocity of flow must go to zero at the sidewall before reaching some maximum northward velocity within the boundary layer and decaying to the southward Sverdrup transport solution far away from the boundary. Thus, the condition that ∂v/∂x > 0 can only be satisfied through a western boundary frictional layer, as the eastern boundary frictional layer forces ∂v/∂x < 0. One can make similar arguments for subtropical gyres in the southern hemisphere and for subpolar gyres in either hemisphere and see that the result remains the same: the return flow of an ocean gyre is always in the form of a western boundary current.
The western boundary current must transport on the same order of water as the interior Sverdrup transport in a much smaller area. This means western boundary currents are much stronger than interior currents, a phenomenon called "western intensification".
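The width of this western boundary layer can be estimated from the Stommel and Munk models mentioned above; the sketch below uses order-of-magnitude parameter values (assumptions, not figures from the article):

```python
beta = 2.0e-11    # Rossby parameter, 1/(m s)
a_h  = 1.0e4      # lateral eddy viscosity for the Munk model, m^2/s
r    = 1.0e-6     # bottom friction coefficient for the Stommel model, 1/s

delta_munk    = (a_h / beta) ** (1.0 / 3.0)   # Munk layer width scale
delta_stommel = r / beta                      # Stommel layer width scale
print(round(delta_munk / 1e3), "km")          # ~79 km
print(round(delta_stommel / 1e3), "km")       # ~50 km
```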
Gyre distribution
Subtropical gyres
There are five major subtropical gyres across the world's oceans: the North Atlantic Gyre, the South Atlantic Gyre, the Indian Ocean Gyre, the North Pacific Gyre, and the South Pacific Gyre. All subtropical gyres are anticyclonic, meaning that in the northern hemisphere they rotate clockwise, while the gyres in the southern hemisphere rotate counterclockwise. This is due to the Coriolis force. Subtropical gyres typically consist of four currents: a westward flowing equatorial current, a poleward flowing, narrow, and strong western boundary current, an eastward flowing current in the midlatitudes, and an equatorward flowing, weaker, and broader eastern boundary current.
North Atlantic Gyre
The North Atlantic Gyre is located in the northern hemisphere in the Atlantic Ocean, between the Intertropical Convergence Zone (ITCZ) in the south and Iceland in the north. The North Equatorial Current brings warm waters west towards the Caribbean and defines the southern edge of the North Atlantic Gyre. Once these waters reach the Caribbean they join the warm waters in the Gulf of Mexico and form the Gulf Stream, a western boundary current. This current then heads north and east towards Europe, forming the North Atlantic Current. The Canary Current flows south along the western coast of Europe and north Africa, completing the gyre circulation. The center of the gyre is the Sargasso Sea, which is characterized by the dense accumulation of Sargassum seaweed.
South Atlantic Gyre
The South Atlantic Gyre is located in the southern hemisphere in the Atlantic Ocean, between the Intertropical Convergence Zone in the north and the Antarctic Circumpolar Current to the south. The South Equatorial Current brings water west towards South America, forming the northern boundary of the South Atlantic gyre. Here, the water moves south in the Brazil Current, the western boundary current of the South Atlantic Gyre. The Antarctic Circumpolar Current forms both the southern boundary of the gyre and the eastward component of the gyre circulation. Eventually, the water reaches the west coast of Africa, where it is brought north along the coast as a part of the eastern boundary Benguela Current, completing the gyre circulation. The Benguela Current experiences the Benguela Niño event, an Atlantic Ocean analogue to the Pacific Ocean's El Niño, and is correlated with a reduction in primary productivity in the Benguela upwelling zone.
Indian Ocean Gyre
The Indian Ocean Gyre, located in the Indian Ocean, is, like the South Atlantic Gyre, bordered by the Intertropical Convergence Zone in the north and the Antarctic Circumpolar Current to the south. The South Equatorial Current forms the northern boundary of the Indian Ocean Gyre as it flows west along the equator towards the east coast of Africa. At the coast of Africa, the South Equatorial Current is split by Madagascar into the Mozambique Current, flowing south through the Mozambique Channel, and the East Madagascar Current, flowing south along the east coast of Madagascar, both of which are western boundary currents. South of Madagascar the two currents join to form the Agulhas Current. The Agulhas Current flows south until it joins the Antarctic Circumpolar Current, which flows east at the southern edge of the Indian Ocean Gyre. Due to the African continent not extending as far south as the Indian Ocean Gyre, some of the water in the Agulhas Current "leaks" into the Atlantic Ocean, with potentially important effects for global thermohaline circulation. The gyre circulation is completed by the north flowing West Australian Current, which forms the eastern boundary of the gyre.
North Pacific Gyre
The North Pacific Gyre, one of the largest ecosystems on Earth, is bordered to the south by the Intertropical Convergence Zone and extends north to roughly 50°N. At the southern boundary of the North Pacific Gyre, the North Equatorial Current flows west along the equator towards southeast Asia. The Kuroshio Current is the western boundary current of the North Pacific Gyre, flowing northeast along the coast of Japan. At roughly 50°N, the flow turns east and becomes the North Pacific Current. The North Pacific Current flows east, eventually bifurcating near the west coast of North America into the northward flowing Alaska Current and the southward flowing California Current. The Alaska Current is the eastern boundary current of the subpolar Alaska Gyre, while the California Current is the eastern boundary current that completes the North Pacific Gyre circulation. Within the North Pacific Gyre is the Great Pacific Garbage Patch, an area of increased plastic waste concentration.
South Pacific Gyre
The South Pacific Gyre, like its northern counterpart, is one of the largest ecosystems on Earth, with an area that accounts for around 10% of the global ocean surface area. Within this massive area is Point Nemo, the location on Earth that is farthest away from all continental landmass (2,688 km away from the closest land). The remoteness of this gyre complicates sampling, so it has historically been undersampled in oceanographic datasets. At the northern boundary of the South Pacific Gyre, the South Equatorial Current flows west towards southeast Asia and Australia. There, it turns south as it flows in the East Australian Current, a western boundary current. The Antarctic Circumpolar Current again returns the water to the east. The flow turns north along the western coast of South America in the Humboldt Current, the eastern boundary current that completes the South Pacific Gyre circulation. Like the North Pacific Gyre, the South Pacific Gyre has an elevated concentration of plastic waste near the center, termed the South Pacific garbage patch. Unlike the North Pacific garbage patch, which was first described in 1988, the South Pacific garbage patch was discovered much more recently, in 2016 (a testament to the extreme remoteness of the South Pacific Gyre).
Subpolar gyres
Subpolar gyres form at high latitudes (around 60°). Circulation of surface wind and ocean water is cyclonic, counterclockwise in the northern hemisphere and clockwise in the southern hemisphere, around a low-pressure area, such as the persistent Aleutian Low and the Icelandic Low. The wind stress curl in this region drives the Ekman suction, which creates an upwelling of nutrient-rich water from the lower depths.
Subpolar circulation in the southern hemisphere is dominated by the Antarctic Circumpolar Current, due to the lack of large landmasses breaking up the Southern Ocean. There are minor gyres in the Weddell Sea and the Ross Sea, the Weddell Gyre and Ross Gyre, which circulate in a clockwise direction.
North Atlantic Subpolar Gyre
The North Atlantic Subpolar Gyre, located in the North Atlantic Ocean, is characterized by a counterclockwise rotation of surface waters. It plays a crucial role in the global oceanic conveyor belt system, influencing climate and marine ecosystems. The gyre is driven by the convergence of warm, salty waters from the south and cold, fresher waters from the north. As these waters meet and the warm, salty water cools, it becomes denser and sinks beneath the lighter surface water, initiating a complex circulation pattern. The North Atlantic Subpolar Gyre has significant implications for climate regulation, as it helps redistribute heat and nutrients throughout the North Atlantic, influencing weather patterns and supporting diverse marine life. Additionally, changes in the gyre's strength and circulation can impact regional climate variability and may be influenced by broader climate change trends.
The Atlantic Meridional Overturning Circulation (AMOC) is a key component of the global climate system through its transport of heat and freshwater. The North Atlantic Subpolar Gyre is in a region where the AMOC is actively developed and shaped through mixing and water mass transformation. It is a region where large amounts of heat transported northward by the ocean are released into the atmosphere, thereby modifying the climate of northwest Europe. The North Atlantic Subpolar Gyre has a complex topography with a series of basins in which the large-scale circulation is characterized by cyclonic boundary currents and interior recirculation. The North Atlantic Current develops out of the Gulf Stream extension and turns eastward, crossing the Atlantic in a wide band between about 45°N and 55°N creating the southern border of the North Atlantic Subpolar Gyre. There are several branches of the North Atlantic Current, and they flow into an eastern intergyral region in the Bay of Biscay, the Rockall Trough, the Iceland Basin, and the Irminger Sea. Part of the North Atlantic Current flows into the Norwegian Sea, and some recirculate within the boundary currents of the subpolar gyre.
Ross Gyre
The Ross Gyre is located in the Southern Ocean surrounding Antarctica, just outside of the Ross Sea. This gyre is characterized by a clockwise rotation of surface waters, driven by the combined influence of wind, the Earth's rotation, and the shape of the seafloor. The gyre plays a crucial role in the transport of heat, nutrients, and marine life in the Southern Ocean, affecting the distribution of sea ice and influencing regional climate patterns.
The Ross Sea, Antarctica, is a region where the mixing of distinct water masses and complex interactions with the cryosphere lead to the production and export of dense water, with global-scale impacts. The gyre circulation controls the proximity of the warm waters of the Antarctic Circumpolar Current to the Ross Sea continental shelf, where they may drive ice shelf melting and increase sea level. The deepening of sea level pressures over the Southeast Pacific/Amundsen-Bellingshausen Seas generates a cyclonic circulation cell that reduces sea surface heights north of the Ross Gyre via Ekman suction. The relative reduction of sea surface heights to the north facilitates a northeastward expansion of the outer boundary of the Ross Gyre. Further, the gyre is intensified by a westward ocean stress anomaly over its southern boundary. The ensuing southward Ekman transport anomaly raises sea surface heights over the continental shelf and accelerates the westward throughflow by increasing the cross-slope pressure gradient. The sea level pressure center may have a greater impact on the Ross Gyre transport or the throughflow, depending on its location and strength. Interactions in the Southern Ocean between waters of the Antarctic margin, the Antarctic Circumpolar Current, and the intervening gyres, with their strong seasonal sea ice cover, play a major role in the climate system.
The Ross Sea is the southernmost sea on Earth and hosts the United States' McMurdo Station and Italy's Mario Zucchelli Station. Even though this gyre is located near two of the most prominent research stations in the world for Antarctic study, the Ross Gyre remains one of the least sampled gyres in the world.
Locations of the Weddell and Ross Gyres in the Southern Ocean.
Weddell Gyre
The Weddell Gyre is located in the Southern Ocean surrounding Antarctica, just outside of the Weddell Sea. It is characterized by a clockwise rotation of surface waters, influenced by the combined effects of winds, the Earth's rotation, and the seafloor's topography. Like the Ross Gyre, the Weddell Gyre plays a critical role in the movement of heat, nutrients, and marine life in the Southern Ocean. Insights into the behavior and variability of the Weddell Gyre are crucial for comprehending the interaction between ocean processes in the southern hemisphere and their implications for the global climate system.
This gyre is formed by interactions between the Antarctic Circumpolar Current and the Antarctic Continental Shelf. The Weddell Gyre (WG) is one of the main oceanographic features of the Southern Ocean south of the Antarctic Circumpolar Current, and it plays an influential role in global ocean circulation as well as gas exchange with the atmosphere. The WG is situated in the Atlantic sector of the Southern Ocean, south of 55–60°S and roughly between 60°W and 30°E (Deacon, 1979). It stretches over the Weddell abyssal plain, where the Weddell Sea is situated, and extends east into the Enderby abyssal plain.
Beaufort Sea Gyre
The anti-cyclonic Beaufort Gyre is the dominant circulation of the Canada Basin and the largest freshwater reservoir in the Arctic Ocean's western and northern sectors. The gyre is characterized by a large-scale, quasi-permanent, clockwise rotation of surface waters within the Beaufort Sea. This gyre functions as a critical mechanism for the transport of heat, nutrients, and sea ice within the Arctic region, thus influencing the physical and biological characteristics of the marine environment. Negative wind stress curl over the region, mediated by the sea ice pack, leads to Ekman pumping, downwelling of isopycnal surfaces, and storage of ~20,000 km3 of freshwater in the upper few hundred meters of the ocean. The gyre gains energy from winds in the south and loses energy in the north over a mean annual cycle. The strong atmospheric circulation in the autumn, combined with significant areas of open water, demonstrates the effect that wind stress has directly on the surface geostrophic currents. The Beaufort Gyre and the Transpolar Drift are interconnected through their shared role in transporting sea ice across the Arctic Ocean. Their influence on the distribution of freshwater has broad impacts for global sea level rise and climate dynamics.
Biogeochemistry of gyres
Depending on their location around the world, gyres can be regions of high biological productivity or low productivity. Each gyre has a unique ecological profile but can be grouped by region due to dominating characteristics. Generally, productivity is greater for cyclonic gyres (e.g., subpolar gyres) that drive upwelling through Ekman suction and lesser for anticyclonic gyres (e.g., subtropical gyres) that drive downwelling through Ekman pumping, but this can differ between seasons and regions.
Subtropical gyres are sometimes described as "ocean deserts" or "biological deserts", in reference to arid land deserts where little life exists. Due to their oligotrophic characteristics, warm subtropical gyres have some of the least productive waters per unit surface area in the ocean. The downwelling of water that occurs in subtropical gyres takes nutrients deeper in the ocean, removing them from surface waters. Organic particles can also be removed from surface waters through gravitational sinking, where the particle is too heavy to remain suspended in the water column. However, since subtropical gyres cover 60% of the ocean surface, their relatively low production per unit area is made up for by covering massive areas of the Earth. This means that, despite being areas of relatively low productivity and low nutrients, they play a large role in contributing to the overall amount of ocean production.
In contrast to subtropical gyres, subpolar gyres can have a lot of biological activity due to Ekman suction upwelling driven by wind stress curl. Subpolar gyres in the North Atlantic have a "bloom and crash" pattern following seasonal and storm patterns. The highest productivity in the North Atlantic occurs in boreal spring when there are long days and high levels of nutrients. This differs from the subpolar North Pacific, where almost no phytoplankton bloom occurs and patterns of respiration are more consistent through time than in the North Atlantic.
Nutrient availability
Primary production in the ocean is heavily dependent on the presence of nutrients and the availability of sunlight. Here, nutrients refers to nitrogen (chiefly as nitrate), phosphate, and silicate, all important in the biogeochemical processes that take place in the ocean. A commonly accepted method for relating different nutrient availabilities to each other in order to describe chemical processes is the Redfield, Ketchum, and Richards (RKR) equation. This equation describes the process of photosynthesis and respiration and the ratios of the nutrients involved.
The RKR Equation for Photosynthesis and Respiration:
106 CO2 + 16 HNO3 + H3PO4 + 122 H2O → (CH2O)106(NH3)16H3PO4 + 138 O2
With the correct ratios of nutrients on the left side of the RKR equation and sunlight, photosynthesis takes place to produce plankton (primary production) and oxygen. Typically, the limiting nutrients to production are nitrogen and phosphorus with nitrogen being the most limiting.
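As a small worked example of how these stoichiometric ratios are used, the sketch below converts a nitrate supply into the carbon fixation and oxygen release it can support. The nitrate supply figure is hypothetical, chosen only to show the arithmetic.

```python
# Stoichiometric ratios from the RKR equation above: 106 C : 16 N : 1 P : 138 O2.
C_PER_N = 106.0 / 16.0        # mol of carbon fixed per mol of nitrate nitrogen used
O2_PER_N = 138.0 / 16.0       # mol of O2 released per mol of nitrate nitrogen used

nitrate_supply = 0.10         # hypothetical new nitrate supply (mol N m^-2 yr^-1)

carbon_fixed = nitrate_supply * C_PER_N       # mol C m^-2 yr^-1
oxygen_released = nitrate_supply * O2_PER_N   # mol O2 m^-2 yr^-1

print(f"~{carbon_fixed:.2f} mol C m^-2 yr^-1 fixed, "
      f"~{oxygen_released:.2f} mol O2 m^-2 yr^-1 released")
```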
Lack of nutrients in the surface waters of subtropical gyres is related to the strong downwelling and sinking of particles that occurs in these areas as mentioned earlier. However, nutrients are still present in these gyres. These nutrients can come from not only vertical transport, but also lateral transport across gyre fronts. This lateral transport helps make up for the large loss of nutrients due to downwelling and particle sinking. However, the major source of nitrate in the nitrate-limited subtropical gyres is a result of biological, not physical, factors. Nitrogen in subtropical gyres is produced primarily by nitrogen-fixing bacteria, which are common throughout most of the oligotrophic waters of subtropical gyres. These bacteria transform atmospheric nitrogen into bioavailable forms.
High-nutrient, low-chlorophyll regions
The Alaskan Gyre and Western Subarctic Gyre are iron-limited environments rather than nitrogen- or phosphorus-limited ones. This region relies on dust blowing off Alaska and other nearby landmasses to supply iron. Because it is limited by iron instead of nitrogen or phosphorus, it is known as a high-nutrient, low-chlorophyll (HNLC) region. Iron limitation in high-nutrient, low-chlorophyll regions results in water that is rich in other nutrients, because those nutrients have not been removed by the small populations of plankton that live there.
Seasonality in the North Atlantic Subpolar Gyre
The North Atlantic Subpolar Gyre is an important part of the ocean's carbon dioxide drawdown mechanism. The photosynthesis of phytoplankton communities in this area seasonally depletes surface waters of carbon dioxide, removing it through primary production. This primary production occurs seasonally, with the highest amounts happening in summer. Generally, spring is an important time for photosynthesis, as the light limitation imposed during winter is lifted and there are high levels of nutrients available. However, in the North Atlantic Subpolar Gyre, spring productivity is low in comparison to expected levels. It is hypothesized that this low productivity occurs because phytoplankton use light less efficiently in spring than they do in the summer months.
Trophic levels
Ocean gyres typically contain 5–6 trophic levels. The limiting factor for the number of trophic levels is the size of the phytoplankton, which are generally small in nutrient limited gyres. In low oxygen zones, oligotrophs are a large percentage of the phytoplankton.
At the intermediate level, small fishes and squid (especially Ommastrephidae) dominate the nektonic biomass. They are important for the transport of energy from low trophic levels to high trophic levels. In some gyres, ommastrephid squid are a major part of many animals' diets and can support the existence of large marine life.
Indigenous knowledge of ocean patterns
Indigenous Traditional Ecological Knowledge recognizes that Indigenous people, as the original caretakers, hold unique relationships with the land and waters. These relationships make TEK difficult to define, as Traditional Knowledge means something different to each person, each community, and each caretaker. The United Nations Declaration on the Rights of Indigenous Peoples begins by reminding readers that “respect for Indigenous knowledge, cultures and traditional practices contributes to sustainable and equitable development and proper management of the environment”. Attempts to collect and store this knowledge have been made over the past twenty years. Conglomerates such as The Indigenous Knowledge Social Network (SIKU) https://siku.org/, the Igliniit project, and the Wales Inupiaq Sea Ice Directory have made strides in the inclusion and documentation of indigenous people's thoughts on global climate, oceanographic, and social trends.
One example involves ancient Polynesians and how they discovered and then travelled throughout the Pacific Ocean from modern day Polynesia to Hawaii and New Zealand. Known as wayfinding, navigators would use the stars, winds, and ocean currents to know where they were on the ocean and where they were headed. These navigators were intimately familiar with Pacific currents that create the North Pacific gyre and this way of navigating continues today.
Another example involves the Māori people, who came from Polynesia and are an indigenous group in New Zealand. Their way of life and culture has strong connections to the ocean. The Māori believe that the sea is the source of all life and is an energy, called Tangaroa. This energy can manifest in many different ways, such as strong ocean currents, calm seas, or turbulent storms. The Māori have a rich oral history of navigation within the Southern Ocean and Antarctic Ocean and a deep understanding of their ice and ocean patterns. A current research project is aimed at consolidating these oral histories. Efforts are being made to integrate TEK with Western science in marine and ocean research in New Zealand. Additional research efforts aim to collate indigenous oral histories and incorporate indigenous knowledge into climate change adaptation practices in New Zealand that will directly affect the Māori and other indigenous communities.
Climate change
Ocean circulation redistributes heat and water resources, and therefore helps determine regional climate. For example, the western branches of the subtropical gyres flow from lower latitudes towards higher latitudes, bringing relatively warm and moist air to the adjacent land and contributing to a mild and wet climate (e.g., East China, Japan). In contrast, the eastern boundary currents of the subtropical gyres stream from higher latitudes towards lower latitudes and correspond to a relatively cold and dry climate (e.g., California).
Currently, the cores of the subtropical gyres lie near 30° in both hemispheres, but they have not always been there. Satellite observations of sea surface height and sea surface temperature suggest that the world's major ocean gyres have been slowly moving towards higher latitudes over the past few decades. This feature agrees with climate model predictions under anthropogenic global warming. Paleo-climate reconstructions also suggest that during past cold climate intervals, i.e., ice ages, some of the western boundary currents (the western branches of the subtropical ocean gyres) were closer to the equator than their modern positions. This evidence implies that global warming is very likely to push the large-scale ocean gyres towards higher latitudes.
Pollution
See also
Anticyclone
Cyclone
Ecosystem of the North Pacific Subtropical Gyre
Eddy
Fluid dynamics
Geostrophic current
High-nutrient, low-chlorophyll regions
Ocean current
Skookumchuck
Thermohaline circulation
Volta do mar
Whirlpool
References
External links
5 Gyres – Understanding Plastic Marine Pollution
Wind Driven Surface Currents: Gyres
SIO 210: Introduction to Physical Oceanography – Global circulation
SIO 210: Introduction to Physical Oceanography – Wind-forced circulation notes
SIO 210: Introduction to Physical Oceanography – Lecture 6
Physical Geography – Surface and Subsurface Ocean Currents
North Pacific Gyre Oscillation — Georgia Institute of Technology
Aerodynamics
Fluid dynamics
Oceanic gyres
Fisheries science | Ocean gyre | [
"Chemistry",
"Engineering"
] | 6,110 | [
"Chemical engineering",
"Aerodynamics",
"Aerospace engineering",
"Piping",
"Fluid dynamics"
] |
776,713 | https://en.wikipedia.org/wiki/Unified%20field%20theory | In physics, a unified field theory (UFT) is a type of field theory that allows all fundamental forces and elementary particles to be written in terms of a single type of field. According to modern discoveries in physics, forces are not transmitted directly between interacting objects but instead are described and interpreted by intermediary entities called fields. Furthermore, according to quantum field theory, particles are themselves the quanta of fields. Examples of different fields in physics include vector fields such as the electromagnetic field, spinor fields whose quanta are fermionic particles such as electrons, and tensor fields such as the metric tensor field that describes the shape of spacetime and gives rise to gravitation in general relativity. Unified field theory attempts to organize these fields into a single mathematical structure.
For over a century, unified field theory has remained an open line of research. The term was coined by Albert Einstein, who attempted to unify his general theory of relativity with electromagnetism. Einstein attempted to create a classical unified field theory, rejecting quantum mechanics. Among other difficulties, this required a new explanation of particles as singularities or solitons instead of field quanta. Later attempts to unify general relativity with other forces incorporate quantum mechanics. The concept of a "Theory of Everything" or Grand Unified Theory are closely related to unified field theory, but differ by not requiring the basis of nature to be fields, and often by attempting to explain physical constants of nature. Additionally, Grand Unified Theories do not attempt to include the gravitational force and can therefore operate entirely within quantum field theory.
The goal of a unified field theory has led to a great deal of progress in theoretical physics.
Introduction
Unified field theory attempts to give a single elegant description of the following fields:
Forces
All four of the known fundamental forces are mediated by fields. In the Standard Model of particle physics, three of these result from the exchange of gauge bosons. These are:
Strong interaction: the interaction responsible for holding quarks together to form hadrons, and holding neutrons and also protons together to form atomic nuclei. The exchange particle that mediates this force is the gluon.
Electromagnetic interaction: the familiar interaction that acts on electrically charged particles. The photon is the exchange particle for this force.
Weak interaction: a short-range interaction responsible for some forms of radioactivity, that acts on electrons, neutrinos, and quarks. It is mediated by the W and Z bosons.
General relativity likewise describes gravitation as the result of the metric tensor field, which describes the shape of spacetime:
Gravitational interaction: a long-range attractive interaction that acts on all particles. In hypothetical quantum versions of GR, the postulated exchange particle has been named the graviton.
Matter
In the Standard Model, the "matter" particles (electrons, quarks, neutrinos, etc) are described as the quanta of spinor fields. Gauge boson fields also have quanta, such as photons for the electromagnetic field.
Higgs
The Standard Model has a unique fundamental scalar field, the Higgs field, the quanta of which are called Higgs bosons.
History
Classic theory
The first successful classical unified field theory was developed by James Clerk Maxwell. In 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currents. Until then, electricity and magnetism had been thought of as unrelated phenomena. In 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic field. This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetism. By 1905, Albert Einstein had used the constancy of the speed-of-light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime. In 1915, he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional (4D) spacetime.
In the years following the creation of the general theory, a large number of physicists and mathematicians enthusiastically participated in the attempt to unify the then-known fundamental interactions. Given later developments in this domain, of particular interest are the theories of Hermann Weyl of 1919, who introduced the concept of an (electromagnetic) gauge field in a classical field theory and, two years later, that of Theodor Kaluza, who extended General Relativity to five dimensions. Continuing in this latter direction, Oscar Klein proposed in 1926 that the fourth spatial dimension be curled up into a small, unobserved circle. In Kaluza–Klein theory, the gravitational curvature of the extra spatial direction behaves as an additional force similar to electromagnetism. These and other models of electromagnetism and gravity were pursued by Albert Einstein in his attempts at a classical unified field theory. By 1930 Einstein had already considered the Einstein-Maxwell–Dirac System [Dongen]. This system is (heuristically) the super-classical [Varadarajan] limit of (the not mathematically well-defined) quantum electrodynamics. One can extend this system to include the weak and strong nuclear forces to get the Einstein–Yang-Mills–Dirac System. The French physicist Marie-Antoinette Tonnelat published a paper in the early 1940s on the standard commutation relations for the quantized spin-2 field. She continued this work in collaboration with Erwin Schrödinger after World War II. In the 1960s Mendel Sachs proposed a generally covariant field theory that did not require recourse to renormalization or perturbation theory. In 1965, Tonnelat published a book on the state of research on unified field theories.
Modern progress
In 1963, American physicist Sheldon Glashow proposed that the weak nuclear force, electricity, and magnetism could arise from a partially unified electroweak theory. In 1967, Pakistani Abdus Salam and American Steven Weinberg independently revised Glashow's theory by having the masses for the W particle and Z particle arise through spontaneous symmetry breaking with the Higgs mechanism. This unified theory modelled the electroweak interaction as a force mediated by four particles: the photon for the electromagnetic aspect, a neutral Z particle, and two charged W particles for the weak aspect. As a result of the spontaneous symmetry breaking, the weak force becomes short-range and the W and Z bosons acquire masses of 80.4 GeV/c2 and 91.2 GeV/c2, respectively. Their theory was first given experimental support by the discovery of weak neutral currents in 1973. In 1983, the Z and W bosons were first produced at CERN by Carlo Rubbia's team. For their insights, Glashow, Salam, and Weinberg were awarded the Nobel Prize in Physics in 1979. Carlo Rubbia and Simon van der Meer received the Prize in 1984.
After Gerardus 't Hooft showed the Glashow–Weinberg–Salam electroweak interactions to be mathematically consistent, the electroweak theory became a template for further attempts at unifying forces. In 1974, Sheldon Glashow and Howard Georgi proposed unifying the strong and electroweak interactions into the Georgi–Glashow model, the first Grand Unified Theory, which would have observable effects for energies much above 100 GeV.
Since then there have been several proposals for Grand Unified Theories, e.g. the Pati–Salam model, although none is currently universally accepted. A major problem for experimental tests of such theories is the energy scale involved, which is well beyond the reach of current accelerators. Grand Unified Theories make predictions for the relative strengths of the strong, weak, and electromagnetic forces, and in 1991 LEP determined that supersymmetric theories have the correct ratio of couplings for a Georgi–Glashow Grand Unified Theory.
Many Grand Unified Theories (but not Pati–Salam) predict that the proton can decay, and if this were to be seen, details of the decay products could give hints at more aspects of the Grand Unified Theory. It is at present unknown if the proton can decay, although experiments have determined a lower bound of 10^35 years for its lifetime.
Current status
Theoretical physicists have not yet formulated a widely accepted, consistent theory that combines general relativity and quantum mechanics to form a theory of everything. Trying to combine the graviton with the strong and electroweak interactions leads to fundamental difficulties and the resulting theory is not renormalizable. The incompatibility of the two theories remains an outstanding problem in the field of physics.
See also
Sheldon Glashow
Unification (physics)
References
Further reading
Jeroen van Dongen Einstein's Unification, Cambridge University Press (July 26, 2010)
Varadarajan, V.S. Supersymmetry for Mathematicians: An Introduction (Courant Lecture Notes), American Mathematical Society (July 2004)
External links
On the History of Unified Field Theories, by Hubert F. M. Goenner
Particle physics
Theories of gravity
Unsolved problems in physics | Unified field theory | [
"Physics"
] | 1,879 | [
"Theoretical physics",
"Unsolved problems in physics",
"Particle physics",
"Theories of gravity"
] |
777,147 | https://en.wikipedia.org/wiki/Solar%20inverter | A solar inverter or photovoltaic (PV) inverter is a type of power inverter which converts the variable direct current (DC) output of a photovoltaic solar panel into a utility frequency alternating current (AC) that can be fed into a commercial electrical grid or used by a local, off-grid electrical network. It is a critical balance of system (BOS)–component in a photovoltaic system, allowing the use of ordinary AC-powered equipment. Solar power inverters have special functions adapted for use with photovoltaic arrays, including maximum power point tracking and anti-islanding protection.
Classification
Solar inverters may be classified into four broad types:
Stand-alone inverters, used in stand-alone power systems where the inverter draws its DC energy from batteries charged by photovoltaic arrays. Many stand-alone inverters also incorporate integral battery chargers to replenish the battery from an AC source when available. Normally these do not interface in any way with the utility grid, and as such are not required to have anti-islanding protection.
Grid-tie inverters, which match phase with a utility-supplied sine wave. Grid-tie inverters are designed to shut down automatically upon loss of utility supply, for safety reasons. They do not provide backup power during utility outages.
Battery backup inverters are special inverters which are designed to draw energy from a battery, manage the battery charge via an onboard charger, and export excess energy to the utility grid. These inverters are capable of supplying AC energy to selected loads during a utility outage, and are required to have anti-islanding protection.
Intelligent hybrid inverters manage photovoltaic array, battery storage and utility grid, which are all coupled directly to the unit. These modern all-in-one systems are usually highly versatile and can be used for grid-tie, stand-alone or backup applications but their primary function is self-consumption with the use of storage.
Maximum power point tracking
Solar inverters use maximum power point tracking (MPPT) to get the maximum possible power from the PV array. Solar cells have a complex relationship between solar irradiation, temperature and total resistance that produces a non-linear output characteristic known as the I-V curve. It is the purpose of the MPPT system to sample the output of the cells and determine a resistance (load) to obtain maximum power for any given environmental conditions.
The fill factor, more commonly known by its abbreviation FF, is a parameter which, in conjunction with the open circuit voltage (Voc) and short circuit current (Isc) of the panel, determines the maximum power from a solar cell. Fill factor is defined as the ratio of the maximum power from the solar cell to the product of Voc and Isc.
There are three main types of MPPT algorithms: perturb-and-observe, incremental conductance and constant voltage. The first two methods are often referred to as hill climbing methods; they rely on the curve of power plotted against voltage rising to the left of the maximum power point, and falling on the right.
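A minimal sketch of the perturb-and-observe idea is shown below, hill-climbing on a toy exponential I-V curve and also reporting the resulting fill factor. The module parameters and the curve itself are invented for illustration; real controllers measure a live panel rather than evaluating a formula.

```python
import math

I_SC = 8.0    # short-circuit current (A) -- invented module parameters
V_OC = 37.0   # open-circuit voltage (V)
A = 2.0       # shape factor of the toy exponential I-V curve (V)

def current(v):
    """Toy I-V curve: roughly flat near I_SC, dropping sharply towards V_OC."""
    return max(0.0, I_SC * (1.0 - math.exp((v - V_OC) / A)))

def perturb_and_observe(v=20.0, step=0.1, iterations=500):
    """Hill-climb on power by nudging the operating voltage and keeping moves that help."""
    p_prev = v * current(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = v * current(v)
        if p < p_prev:            # power fell: reverse the perturbation direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
fill_factor = p_mpp / (V_OC * I_SC)   # FF = P_max / (V_oc * I_sc), as defined above
print(f"MPP ~ {v_mpp:.1f} V / {p_mpp:.0f} W, fill factor ~ {fill_factor:.2f}")
```

The controller settles into a small oscillation around the maximum power point, which is the characteristic behaviour (and a known drawback) of perturb-and-observe tracking.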
Grid tied solar inverters
The key role of the grid-interactive or synchronous inverter, or simply the grid-tie inverter (GTI), is to synchronize the phase, voltage, and frequency of the power it delivers with that of the grid. Solar grid-tie inverters are designed to quickly disconnect from the grid if the utility grid goes down. This is an NEC requirement that ensures that in the event of a blackout, the grid-tie inverter will shut down to prevent the energy it produces from harming any line workers who are sent to fix the power grid.
Grid-tie inverters that are available on the market today use a number of different technologies. The inverters may use the newer high-frequency transformers, conventional low-frequency transformers, or no transformer. Instead of converting direct current directly to 120 or 240 volts AC, high-frequency transformers employ a computerized multi-step process that involves converting the power to high-frequency AC and then back to DC and then to the final AC output voltage.
Historically, there have been concerns about having transformerless electrical systems feed into the public utility grid. The concerns stem from the fact that there is a lack of galvanic isolation between the DC and AC circuits, which could allow the passage of dangerous DC faults to the AC side. Since 2005, the NFPA's NEC allows transformer-less (or non-galvanically isolated) inverters. The VDE 0126-1-1 and IEC 6210 also have been amended to allow and define the safety mechanisms needed for such systems. Primarily, residual or ground current detection is used to detect possible fault conditions. Also isolation tests are performed to ensure DC to AC separation.
Many solar inverters are designed to be connected to a utility grid, and will not operate when they do not detect the presence of the grid. They contain special circuitry to precisely match the voltage, frequency and phase of the grid. When a grid is not detected, grid-tie inverters will not produce power to avoid islanding which can cause safety issues.
Solar pumping inverters
Advanced solar pumping inverters convert DC voltage from the solar array into AC voltage to drive submersible pumps directly without the need for batteries or other energy storage devices. By utilizing MPPT (maximum power point tracking), solar pumping inverters regulate output frequency to control the speed of the pumps in order to save the pump motor from damage.
Solar pumping inverters usually have multiple ports to allow the input of DC current generated by PV arrays, one port to allow the output of AC voltage, and a further port for input from a water-level sensor.
Three-phase-inverter
A three-phase inverter is a type of solar microinverter specifically designed to supply three-phase electric power. In conventional microinverter designs that work with one-phase power, the energy from the panel must be stored during the period when the voltage is passing through zero, which it does twice per cycle (at 50 or 60 Hz). In a three-phase system, throughout the cycle one of the three wires always has a substantial positive (or negative) voltage, so the need for storage can be greatly reduced by transferring the output of the panel to different wires during each cycle. The reduction in energy storage significantly lowers the price and complexity of the converter hardware, as well as potentially increasing its expected lifetime.
Concept
Background
Conventional alternating current power is a sinusoidal voltage pattern that repeats over a defined period. That means that during a single cycle, the voltage passes through zero two times. In European systems the plug voltage is nominally 230 V and cycles 50 times a second, meaning that there are 100 instants a second where the voltage is zero, while North American derived systems are 120 V at 60 Hz, or 120 zero crossings a second.
Inexpensive inverters can convert DC power to AC by simply turning the DC side of the power on and off 120 times a second, reversing the polarity on alternate half-cycles. The result is a square wave that is close enough to AC power for many devices. However, this sort of solution is not useful in the solar power case, where the goal is to convert as much of the power from the solar panel into AC as possible. If one uses these inexpensive types of inverters, all of the power generated during the time that the DC side is turned off is simply lost, and this represents a significant amount of each cycle.
To address this, solar inverters use some form of energy storage to buffer the panel's power during those zero-crossing periods. When the voltage of the AC goes above the voltage in the storage, it is dumped into the output along with any energy being developed by the panel at that instant. In this way, the energy produced by the panel through the entire cycle is eventually sent into the output.
The problem with this approach is that the amount of energy storage needed when connected to a typical modern solar panel can only economically be provided through the use of electrolytic capacitors. These are relatively inexpensive but have well-known degradation modes that mean they have lifetime expectancy on the order of a decade. This has led to a great debate in the industry over whether or not microinverters are a good idea, because when these capacitors start to fail at the end of their expected life, replacing them will require the panels to be removed, often on the roof.
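A back-of-the-envelope sketch of why this storage requirement is substantial is given below; the panel power, DC bus voltage, and allowed ripple are assumptions chosen only to illustrate the sizing, not figures from any particular product.

```python
import math

P = 250.0        # panel power being converted (W) -- assumed
F_LINE = 60.0    # grid frequency (Hz)
V_BUS = 40.0     # DC bus voltage the capacitor sits on (V) -- assumed
RIPPLE = 0.10    # allowed bus-voltage ripple, +/-10% -- assumed

# For a single-phase output the delivered power pulses at twice the line
# frequency, so the buffer must absorb an energy swing of roughly P / (2*pi*f).
energy_swing = P / (2.0 * math.pi * F_LINE)             # joules

v_hi, v_lo = V_BUS * (1.0 + RIPPLE), V_BUS * (1.0 - RIPPLE)
capacitance = 2.0 * energy_swing / (v_hi**2 - v_lo**2)  # from dE = 1/2 C (v_hi^2 - v_lo^2)

print(f"~{energy_swing:.2f} J swing -> roughly {capacitance * 1e6:.0f} uF at {V_BUS:.0f} V")
```

Under these assumptions the answer lands around two thousand microfarads, the kind of value for which electrolytic capacitors are, in practice, the only economical choice.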
Three-phase
In comparison to normal household current on two wires, current on the delivery side of the power grid uses three wires and phases. At any given instant, at least one of those three wires carries a large positive (or negative) voltage. So while any given wire in a three-phase system undergoes zero-crossing events in exactly the same fashion as household current, the system as a whole does not; the largest available voltage simply fluctuates between the maximum and a slightly lower value.
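This can be verified with a quick numerical check (a sketch with per-unit voltages; the 50 Hz frequency is simply an example):

```python
import numpy as np

f_line = 50.0                                   # example line frequency (Hz)
t = np.linspace(0.0, 1.0 / f_line, 1000)        # one full cycle
phases = [np.sin(2.0 * np.pi * f_line * t + k * 2.0 * np.pi / 3.0) for k in range(3)]

# Magnitude of whichever line is closest to its peak at each instant (per-unit).
envelope = np.max(np.abs(np.array(phases)), axis=0)

print(f"best-line voltage stays between {envelope.min():.3f} "
      f"and {envelope.max():.3f} of peak")
```

The best available line never drops below about 87% of the peak voltage, which is the "slightly lower value" referred to above.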
A microinverter designed specifically for three-phase supply can eliminate much of the required storage by simply selecting which wire is closest to its own operating voltage at any given instant. A simple system could simply select the wire that is closest to the maximum voltage, switching to the next line when that begins to approach the maximum. In this case, the system only has to store the amount of energy from the peak to the minimum of the cycle as a whole, which is much smaller both in voltage difference and time.
This can be improved further by selecting the wire that is closest to its own DC voltage at any given instant, instead of switching from one to the other purely on a timer. At any given instant two of the three wires will have a positive (or negative) voltage and using the one closer to the DC side will take advantage of slight efficiency improvements in the conversion hardware.
The reduction, or outright elimination, of energy storage requirements, simplifies the device and eliminates the one component that is expected to define its lifetime. Instead of a decade, a three-phase microinverter could be built to last for the lifetime of the panel. Such a device would also be less expensive and less complex, although at the cost of requiring each inverter to connect to all three lines, which possibly leads to more wiring.
Disadvantages
The primary disadvantage of the three-phase inverter concept is that only sites with three-phase power can take advantage of these systems. Three-phase is easily available at utility-scale and commercial sites, and it was to these markets that the systems were aimed. However, the main advantages of the microinverter concept involve issues of shading and panel orientation, and in the case of large systems, these are easily addressed by simply moving the panels around. The benefits of the three-phase micro are very limited compared to the residential case with limited space to work in.
As of 2014, observers believed that three-phase micros had not yet managed to reach the price point where their advantages appeared worthwhile. Moreover, the wiring costs for three-phase microinverters are expected to be higher.
Combining phases
It is important to contrast a native three-phase inverter with three single-phase micro-inverters wired to output three-phase power. The latter is a relatively common feature of most inverter designs, allowing three identical inverters to be connected together, each across a pair of wires in a three-phase circuit. The result is three-phase power, but each inverter in the system outputs a single phase. These sorts of solutions do not take advantage of the reduced energy storage needs outlined above.
Solar micro-inverters
A solar micro-inverter is an inverter designed to operate with a single PV module. The micro-inverter converts the direct current output from each panel into alternating current. Its design allows parallel connection of multiple, independent units in a modular way.
Micro-inverter advantages include single-panel power optimization, independent operation of each panel, plug-and-play installation, improved installation and fire safety, and minimized costs through simpler system design and reduced stock requirements.
A 2011 study at Appalachian State University reports that individual integrated inverter setup yielded about 20% more power in unshaded conditions and 27% more power in shaded conditions compared to string connected setup using one inverter. Both setups used identical solar panels.
A solar micro-inverter, or simply microinverter, is a plug-and-play device used in photovoltaics that converts direct current (DC) generated by a single solar module to alternating current (AC). Microinverters contrast with conventional string and central solar inverters, in which a single inverter is connected to multiple solar panels. The output from several microinverters can be combined and often fed to the electrical grid.
Microinverters have several advantages over conventional inverters. The main advantage is that they electrically isolate the panels from one another, so small amounts of shading, debris or snow lines on any one solar module, or even a complete module failure, do not disproportionately reduce the output of the entire array. Each microinverter harvests optimum power by performing maximum power point tracking (MPPT) for its connected module. Simplicity in system design, lower amperage wires, simplified stock management, and added safety are other factors introduced with the microinverter solution.
The primary disadvantages of a microinverter include a higher initial equipment cost per peak watt than the equivalent power of a central inverter, since each inverter needs to be installed adjacent to a panel (usually on a roof). This also makes them harder to maintain and more costly to remove and replace. Some manufacturers have addressed these issues with panels that have built-in microinverters. A microinverter often has a longer lifespan than a central inverter, which will need replacement during the lifespan of the solar panels. Therefore, the financial disadvantage at first may become an advantage in the long term.
A power optimizer is a technology similar to a microinverter: it also performs panel-level maximum power point tracking, but it does not convert the output of each module to AC.
Description
String inverter
Solar panels produce direct current at a voltage that depends on module design and lighting conditions. Modern modules using 6-inch cells typically contain 60 cells and produce a nominal 24–30 V, so inverters are designed to accept roughly 24–50 V.
For conversion into AC, panels may be connected in series to produce an array that is effectively a single large panel with a nominal rating of 300 to 600 VDC. The power then runs to an inverter, which converts it into standard AC voltage, typically 230 VAC / 50 Hz or 240 VAC / 60 Hz.
The main problem with the string inverter approach is that the string of panels acts as if it were a single larger panel with a maximum current rating equivalent to that of the poorest performer in the string. For example, if one panel in a string has 5% higher resistance due to a minor manufacturing defect, the entire string suffers a 5% performance loss. This situation is dynamic. If a panel is shaded, its output drops dramatically, affecting the output of the string, even if the other panels are not shaded. Even slight changes in orientation can cause output loss in this fashion. In the industry, this is known as the "Christmas-lights effect", referring to the way an entire string of series-strung Christmas tree lights will fail if a single bulb fails. However, this picture is not entirely accurate and ignores the complex interaction between modern string inverter maximum power point tracking and module bypass diodes. Shade studies by major microinverter and DC optimizer companies show small yearly gains in lightly, moderately and heavily shaded conditions – 2%, 5% and 8% respectively – over an older string inverter.
Additionally, the efficiency of a panel's output is strongly affected by the load the inverter places on it. To maximize production, inverters use a technique called maximum power point tracking to ensure optimal energy harvest by adjusting the applied load. However, the same issues that cause output to vary from panel to panel affect the proper load that the MPPT system should apply. If a single panel operates at a different point, a string inverter can only see the overall change and moves the MPPT point to match. This results in losses not just from the shadowed panel, but from the other panels too. Shading of as little as 9% of the surface of an array can, in some circumstances, reduce system-wide power by as much as 54%. However, as stated above, these yearly yield losses are relatively small, and newer technologies allow some string inverters to significantly reduce the effects of partial shading.
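A toy comparison of the two behaviours described above is sketched below with illustrative numbers only; it deliberately ignores bypass diodes and the smarter string-level MPPT mentioned in the previous paragraphs, so it overstates the real-world loss.

```python
panel_voltage = 30.0                   # operating voltage per panel (V) -- assumed
panel_currents = [8.0] * 9 + [4.0]     # ten panels, one heavily shaded (A) -- assumed

# Series string: every panel is forced to carry the current of the weakest one.
string_power = panel_voltage * len(panel_currents) * min(panel_currents)

# Per-panel conversion (microinverters / optimizers): each panel contributes what it can.
per_panel_power = panel_voltage * sum(panel_currents)

print(f"string: {string_power:.0f} W, per-panel: {per_panel_power:.0f} W "
      f"({100.0 * (1.0 - string_power / per_panel_power):.0f}% less)")
```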
Another issue, though minor, is that string inverters are available in a limited selection of power ratings. This means that a given array normally up-sizes the inverter to the next-largest model over the rating of the panel array. For instance, a 10-panel array of 2300 W might have to use a 2500 or even 3000 W inverter, paying for conversion capability it cannot use. This same issue makes it difficult to change array size over time, adding power when funds are available (modularity). If the customer originally purchased a 2500 W inverter for their 2300 W of panels, they cannot add even a single panel without over-driving the inverter. However, this oversizing is considered common practice in today's industry (sometimes as high as 20% over inverter nameplate rating) to account for module degradation, higher performance during winter months, or to achieve higher sell-back to the utility.
Other challenges associated with centralized inverters include the space required to locate the device, as well as heat dissipation requirements. Large central inverters are typically actively cooled. Cooling fans make noise, so location of the inverter relative to offices and occupied areas must be considered. And because cooling fans have moving parts, dirt, dust, and moisture can negatively affect their performance over time. String inverters are quieter but might produce a humming noise in late afternoon when inverter power is low.
Microinverter
Microinverters are small inverters rated to handle the output of a single panel or a pair of panels. Grid-tie panels are normally rated between 225 and 275 W, but rarely produce this in practice, so microinverters are typically rated between 190 and 220 W (sometimes 100 W). Because a microinverter operates at this lower power point, many design issues inherent to larger designs simply go away; the need for a large transformer is generally eliminated, large electrolytic capacitors can be replaced by more reliable thin-film ones, and cooling loads are reduced so no fans are needed. Mean time between failures (MTBF) is quoted in hundreds of years.
A microinverter attached to a single panel allows it to isolate and tune the output of that panel. Any panel that is under-performing has no effect on panels around it. In that case, the array as a whole produces as much as 5% more power than it would with a string inverter. When shadowing is factored in, if present, these gains can become considerable, with manufacturers generally claiming 5% better output at a minimum, and up to 25% better in some cases. Furthermore, a single model can be used with a wide variety of panels, new panels can be added to an array at any time, and do not have to have the same rating as existing panels.
Microinverters produce grid-matching AC power directly at the back of each solar panel. Arrays of panels are connected in parallel to each other, and then to the grid. This has the major advantage that a single failing panel or inverter cannot take the entire string offline. Combined with the lower power and heat loads, and improved MTBF, some suggest that overall array reliability of a microinverter-based system is significantly greater than a string inverter-based one. This assertion is supported by longer warranties, typically 15 to 25 years, compared with 5 or 10-year warranties that are more typical for string inverters. Additionally, when faults occur, they are identifiable to a single point, as opposed to an entire string. This not only makes fault isolation easier, but unmasks minor problems that might not otherwise become visible – a single under-performing panel may not affect a long string's output enough to be noticed.
Disadvantages
The main disadvantage of the microinverter concept has, until recently, been cost. Because each microinverter has to duplicate much of the complexity of a string inverter but spread that cost over a smaller power rating, costs on a per-watt basis are greater. This offsets any advantage in terms of simplification of individual components. As of February 2018, a central inverter costs approximately $0.13 per watt, whereas a microinverter costs approximately $0.34 per watt. Like string inverters, economic considerations force manufacturers to limit the number of models they produce. Most produce a single model that may be oversized or undersized when matched with a specific panel.
In many cases the packaging can have a significant effect on price. With a central inverter you may have only one set of panel connections for dozens of panels, a single AC output, and one box. Microinverter installations larger than about 15 panels may require a roof mounted "combiner" breaker box as well. This can add to the overall price-per-watt.
To further reduce costs, some models control two or three panels from an inverter, reducing the packaging and associated costs. Some systems place two entire micros in a single box, while others duplicate only the MPPT section of the system and use a single DC-to-AC stage for further cost reductions. Some have suggested that this approach will make microinverters comparable in cost with those using string inverters. With steadily decreasing prices, the introduction of dual microinverters and the advent of wider model selections to match PV module output more closely, cost is less of an obstacle.
Microinverters have become common where array sizes are small and maximizing performance from every panel is a concern. In these cases, the differential in price-per-watt is minimized due to the small number of panels, and has little effect on overall system cost. The improvement in energy harvest given a fixed size array can offset this difference in cost. For this reason, microinverters have been most successful in the residential market, where limited space for panels constrains array size, and shading from nearby trees or other objects is often an issue. Microinverter manufacturers list many installations, some as small as a single panel and the majority under 50.
An often overlooked disadvantage of micro inverters is the future operation and maintenance costs associated with them. While the technology has improved over the years the fact remains that the devices will eventually either fail or wear out. The installer must balance these replacement costs (around $400 per truck roll), increased safety risks to personnel, equipment and module racking against the profit margins for the installation. For homeowners, the eventual wear out or premature device failures will introduce potential damage to the roof tiles or shingles, property damage and other nuisances.
Advantages
While microinverters generally have a lower efficiency than string inverters, the overall efficiency is increased because every inverter/panel unit acts independently. In a string configuration, when a panel on a string is shaded, the output of the entire string of panels is reduced to the output of the lowest-producing panel. This is not the case with microinverters.
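As an illustrative sketch of this shading effect (the simplified worst-case string model and the panel wattages below are assumptions for illustration, not values from this article), the difference in harvest can be compared directly:

```python
# Illustrative comparison of energy harvest under partial shading.
# Assumption: a series string is modeled, worst case, as limited to its
# lowest-producing panel; microinverters harvest each panel independently.

panel_outputs_w = [300, 300, 120, 300, 300]  # hypothetical array, one panel shaded

string_harvest = min(panel_outputs_w) * len(panel_outputs_w)  # string-inverter model
micro_harvest = sum(panel_outputs_w)                          # per-panel conversion

print(f"string inverter harvest : {string_harvest} W")  # 600 W
print(f"microinverter harvest   : {micro_harvest} W")   # 1320 W
```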
A further advantage is found in the panel output quality. The rated output of any two panels in the same production run can vary by 10% or more. This mismatch is mitigated in a microinverter configuration but not in a string configuration. The result is maximum power harvesting from a microinverter array.
Systems with microinverters can also be modified more easily when power demands grow or decrease over time. Because each solar panel and microinverter forms a small system of its own that acts largely independently, adding one or more panels simply provides more energy, as long as the fused circuit in the house or building does not exceed its limits. With string-based inverters, by contrast, the inverter size must match the number of panels or the amount of peak power. Choosing an oversized string inverter is possible when a future extension is foreseen, but such a provision for an uncertain future increases the cost in any case.
Monitoring and maintenance is also easier, as many microinverter producers provide apps or websites to monitor the power output of their units. In many cases these are proprietary, but not always. Following the demise of Enecsys and the subsequent closure of their site, a number of private sites such as Enecsys-Monitoring sprang up to enable owners to continue to monitor their systems.
Three-phase microinverters
Efficient conversion of DC power to AC requires the inverter to store energy from the panel while the grid's AC voltage is near zero, and then release it again when it rises. This requires considerable amounts of energy storage in a small package. The lowest-cost option for the required amount of storage is the electrolytic capacitor, but these have relatively short lifetimes normally measured in years, and those lifetimes are shorter when operated hot, like on a rooftop solar panel. This has led to considerable development effort on the part of microinverter developers, who have introduced a variety of conversion topologies with lowered storage requirements, some using the much less capable but far longer lived thin film capacitors where possible.
Three-phase electric power represents another solution to the problem. In a three-phase circuit, the voltage between two lines does not swing between (say) +120 and −120 V; instead it varies between +60 and +120 V or −60 and −120 V, and the periods of variation are much shorter. Inverters designed to operate on three-phase systems require much less storage. A three-phase micro using zero-voltage switching can also offer higher circuit density and lower-cost components, while improving conversion efficiency to over 98%, better than the typical single-phase peak of around 96%.
Three-phase systems, however, are generally only seen in industrial and commercial settings. These markets normally install larger arrays, where price sensitivity is the highest. Uptake of three-phase micros, in spite of any theoretical advantages, appears to be very low.
Portable uses
Foldable solar panels with AC microinverters can be used to recharge laptops and some electric vehicles.
History
The microinverter concept has been in the solar industry since its inception. However, flat costs in manufacturing, like the cost of the transformer or enclosure, scaled favorably with size, and meant that larger devices were inherently less expensive in terms of price per watt. Small inverters were available from companies like ExelTech and others, but these were simply small versions of larger designs with poor price performance, and were aimed at niche markets.
Early examples
In 1991 the US company Ascension Technology started work on what was essentially a shrunken version of a traditional inverter, intended to be mounted on a panel to form an AC panel. This design was based on the conventional linear regulator, which is not particularly efficient and dissipates considerable heat. In 1994 they sent an example to Sandia Labs for testing. In 1997, Ascension partnered with US panel company ASE Americas to introduce the 300 W SunSine panel.
The design of what would today be recognized as a "true" microinverter traces its history to late-1980s work by Werner Kleinkauf at the ISET (Institut für Solare Energieversorgungstechnik), now the Fraunhofer Institute for Wind Energy and Energy System Technology. These designs were based on modern high-frequency switching power supply technology, which is much more efficient. His work on "module integrated converters" was highly influential, especially in Europe.
In 1993 Mastervolt introduced their first grid-tie inverter, the Sunmaster 130S, based on a collaborative effort between Shell Solar, Ecofys and ECN. The 130 was designed to mount directly to the back of the panel, connecting both AC and DC lines with compression fittings. In 2000, the 130 was replaced by the Soladin 120, a microinverter in the form of an AC adapter that allows panels to be connected simply by plugging them into any wall socket.
In 1995, OKE-Services designed a new high-frequency version with improved efficiency, which was introduced commercially as the OK4-100 in 1995 by NKF Kabel, and re-branded for US sales as the Trace Microsine. A new version, the OK4All, improved efficiency and had wider operating ranges.
In spite of this promising start, by 2003 most of these projects had ended. Ascension Technology was purchased by Applied Power Corporation, a large integrator. APC was in turn purchased by Schott in 2002, and SunSine production was canceled in favor of Schott's existing designs. NKF ended production of the OK4 series in 2003 when a subsidy program ended. Mastervolt has moved on to a line of "mini-inverters" combining the ease-of-use of the 120 in a system designed to support up to 600 W of panels.
Enphase
In the aftermath of the 2001 Telecoms crash, Martin Fornage of Cerent Corporation was looking for new projects. When he saw the low performance of the string inverter for the solar array on his ranch, he found the project he was looking for. In 2006 he formed Enphase Energy with another Cerent engineer, Raghu Belur, and they spent the next year applying their telecommunications design expertise to the inverter problem.
Released in 2008, the Enphase M175 model was the first commercially successful microinverter. A successor, the M190, was introduced in 2009, and the latest model, the M215, in 2011. Backed by $100 million in private equity, Enphase quickly grew to a 13% market share by mid-2010, aiming for 20% by year-end. They shipped their 500,000th inverter in early 2011, and their 1,000,000th in September of the same year. In early 2011, they announced that re-branded versions of the new design would be sold by Siemens directly to electrical contractors for widespread distribution.
Enphase has signed an agreement with EnergyAustralia to market its microinverter technology.
Major players
Enphase's success did not go unnoticed, and since 2010 a host of competitors have entered and largely left the space. Many of the products were identical to the M190 in specifications, and even in the casing and mounting details. Some differentiated themselves by competing head-to-head with Enphase on price or performance, while others attacked niche markets.
Larger firms also stepped into the field: SMA, Enecsys and iEnergy.
OKE-Services' updated OK4-All product was bought by SMA in 2009 and released as the SunnyBoy 240 after an extended gestation period, while Power-One introduced the AURORA 250 and 300. Other major players around 2010 included Enecsys and SolarBridge Technologies, especially outside the North American market. As of 2021, the only microinverter made in the USA was from Chilicon Power. Since 2009, several companies from Europe to China, including major central inverter manufacturers, have launched microinverters, validating the microinverter as an established technology and one of the biggest technology shifts in the PV industry in recent years.
APsystems markets microinverters that handle up to four solar modules per unit, including the three-phase YC1000 with an AC output of up to 1,130 watts.
The number of manufacturers has dwindled over the years, through both attrition and consolidation. As of 2019, the few remaining included Enphase (which purchased SolarBridge in 2021), Omnik Solar and Chilicon Power (acquired by Generac in July 2021).
As of July 2021, the major PV companies that have partnered with microinverter companies to produce and sell AC solar panels include BenQ, Canadian Solar, LG, NESL, SunPower, Sharp Solar, Suntech, Siemens, Trina Solar and Qcells.
Market
As of 2019, conversion efficiency for state-of-the-art solar converters reached more than 98 percent. While string inverters are used in residential to medium-sized commercial PV systems, central inverters cover the large commercial and utility-scale market. Market-share for central and string inverters are about 36 percent and 61 percent, respectively, leaving less than 2 percent to micro-inverters.
Price declines
The period between 2009 and 2012 saw unprecedented downward price movement in the PV market. At the beginning of this period, wholesale pricing for panels was generally around $2.00 to $2.50/W, and inverters around 50 to 65 cents/W. By the end of 2012, panels were widely available wholesale at 65 to 70 cents/W, and string inverters at around 30 to 35 cents/W. In comparison, microinverters have proven relatively immune to these price declines, moving from about 65 cents/W to 50 to 55 cents/W once cabling is factored in. This could lead to widening losses as suppliers attempt to remain competitive.
See also
AC universal sockets
Amphenol connector
Grid tie inverter
Inverter (electrical)
MC4 connector
Nanoinverter
Open-source hardware
Power optimizer
Three-phase micro-inverter
Junction box
Switch
Waterproof
Zigbee
Charge controller
DC-to-DC converter
DC-to-DC charger
Off-the-grid
Synchronverter
Notes
References
Manufacturer's specification of YC1000 (for 4 modules):
Bibliography
David Katz, "Micro-Inverters and AC Modules",
External links
Model based control of photovoltaic inverter Simulation, description and working VisSim source code diagram
Micro-inverters vs. Central Inverters: Is There a Clear Winner?, podcast debating the ups and downs of the microinverter approach.
Design and Implementation of Three-phase Two-stage Grid-connected Module Integrated Converter
A Review of the Single Phase Photovoltaic Module Integrated Converter Topologies with Three Different DC Link Configurations
ZVS BCM Current Controlled Three-Phase Micro-inverter
APsystems microinverter YC1000-3 for 4 modules (900 Watt AC)
Inverters
Photovoltaics
Electric power | Solar inverter | [
"Physics",
"Engineering"
] | 7,300 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
777,293 | https://en.wikipedia.org/wiki/Ouabain | Ouabain (from Somali waabaayo, "arrow poison", through French ouabaïo), also known as g-strophanthin, is a plant-derived toxic substance that was traditionally used as an arrow poison in eastern Africa for both hunting and warfare. Ouabain is a cardiac glycoside and, in lower doses, can be used medically to treat hypotension and some arrhythmias. It acts by inhibiting the Na+/K+-ATPase, also known as the sodium–potassium ion pump. However, adaptations to the alpha-subunit of the Na+/K+-ATPase via amino acid substitutions have been observed in certain species, namely some herbivorous insect species, and these have resulted in toxin resistance.
It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
Sources
Ouabain can be found in the roots, stems, leaves, and seeds of the Acokanthera schimperi and Strophanthus gratus plants, both of which are native to eastern Africa.
Mechanism of action
Ouabain is a cardiac glycoside that acts by non-selectively inhibiting the Na+/K+-ATPase sodium–potassium ion pump. Once ouabain binds to this enzyme, the enzyme ceases to function, leading to an increase of intracellular sodium. This increase in intracellular sodium reduces the activity of the sodium–calcium exchanger (NCX), which pumps one calcium ion out of the cell and three sodium ions into the cell down their concentration gradient. Therefore, the decrease in the concentration gradient of sodium into the cell which occurs when the Na+/K+-ATPase is inhibited reduces the ability of the NCX to function. This in turn elevates intracellular calcium. This results in higher cardiac contractility and an increase in cardiac vagal tone. The change in ionic gradients caused by ouabain can also affect the membrane voltage of the cell and result in cardiac arrhythmias.
Symptoms
An overdose of ouabain can be detected by the presence of the following symptoms: rapid twitching of the neck and chest musculature, respiratory distress, increased and irregular heartbeat, rise in blood pressure, convulsions, wheezing, clicking, and gasping rattling. Death is caused by cardiac arrest.
Toxicology
Ouabain is a highly toxic compound; however, it has low bioavailability and is absorbed poorly from the alimentary tract, as much of an oral dose is destroyed. Intravenous administration results in greater available concentrations. After intravenous administration, the onset of action occurs within 2–10 minutes in humans, with the maximum effect enduring for 1.5 hours.
Ouabain is eliminated by renal excretion, largely unchanged.
Biological effects
Endogenous ouabain
In 1991, a specific high-affinity sodium pump inhibitor indistinguishable from ouabain was first discovered in the human circulation and proposed as one of the potential mediators of long-term blood pressure and of the enhanced salt excretion following salt and volume loading. This agent was an inhibitor of the sodium pump that acted similarly to digitalis. A number of analytical techniques led to the conclusion that this circulating molecule was ouabain and that humans were producing it as an endogenous hormone. A large portion of the scientific community agreed that this inhibitor was endogenous ouabain and that there was strong evidence to indicate that it was synthesized in the adrenal gland. One early speculative interpretation of the analytical data led to the proposal that endogenous ouabain may have been the 11-epimer, i.e., an isomer of plant ouabain. However, this possibility was excluded by various methods, including the synthesis of the 11-epimer and the demonstration that it has different chromatographic behavior from ouabain. Critically, the primary observations concerning the identification of ouabain in mammals were repeated and confirmed using a variety of tissue sources on three different continents with advanced analytical methods, as summarized elsewhere.
Despite widespread analytical confirmation, some questioned whether or not this endogenous substance is ouabain. The arguments were based less upon rigorous analytical data than on the fact that immunoassays are neither entirely specific nor reliable. Hence, it was suggested that some assays for endogenous ouabain detected other compounds or failed to detect ouabain at all. Additionally, it was suggested that rhamnose, the L-sugar component of ouabain, could not be synthesized within the body, despite published data to the contrary. Yet another argument against the existence of endogenous ouabain was the lack of effect of rostafuroxin (a first-generation ouabain receptor antagonist) on blood pressure in an unselected population of hypertensive patients.
Medical uses
Ouabain is no longer approved for use in the USA. In France and Germany, however, intravenous ouabain has a long history in the treatment of heart failure, and some continue to advocate its use intravenously and orally in angina pectoris and myocardial infarction despite its poor and variable absorption. The positive properties of ouabain regarding the prophylaxis and treatment of these two indications are documented by several studies.
Animal use of ouabain
The African crested rat (Lophiomys imhausi) has a broad, white-bordered strip of hairs covering an area of glandular skin on the flank. When the animal is threatened or excited, the mane on its back erects and this flank strip parts, exposing the glandular area. The hairs in this flank area are highly specialised; at the tips they are like ordinary hairs, but are otherwise spongy, fibrous, and absorbent. The rat is known to deliberately chew the roots and bark of the Poison-arrow tree (Acokanthera schimperi), which contains ouabain. After the rat has chewed the tree, instead of swallowing the poison it slathers the resulting masticate onto its specialised flank hairs which are adapted to absorb the poisonous mixture. It thereby creates a defense mechanism that can sicken or even kill predators which attempt to bite it.
Synthesis
The total synthesis of ouabain was achieved in 2008 by Deslongchamps laboratory in Canada. It was synthesized under the hypothesis that a polyanionic cyclization (double Michael addition followed by aldol condensation) would allow access to a tetracyclic intermediate with the desired functionality. The figure below shows the key steps in the synthesis of ouabain.
In their synthesis, Zhang et al. from the Deslongchamps laboratory condensed cyclohexenone A with Nazarov substitute B in a double Michael addition to produce tricycle C. At the indicated position, C was reduced to the aldehyde and the alcohol group was protected with p-methoxybenzyl ether (PMB) to form the aldol precursor needed to produce D. After several steps, intermediate E was produced. E contained all the required functionalities and stereochemistry needed to produce ouabain. The structure of E was confirmed by comparison against the degradation product of ouabain. Methylation of E, catalyzed by rhodium, produced F. The dehydroxylation and selective oxidation of the secondary hydroxy group of F produced G. G reacted with triphenyl phosphoranylidene ketene and the ester bonds in G were hydrolyzed to produce ouabagenin, a precursor to ouabain. The glycosylation of ouabagenin with rhamnose produced ouabain.
History
Africa
Poisons derived from Acokanthera plants are known to have been used in Africa as far back as the 3rd century BC when Theophrastus reported a toxic substance that the Ethiopians would smear on their arrows. The poisons derived from this genus of plants were used throughout eastern Africa, typically as arrow poisons for hunting and warfare. Acokanthera schimperi, in particular, exhibits a very large amount of ouabain, which the Kenyans, Tanzanians, Rwandans, Ethiopians, and Somalis would use as an arrow poison.
The poison was extracted from the branches and leaves of the plant by boiling them over a fire. Arrows would then be dipped into the concentrated black tar-like juice that formed. Often, certain magical additives were also mixed in with the ouabain extract in order to make the poison work according to the hunter's wishes. In Kenya, the Giriama and Langulu poison makers would add an elephant shrew to the poison mixture in order to facilitate the pursuit of their prey. They had observed that an elephant shrew would always run straight ahead or follow a direct path and thought that these properties would be transferred to the poison. A poisonous arrow made with this shrew was thought to cause the hunted animal to behave like the shrew and run in a straight path. In Rwanda, members of the Nyambo tribe, also known as poison arrow makers, harvest the Acokanthera plants according to how many dead insects are found under them; more dead insects under a shrub indicate a more potent poison.
Although ouabain was used as an arrow poison primarily for hunting, it was also used during battle. One example of this occurred during a battle against the Portuguese, who had stormed Mombasa in 1505. Portuguese records indicated that they had suffered a great deal from the poisoned arrows.
Europe
European imperial expansion and exploration into Africa overlapped with the rise of the European pharmaceutical industry towards the end of the nineteenth century. British troops were the target of arrows poisoned with the extracts of various Strophanthus species. They were familiar with the deadly properties of these plants and brought samples back to Europe. Around this time, interest in the plant grew. It was known that ouabain was a cardiac poison, but there was some speculation about its potential medical uses.
In 1882, ouabain was first isolated from the plant by the French chemist Léon-Albert Arnaud as an amorphous substance, which he identified as a glycoside. Ouabain was seen as a possible treatment for certain cardiac conditions.
See also
K-Strophanthidin
References
External links
Cardenolides
Pyranoses
Cyclohexanols
Cyclopentanols
Primary alcohols
Tertiary alcohols
Total synthesis
ATPase inhibitors
Plant toxins | Ouabain | [
"Chemistry"
] | 2,205 | [
"Total synthesis",
"Chemical ecology",
"Plant toxins",
"Chemical synthesis"
] |
777,462 | https://en.wikipedia.org/wiki/Neuroblast | In vertebrates, a neuroblast or primitive nerve cell is a postmitotic cell that does not divide further, and which will develop into a neuron after a migration phase. In invertebrates such as Drosophila, neuroblasts are neural progenitor cells which divide asymmetrically to produce a neuroblast, and a daughter cell of varying potency depending on the type of neuroblast. Vertebrate neuroblasts differentiate from radial glial cells and are committed to becoming neurons. Neural stem cells, which only divide symmetrically to produce more neural stem cells, transition gradually into radial glial cells. Radial glial cells, also called radial glial progenitor cells, divide asymmetrically to produce a neuroblast and another radial glial cell that will re-enter the cell cycle.
This mitosis occurs in the germinal neuroepithelium (or germinal zone), when a radial glial cell divides to produce the neuroblast. The neuroblast detaches from the epithelium and migrates while the radial glial progenitor cell produced stays in the lumenal epithelium. The migrating cell will not divide further and this is called the neuron's birthday. Cells with the earliest birthdays will only migrate a short distance. Those cells with later birthdays will migrate further to the more outer regions of the cerebral cortex. The positions that the migrated cells occupy will determine their neuronal differentiation.
Formation
Neuroblasts are formed by the asymmetric division of radial glial cells. They start to migrate as soon as they are born. Neurogenesis can only take place when neural stem cells have transitioned into radial glial cells.
Differentiation
Neuroblasts are mainly present as precursors of neurons during embryonic development; however, they also constitute one of the cell types involved in adult neurogenesis. Adult neurogenesis is characterized by neural stem cell differentiation and integration in the mature adult mammalian brain. This process occurs in the dentate gyrus of the hippocampus and in the subventricular zones of the adult mammalian brain. Neuroblasts are formed when a neural stem cell, which can differentiate into any type of mature neural cell (i.e. neurons, oligodendrocytes, astrocytes, etc.), divides and becomes a transit amplifying cell. Transit amplifying cells are slightly more differentiated than neural stem cells and can divide asymmetrically to produce postmitotic neuroblasts and glioblasts, as well as other transit amplifying cells. A neuroblast, a daughter cell of a transit amplifying cell, is initially a neural stem cell that has reached the "point of no return." A neuroblast has differentiated such that it will mature into a neuron and not any other neural cell type. Neuroblasts are being studied extensively as they have the potential to be used therapeutically to combat cell loss due to injury or disease in the brain, although their potential effectiveness is debated.
Migration
In the embryo neuroblasts form the middle mantle layer of the neural tube wall which goes on to form the grey matter of the spinal cord. The outer layer to the mantle layer is the marginal layer and this contains the myelinated axons from the neuroblasts forming the white matter of the spinal cord. The inner layer is the ependymal layer that will form the lining of the ventricles and central canal of the spinal cord.
In humans, neuroblasts produced by stem cells in the adult subventricular zone migrate into damaged areas after brain injuries. However, they are restricted to the subtype of small interneuron-like cells, and it is unlikely that they contribute to functional recovery of striatal circuits.
Clinical significance
There are several disorders known as neuronal migration disorders that can cause serious problems. These arise from a disruption in the pattern of migration of the neuroblasts on their way to their target destinations. The disorders include, lissencephaly, microlissencephaly, pachygyria, and several types of gray matter heterotopia.
Neuroblast development in Drosophila
In the fruit fly model organism Drosophila melanogaster, a neuroblast is a neural progenitor cell which divides asymmetrically to produce a neuroblast and either a neuron, a ganglion mother cell (GMC), or an intermediate neural progenitor, depending on the type of neuroblast. During embryogenesis, embryonic neuroblasts delaminate from either the procephalic neuroectoderm (for brain neuroblasts), or the ventral nerve cord neuroectoderm (for abdominal neuroblasts). During larval development, optic lobe neuroblasts are generated from a neuroectoderm called the Outer Proliferation Center. There are more than 800 optic lobe neuroblasts, 105 central brain neuroblasts, and 30 abdominal neuroblasts per hemisegment (a bilateral half of a segment).
Neuroblasts undergo three known division types. Type 0 neuroblasts divide to give rise to a neuroblast, and a daughter cell which directly differentiates into a single neuron or glia. Type I neuroblasts give rise to a neuroblast and a ganglion mother cell (GMC), which undergoes a terminal division to generate a pair of sibling neurons. This is the most common form of cell division, and is observed in abdominal, optic lobe, and central brain neuroblasts. Type II neuroblasts give rise to a neuroblast and a transit amplifying Intermediate Neural Progenitor (INP). INPs divide in a manner similar to type I neuroblasts, producing an INP and a ganglion mother cell. While only 8 type II neuroblasts exist in the central brain, their lineages are both much larger and more complex than type I neuroblasts. The switch from pluripotent neuroblast to differentiated cell fate is facilitated by the proteins Prospero, Numb, and Miranda. Prospero is a transcription factor that triggers differentiation. It is expressed in neuroblasts, but is kept out of the nucleus by Miranda, which tethers it to the cell basal cortex. This also results in asymmetric division, where Prospero localizes in only one out of the two daughter cells. After division, Prospero enters the nucleus, and the cell it is present in becomes the GMC.
Neuroblasts are capable of giving rise to the vast neural diversity present in the fly brain using a combination of spatial and temporal restriction of gene expression that give progeny born from each neuroblast a unique identity depending both their parent neuroblast and their birth date. This is partly based on the position of the neuroblast along the Anterior/Posterior and Dorsal/Ventral axes, and partly on a temporal sequence of transcription factors that are expressed in a specific order as neuroblasts undergo sequential divisions.
See also
Neuroblastoma
Posterior column
List of human cell types derived from the germ layers
References
Embryology of nervous system
Cell biology | Neuroblast | [
"Biology"
] | 1,536 | [
"Cell biology"
] |
777,471 | https://en.wikipedia.org/wiki/Engineering%20geology | Engineering geology is the application of geology to engineering study for the purpose of assuring that the geological factors regarding the location, design, construction, operation and maintenance of engineering works are recognized and accounted for. Engineering geologists provide geological and geotechnical recommendations, analysis, and design associated with human development and various types of structures. The realm of the engineering geologist is essentially in the area of earth-structure interactions, or investigation of how the earth or earth processes impact human made structures and human activities.
Engineering geology studies may be performed during the planning, environmental impact analysis, civil or structural engineering design, value engineering and construction phases of public and private works projects, and during post-construction and forensic phases of projects. Works completed by engineering geologists include; geologic hazards assessment, geotechnical, material properties, landslide and slope stability, erosion, flooding, dewatering, and seismic investigations, etc. Engineering geology studies are performed by a geologist or engineering geologist that is educated, trained and has obtained experience related to the recognition and interpretation of natural processes, the understanding of how these processes impact human made structures (and vice versa), and knowledge of methods by which to mitigate hazards resulting from adverse natural or human made conditions. The principal objective of the engineering geologist is the protection of life and property against damage caused by various geological conditions.
The practice of engineering geology is also very closely related to the practice of geological engineering and geotechnical engineering. If there is a difference in the content of the disciplines, it mainly lies in the training or experience of the practitioner.
History
Although the study of geology has been around for centuries, at least in its modern form, the science and practice of engineering geology did not commence as a recognized discipline until the late 19th and early 20th centuries. The first book titled Engineering Geology was published in 1880 by William Penning. In the early 20th century Charles Peter Berkey, an American-trained geologist who was considered the first American engineering geologist, worked on several water-supply projects for New York City, then later worked on the Hoover Dam and a multitude of other engineering projects. The first American engineering geology textbook was written in 1914 by Ries and Watson. In 1921 Reginald W. Brock, the first Dean of Applied Science at the University of British Columbia, started the first undergraduate and graduate degree programs in Geological Engineering, noting that students with an engineering foundation made first-class practising geologists. In 1925, Karl Terzaghi, an Austrian-trained engineer and geologist, published the first text in Soil Mechanics (in German). Terzaghi is known as the parent of soil mechanics, but also had a great interest in geology; Terzaghi considered soil mechanics to be a sub-discipline of engineering geology. In 1929, Terzaghi, along with Redlich and Kampe, published their own Engineering Geology text (also in German).
The need for geologists on engineering works gained worldwide attention in 1928 with the failure of the St. Francis Dam in California and the deaths of 426 people. Further engineering failures in the following years also prompted the requirement for engineering geologists to work on large engineering projects.
In 1951, one of the earliest definitions of the "Engineering geologist" or "Professional Engineering Geologist" was provided by the Executive Committee of the Division on Engineering Geology of the Geological Society of America.
The practice
One of the most important roles of an engineering geologist is the interpretation of landforms and earth processes to identify potential geologic and related human-made hazards that may have a great impact on civil structures and human development. Their background in geology provides engineering geologists with an understanding of how the earth works, which is crucial in minimizing earth-related hazards. Most engineering geologists also have graduate degrees, where they have gained specialized education and training in soil mechanics, rock mechanics, geotechnics, groundwater, hydrology, and civil design. These two aspects of the engineering geologists' education provide them with a unique ability to understand and mitigate hazards associated with earth-structure interactions.
Scope of studies
Engineering geology investigation and studies may be performed:
for residential, commercial and industrial developments;
for governmental and military installations;
for public works such as a stormwater drainage system, power plant, wind turbine, transmission line, sewage treatment plant, water treatment plant, pipeline (aqueduct, sewer, outfall), tunnel, trenchless construction, canal, dam, reservoir, building foundation, railroad, transit, highway, bridge, seismic retrofit, airport and park;
for mine and quarry developments, mine tailing dam, mine reclamation and mine tunneling;
for wetland and habitat restoration programs;
for government, commercial, or industrial hazardous waste remediation sites;
for coastal engineering, sand replenishment, bluff or sea cliff stability, harbor, pier and waterfront development;
for offshore outfall, drilling platform and sub-sea pipeline, sub-sea cable; and
for other types of facilities.
Geohazards and adverse geological conditions
Typical geologic hazards or other adverse conditions evaluated and mitigated by an engineering geologist include:
fault rupture on seismically active faults;
seismic and earthquake hazards (ground shaking, liquefaction, lurching, lateral spreading, tsunami and seiche events);
landslide, mudflow, rockfall, debris flow, and avalanche hazards;
unstable slopes and slope stability;
erosion;
slaking and heave of geologic formations, such as frost heaving;
ground subsidence (such as due to ground water withdrawal, sinkhole collapse, cave collapse, decomposition of organic soils, and tectonic movement);
volcanic hazards (volcanic eruptions, hot springs, pyroclastic flows, debris flow, debris avalanche, Volcanic gas emissions, volcanic earthquakes);
non-rippable or marginally rippable rock requiring heavy ripping or blasting;
weak and collapsible soils, foundation bearing failures;
shallow ground water/seepage; and
other types of geologic constraints.
An engineering geologist or geophysicist may be called upon to evaluate the excavatability (i.e. rippability) of earth (rock) materials to assess the need for pre-blasting during earthwork construction, as well as associated impacts due to vibration during blasting on projects.
Soil and rock mechanics
Soil mechanics is a discipline that applies principles of engineering mechanics, e.g. kinematics, dynamics, fluid mechanics, and mechanics of material, to predict the mechanical behaviour of soils. Rock mechanics is the theoretical and applied science of the mechanical behaviour of rock and rock masses; it is that branch of mechanics concerned with the response of rock and rock masses to the force-fields of their physical environment. The fundamental processes are all related to the behaviour of porous media. Together, soil and rock mechanics are the basis for solving many engineering geology problems.
Methods and reporting
The methods used by engineering geologists in their studies include
geologic field mapping of geologic structures, geologic formations, soil units and hazards;
the review of geologic literature, geologic maps, geotechnical reports, engineering plans, environmental reports, stereoscopic aerial photographs, remote sensing data, Global Positioning System (GPS) data, topographic maps and satellite imagery;
the excavation, sampling and logging of earth/rock materials in drilled borings, backhoe test pits and trenches, fault trenching, and bulldozer pits;
geophysical surveys (such as seismic refraction traverses, resistivity surveys, ground penetrating radar (GPR) surveys, magnetometer surveys, electromagnetic surveys, high-resolution sub-bottom profiling, and other geophysical methods);
deformation monitoring as the systematic measurement and tracking of the alteration in the shape or dimensions of an object as a result of the application of stress to it manually or with an automatic deformation monitoring system; and
other methods.
The fieldwork is typically culminated in analysis of the data and the preparation of an engineering geologic report, geotechnical report or design brief, fault hazard or seismic hazard report, geophysical report, ground water resource report or hydrogeologic report. The engineering geology report can also be prepared in conjunction with a geotechnical report, but commonly provides the same geotechnical analysis and design recommendations that would be presented in a geotechnical report. An engineering geology report describes the objectives, methodology, references cited, tests performed, findings and recommendations for development and detailed design of engineering works. Engineering geologists also provide geologic data on topographic maps, aerial photographs, geologic maps, Geographic Information System (GIS) maps, or other map bases.
See also
Earthquake engineering
Geological engineering
Geoprofessions
Geotechnics
Geotechnical engineering
Geotechnical investigation
Hydrogeology
Mining engineering
Petroleum engineering
References
Further reading
Engineering geology
Brock, 1923, The Education of a Geologist: Economic Geology, v. 18, pp. 595–597.
Bates and Jackson, 1980, Glossary of Geology: American Geological Institute.
González de Vallejo, L. and Ferrer, M., 2011. "Geological Engineering". CRC Press, 678 pp.
Kiersh, 1991, The Heritage of Engineering Geology: The First Hundred Years: Geological Society of America; Centennial Special Volume 3
Legget, Robert F., editor, 1982, Geology under cities: Geological Society of America; Reviews in Engineering Geology, volume V, 131 pages; contains nine articles by separate authors for these cities: Washington, DC; Boston; Chicago; Edmonton; Kansas City; New Orleans; New York City; Toronto; and Twin Cities, Minnesota.
Legget, Robert F., and Karrow, Paul F., 1983, Handbook of geology in civil engineering: McGraw-Hill Book Company, 1,340 pages, 50 chapters, five appendices, 771 illustrations.
Price, David George, Engineering Geology: Principles and Practice, Springer, 2008
Prof. D. Venkat Reddy, NIT-Karnataka, Engineering Geology, Vikas Publishers, 2010
Bulletin of Engineering Geology and the Environment
Geological modelling
Wang H. F., Theory of Linear Poroelasticity with Applications to Geomechanics and Hydrogeology, Princeton Press, (2000).
Waltham T., Foundations of Engineering Geology, 2nd Edition, Taylor & Francis, (2001).
Geotechnical engineering | Engineering geology | [
"Engineering"
] | 2,072 | [
"Civil engineering",
"Geotechnical engineering"
] |
4,739,327 | https://en.wikipedia.org/wiki/Multifocal%20multiphoton%20microscopy | Multifocal multiphoton microscopy is a microscopy technique for generating 3D images, which uses a laser beam, separated by an array of microlenses into a number of beamlets, focused on the sample. The multiple signals are imaged onto a CCD camera in the same way as in a conventional microscope. The image rate is determined by the camera frame rate, depending on the readout rate and the number of pixels, and may range well above 30 images/s.
By exploiting specific properties of pulsed-mode multiphoton excitation, the conflict between the density of the foci, i.e. the degree of parallelization, and the axial sectioning has been resolved. The laser pulses of neighboring foci are temporally separated by at least one pulse duration, so that interference is avoided. This method is referred to as time-multiplexing (TMX). Moreover, with a high degree of time multiplicity, the interfocal distance can be reduced to such an extent that lateral scanning becomes obsolete. In this case axial scanning is sufficient to record a 3D image.
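A small sketch of the time-multiplexing idea (the repetition rate and pulse duration below are typical assumed values, not figures from this article): the number of temporally separated foci that fit into one laser period when neighboring foci are delayed by at least one pulse duration.

```python
# Assumed, typical values for a pulsed multiphoton excitation laser.
rep_rate_hz = 80e6          # assumed laser repetition rate
pulse_duration_s = 200e-15  # assumed pulse duration
delay_between_foci_s = pulse_duration_s  # minimum delay that avoids interference

period_s = 1.0 / rep_rate_hz
max_time_slots = int(period_s // delay_between_foci_s)

print(f"laser period: {period_s * 1e9:.1f} ns")
print(f"maximum number of temporally separated foci: {max_time_slots}")
```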
References
External links
Technical Details
Microscopy | Multifocal multiphoton microscopy | [
"Chemistry"
] | 232 | [
"Microscopy"
] |
4,739,349 | https://en.wikipedia.org/wiki/STED%20microscopy | Stimulated emission depletion (STED) microscopy is one of the techniques that make up super-resolution microscopy. It creates super-resolution images by the selective deactivation of fluorophores, minimizing the area of illumination at the focal point, and thus enhancing the achievable resolution for a given system. It was developed by Stefan W. Hell and Jan Wichmann in 1994, and was first experimentally demonstrated by Hell and Thomas Klar in 1999. Hell was awarded the Nobel Prize in Chemistry in 2014 for its development. In 1986, V.A. Okhonin (Institute of Biophysics, USSR Academy of Sciences, Siberian Branch, Krasnoyarsk) had patented the STED idea. This patent was unknown to Hell and Wichmann in 1994.
STED microscopy is one of several types of super-resolution microscopy techniques that have recently been developed to bypass the diffraction limit of light microscopy and increase resolution. STED is a deterministic functional technique that exploits the non-linear response of fluorophores commonly used to label biological samples in order to achieve an improvement in resolution; that is to say, STED allows images to be taken at resolutions below the diffraction limit. This differs from stochastic functional techniques such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), as these methods use mathematical models to reconstruct a sub-diffraction-limit image from many sets of diffraction-limited images.
Background
In traditional microscopy, the resolution that can be obtained is limited by the diffraction of light. Ernst Abbe developed an equation to describe this limit. The equation is:

d = λ / (2 n sin α) = λ / (2 NA)

where d is the diffraction limit, λ is the wavelength of the light used to excite the specimen, n is the refractive index of the specimen, α is the solid half-angle from which light is gathered by the objective, and NA = n sin α is the numerical aperture. To obtain high resolution (i.e. small d values), short wavelengths and high NA values are optimal. This diffraction limit is the standard by which all super-resolution methods are measured. Because STED selectively deactivates the fluorescence, it can achieve resolution better than traditional confocal microscopy. Normal fluorescence occurs by exciting an electron from the ground state into an excited electronic state of a different fundamental energy level (S0 goes to S1) which, after relaxing back to the vibrational ground state (of S1), emits a photon by dropping from S1 to a vibrational energy level on S0. STED interrupts this process before the photon is released. The excited electron is forced to relax into a higher vibrational state than the fluorescence transition would enter, causing the photon that is released to be red-shifted, as shown in the image to the right. Because the electron is going to a higher vibrational state, the energy difference of the two states is lower than the normal fluorescence difference. This lowering of energy raises the wavelength, and causes the photon to be shifted farther into the red end of the spectrum. This shift differentiates the two types of photons, and allows the stimulated photon to be ignored.
To force this alternative emission to occur, an incident photon must strike the fluorophore. This need to be struck by an incident photon has two implications for STED. First, the number of incident photons directly impacts the efficiency of this emission, and, secondly, with sufficiently large numbers of photons fluorescence can be completely suppressed. To achieve the large number of incident photons needed to suppress fluorescence, the laser used to generate the photons must be of a high intensity. Unfortunately, this high intensity laser can lead to the issue of photobleaching the fluorophore. Photobleaching is the name for the destruction of fluorophores by high intensity light.
Process
STED functions by depleting fluorescence in specific regions of the sample while leaving a center focal spot active to emit fluorescence. This focal area can be engineered by altering the properties of the pupil plane of the objective lens. The most common early example of these diffractive optical elements, or DOEs, is a torus shape used in two-dimensional lateral confinement shown below. The red zone is depleted, while the green spot is left active. This DOE is generated by a circular polarization of the depletion laser, combined with an optical vortex. The lateral resolution of this DOE is typically between 30 and 80 nm. However, values down to 2.4 nm have been reported. Using different DOEs, axial resolution on the order of 100 nm has been demonstrated. A modified Abbe's equation describes this sub diffraction resolution as:
d ≈ λ / (2 n sin α √(1 + I/Is))

where n is the refractive index of the medium, I is the intracavity intensity and Is is the saturation intensity. ζ = I/Is is the saturation factor expressing the ratio of the applied (maximum) STED intensity to the saturation intensity.
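A minimal numerical sketch of these two expressions (the wavelength and numerical aperture below are assumed example values, not taken from this article), evaluating the classic limit and the STED-modified limit for a few saturation factors:

```python
import math

wavelength_nm = 640   # assumed wavelength
num_aperture = 1.4    # assumed NA of an oil-immersion objective

def abbe_limit(wavelength, na):
    """Classic diffraction limit d = lambda / (2 NA)."""
    return wavelength / (2 * na)

def sted_limit(wavelength, na, saturation_factor):
    """Modified limit d ~ lambda / (2 NA sqrt(1 + I/Is))."""
    return wavelength / (2 * na * math.sqrt(1 + saturation_factor))

print(f"diffraction limit: {abbe_limit(wavelength_nm, num_aperture):.0f} nm")
for zeta in (10, 50, 100):
    print(f"STED limit, I/Is = {zeta:>3}: {sted_limit(wavelength_nm, num_aperture, zeta):.0f} nm")
```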
To optimize the effectiveness of STED, the destructive interference in the center of the focal spot needs to be as close to perfect as possible. That imposes certain constraints on the optics that can be used.
Dyes
Early on in the development of STED, the number of dyes that could be used in the process was very limited. Rhodamine B was named in the first theoretical description of STED. As a result, the first dyes used were laser emitting in the red spectrum. To allow for STED analysis of biological systems, the dyes and laser sources must be tailored to the system. This desire for better analysis of these systems has led to living cell STED and multicolor STED, but it has also demanded more and more advanced dyes and excitation systems to accommodate the increased functionality.
One such advancement was the development of immunolabeled cells. These cells are STED fluorescent dyes bound to antibodies through amide bonds. The first use of this technique coupled MR-121SE, a red dye, with a secondary anti-mouse antibody. Since that first application, this technique has been applied to a much wider range of dyes including green emitting, Atto 532, and yellow emitting, Atto 590, as well as additional red emitting dyes. In addition, Atto 647N was first used with this method to produce two-color STED.
Applications
Over the last several years, STED has developed from a complex and highly specific technique to a general fluorescence method. As a result, a number of methods have been developed to expand the utility of STED and to allow more information to be provided.
Structural analysis
From the beginning of the process, STED has allowed fluorescence microscopy to perform tasks that had been only possible using electron microscopy. As an example, STED was used for the elucidation of protein structure analysis at a sub-organelle level. The common proof of this level of study is the observation of cytoskeletal filaments. In addition, neurofilaments, actin, and tubulin are often used to compare the resolving power of STED and confocal microscopes.
Using STED, a lateral resolution of 70–90 nm has been achieved while examining SNAP25, a human protein that regulates membrane fusion. This observation has shown that SNAP25 forms clusters independently of the SNARE motif's functionality, and binds to clustered syntaxin. Studies of complex organelles, like mitochondria, also benefit from STED microscopy for structural analysis. Using custom-made STED microscopes with a lateral resolution of less than 50 nm, the mitochondrial proteins Tom20, VDAC1, and COX2 were found to distribute as nanoscale clusters. Another study used a homemade STED microscope and a DNA-binding fluorescent dye to measure lengths of DNA fragments much more precisely than conventional measurement with confocal microscopy.
Correlative methods
Due to its function, STED microscopy can often be used with other high-resolution methods. The resolution of both electron and atomic force microscopy is even better than STED resolution, but by combining atomic force with STED, Shima et al. were able to visualize the actin cytoskeleton of human ovarian cancer cells while observing changes in cell stiffness.
Multicolor
Multicolor STED was developed in response to a growing problem in using STED to study the dependency between structure and function in proteins. To study this type of complex system, at least two separate fluorophores must be used. Using two fluorescent dyes and beam pairs, colocalized imaging of synaptic and mitochondrial protein clusters is possible with a resolution down to 5 nm [18]. Multicolor STED has also been used to show that different populations of synaptic vesicle proteins do not mix or escape synaptic boutons. By using two-color STED with multi-lifetime imaging, three-channel STED is possible.
Live-cell
Early on, STED was thought to be a useful technique for working with living cells. Unfortunately, the only way for cells to be studied was to label the plasma membrane with organic dyes. Combining STED with fluorescence correlation spectroscopy showed that cholesterol-mediated molecular complexes trap sphingolipids, but only transiently. However, only fluorescent proteins provide the ability to visualize any organelle or protein in a living cell. This method was shown to work at 50 nm lateral resolution within Citrine-tubulin expressing mammalian cells. In addition to detecting structures in mammalian cells, STED has allowed for the visualization of clustering YFP tagged PIN proteins in the plasma membrane of plant cells.
Recently, multicolor live-cell STED was performed using a pulsed far-red laser and CLIPf-tag and SNAPf-tag expression.
In the brain of intact animals
Superficial layers of mouse cortex can be repetitively imaged through a cranial window. This allows following the fate and shape of individual dendritic spines for many weeks. With two-color STED, it is even possible to resolve the nanostructure of the postsynaptic density in live animals.
STED at video rates and beyond
Super-resolution requires small pixels, which means more points to acquire in a given sample, which leads to a longer acquisition time. However, the focal spot size is dependent on the intensity of the laser being used for depletion. As a result, this spot size can be tuned, changing the size and imaging speed. A compromise can then be reached between these two factors for each specific imaging task. Rates of 80 frames per second have been recorded, with focal spots around 60 nm. Up to 200 frames per second can be reached for small fields of view.
Problems
Photobleaching can occur either from excitation into an even higher excited state, or from excitation in the triplet state. To prevent the excitation of an excited electron into another, higher excited state, the energy of the photon needed to trigger the alternative emission should not overlap the energy of the excitation from one excited state to another. This will ensure that each laser photon that contacts the fluorophores will cause stimulated emission, and not cause the electron to be excited to another, higher energy state. Triplet states are much longer lived than singlet states, and to prevent triplet states from exciting, the time between laser pulses needs to be long enough to allow the electron to relax through another quenching method, or a chemical compound should be added to quench the triplet state.
See also
Confocal microscopy
Fluorescence
Fluorescence microscope
Ground state depletion microscopy
Laser scanning confocal microscopy
Optical microscope
Photoactivated localization microscopy
Stochastic optical reconstruction microscopy
Super-resolution microscopy
References
External links
Overview at the Department of NanoBiophotonics at the Max Planck Institute for Biophysical Chemistry.
Brief summary of the RESOLFT equations developed by Stefan Hell.
Stefan Hell lecture: Super-Resolution: Overview and Stimulated Emission Depletion (STED) Microscopy
Light Microscopy: An ongoing contemporary revolution (Introductory Review)
Cell imaging
Diffraction
Laboratory equipment
Optical microscopy techniques | STED microscopy | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 2,529 | [
"Spectrum (physical sciences)",
"Crystallography",
"Diffraction",
"Microscopy",
"Spectroscopy",
"Cell imaging"
] |
4,739,827 | https://en.wikipedia.org/wiki/Hyperplane%20separation%20theorem | In geometry, the hyperplane separation theorem is a theorem about disjoint convex sets in n-dimensional Euclidean space. There are several rather similar versions. In one version of the theorem, if both these sets are closed and at least one of them is compact, then there is a hyperplane in between them and even two parallel hyperplanes in between them separated by a gap. In another version, if both disjoint convex sets are open, then there is a hyperplane in between them, but not necessarily any gap. An axis which is orthogonal to a separating hyperplane is a separating axis, because the orthogonal projections of the convex bodies onto the axis are disjoint.
The hyperplane separation theorem is due to Hermann Minkowski. The Hahn–Banach separation theorem generalizes the result to topological vector spaces.
A related result is the supporting hyperplane theorem.
In the context of support-vector machines, the optimally separating hyperplane or maximum-margin hyperplane is a hyperplane which separates two convex hulls of points and is equidistant from the two.
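As a sketch of the maximum-margin idea (the point clouds are made-up example data, and the use of scikit-learn is an assumption, not something referenced by this article), a linear support-vector machine recovers the coefficients of a separating hyperplane w·x + b = 0:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable point clouds (hypothetical example data).
rng = np.random.default_rng(0)
A = rng.normal(loc=(-2.0, -2.0), scale=0.5, size=(50, 2))
B = rng.normal(loc=(+2.0, +2.0), scale=0.5, size=(50, 2))

X = np.vstack([A, B])
y = np.array([0] * len(A) + [1] * len(B))

# A linear SVM with a large C approximates the hard-margin, maximum-margin hyperplane.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
print(f"separating hyperplane: {w[0]:.2f}*x1 + {w[1]:.2f}*x2 + {b:.2f} = 0")
```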
Statements and proof
In all cases, assume A and B to be disjoint, nonempty, and convex subsets of ℝⁿ. The summary of the results is as follows:
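In standard form (a sketch of the usual statements), the two principal versions read:

```latex
% Separation (general case): if $A, B \subset \mathbb{R}^n$ are disjoint, nonempty
% and convex, there exist a nonzero $v \in \mathbb{R}^n$ and $c \in \mathbb{R}$ with
\[
  \langle x, v \rangle \ge c \ge \langle y, v \rangle
  \qquad \text{for all } x \in A,\ y \in B .
\]
% Strict separation: if in addition $A$ and $B$ are closed and at least one of them
% is compact, the hyperplane can be chosen with a gap, i.e. there are $c_1 > c_2$ with
\[
  \langle x, v \rangle > c_1 > c_2 > \langle y, v \rangle
  \qquad \text{for all } x \in A,\ y \in B .
\]
```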
The number of dimensions must be finite. In infinite-dimensional spaces there are examples of two closed, convex, disjoint sets which cannot be separated by a closed hyperplane (a hyperplane where a continuous linear functional equals some constant) even in the weak sense where the inequalities are not strict.
Here, the compactness in the hypothesis cannot be relaxed; see an example in the section Counterexamples and uniqueness. This version of the separation theorem does generalize to infinite-dimension; the generalization is more commonly known as the Hahn–Banach separation theorem.
The proof is based on the following lemma:
Since a separating hyperplane cannot intersect the interiors of open convex sets, we have a corollary:
Case with possible intersections
If the sets have possible intersections, but their relative interiors are disjoint, then the proof of the first case still applies with no change, thus yielding:
in particular, we have the supporting hyperplane theorem.
Converse of theorem
Note that the existence of a hyperplane that only "separates" two convex sets in the weak sense of both inequalities being non-strict obviously does not imply that the two sets are disjoint. Both sets could have points located on the hyperplane.
Counterexamples and uniqueness
If one of A or B is not convex, then there are many possible counterexamples. For example, A and B could be concentric circles. A more subtle counterexample is one in which A and B are both closed but neither one is compact. For example, if A is a closed half plane and B is bounded by one arm of a hyperbola, then there is no strictly separating hyperplane:
(Although, by an instance of the second theorem, there is a hyperplane that separates their interiors.) Another type of counterexample has A compact and B open. For example, A can be a closed square and B can be an open square that touches A.
In the first version of the theorem, evidently the separating hyperplane is never unique. In the second version, it may or may not be unique. Technically a separating axis is never unique because it can be translated; in the second version of the theorem, a separating axis can be unique up to translation.
The horn angle provides a good counterexample to many hyperplane separations. For example, in the plane, the unit disk is disjoint from the open interval ((1,0),(1,1)), but the only line separating them contains the entirety of ((1,0),(1,1)). This shows that if A is closed and B is relatively open, then there does not necessarily exist a separation that is strict for B. However, if A is a closed polytope then such a separation exists.
More variants
Farkas' lemma and related results can be understood as hyperplane separation theorems when the convex bodies are defined by finitely many linear inequalities.
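For reference, a standard statement of Farkas' lemma (quoted here in one common form, not necessarily the article's own notation): for a matrix A in R^{m×n} and a vector b in R^m, exactly one of the following holds:
\[
\exists\, x \in \mathbb{R}^{n},\ x \ge 0,\ Ax = b
\qquad\text{or}\qquad
\exists\, y \in \mathbb{R}^{m},\ A^{\mathsf{T}} y \ge 0,\ b^{\mathsf{T}} y < 0 .
\]
Geometrically, either b lies in the convex cone generated by the columns of A, or some hyperplane with normal y separates b from that cone.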
More results may be found.
Use in collision detection
In collision detection, the hyperplane separation theorem is usually used in the following form:
Regardless of dimensionality, the separating axis is always a line.
For example, in 3D, the space is separated by planes, but the separating axis is perpendicular to the separating plane.
The separating axis theorem can be applied for fast collision detection between polygon meshes. Each face's normal or other feature direction is used as a separating axis. Note that this yields possible separating axes, not separating lines/planes.
In 3D, using face normals alone will fail to separate some edge-on-edge non-colliding cases. Additional axes, consisting of the cross-products of pairs of edges, one taken from each object, are required.
For increased efficiency, parallel axes may be calculated as a single axis.
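As an illustration of the polygon case, the following is a minimal 2D separating-axis test in Python. It is a sketch of the procedure described above (the edge normals of both polygons serve as candidate axes); the function names are invented for this example, and the extra edge-cross-product axes needed for 3D meshes are omitted.

```python
def project(points, axis):
    """Project polygon vertices onto an axis and return the (min, max) extents."""
    dots = [x * axis[0] + y * axis[1] for x, y in points]
    return min(dots), max(dots)

def edge_normals(points):
    """Return one normal per edge; only the direction matters for the test."""
    normals = []
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        normals.append((-(y2 - y1), x2 - x1))  # perpendicular to the edge
    return normals

def sat_overlap(poly_a, poly_b):
    """True if two convex polygons (lists of (x, y) vertices in order) overlap."""
    for axis in edge_normals(poly_a) + edge_normals(poly_b):
        min_a, max_a = project(poly_a, axis)
        min_b, max_b = project(poly_b, axis)
        if max_a < min_b or max_b < min_a:
            return False  # projections are disjoint: a separating axis exists
    return True  # no candidate axis separates the polygons: they collide

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(x + 0.5, y + 0.5) for x, y in square]
print(sat_overlap(square, shifted))                           # True (overlapping)
print(sat_overlap(square, [(3, 3), (4, 3), (4, 4), (3, 4)]))  # False (separated)
```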
See also
Dual cone
Farkas's lemma
Kirchberger's theorem
Optimal control
Notes
References
External links
Collision detection and response
Theorems in convex geometry
Hermann Minkowski
Linear functionals
fr:Séparation des convexes | Hyperplane separation theorem | [
"Mathematics"
] | 1,060 | [
"Theorems in convex geometry",
"Theorems in geometry"
] |
4,740,956 | https://en.wikipedia.org/wiki/Vanadocene%20dichloride | Vanadocene dichloride is an organometallic complex with formula (η5-C5H5)2VCl2 (commonly abbreviated as Cp2VCl2). It is a structural analogue of titanocene dichloride but with vanadium(IV) instead of titanium(IV). This compound has one unpaired electron, hence Cp2VCl2 is paramagnetic. Vanadocene dichloride is a suitable precursor for a variety of bis(cyclopentadienyl)vanadium(IV) compounds.
Preparation
Cp2VCl2 was first prepared by Wilkinson and Birmingham via the reaction of NaC5H5 and VCl4 in THF.
Reactions and use
The compound has been used in organic synthesis.
Reduction of vanadocene dichloride gives vanadocene, (C5H5)2V.
Like titanocene dichloride, this organovanadium compound was investigated as a potential anticancer drug. It was conjectured to function by interactions with the protein transferrin.
References
Metallocenes
Organovanadium compounds
Chloro complexes
Cyclopentadienyl complexes | Vanadocene dichloride | [
"Chemistry"
] | 245 | [
"Organometallic chemistry",
"Cyclopentadienyl complexes"
] |
4,741,089 | https://en.wikipedia.org/wiki/Jeans%20instability | The Jeans instability is a concept in astrophysics that describes an instability that leads to the gravitational collapse of a cloud of gas or dust. It causes the collapse of interstellar gas clouds and subsequent star formation. It occurs when the internal gas pressure is not strong enough to prevent the gravitational collapse of a region filled with matter. It is named after James Jeans.
For stability, the cloud must be in hydrostatic equilibrium, which in the case of a spherical cloud translates to
\[
\frac{dp}{dr} = -\frac{G\,M_{\mathrm{enc}}(r)\,\rho(r)}{r^{2}},
\]
where M_enc(r) is the enclosed mass, p is the pressure, ρ(r) is the density of the gas at radius r, G is the gravitational constant, and r is the radius. The equilibrium is stable if small perturbations are damped and unstable if they are amplified. In general, the cloud is unstable if it is either very massive at a given temperature or very cool at a given mass; under these circumstances, the gas pressure gradient cannot overcome gravitational force, and the cloud will collapse. This is called the "Jeans Collapse Criterion".
The Jeans instability likely determines when star formation occurs in molecular clouds.
History
In 1720, Edmund Halley considered a universe without edges and pondered what would happen if the "system of the world", which exists within the universe, were finite or infinite. In the finite case, stars would gravitate towards the center, and if infinite, all the stars would be nearly in equilibrium and the stars would eventually reach a resting place.
Contrary to the writing of Halley, Isaac Newton, in a 1692/3 letter to Richard Bentley, wrote that it's hard to imagine that particles in an infinite space should be able to stand in such a configuration to result in a perfect equilibrium.
James Jeans extended the issue of gravitational stability to include pressure. In 1902, Jeans wrote, similarly to Halley, that a finite distribution of matter, assuming pressure does not prevent it, will collapse gravitationally towards its center. For an infinite distribution of matter, there are two possible scenarios. An exactly homogeneous distribution has no clear center of mass and no clear way to define a gravitational acceleration direction. For the other case, Jeans extends what Newton wrote about: Jeans demonstrated that small deviations from exact homogeneity lead to instabilities.
Jeans mass
The Jeans mass is named after the British physicist Sir James Jeans, who considered the process of gravitational collapse within a gaseous cloud. He was able to show that, under appropriate conditions, a cloud, or part of one, would become unstable and begin to collapse when it lacked sufficient gaseous pressure support to balance the force of gravity. The cloud is stable for sufficiently small mass (at a given temperature and radius), but once this critical mass is exceeded, it will begin a process of runaway contraction until some other force can impede the collapse. He derived a formula for calculating this critical mass as a function of its density and temperature. The greater the mass of the cloud, the bigger its size, and the colder its temperature, the less stable it will be against gravitational collapse.
The approximate value of the Jeans mass may be derived through a simple physical argument. One begins with a spherical gaseous region of radius R, mass M, and gaseous sound speed c_s. The gas is compressed slightly and it takes a time
\[
t_{\mathrm{sound}} = \frac{R}{c_s}
\]
for sound waves to cross the region and attempt to push back and re-establish the system in pressure balance. At the same time, gravity will attempt to contract the system even further, and will do so on a free-fall time
\[
t_{\mathrm{ff}} \approx \frac{1}{\sqrt{G\rho}} = \frac{1}{\sqrt{G\mu n}},
\]
where G is the universal gravitational constant, ρ is the gas density within the region, and n = ρ/μ is the gas number density for mean mass per particle μ (μ ≈ 3.9×10⁻²⁴ g is appropriate for molecular hydrogen with 20% helium by number). When the sound-crossing time is less than the free-fall time, pressure forces temporarily overcome gravity, and the system returns to a stable equilibrium. However, when the free-fall time is less than the sound-crossing time, gravity overcomes pressure forces, and the region undergoes gravitational collapse. The condition for gravitational collapse is therefore
\[
t_{\mathrm{ff}} < t_{\mathrm{sound}}.
\]
The resultant Jeans length λ_J is approximately
\[
\lambda_J \approx \frac{c_s}{\sqrt{G\rho}}.
\]
This length scale is known as the Jeans length. All scales larger than the Jeans length are unstable to gravitational collapse, whereas smaller scales are stable. The Jeans mass M_J is just the mass contained in a sphere of radius R_J (R_J is half the Jeans length):
\[
M_J = \frac{4\pi}{3}\,\rho R_J^{3} = \frac{\pi}{6}\,\frac{c_s^{3}}{G^{3/2}\rho^{1/2}}.
\]
"Jeans swindle"
It was later pointed out by other astrophysicists including Binney and Tremaine that the original analysis used by Jeans was flawed: in his formal analysis, although Jeans assumed that the collapsing region of the cloud was surrounded by an infinite, static medium, the surrounding medium should in reality also be collapsing, since all larger scales are also gravitationally unstable by the same analysis. The influence of this medium was completely ignored in Jeans' analysis. This flaw has come to be known as the "Jeans swindle".
Remarkably, a more careful analysis that takes into account other factors, such as the expansion of the Universe, shows that these effects fortuitously cancel out the apparent error in Jeans' analysis: Jeans' equation is correct, even if its derivation might have been dubious.
Energy-based derivation
An alternative, arguably even simpler, derivation can be found using energy considerations. In the interstellar cloud, two opposing forces are at work. The gas pressure, caused by the thermal movement of the atoms or molecules comprising the cloud, tries to make the cloud expand, whereas gravitation tries to make the cloud collapse. The Jeans mass is the critical mass where both forces are in equilibrium with each other. In the following derivation numerical constants (such as π) and constants of nature (such as the gravitational constant) will be ignored. They will be reintroduced in the result.
Consider a homogeneous spherical gas cloud with radius R. In order to compress this sphere to a radius R − dR, work must be done against the gas pressure. During the compression, gravitational energy is released. When this energy equals the amount of work to be done on the gas, the critical mass is attained. Let M be the mass of the cloud, T the (absolute) temperature, n the particle density, and p the gas pressure. The work to be done equals p dV. Using the ideal gas law, according to which p = n k_B T, one arrives at the following expression for the work:
The gravitational potential energy of a sphere with mass M and radius R is, apart from constants, given by the following expression:
The amount of energy released when the sphere contracts from radius R to radius R − dR is obtained by differentiating this expression with respect to R, so
The critical mass is attained as soon as the released gravitational energy is equal to the work done on the gas:
Next, the radius R must be expressed in terms of the particle density n and the mass M. This can be done using the relation
A little algebra leads to the following expression for the critical mass:
If during the derivation all constants are taken along, the resulting expression is
where k_B is the Boltzmann constant, G the gravitational constant, and m the mass of a particle comprising the gas. Assuming the cloud to consist of atomic hydrogen, the prefactor can be calculated. If we take the solar mass as the unit of mass and suitable units for the particle density, the result is
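For orientation, the commonly quoted textbook form of the Jeans mass with all constants retained (the standard virial-type result; the exact prefactor of the derivation above may differ) is
\[
M_J = \left(\frac{5 k_B T}{G m}\right)^{3/2} \left(\frac{3}{4\pi\rho}\right)^{1/2},
\]
with m the mean mass per particle and ρ = mn. A short numerical check in Python for illustrative cold-cloud values (the temperature, density, and mean molecular weight below are assumptions chosen only as an example):

```python
import math

# Physical constants (SI units)
k_B = 1.380649e-23   # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6726e-27     # hydrogen (proton) mass, kg
M_sun = 1.989e30     # solar mass, kg

# Illustrative cloud parameters (assumed example values)
T = 10.0             # temperature, K
n = 1e10             # number density, m^-3 (equivalent to 1e4 cm^-3)
mu = 2.33            # mean molecular weight of molecular gas

m = mu * m_H         # mean mass per particle, kg
rho = m * n          # mass density, kg/m^3

# Jeans mass in the textbook form quoted above
M_J = (5 * k_B * T / (G * m))**1.5 * (3 / (4 * math.pi * rho))**0.5
print(f"Jeans mass ~ {M_J / M_sun:.1f} solar masses")  # a few solar masses for these values
```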
Jeans' length
Jeans' length is the critical radius of a cloud (typically a cloud of interstellar molecular gas and dust) where thermal energy, which causes the cloud to expand, is counteracted by gravity, which causes the cloud to collapse. It is named after the British astronomer Sir James Jeans, who concerned himself with the stability of spherical nebulae in the early 1900s.
The formula for Jeans length is:
\[
\lambda_J = \sqrt{\frac{15\, k_B T}{4\pi G \mu \rho}},
\]
where k_B is the Boltzmann constant, T is the temperature of the cloud, μ is the mass per particle in the cloud, G is the gravitational constant, and ρ is the cloud's mass density (i.e. the cloud's mass divided by the cloud's volume).
Perhaps the easiest way to conceptualize Jeans' length is in terms of a close approximation, in which we discard the factors 15 and 4π and in which we rephrase ρ as M/r³. The formula for Jeans' length then becomes:
\[
\lambda_J \approx \sqrt{\frac{k_B T\, r^{3}}{G M \mu}},
\]
where r is the radius of the cloud.
It follows immediately that λ_J = r when k_B T = GMμ/r; i.e., the cloud's radius is the Jeans' length when thermal energy per particle equals gravitational work per particle. At this critical length the cloud neither expands nor contracts. It is only when thermal energy is not equal to gravitational work that the cloud either expands and cools or contracts and warms, a process that continues until equilibrium is reached.
Jeans' length as oscillation wavelength
The Jeans' length is the oscillation wavelength (respectively, Jeans' wavenumber, k_J) below which stable oscillations rather than gravitational collapse will occur:
\[
k_J = \frac{2\pi}{\lambda_J} = \frac{\sqrt{4\pi G \rho}}{c_s},
\qquad
\lambda_J = c_s \sqrt{\frac{\pi}{G\rho}},
\]
where G is the gravitational constant, c_s is the sound speed, and ρ is the enclosed mass density.
It is also the distance a sound wave would travel in the collapse time.
Fragmentation
Jeans instability can also give rise to fragmentation in certain conditions. To derive the condition for fragmentation an adiabatic process is assumed in an ideal gas and also a polytropic equation of state is taken. The derivation is shown below through a dimensional analysis:
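A sketch of the dimensional argument (standard scalings, reconstructed here rather than quoted from the article): with
\[
M_J \propto \frac{c_s^{3}}{\sqrt{\rho}}, \qquad c_s^{2} \propto \frac{p}{\rho}, \qquad p \propto \rho^{\gamma},
\]
one obtains
\[
M_J \propto \rho^{\frac{3\gamma - 4}{2}},
\]
so the sign of the exponent (3γ − 4)/2 decides whether the Jeans mass grows or shrinks as the collapsing region becomes denser.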
If the adiabatic index γ > 4/3, the Jeans mass increases with increasing density, while if γ < 4/3 the Jeans mass decreases with increasing density. During gravitational collapse density always increases, thus in the second case the Jeans mass will decrease during collapse, allowing smaller overdense regions to collapse, leading to fragmentation of the giant molecular cloud. For an ideal monatomic gas, the adiabatic index is 5/3. However, in astrophysical objects this value is usually close to 1 (for example, in partially ionized gas at temperatures low compared to the ionization energy). More generally, the process is not really adiabatic but involves cooling by radiation that is much faster than the contraction, so that the process can be modeled by an adiabatic index as low as 1 (which corresponds to the polytropic index of an isothermal gas). So the second case is the rule rather than the exception in stars. This is the reason why stars usually form in clusters.
See also
Bonnor–Ebert mass
Langmuir waves (similar waves in a plasma)
References
Concepts in astrophysics
Effects of gravity
Fluid dynamic instabilities
Interstellar media
Star formation | Jeans instability | [
"Physics",
"Chemistry",
"Astronomy"
] | 2,062 | [
"Interstellar media",
"Outer space",
"Fluid dynamic instabilities",
"Concepts in astrophysics",
"Astrophysics",
"Fluid dynamics"
] |
4,742,308 | https://en.wikipedia.org/wiki/Phase%20boundary | In thermal equilibrium, each phase (i.e. liquid, solid etc.) of physical matter comes to an end at a transitional point, or spatial interface, called a phase boundary, due to the immiscibility of the matter with the matter on the other side of the boundary. This immiscibility is due to at least one difference between the two substances' corresponding physical properties. The behavior of phase boundaries has been a developing subject of interest and an active interdisciplinary research field, called interface science, for almost two centuries, due partly to phase boundaries naturally arising in many physical processes, such as the capillarity effect, the growth of grain boundaries, the physics of binary alloys, and the formation of snow flakes.
One of the oldest problems in the area dates back to Lamé and Clapeyron who studied the freezing of the ground. Their goal was to determine the thickness of solid crust generated by the cooling of a liquid at constant temperature filling the half-space. In 1889, Stefan, while working on the freezing of the ground developed these ideas further and formulated the two-phase model which came to be known as the Stefan Problem.
The proof of the existence and uniqueness of a solution to the Stefan problem was developed in many stages; the general existence and uniqueness result was established by Shoshana Kamin.
References
Phase transitions
Applied mathematics | Phase boundary | [
"Physics",
"Chemistry",
"Mathematics"
] | 283 | [
"Thermodynamics stubs",
"Physical phenomena",
"Phase transitions",
"Applied mathematics",
"Phases of matter",
"Critical phenomena",
"Applied mathematics stubs",
"Thermodynamics",
"Statistical mechanics",
"Physical chemistry stubs",
"Matter"
] |
4,743,399 | https://en.wikipedia.org/wiki/Born%E2%80%93Huang%20approximation | The Born–Huang approximation (named after Max Born and Huang Kun) is an approximation closely related to the Born–Oppenheimer approximation. It takes into account diagonal nonadiabatic effects in the electronic Hamiltonian better than the Born–Oppenheimer approximation. Despite the addition of correction terms, the electronic states remain uncoupled under the Born–Huang approximation, making it an adiabatic approximation.
Shape
The Born–Huang approximation asserts that the representation matrix of nuclear kinetic energy operator in the basis of Born–Oppenheimer electronic wavefunctions is diagonal:
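Schematically, in notation assumed here rather than taken from the article: if χ_k(r; R) are the Born–Oppenheimer electronic wavefunctions and T_n is the nuclear kinetic energy operator, the assertion reads
\[
\left\langle \chi_{k'}(\mathbf{r};\mathbf{R}) \,\middle|\, \hat{T}_{\mathrm{n}} \,\middle|\, \chi_{k}(\mathbf{r};\mathbf{R}) \right\rangle_{(\mathbf{r})}
\;\approx\;
\delta_{k'k} \left\langle \chi_{k} \,\middle|\, \hat{T}_{\mathrm{n}} \,\middle|\, \chi_{k} \right\rangle_{(\mathbf{r})},
\]
i.e. the off-diagonal (nonadiabatic coupling) elements are neglected while the diagonal correction is retained.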
Consequences
The Born–Huang approximation loosens the Born–Oppenheimer approximation by including some electronic matrix elements, while at the same time maintains its diagonal structure in the nuclear equations of motion. As a result, the nuclei still move on isolated surfaces, obtained by the addition of a small correction to the Born–Oppenheimer potential energy surface.
Under the Born–Huang approximation, the Schrödinger equation of the molecular system simplifies to
The quantity serves as the corrected potential energy surface.
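In the same assumed notation, the nuclear equation for each electronic state k takes, schematically, the form
\[
\left[ \hat{T}_{\mathrm{n}} + E_k(\mathbf{R}) + \left\langle \chi_k \middle| \hat{T}_{\mathrm{n}} \middle| \chi_k \right\rangle_{(\mathbf{r})} \right] \phi_k(\mathbf{R}) = E\, \phi_k(\mathbf{R}),
\]
so that U_k(R) = E_k(R) + ⟨χ_k|T_n|χ_k⟩ plays the role of the corrected potential energy surface referred to above.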
Upper-bound property
The value of the Born–Huang approximation is that it provides an upper bound for the ground-state energy. The Born–Oppenheimer approximation, on the other hand, provides a lower bound for this value.
See also
Vibronic coupling
Born–Oppenheimer approximation
References
Quantum chemistry
Approximations
Max Born | Born–Huang approximation | [
"Physics",
"Chemistry",
"Mathematics"
] | 283 | [
"Quantum chemistry stubs",
"Approximations",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
"Mathematical relations",
" molecular",
"Atomic",
"Physical chemistry stubs",
" and optical physics"
] |
4,744,322 | https://en.wikipedia.org/wiki/Flux%20linkage | In electrical engineering the term flux linkage is used to define the interaction of a multi-turn inductor with the magnetic flux as described by Faraday's law of induction. Since the contributions of all turns in the coil add up, in the over-simplified situation of the same flux passing through all the turns, the flux linkage (also known as flux linked) is NΦ, where N is the number of turns and Φ is the flux through each turn. The physical limitations of the coil and the configuration of the magnetic field cause some flux to leak between the turns of the coil, forming the leakage flux and reducing the linkage. The flux linkage is measured in webers (Wb), like the flux itself.
Relation to inductance and reactance
In a typical application the term "flux linkage" is used when the flux is created by the electric current flowing through the coil itself. Per Hopkinson's law, Φ = F/R, where F is the magnetomotive force and R is the total reluctance of the coil. Since F = NI, where I is the current, the equation can be rewritten as NΦ = (N²/R)I = LI, where L = N²/R is called the inductance. Since the electrical reactance of an inductor is X_L = 2πfL, where f is the AC frequency, NΦ = X_L I/(2πf).
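A small numerical illustration of these relations in Python (the coil parameters are arbitrary example values, not taken from the article):

```python
import math

# Assumed example coil parameters
N = 200             # number of turns
reluctance = 5e5    # total magnetic reluctance, ampere-turns per weber
I = 0.5             # coil current, A
f = 50.0            # AC frequency, Hz

L = N**2 / reluctance        # inductance, H  (L = N^2 / reluctance)
flux = N * I / reluctance    # flux per turn, Wb  (Hopkinson's law with F = N*I)
flux_linkage = N * flux      # flux linkage N*Phi, equal to L*I
X_L = 2 * math.pi * f * L    # inductive reactance, ohms

print(f"L = {L*1e3:.1f} mH")
print(f"flux linkage = {flux_linkage*1e3:.1f} mWb-turns (L*I = {L*I*1e3:.1f})")
print(f"X_L = {X_L:.2f} ohm")
```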
In circuit theory
In circuit theory, flux linkage is a property of a two-terminal element. It is an extension rather than an equivalent of magnetic flux and is defined as a time integral
\[
\lambda = \int v\,\mathrm{d}t,
\]
where v is the voltage across the device, or the potential difference between the two terminals. This definition can also be written in differential form as a rate
\[
v = \frac{\mathrm{d}\lambda}{\mathrm{d}t}.
\]
Faraday showed that the magnitude of the electromotive force (EMF) generated in a conductor forming a closed loop is proportional to the rate of change of the total magnetic flux passing through the loop (Faraday's law of induction). Thus, for a typical inductance (a coil of conducting wire), the flux linkage is equivalent to magnetic flux, which is the total magnetic field passing through the surface (i.e., normal to that surface) formed by a closed conducting loop coil and is determined by the number of turns in the coil and the magnetic field, i.e.,
\[
\lambda = N\Phi = N \iint_{S} \mathbf{B} \cdot \mathrm{d}\mathbf{S},
\]
where B is the magnetic flux density, or magnetic flux per unit area at a given point in space, and S is the surface spanned by a single turn.
The simplest example of such a system is a single circular coil of conductive wire immersed in a magnetic field, in which case the flux linkage is simply the flux passing through the loop.
The flux through the surface delimited by a coil turn exists independently of the presence of the coil. Furthermore, in a thought experiment with a coil of N turns, where each turn forms a loop with exactly the same boundary, each turn will "link" the "same" (identically, not merely the same quantity) flux Φ, all for a total flux linkage of NΦ. The distinction relies heavily on intuition, and the term "flux linkage" is used mainly in engineering disciplines. Theoretically, the case of a multi-turn induction coil is explained and treated perfectly rigorously with Riemann surfaces: what is called "flux linkage" in engineering is simply the flux passing through the Riemann surface bounded by the coil's turns, hence no particularly useful distinction between flux and "linkage".
Due to the equivalence of flux linkage and total magnetic flux in the case of inductance, it is popularly accepted that the flux linkage is simply an alternative term for total flux, used for convenience in engineering applications. Nevertheless, this is not true, especially for the case of memristor, which is also referred to as the fourth fundamental circuit element. For a memristor, the electric field in the element is not as negligible as for the case of inductance, so the flux linkage is no longer equivalent to magnetic flux. In addition, for a memristor, the energy related to the flux linkage is dissipated in the form of Joule heating, instead of being stored in magnetic field, as done in the case of an inductance.
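For the memristor mentioned above, Chua's definition (see the reference below) relates this generalized flux linkage to charge rather than to a stored magnetic field; schematically,
\[
\varphi = \int v\,\mathrm{d}t, \qquad q = \int i\,\mathrm{d}t, \qquad M(q) = \frac{\mathrm{d}\varphi}{\mathrm{d}q},
\]
where M(q) is the memristance. It is the relation between φ and q, not any stored magnetic energy, that characterizes the element.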
References
Sources
L. O. Chua, "Memristor – The Missing Circuit Element", IEEE Trans. Circuit Theory, vol. CT_18, no. 5, pp. 507–519, 1971.
Electromagnetism
Thought experiments in physics | Flux linkage | [
"Physics"
] | 872 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
4,744,351 | https://en.wikipedia.org/wiki/Coordination%20polymer | A coordination polymer is an inorganic or organometallic polymer structure containing metal cation centers linked by ligands. More formally a coordination polymer is a coordination compound with repeating coordination entities extending in 1, 2, or 3 dimensions.
It can also be described as a polymer whose repeat units are coordination complexes. Coordination polymers contain the subclass coordination networks that are coordination compounds extending, through repeating coordination entities, in 1 dimension, but with cross-links between two or more individual chains, loops, or spiro-links, or a coordination compound extending through repeating coordination entities in 2 or 3 dimensions. A subclass of these are the metal-organic frameworks, or MOFs, that are coordination networks with organic ligands containing potential voids.
Coordination polymers are relevant to many fields, having many potential applications.
Coordination polymers can be classified in a number of ways according to their structure and composition. One important classification is referred to as dimensionality. A structure can be determined to be one-, two- or three-dimensional, depending on the number of directions in space the array extends in. A one-dimensional structure extends in a straight line (along the x axis); a two-dimensional structure extends in a plane (two directions, x and y axes); and a three-dimensional structure extends in all three directions (x, y, and z axes). This is depicted in Figure 1.
History
The work of Alfred Werner and his contemporaries laid the groundwork for the study of coordination polymers. Many time-honored materials are now recognized as coordination polymers. These include the cyanide complexes Prussian blue and Hofmann clathrates.
Synthesis and propagation
Coordination polymers are often prepared by self-assembly, involving crystallization of a metal salt with a ligand. The mechanisms of crystal engineering and molecular self-assembly are relevant.
The structure and dimensionality of the coordination polymer are determined by the linkers and the coordination geometry of the metal center. Coordination numbers are most often between 2 and 10. Examples of various coordination numbers are shown in planar geometry in Figure 2. In Figure 1 the 1D structure is 2-coordinated, the planar is 4-coordinated, and the 3D is 6-coordinated.
Metal centers
Metal centers, often called nodes or hubs, bond to a specific number of linkers at well defined angles. The number of linkers bound to a node is known as the coordination number, which, along with the angles they are held at, determines the dimensionality of the structure. The coordination number and coordination geometry of a metal center is determined by the nonuniform distribution of electron density around it, and in general the coordination number increases with cation size. Several models, most notably hybridization model and molecular orbital theory, use the Schrödinger equation to predict and explain coordination geometry, however this is difficult in part because of the complex effect of environment on electron density distribution.
Transition metals
Transition metals are commonly used as nodes. Partially filled d orbitals, either in the atom or ion, can hybridize differently depending on environment. This electronic structure causes some of them to exhibit multiple coordination geometries, particularly copper and gold ions which as neutral atoms have full d-orbitals in their outer shells.
Lanthanides
Lanthanides are large atoms with coordination numbers varying from 7 to 14. Their coordination environment can be difficult to predict, making them challenging to use as nodes. They offer the possibility of incorporating luminescent components.
Alkali metals and alkaline earth metals
Alkali metals and alkaline earth metals exist as stable cations. Alkali metals readily form cations with stable valence shells, giving them different coordination behavior than lanthanides and transition metals. They are strongly affected by the counterion from the salt used in synthesis, which is difficult to avoid. The coordination polymers shown in Figure 3 are all group two metals. In this case, the dimensionality of these structures increases as the radius of the metal increases down the group (from calcium to strontium to barium).
Ligands
Coordination polymers require ligands with the ability to form multiple coordination bonds, i.e. act as bridges between metal centers. Many bridging ligands are known. They range from polyfunctional heterocycles, such as pyrazine, to simple halides. Almost any type of atom with a lone pair of electrons can serve as a ligand.
Very elaborate ligands have been investigated, and ligands coordinating through other donor atoms, such as phosphorus, have been observed.
Structural orientation
Ligands can be flexible or rigid. A rigid ligand is one that has no freedom to rotate around bonds or reorient within a structure. Flexible ligands can bend, rotate around bonds, and reorient themselves. These different conformations create more variety in the structure. There are examples of coordination polymers that include two configurations of the same ligand within one structure, as well as two separate structures where the only difference between them is ligand orientation.
Ligand length
A length of the ligand can be an important factor in determining possibility for formation of a polymeric structure versus non-polymeric (mono- or oligomeric) structures.
Other factors
Counterion
Besides metal and ligand choice, there are many other factors that affect the structure of the coordination polymer. For example, most metal centers are positively charged ions which exist as salts. The counterion in the salt can affect the overall structure. For example, when silver salts such as AgNO3, AgBF4, AgClO4, AgPF6, AgAsF6 and AgSbF6 are all crystallized with the same ligand, the structures vary in terms of the coordination environment of the metal, as well as the dimensionality of the entire coordination polymer.
Crystallization environment
Additionally, variations in the crystallization environment can also change the structure. Changes in pH, exposure to light, or changes in temperature can all change the resulting structure. Influences on the structure based on changes in crystallization environment are determined on a case by case basis.
Guest molecules
The structure of coordination polymers often incorporates empty space in the form of pores or channels. This empty space is thermodynamically unfavorable. In order to stabilize the structure and prevent collapse, the pores or channels are often occupied by guest molecules. Guest molecules do not form bonds with the surrounding lattice, but sometimes interact via intermolecular forces, such as hydrogen bonding or pi stacking. Most often, the guest molecule will be the solvent that the coordination polymer was crystallized in, but can really be anything (other salts present, atmospheric gases such as oxygen, nitrogen, carbon dioxide, etc.) The presence of the guest molecule can sometimes influence the structure by supporting a pore or channel, where otherwise none would exist.
Applications
Some coordination polymers have been commercialized as dyes. Metal complex dyes using copper or chromium are commonly used for producing dull colors. Tridentate ligand dyes are useful because they are more stable than their bi- or monodentate counterparts.
Some early commercialized coordination polymers are the Hofmann compounds, which have the formula Ni(CN)4Ni(NH3)2. These materials crystallize with small aromatic guests (benzene, certain xylenes), and this selectivity has been exploited commercially for the separation of these hydrocarbons.
Research trends
Molecular storage
Although not yet practical, porous coordination polymers have potential as molecular sieves in parallel with porous carbon and zeolites. The size and shape of the pores can be controlled by the linker size and the connecting ligands' length and functional groups. To modify the pore size in order to achieve effective adsorption, nonvolatile guests are intercalated in the porous coordination polymer space to decrease the pore size. Active surface guests can also be used to contribute to adsorption. For example, the large-pore MOF-177, 11.8 Å in diameter, can be doped by C60 molecules (6.83 Å in diameter) or polymers with a highly conjugated system in order to increase the surface area for H2 adsorption.
Flexible porous coordination polymers are potentially attractive for molecular storage, since their pore sizes can be altered by physical changes. An example of this might be seen in a polymer that contains gas molecules in its normal state, but upon compression the polymer collapses and releases the stored molecules. Depending on the structure of the polymer, it is possible that the structure be flexible enough that collapsing the pores is reversible and the polymer can be reused to uptake the gas molecules again. The Metal-organic framework page has a detailed section dealing with H2 gas storage.
Luminescence
Luminescent coordination polymers typically feature organic chromophoric ligands, which absorb light and then pass the excitation energy to the metal ion. For ligands that fluoresce without the presence of the metal linker (not due to LMCT), the intense photoluminescence emission of these materials tend to be magnitudes of order higher than that of the free ligand alone. These materials are candidates for light emitting diode (LED) devices. The dramatic increase in fluorescence is caused by the increase in rigidity and asymmetry of the ligand when coordinated to the metal center.
Electrical conductivity
Coordination polymers can have short inorganic and conjugated organic bridges in their structures, which provide pathways for electrical conduction. Examples of such coordination polymers are conductive metal-organic frameworks. Some one-dimensional coordination polymers built as shown in the figure exhibit conductivities in the range of 1x10−6 to 2x10−1 S/cm. The conductivity is due to the interaction between the metal d-orbital and the pi* level of the bridging ligand. In some cases coordination polymers can have semiconductor behavior. Three-dimensional structures consisting of sheets of silver-containing polymers demonstrate semi-conductivity when the metal centers are aligned, and conduction decreases as the silver atoms go from parallel to perpendicular.
Magnetism
Coordination polymers exhibit many kinds of magnetism. Antiferromagnetism, ferrimagnetism, and ferromagnetism are cooperative phenomena of the magnetic spins within a solid arising from coupling between the spins of the paramagnetic centers. In order to allow efficient magnetic coupling, metal ions should be bridged by small ligands allowing for short metal-metal contacts (such as oxo, cyano, and azido bridges).
Sensor capability
Coordination polymers can also show color changes upon the change of solvent molecules incorporated into the structure. An example of this would be the two Co coordination polymers of the [Re6S8(CN)6]4− cluster that contains water ligands that coordinate to the cobalt atoms. This originally orange solution turns either purple or green with the replacement of water with tetrahydrofuran, and blue upon the addition of diethyl ether. The polymer can thus act as a solvent sensor that physically changes color in the presence of certain solvents. The color changes are attributed to the incoming solvent displacing the water ligands on the cobalt atoms, resulting in a change of their geometry from octahedral to tetrahedral.
References
Polymers | Coordination polymer | [
"Chemistry",
"Materials_science"
] | 2,288 | [
"Polymers",
"Polymer chemistry"
] |
4,744,787 | https://en.wikipedia.org/wiki/Blue-listed | Blue-listed species are species that belong to the Blue List and includes any indigenous species or subspecies (taxa) considered to be vulnerable in their locale in order to provide early warning to federal and regional governments. Vulnerable taxa are of special concern because of characteristics that make them particularly sensitive to human activities or natural events. Blue-listed taxa are at risk, but are not extirpated, endangered or threatened.
History
The concept of a Blue List was derived in 1971 by Robert Arbib from the National Audubon Society in his article, "Announcing-- The Blue List: an 'early warning' system for birds". The article stated that the list was made up for species that appear to be locally common in North America, but are undergoing non-cyclic declines. Starting from 1971, it was utilized to list vulnerable bird species throughout North America. Unlike the US Fish and Wildlife Endangered Species List, the Blue List was made to identify patterns of population losses for regional bird populations before they could be listed as endangered. Every decade after its release, the list was revisited and revised by regional editors, with species "nominated" to be added to the list. From then on, species that are part of the Blue List were referred to as Blue-listed species.
Status Ranks
Initially, in order to identify the types of risks that each Blue-listed species faces, the Blue List assigned each species to one of the following letter categories:
"A" : the species population is "greatly down in numbers"
"B" : the species population is "down in numbers"
"C" : the species population is experiencing no change
"D" : the species population is "up in numbers"
"E" : the species population is "greatly up in numbers"
Using this metric, regional editors were able to report on the species along with their status ranks in order to identify the population trends that each species is facing. Later on, the Government of British Columbia revised the status ranks such that Blue-listed species are listed based on a set of modifier codes.
See also
Biodiversity Action Plan
Red-listed
References
External links
British Columbia
Biota by conservation status
Environmental conservation | Blue-listed | [
"Biology"
] | 443 | [
"Biota by conservation status",
"Biodiversity"
] |
4,745,069 | https://en.wikipedia.org/wiki/Black-ray%20goby | Stonogobiops nematodes, the Filament-finned prawn-goby, the Antenna goby, the high-fin goby, the red-banded goby, the high-fin red-banded goby, the striped goby, the barber-pole goby, or the black-ray Goby, is a species of marine goby native to the Indian Ocean and western Pacific Ocean from the Seychelles to the Philippines and Bali.
Physical features
Adult fish can grow up to in length, with the striking pointed dorsal fin becoming more raised and pronounced in adulthood. This elongated fin is the most obvious distinguishing feature between the black-ray goby and its close cousin, the yellow snout goby (S. xanthorhinica). The fish are coloured with four diagonal brown stripes across a white body, and a distinctive yellow head.
It is almost impossible for anyone other than a specialist in these fishes to discern differences between males and females of the species.
Natural environment
This goby inhabits sandy or sand-rubble bottoms adjacent to reefs over a range of depths. It is one of several species that form commensal relationships with Randall's pistol shrimp (Alpheus randalli).
Behaviour in the wild
This species shares a burrow with its shrimp partner. The goby has much better eyesight than the shrimp, and, as such, acts as the watchman for both of them, keeping an eye out for danger. The shrimp spends the day digging a burrow in the sand in which both live. Burrows usually measure up to one inch in diameter, and can reach up to four feet in length. The two animals maintain continuous contact, with the shrimp placing one of its antennae permanently on the goby's tail. When danger threatens, the goby will make continuous flicks of its tail, warning the shrimp there is a predator nearby, and the shrimp will remain safely in the burrow. If the danger reaches a certain level, the goby will dart into the burrow after the shrimp.
At night, the goby will go into the burrow, and the shrimp will collapse the entrance to close it off. The burrow is exited the next day by the goby blasting its way out and collapsing the burrow. The shrimp then spends the next day laboriously rebuilding the entrance to the burrow. Both animals have also been known to share food with each other.
When the goby catches food, it will often give a portion of it to the pistol shrimp, through a process somewhat similar to a mother bird regurgitating food for her chicks. This way, the shrimp and the goby are kept well fed. Sometimes, if the shrimp is not kept well fed, it will resort to killing the goby.
In the wild, most burrows are shared by male and female goby pairs, with their respective shrimp partners, and the female goby will use this burrow as a nesting site to lay her eggs.
The obvious benefits to both organisms of this symbiotic relationship make the interaction a form of mutualism. A related example involves dartfish (genus Ptereleotris), which are often found as unwelcome but ignored guests sharing the burrow with the goby and shrimp.
Aquarium keeping
Black-ray goby in commercial trade
This goby is in high demand for home and hobby marine aquaria due to its beautiful colouration, docile nature and interesting interaction with symbiotic shrimp. This type of goby is the most common Stonogobiops species to show up in the marine trade, but is still quite rare.
Behaviour and compatibility
This fish is very docile and poses almost no threat to any other stock inhabiting a typical marine aquarium. This passiveness makes it a perfect tankmate for delicate species like sea horses or pipefish. In fact, it is in reality quite shy, and when first introduced into an aquarium, may take up to several weeks before it is bold enough to leave its hiding place, or bolt hole. While this fish can display aggression towards other tank inhabitants by opening its mouth and "yawning" at them, this is mostly show, and the goby will quickly turn tail and hide if confronted.
The goby will spend most of its time hovering about two inches above its bolt hole, searching for scraps of food in the water column. If scared or startled, it will slowly retreat towards its hole. If the danger does not go away, it will dart inside at lightning speed.
Mated pairs of this fish are very rare and difficult to attain. Individual males may fight if placed in a tank smaller than about 50 gallons (~200 litres).
Tank environment
For successful aquarium culture, this fish needs good sand/coral rubble cover for burrow-building and much rock cover; a reef environment is suitable. The recommended minimum tank size is 10 gallons (40 litres), however these fish mainly hide out in burrows all day, and are not active swimmers, making them candidates for smaller "pico" aquariums. (Aquariums 4 gallons and under.) The water specific gravity should be 1.020 - 1.025, with a pH of 8.1 - 8.4; water temperature at 72 - 78 °F / 22 - 25 °C is ideal, however water temperature up to 80 °F will not harm the fish.
Care and maintenance
Small meaty foods, such as mysid (sometimes referred to as mysis shrimp) or brine shrimp, along with flake food and algae wafers, spirulina, etc. are all happily accepted. In the wild, these gobies most often feed on zooplankton. The water quality must be kept reasonably high, as with all marine species. A substrate of small-grained coral sand, with larger particles mixed in (preferably four inches or deeper) is ideal for the goby/shrimp pair to make their burrow.
References
External links
General information
General information on this type of goby, from About.com
Some general/taxonomic info
Wetwetmedia.com link, contains some useful information, along with helpful FAQs
Shrimp-goby interaction
Interesting information on a parallel relationship between other goby/shrimp types in the Atlantic
Excellent general information on this goby, and the genus Stonogobiops in general, along with their mutualism with pistol shrimp
- An interesting study analysing daily shrimp activity cycles, the effects of goby presence on shrimp behavior, and the effects of predation on numerical density and size of gobies
Black-ray goby
Symbiosis
Fish described in 1982 | Black-ray goby | [
"Biology"
] | 1,369 | [
"Biological interactions",
"Behavior",
"Symbiosis"
] |
8,160,680 | https://en.wikipedia.org/wiki/Biocontainment%20of%20genetically%20modified%20organisms | Since the advent of genetic engineering in the 1970s, concerns have been raised about the dangers of the technology. Laws, regulations, and treaties were created in the years following to contain genetically modified organisms and prevent their escape. Nevertheless, there are several examples of failure to keep GM crops separate from conventional ones.
Overview
In the context of agriculture and food and feed production, co-existence means using cropping systems with and without genetically modified crops in parallel. In some countries, such as the United States, co-existence is not governed by any single law but instead is managed by regulatory agencies and tort law. In other regions, such as Europe, regulations require that the separation and the identity of the respective food and feed products must be maintained at all stages of the production process.
Many consumers are critical of genetically modified plants and their products, while, conversely, most experts in charge of GMO approvals do not perceive concrete threats to health or the environment. The compromise chosen by some countries - notably the European Union - has been to implement regulations specifically governing co-existence and traceability. Traceability has become commonplace in the food and feed supply chains of most countries in the world, but the traceability of GMOs is made more challenging by the addition of very strict legal thresholds for unwanted mixing. Within the European Union, since 2001, conventional and organic food and feedstuffs can contain up to 0.9% of authorised GM material without being labelled GM (any trace of non-authorised GM products would cause shipments to be rejected).
In the United States there is no legislation governing the co-existence of neighboring farms growing organic and GM crops; instead the US relies on a "complex but relaxed" combination of three federal agencies (FDA, EPA, and USDA/APHIS) and the common law tort system, governed by state law, to manage risks of co-existence.
Containment measures
To limit mixing in the first stages of production, researchers and politicians are developing codes of good agricultural practice for GM crops. In addition to the thorough cleaning of machinery, recommended measures include the establishment of "isolation distances" and "pollen barriers". Isolation distances are the minimum distances required between GM and non-GM cultivations for most of the GM pollen to fall to the ground before reaching non-GM plants. Pollen barriers attempt to actively catch pollen, and can consist of hedges and trees which physically hinder pollen movement. Pollen barriers consisting of conventional crops of the same species as the GM crop have a special advantage, as the conventional plants not only physically limit the GM pollen flow, but also produce competitive, conventional pollen. During harvest, the buffer strip of conventional crops is considered part of the GM crop yield.
Biological approaches
In addition to agricultural measures, there may also be biological tools to prevent the genetically modified crop from fertilising conventional fields. Researchers are investigating methods either to prevent GM crops from producing pollen at all (for example male-sterile plants), or to develop GM crops with pollen that nonetheless does not contain the additional, genetically engineered material. In an example of the latter, transplastomic plants can be generated in which the genetic modification has been integrated in the DNA of chloroplasts. As the chloroplasts of plants are maternally inherited, the transgenes are not spread by pollen, thus achieving biological containment. In other words, the cell nucleus contains no transgenes, and the pollen contains no chloroplasts and thus no transgenes. Two important research projects on co-existence are SIGMEA and Co-Extra. With the end of the de facto moratorium on genetically modified plants in Europe, several research programmes (e.g. SIGMEA, Co-Extra, and Transcontainer) have begun investigating biological containment strategies for GMOs.
While SIGMEA was focused on co-existence at the farm level, Co-Extra studies co-existence along the whole production chain, and has a second focus on the traceability of GMOs, since co-existence cannot work without traceability. To be able to monitor and enforce compliance with co-existence regulations, authorities require the ability to trace, detect and identify GMOs.
Regulation and policy
The development of a regulatory framework concerning genetic engineering began in 1975, at Asilomar, California. The first use of Recombinant DNA (rDNA) technology had just been successfully accomplished by Stanley Cohen and Herbert Boyer two years previously and the scientific community recognized that, as well as benefits, this technology could also pose some risks. The Asilomar meeting recommended a set of guidelines regarding the cautious use of recombinant technology and any products resulting from that technology. The Asilomar recommendations were voluntary, but in 1976 the US National Institutes of Health (NIH) formed an rDNA advisory committee. This was followed by other regulatory offices (the United States Department of Agriculture (USDA), Environmental Protection Agency (EPA) and Food and Drug Administration (FDA)), effectively making all rDNA research tightly regulated in the USA.
In 1982 the Organisation for Economic Co-operation and Development (OECD) released a report into the potential hazards of releasing genetically modified organisms into the environment as the first transgenic plants were being developed. As the technology improved and genetically modified organisms moved from model organisms to potential commercial products, the USA established a committee at the Office of Science and Technology Policy (OSTP) to develop mechanisms to regulate the developing technology. In 1986 the OSTP assigned regulatory approval of genetically modified plants in the US to the USDA, FDA and EPA.
The Cartagena Protocol on Biosafety was adopted on 29 January 2000 and entered into force on 11 September 2003. It is an international treaty that governs the transfer, handling, and use of genetically modified (GM) organisms. It is focussed on movement of GMOs between countries and has been called a de facto trade agreement. One hundred and fifty-seven countries are members of the Protocol and many use it as a reference point for their own regulations.
In the face of continuing concerns about the economic losses that might be suffered by organic farmers through unintended intermixing, the U.S. Secretary of Agriculture convened an Advisory Committee on Biotechnology and 21st Century Agriculture (AC21) to study the issue and make recommendations as to whether to address these concerns and, if so, how, including what mechanisms might compensate for economic losses to farmers caused by the unintended presence of genetically engineered materials, as well as how such mechanisms might work. The members of AC21 included representatives of the biotechnology industry, the organic food industry, farming communities, the seed industry, food manufacturers, State government, consumer and community development groups, the medical profession, and academic researchers. The AC21 recommended that a study should be conducted to answer the question of whether and to what extent there are any economic losses to US organic farmers; recommended that, if the losses are serious, a crop insurance program for organic farmers be put in place; and recommended that an education program should be undertaken to ensure that organic farmers are putting appropriate contracts in place for their crops and that neighboring GM crop farmers are taking appropriate containment measures. Overall the report supported a diverse agriculture system in which many different farming systems could co-exist.
Compensation for failure to maintain separation
Since GM-free products yield higher prices in many countries, some governments have introduced limits for the mixing of both production systems, with compensation for non-GM farmers for economic losses in cases where mixing inadvertently occurred. One tool for compensation is a liability fund, to which all GM farmers, and sometimes GM seed producers, contribute. After a notable GMO contamination event in Western Australia where a certified organic farm lost certification due to GMO contamination, a Parliamentary Inquiry considered six proactive proposals for compensating farms contaminated by GMOs, however the Inquiry did not recommend a particular mechanism of compensation.
Notable escapes
Mixing can occur already at the agricultural stage. Fundamentally, two reasons exist for the presence of GMOs in the harvest of a non-GM cultivation: first, that the seed has been contaminated already or, secondly, that the plants in the non-GM field have received pollen from neighbouring GM fields. Mixing may also occur post-harvest, anywhere in the production chain.
1990s
In 1997, Percy Schmeiser discovered that canola growing on his farm was genetically modified to be resistant to Roundup although he had not planted GM seed. He had initially discovered that some canola growing by a roadside along one of his fields was Roundup resistant when he was killing weeds along the road; this led him to spray a 3- to 4‑acre section of his adjacent field and 60% of the canola survived. Schmeiser harvested the seed from the surviving, Roundup resistant plants, and planted the seed in 1998. Monsanto sued Schmeiser for patent infringement for the 1998 planting. Schmeiser claimed that because the 1997 plants grew from seed that was blown into his field from neighboring fields, that he owned the harvest and was entitled to do with it whatever he wished, including saving the seeds from the 1997 harvest and planting them in 1998. The case (Monsanto Canada Inc v Schmeiser) went to the Supreme Court which held for Monsanto by a 5‑4 vote in late May 2004. The case is widely cited or referenced by the anti-GM community in the context of a fear of a company claiming ownership of a farmer's crop based on the inadvertent presence of GM pollen grain or seed. "The court record shows, however, that it was not just a few seeds from a passing truck, but that Mr Schmeiser was growing a crop of 95–98% pure Roundup Ready plants, a commercial level of purity far higher than one would expect from inadvertent or accidental presence. The judge could not account for how a few wayward seeds or pollen grains could come to dominate hundreds of acres without Mr Schmeiser's active participation, saying '. . .none of the suggested sources could reasonably explain the concentration or extent of Roundup Ready canola of a commercial quality evident from the results of tests on Schmeiser's crop'" – in other words, the original presence of Monsanto seed on his land in 1997 was indeed inadvertent, but the crop in 1998 was entirely purposeful.
In 1999 scientists in Thailand claimed they discovered glyphosate-resistant genetically modified wheat that was not yet approved for release in a grain shipment from the Pacific Northwest of the United States, even though transgenic wheat had never been approved for sale and was only ever grown in test plots. No one could explain how the transgenic wheat got into the food supply.
2000s
In 2000, Aventis StarLink corn, which had been approved only as animal feed due to concerns about possible allergic reactions in humans, was found contaminating corn products in U.S. supermarkets and restaurants. This corn became the subject of a widely publicized recall, when Taco Bell taco shells were found to contain the corn, eventually resulting in the recall of over 300 products. It was the first-ever recall of a genetically modified food. The registration for the Starlink varieties was voluntarily withdrawn by Aventis in October 2000.
In 2005, scientists at the UK Centre for Ecology and Hydrology reported the first evidence of horizontal gene transfer of pesticide resistance to weeds, in a few plants from a single season; they found no evidence that any of the hybrids had survived in subsequent seasons.
In 2006, American exports of rice to Europe were interrupted when the U.S. crop was contaminated with rice containing the LibertyLink modification, which had not been approved for release. An investigation by the USDA's Animal and Plant Health Inspection Service (APHIS) was unable to determine the cause of the contamination.
In 2007, the U.S. Department of Agriculture fined Scotts Miracle-Gro $500,000 when modified genetic material from creeping bentgrass, a new golf-course grass Scotts had been testing, was found within close relatives of the same genus (Agrostis) as well as in native grasses up to away from the test sites, released when freshly cut grass was blown by the wind.
In 2009 the government of Mexico created a regulatory pathway for approval of genetically modified maize, but because Mexico is the center of diversity for maize, concerns have been raised about the effect that genetically modified maize could have on local strains. A 2001 report in Nature presented evidence that Bt maize was cross-breeding with unmodified maize in Mexico, although the data in this paper was later described as originating from an artifact and Nature stated that "the evidence available is not sufficient to justify the publication of the original paper". A subsequent large-scale study, in 2005, failed to find any evidence of contamination in Oaxaca. However, other authors have stated that they also found evidence of cross-breeding between natural maize and transgenic maize.
2010s
A study published in 2010 by scientists at the University of Arkansas, North Dakota State University, California State University and the US Environmental Protection Agency showed that about 83 percent of wild or weedy canola tested contained genetically modified herbicide resistance genes. According to the researchers, the lack of such reports suggests that the oversight and monitoring protocols in place in the US are inadequate. The development of weeds resistant to glyphosate, the most commonly applied herbicide, could mean that farmers must return to more labour-intensive methods to control weeds, use more dangerous herbicides or till the soil (so increasing the risk of erosion). A 2010 report by the National Academy of Sciences stated that the advent of glyphosate-herbicide resistant weeds could cause the genetically engineered crops to lose their effectiveness unless farmers also use other established weed management strategies. In Australia, some of a 2010 planting of Monsanto's Roundup-Ready (RR) canola blew across a neighboring organic farm. The organic farm lost its organic certification and the organic farmer sued the GM farmer - so far without success. The certifier called it "contamination" and in the 2014 judgement the judge called it an "incursion" and rejected claims for nuisance, negligence and damages.
In 2013, glyphosate-resistant genetically modified wheat that had not been approved for release, but had been declared safe for consumption in the USA, was discovered on a farm in Oregon, growing as a weed or "volunteer plant". The wheat had been created by Monsanto and was a strain field-tested from 1998 to 2005; it had been in the American regulatory approval process before Monsanto withdrew it out of concern that importers would avoid the crop. The last field test in Oregon had occurred in 2001. Volunteer wheat from a field two miles away, owned by the same farmer and planted with the same seed, was tested and found not to be glyphosate-resistant. Monsanto faced fines of up to $1 million if violations of the Plant Protection Act were found. Monsanto said it was "mystified" by the wheat's appearance, having destroyed all the material it held after completing trials in 2004, and did not think that seed left in the ground or pollen transfer could account for it. Later in the month, Monsanto suggested that the presence of the wheat was likely an act of "sabotage".
The discovery could have threatened U.S. wheat exports, which totaled $8.1 billion in 2012; the US is the world's largest wheat exporter. New Scientist reported that the variety was rarely imported into Europe and doubted that the discovery would affect European imports; the affected wheat was more likely destined for Asia. As a result of the discovery of the unapproved strain, Japan and South Korea halted wheat orders from the United States, leaving wheat growers in neighboring communities uncertain what to plant the next season. The crop growing when the genetically modified wheat was discovered had already been sold or insured. On June 14, 2013, the USDA announced: "As of today, USDA has neither found nor been informed of anything that would indicate that this incident amounts to more than a single isolated incident in a single field on a single farm. All information collected so far shows no indication of the presence of GE wheat in commerce." As of August 30, while the source of the GM wheat remained unknown, Japan, South Korea and Taiwan had all resumed placing orders and the export market had recovered. The Oregon wheat commissioner, Blake Rowe, said that "the overall economic impact has been minimal".
References
External links
Co-Extra – EU research programme on co-existence and traceability along the whole production chain
SIGMEA – EU research programme on co-existence in agriculture
Transcontainer – EU research programme on biological containment systems for genetically modified plants
GMO-Compass – facts, numbers, and news about GM crops in Europe
Research projects: Biological confinement of new genes - methods for containing the spread of genetically modified plants
Genetic engineering and agriculture
Genetically modified organisms
Genetic engineering
Biological techniques and tools
Regulation of genetically modified organisms
Biological contamination | Biocontainment of genetically modified organisms | ["Chemistry", "Engineering", "Biology"] | 3,462 | ["Regulation of genetically modified organisms", "Biological engineering", "Genetically modified organisms", "Regulation of biotechnologies", "Genetic engineering", "Genetic engineering and agriculture", "nan", "Molecular biology"] |
8,161,079 | https://en.wikipedia.org/wiki/Plant%20Genetic%20Systems | Plant Genetic Systems (PGS), since 2002 part of Bayer CropScience, is a biotech company located in Ghent, Belgium. The focus of its activities is the genetic engineering of plants. The company is best known for its work in the development of insect-resistant transgenic plants.
Its origin goes back to the work of Marc Van Montagu and Jeff Schell at the University of Ghent, who were among the first to assemble a practical system for genetic engineering of plants. They developed a vector system for transferring foreign genes into the plant genome using the Ti plasmid of Agrobacterium tumefaciens. They also found a way to make plant cells resistant to the antibiotic kanamycin by transferring a bacterial neomycin phosphotransferase gene into the plant genome. In 1985, PGS became the first company to develop genetically engineered (tobacco) plants with insect tolerance, by expressing genes encoding insecticidal proteins from Bacillus thuringiensis (Bt).
History
The company was founded in 1982 by Marc Van Montagu and Jeff Schell, who worked at the University of Ghent, Belgium. In 1996, the company was acquired by AgrEvo. In 1999, a field trial of AgrEvo genetically modified maize near Lyng, Norfolk, was the target of a Greenpeace direct action and featured in the subsequent court case against the activists. In 2000, Aventis CropScience was formed through a merger of AgrEvo and Rhône-Poulenc Agro.
In 2000, StarLink, a genetically modified maize developed by Plant Genetic Systems, was detected in over 300 consumer food products in the US, triggering a recall. StarLink had not been approved for human consumption by the FDA. Following the recalls, PGS at first tried to get approval for human consumption and then withdrew the product entirely from the market.
In 2002, Bayer CropScience was formed through Bayer's acquisition of the plant biotech branch Aventis CropScience.
See also
CropDesign
Flanders Interuniversity Institute of Biotechnology (VIB)
Marc Zabeau
References
Hofte H, de Greve H, Seurinck J, Jansens S, Mahillon J, Ampe C, Vandekerckhove J, Vanderbruggen H, van Montagu M, Zabeau M, et al. (1986). Structural and functional analysis of a cloned delta endotoxin of Bacillus thuringiensis berliner 1715. Eur. J. Biochem. 161(2): 273–280.
Leemans J (1993). Ti to Tomato, Tomato to Market: A decade of plant biotechnology. Bio/Technology 11 (March 1993).
Vaeck M, Reynaerts A, Hofte H, Jansens S, De Beuckeleer M, Dean C, Zabeau M, Van Montagu M, Leemans J (1987). Transgenic plants protected from insect attack. Nature 328: 33–37.
External links
Institute of Plant Biotechnology for Developing Countries
Biological pest control
Genetic engineering and agriculture
Biotechnology companies of Belgium
Biotechnology companies established in 1982
1982 establishments in Belgium
Companies based in Ghent | Plant Genetic Systems | ["Engineering", "Biology"] | 664 | ["Genetic engineering and agriculture", "Genetic engineering"] |
8,163,713 | https://en.wikipedia.org/wiki/Galenic%20formulation | Galenic formulation deals with the principles of preparing and compounding medicines in order to optimize their absorption. It is named after Claudius Galen, a 2nd-century AD Greek physician who codified the preparation of drugs using multiple ingredients. Today, galenic formulation is part of pharmaceutical formulation. A medicine's pharmaceutical formulation affects its pharmacokinetics, pharmacodynamics and safety profile.
See also
Formulations
Pharmaceutical formulation
ADME
Pharmacology
Medicinal chemistry
Pesticide formulation
Medicinal chemistry | Galenic formulation | ["Chemistry", "Biology"] | 109 | ["Medicinal chemistry stubs", "Biochemistry stubs", "nan", "Medicinal chemistry", "Biochemistry"] |