id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
4,367,424 | https://en.wikipedia.org/wiki/Lov%C3%A1sz%20conjecture | In graph theory, the Lovász conjecture (1969) is a classical problem on Hamiltonian paths in graphs. It says:
Every finite connected vertex-transitive graph contains a Hamiltonian path.
Originally László Lovász stated the problem in the opposite way, but this version became standard. In 1996, László Babai published a conjecture sharply contradicting it, but both conjectures remain wide open. It is not even known whether a single counterexample would necessarily lead to a series of counterexamples.
Historical remarks
The problem of finding Hamiltonian paths in highly symmetric graphs is quite old. As Donald Knuth describes it in volume 4 of The Art of Computer Programming, the problem originated in British campanology (bell-ringing). Such Hamiltonian paths and cycles are also closely connected to Gray codes. In each case the constructions are explicit.
Variants of the Lovász conjecture
Hamiltonian cycle
Another version of the Lovász conjecture states that
Every finite connected vertex-transitive graph contains a Hamiltonian cycle except the five known counterexamples.
There are 5 known examples of vertex-transitive graphs with no Hamiltonian cycles (but with Hamiltonian paths): the complete graph K2, the Petersen graph, the Coxeter graph and two graphs derived from the Petersen and Coxeter graphs by replacing each vertex with a triangle.
Cayley graphs
None of the 5 vertex-transitive graphs with no Hamiltonian cycles is a Cayley graph. This observation leads to a weaker version of the conjecture:
Every finite connected Cayley graph contains a Hamiltonian cycle.
The advantage of the Cayley graph formulation is that such graphs correspond to a finite group G and a generating set S. Thus one can ask for which G and S the conjecture holds rather than attack it in full generality.
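To make the Cayley-graph formulation concrete, the following Python sketch (not from the article) builds the Cayley graph of the small symmetric group S4, using a transposition and a 4-cycle as the generating set purely as an example, and searches for a Hamiltonian cycle by brute-force backtracking. All function names are illustrative, and the approach is only feasible for very small groups.

```python
from itertools import permutations

def compose(p, q):
    """Compose two permutations given as tuples: (p*q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def inverse(p):
    """Inverse of a permutation given as a tuple."""
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def cayley_graph(generators):
    """Undirected Cayley graph: vertices are all permutations of the same size,
    and v is joined to v*g for every generator g (and its inverse)."""
    n = len(generators[0])
    gens = list(generators) + [inverse(g) for g in generators]
    vertices = list(permutations(range(n)))
    return {v: {compose(v, g) for g in gens} for v in vertices}

def hamiltonian_cycle(adj):
    """Backtracking search for a Hamiltonian cycle; only viable for tiny graphs."""
    start = next(iter(adj))
    path, seen = [start], {start}

    def extend():
        if len(path) == len(adj):
            return start in adj[path[-1]]          # close the cycle
        for nxt in adj[path[-1]]:
            if nxt not in seen:
                path.append(nxt); seen.add(nxt)
                if extend():
                    return True
                path.pop(); seen.remove(nxt)
        return False

    return path if extend() else None

# Example: S4 generated by a transposition and a 4-cycle (as permutation tuples).
gens = [(1, 0, 2, 3),            # transposition swapping the first two elements
        (1, 2, 3, 0)]            # 4-cycle
cycle = hamiltonian_cycle(cayley_graph(gens))
print("Hamiltonian cycle found" if cycle else "none found", "- vertices:", len(cycle or []))
```

For this particular generating set (a long cycle together with a transposition) the search succeeds, consistent with the known cases listed below.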
Directed Cayley graph
For directed Cayley graphs (digraphs) the Lovász conjecture is false. Various counterexamples were obtained by Robert Alexander Rankin. Still, many of the below results hold in this restrictive setting.
Special cases
Every directed Cayley graph of an abelian group has a Hamiltonian path; however, every cyclic group whose order is not a prime power has a directed Cayley graph that does not have a Hamiltonian cycle.
In 1986, D. Witte proved that the Lovász conjecture holds for the Cayley graphs of p-groups. It is open even for dihedral groups, although for special sets of generators some progress has been made.
For the symmetric group Sn, there are many attractive generating sets. For example, the Lovász conjecture holds in the following cases of generating sets:
a long cycle and a transposition.
the Coxeter generators (adjacent transpositions). In this case a Hamiltonian cycle is generated by the Steinhaus–Johnson–Trotter algorithm (see the sketch after this list).
any set of transpositions corresponding to a labelled tree on the n elements being permuted.
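As an illustration of the Coxeter-generator case above, here is a minimal Python sketch of the Steinhaus–Johnson–Trotter algorithm in its "directed integers" (Johnson–Trotter) form; it lists the permutations of {0, …, n−1} so that consecutive permutations (and the last and the first) differ by a single adjacent transposition, i.e. it traces a Hamiltonian cycle in the corresponding Cayley graph.

```python
def steinhaus_johnson_trotter(n):
    """Yield all permutations of 0..n-1 so that consecutive permutations
    differ by one adjacent transposition (Johnson-Trotter 'directed integers')."""
    perm = list(range(n))
    direction = [-1] * n                 # every element initially points left
    yield tuple(perm)
    while True:
        # find the largest "mobile" element (one pointing at a smaller neighbour)
        mobile_idx = -1
        for i, v in enumerate(perm):
            j = i + direction[v]
            if 0 <= j < n and perm[j] < v and (mobile_idx == -1 or v > perm[mobile_idx]):
                mobile_idx = i
        if mobile_idx == -1:
            return                       # no mobile element: all permutations generated
        v = perm[mobile_idx]
        j = mobile_idx + direction[v]
        perm[mobile_idx], perm[j] = perm[j], perm[mobile_idx]   # adjacent swap
        for w in perm:                   # reverse direction of every element larger than v
            if w > v:
                direction[w] = -direction[w]
        yield tuple(perm)

perms = list(steinhaus_johnson_trotter(4))
assert len(perms) == 24 and len(set(perms)) == 24   # visits every vertex exactly once
# consecutive permutations (and the last back to the first) differ by one adjacent swap
```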
Stong has shown that the conjecture holds for the Cayley graph of the wreath product Zm wr Zn with the natural minimal generating set when m is either even or three. In particular this holds for the cube-connected cycles, which can be generated as the Cayley graph of the wreath product Z2 wr Zn.
General groups
For general finite groups, only a few results are known:
(Rankin generators)
(Rapaport–Strasser generators)
(Pak–Radoičić generators)
where (here we have (2,s,3)-presentation, Glover–Marušič theorem).
Finally, it is known that for every finite group G there exists a generating set of size at most log2|G| such that the corresponding Cayley graph is Hamiltonian (Pak–Radoičić). This result is based on the classification of finite simple groups.
The Lovász conjecture was also established for random generating sets of size .
References
Algebraic graph theory
Conjectures
Unsolved problems in graph theory
Finite groups
Group theory
Hamiltonian paths and cycles | Lovász conjecture | [
"Mathematics"
] | 800 | [
"Unsolved problems in mathematics",
"Mathematical structures",
"Finite groups",
"Graph theory",
"Group theory",
"Fields of abstract algebra",
"Conjectures",
"Mathematical relations",
"Unsolved problems in graph theory",
"Algebraic structures",
"Mathematical problems",
"Algebra",
"Algebraic g... |
4,367,479 | https://en.wikipedia.org/wiki/Atkinson%20resistance | Atkinson resistance is commonly used in mine ventilation to characterise the resistance to airflow of a duct of irregular size and shape, such as a mine roadway. It has the symbol R and is used in the square law for pressure drop,
p = R (ρ / ρ0) Q²
where (in English units)
p is the pressure drop (pounds per square foot),
ρ is the air density in the duct (pounds per cubic foot),
ρ0 is the standard air density (0.075 pound per cubic foot),
R is the resistance (atkinsons),
Q is the rate of flow of air (thousands of cubic feet per second).
One atkinson is defined as the resistance of an airway which, when air flows along it at a rate of 1,000 cubic feet per second, causes a pressure drop of one pound-force per square foot.
The unit is named after J J Atkinson, who published one of the earliest comprehensive mathematical treatments of mine ventilation. Atkinson based his expressions for airflow resistance on the more general work of Chézy and Darcy, who defined frictional pressure drop as
p = f (per L / A) (ρ u² / 2)
where
p is the pressure drop,
ρ is the density of the fluid in question (water, air, oil etc.),
f is the Fanning friction factor,
L is the length of the duct,
per is the perimeter of the duct,
A is the area of the duct,
u is the velocity of the fluid.
The practicalities of mine ventilation led Atkinson to group some of these variables into one all-encompassing term:
Area and perimeter were incorporated because mine airways are of irregular shape, and both vary along the length of an airway.
Velocity was replaced by the ratio of flowrate to area (u = Q/A) because variations in area cause variations in velocity. Area was then incorporated into the denominator of the Atkinson resistance term.
Length of the airway was incorporated. This may have been a step too far, as most of his successors chose to give values of Atkinson resistance in terms of atkinsons per unit length (often 100 or 1,000 yards).
The density term was incorporated, which later authors definitely considered a step too far (e.g. McPherson, 1988). In Atkinson's time not only were all British mines shallow enough that the density of air could be considered constant, but fan design was primitive enough that variations in density would make no measurable difference to the amount of motive power required. Atkinson did not foresee that his methods would be applied several miles underground, where air is 30–50% denser than it is at the surface. Density variations of this magnitude can alter the power consumption of colliery ventilation fans by hundreds of kilowatts.
The resulting term is one that can be easily calculated from the results of two simple measurements: a pressure survey by the gauge and tube method and a flowrate survey with a counting anemometer. This is a major strength and is the reason why Atkinson resistance remains in use today.
A complete definition of Atkinson resistance can also be written in more common fluid-flow terms, using the hydraulic radius, the hydraulic diameter and the Darcy friction factor in addition to the terms defined above.
Atkinson also defined a friction factor (the Atkinson friction factor) used for airways of fixed section such as shafts. It accounts for the Fanning friction factor, the density and the numerical constant, and relates to Atkinson resistance (in consistent units) by
R = k per L / A³
where k is the Atkinson friction factor and the other terms are as defined above.
Despite its weakness with regards to density changes, the use of Atkinson resistance is so widespread in the mining industry that a corresponding term in metric units has also been defined. It, too, is termed the atkinson resistance but the unit was given the name gaul (for reasons unknown). The earliest known use of the name is a 1971 British Coal memorandum on metrication, VB/CIRC/71(26).
One gaul is defined as the resistance of an airway which, when air (of density 1.2 kg/m3) flows along it at a rate of one cubic metre per second, causes a pressure drop of one pascal. The gaul has units of N·s2/m8, or alternatively Pa·s2/m6.
It uses the same basic equation as its Imperial counterpart, but with slightly different dimensions:
p = R (ρ / ρ0) Q²
where
p is the pressure drop (pascals),
ρ is the air density in the air duct (kilograms per cubic metre),
ρ0 is the standard air density (1.2 kilograms per cubic metre),
R is the resistance of the air path (gauls),
Q is the rate of flow of air (cubic metres per second).
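A minimal Python sketch of the metric square law as defined above; the resistance, flow and density in the example call are arbitrary values chosen only to show the arithmetic, and the assertion simply confirms that 1 gaul at 1 m³/s and standard density gives a 1 Pa drop, as in the definition of the gaul.

```python
RHO_STD = 1.2  # standard air density, kg/m^3

def pressure_drop_pa(resistance_gauls, flow_m3s, density_kgm3=RHO_STD):
    """Atkinson square law in metric units: p = R * (rho / 1.2) * Q^2."""
    return resistance_gauls * (density_kgm3 / RHO_STD) * flow_m3s ** 2

# Definition check: 1 gaul, 1 m^3/s, standard density -> 1 Pa
assert abs(pressure_drop_pa(1.0, 1.0) - 1.0) < 1e-12

# Arbitrary example: a roadway of 0.05 gauls carrying 120 m^3/s of air at 1.1 kg/m^3
print(pressure_drop_pa(0.05, 120.0, 1.1))   # ~660 Pa
```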
The metric and Imperial resistances are related by
where g is the standard acceleration of gravity (metres per second squared).
The metric equivalent is now more widely used than the original Imperial definition. Most suppliers quote resistances of flexible temporary ventilation ducts in gauls/100 m and in most mine ventilation software programs, branch resistances are given in gauls.
References
National Coal Board Information Bulletin 55/153, Planning the Ventilation of New and Reorganised Collieries, 1955
National Coal Board Mining Dept, Coal mine ventilation: a handbook for colliery ventilation engineers, NCB 1979
McPherson, M J, An analysis of the resistance and airflow characteristics of mine shafts, Proceedings of the 4th International Mine Ventilation Congress, Brisbane, 1988
National Coal Board Memorandum VB/CIRC/71(26), from DER Lloyd to All Area Ventilation Engineers: "Metrication - Airway Resistance", 5 May 1971
Further reading
Atkinson, J J, Gases met with in Coal Mines, and the general principles of Ventilation Transactions of the Manchester Geological Society, Vol. III, p. 218, 1862
McPherson, M J, Subsurface Ventilation Engineering, 2nd edition (1st edition published by Chapman & Hall, 1993)
Fluid dynamics
Mine ventilation | Atkinson resistance | [
"Chemistry",
"Engineering"
] | 1,148 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
4,370,004 | https://en.wikipedia.org/wiki/Lepton%20epoch | In cosmological models of the Big Bang, the lepton epoch was the period in the evolution of the early universe in which the leptons dominated the mass of the Universe. It started roughly 1 second after the Big Bang, after the majority of hadrons and anti-hadrons annihilated each other at the end of the hadron epoch. During the lepton epoch, the temperature of the Universe was still high enough to create neutrino and electron-positron pairs. Approximately 10 seconds after the Big Bang, the temperature of the universe had fallen to the point where electron-positron pairs were gradually annihilated. A small residue of electrons needed to charge-neutralize the Universe remained along with free streaming neutrinos: an important aspect of this epoch is the neutrino decoupling. The Big Bang nucleosynthesis epoch follows, overlapping with the photon epoch.
See also
Timeline of the early universe
Chronology of the universe
Cosmology
Big Bang
References
Physical cosmology
Big Bang | Lepton epoch | [
"Physics",
"Astronomy"
] | 213 | [
"Cosmogony",
"Big Bang",
"Theoretical physics",
"Astrophysics",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
4,370,018 | https://en.wikipedia.org/wiki/Hadron%20epoch | In physical cosmology, the hadron epoch started 20 microseconds after the Big Bang. The temperature of the universe had fallen sufficiently to allow the quarks from the preceding quark epoch to bind together into hadrons. Initially, the temperature was high enough to allow the formation of hadron/anti-hadron pairs, which kept matter and anti-matter in thermal equilibrium. Following the annihilation of matter and antimatter, a nano-asymmetry of matter remains to the present day. Most of the hadrons and anti-hadrons were eliminated in annihilation reactions, leaving a small residue of hadrons. Upon elimination of anti-hadrons, the Universe was dominated by photons, neutrinos and electron-positron pairs. This period is referred to as the lepton epoch.
Constituents
In the hadron epoch, it is generally believed that the pion, the lightest meson, was temporarily the most common particle.
See also
Timeline of the early universe
Chronology of the universe
Big Bang
Further reading
Physics 175: Stars and Galaxies - The Big Bang, Matter and Energy; Ithaca College, New York.
References
Physical cosmology
Big Bang | Hadron epoch | [
"Physics",
"Astronomy"
] | 245 | [
"Cosmogony",
"Big Bang",
"Theoretical physics",
"Astrophysics",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
4,370,036 | https://en.wikipedia.org/wiki/Electroweak%20epoch | In physical cosmology, the electroweak epoch was the period in the evolution of the early universe when the temperature of the universe had fallen enough that the strong force separated from the electronuclear interaction, but was still high enough for electromagnetism and the weak interaction to remain merged into a single electroweak interaction above the critical temperature for electroweak symmetry breaking (159.5±1.5 GeV
in the Standard Model of particle physics). Some cosmologists place the electroweak epoch at the start of the inflationary epoch, approximately 10−36 seconds after the Big Bang. Others place it at approximately 10−32 seconds after the Big Bang, when the potential energy of the inflaton field that had driven the inflation of the universe during the inflationary epoch was released, filling the universe with a dense, hot quark–gluon plasma.
Particle interactions in this phase were energetic enough to create large numbers of exotic particles, including W and Z bosons and Higgs bosons. As the universe expanded and cooled, interactions became less energetic, and when the universe was about 10−12 seconds old, W and Z bosons ceased to be created at observable rates. The remaining W and Z bosons decayed quickly, and the weak interaction became a short-range force in the following quark epoch.
The electroweak epoch ended with an electroweak phase transition, the nature of which is unknown. If first order, this could source a gravitational wave background. The electroweak phase transition is also a potential source of baryogenesis, provided the Sakharov conditions are satisfied.
In the minimal Standard Model, the transition during the electroweak epoch was not a first- or a second-order phase transition but a continuous crossover, preventing any baryogenesis,
or the production of an observable gravitational wave background.
However, many extensions to the Standard Model including supersymmetry and the two-Higgs-doublet model have a first-order electroweak phase transition (but require additional CP violation).
See also
Chronology of the universe
References
Physical cosmology
Big Bang | Electroweak epoch | [
"Physics",
"Astronomy"
] | 439 | [
"Cosmogony",
"Astronomical sub-disciplines",
"Big Bang",
"Theoretical physics",
"Astrophysics",
"Physical cosmology"
] |
4,370,125 | https://en.wikipedia.org/wiki/Grand%20unification%20epoch | In physical cosmology, the grand unification epoch is a poorly understood period in the evolution of the early universe following the Planck epoch and preceding cosmic inflation. This places it between about 10−43 seconds after the Big Bang and 10−35 seconds, when the temperature of the universe was comparable to the characteristic temperatures of grand unified theories. However, these theories have not been successful in producing quantitative agreement with the results of modern astrophysical observations.
If the grand unification energy is taken to be 1015 GeV, this corresponds to temperatures higher than 1027 K. During this period, three of the four fundamental interactions — electromagnetism, the strong interaction, and the weak interaction — were unified as the electronuclear force. Gravity had separated from the electronuclear force at the end of the Planck era. During the grand unification epoch, physical characteristics such as mass, charge, flavour and colour charge were meaningless.
The grand unification epoch ended at approximately 10−36 seconds after the Big Bang. At this point several key events took place. The strong force separated from the other fundamental forces.
It is possible that some part of this decay process violated the conservation of baryon number and gave rise to a small excess of matter over antimatter (see baryogenesis). This phase transition is also thought to have triggered the process of cosmic inflation that dominated the development of the universe during the following inflationary epoch.
See also
Big Bang
Chronology of the universe
Ultimate fate of the universe
References
Physical cosmology
Big Bang | Grand unification epoch | [
"Physics",
"Astronomy"
] | 311 | [
"Cosmogony",
"Big Bang",
"Theoretical physics",
"Astrophysics",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
4,371,547 | https://en.wikipedia.org/wiki/Jensen%20Loudspeakers | Jensen Loudspeakers is a company that manufactures speakers in many different models and sizes. Originally located in Chicago, Illinois, the company built a reputation during the 1950s and 1960s providing speakers used mainly in guitar and bass amplifiers. Although the American company is long out of business, "reissue" guitar speakers are currently made in Italy by SICA Altoparlanti and distributed in the United States by CE Distribution. Jensen and Rola were, for a time, both under common ownership (subsidiaries of the Muter Co.) and shared various design similarities. Their 8" and 15" baskets appeared to utilize the same tooling. Rola locations took over Jensen product manufacturing when the Chicago plant closed.
The current Fender Twin Reverb amp uses two 12" Jensen C-12K speakers.
History
The former Jensen Radio Manufacturing Company was founded in 1927 by Peter Laurits Jensen, the co-inventor of the first loudspeaker, in Chicago, Illinois. The company gained popularity in its early years, rising to its peak in the mid 1940s when Jensen speakers were selected to be used in the first production of a guitar amplifier by Fender Musical Instruments Corporation. Subsequently, Jensen speakers in the 40s, 50s, and 60s became commonly featured in major amplifier production, including amplifiers produced by Fender, Ampeg, and Gibson. The company also produced Hi-Fi loudspeakers for home use and was a manufacturer of OEM drivers for other brands.
Jensen Loudspeakers ceased production of their products in the early 1970s. The speakers remained heavily sought-after as replacements for amplifiers originally manufactured with Jensen speakers. Following that demand, SICA Altoparlanti began reproducing "reissues" of the Jensen speakers in 1996. These speakers are designed to be as close as possible to their original design.
In 2008, SICA Altoparlanti began producing new speaker designs under the Jensen name. These speakers and the Jensen reissues are distributed in the United States by CE Distribution.
Model numbers
Jensen speakers used a model number that contained 4 to 6 characters.
The first character denotes the magnet type used.
F = Field Coil
P = Alnico
C = Ceramic magnet
These are listed in the order in which they were made, with the P line starting in the 1940s and the C line starting around 1960.
The next one or two characters denotes speaker size (8 = 8", 10 = 10" diameter and so on).
The last character denotes the magnet/voice coil size, and directly influences the power rating. The same magnet weights (and corresponding codes) were used on multiple speaker sizes, and the power rating increases with the increasing diameter. Ceramic magnets are roughly 45% heavier than their Alnico equivalents that share the same code.
Examples:
K = Unknown/50 Oz Ceramic
N = 29.5 Oz Alnico/29 Oz Ceramic (this is an exception: the Alnico version uses an inefficient ring magnet)
Q = 10 Oz Alnico/15 Oz Ceramic
R = 6.8 Oz Alnico/10 Oz Ceramic
These aspects together make up the speaker's model number. For example, a Jensen C8R speaker is eight inches in size and has a 10 Oz ceramic magnet. All Jensen speakers also include a date and manufacturer's code. Jensen's source code is 220. Accompanying the company-specific (EIA) code (220) is the last digit of the year and the 2 digits of the week manufactured, creating a 6-digit code usually imprinted on the speaker edge or the magnet housing. Thus a code of 220534 would denote a Jensen speaker manufactured in either 1945, 1955, or 1965, in the 34th week of that year. Most speaker (and other electronic component) manufacturers began adding the last 2 digits of the year in the 1970s, so the 6-digit code became a 7-digit code. In the 1940s, Jensen would sometimes omit the leading zero of a single-digit week, which would reduce the total number of characters to 5.
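The decoding rules above can be expressed as a short Python sketch. The function names are made up for illustration, the magnet-weight table contains only the example codes listed in this article, and only the older 6-digit date code is handled.

```python
MAGNET_TYPES = {"F": "Field Coil", "P": "Alnico", "C": "Ceramic"}
MAGNET_CODES = {          # Oz, per the (partial) table above: (Alnico, Ceramic)
    "K": (None, 50), "N": (29.5, 29), "Q": (10, 15), "R": (6.8, 10),
}

def decode_model(model):
    """Split a Jensen model number such as 'C8R' or 'P12N' into its parts."""
    magnet = MAGNET_TYPES[model[0]]
    size_in = int(model[1:-1])          # one or two digits of speaker diameter
    alnico_oz, ceramic_oz = MAGNET_CODES.get(model[-1], (None, None))
    weight_oz = ceramic_oz if model[0] == "C" else alnico_oz
    return magnet, size_in, model[-1], weight_oz

def decode_date(code):
    """Decode a 6-digit EIA date code such as '220534' (source code 220 = Jensen)."""
    source, year_digit, week = code[:3], int(code[3]), int(code[4:6])
    years = [1940 + year_digit, 1950 + year_digit, 1960 + year_digit]  # ambiguous decade
    return source, years, week

print(decode_model("C8R"))     # ('Ceramic', 8, 'R', 10)
print(decode_date("220534"))   # ('220', [1945, 1955, 1965], 34)
```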
References
External links
Jensen Loudspeakers homepage
SICA Jensen History
Jensen Speakers Australia
Loudspeaker manufacturers
Manufacturing companies based in Chicago
Electronics companies established in 1927
1927 establishments in Illinois
Audio equipment manufacturers of the United States
Radio manufacturers | Jensen Loudspeakers | [
"Engineering"
] | 868 | [
"Radio electronics",
"Radio manufacturers"
] |
4,372,153 | https://en.wikipedia.org/wiki/The%20Conquest%20of%20Space | The Conquest of Space is a 1949 speculative science book written by Willy Ley and illustrated by Chesley Bonestell. The book contains a portfolio of paintings by Bonestell depicting the possible future exploration of the Solar System, with explanatory text by Ley. Most of the 58 illustrations by Bonestell in Conquest were previously published in color in popular magazines.
Influences on fiction
Some of Bonestell's designs inspired the look of George Pal's 1955 science fiction movie Conquest of Space, which also takes its title from the book, but uses it as a framework on which to hang a melodramatic plot.
Bonestell's illustrations of the Moon in The Conquest of Space were used by Hergé as a basis for his illustrations of the lunar surface in his 1952–53 The Adventures of Tintin comic, Explorers on the Moon.
Arthur C. Clarke was also an admirer of The Conquest of Space; in his novel 2001: A Space Odyssey, Clarke refers to Saturn's moon Iapetus as "Japetus" due to that being the spelling used by Ley in The Conquest of Space.
Larry Niven's 1967 short story "The Soft Weapon" is set on a planet around Beta Lyrae; Niven's description of Beta Lyrae is actually a meticulous retelling of the details of Bonestell's painting rather than any kind of portrayal of the Beta Lyrae system itself, which is now understood to look quite different.
References
Notes
Bibliography
Ley, Willy. The Conquest of Space. New York: Viking, 1949. Pre-ISBN era.
Astronomy books
Spaceflight books | The Conquest of Space | [
"Astronomy"
] | 331 | [
"Astronomy books",
"Astronomy book stubs",
"Works about astronomy",
"Astronomy stubs"
] |
4,372,485 | https://en.wikipedia.org/wiki/Avicennia%20germinans | Avicennia germinans, the black mangrove, is a shrub or small tree in the acanthus family, Acanthaceae, growing up to 12 meters (39 feet) tall. It grows in tropical and subtropical regions of the Americas, on both the Atlantic and Pacific Coasts, and on the Atlantic Coast of tropical Africa, where it thrives on the sandy and muddy shores where seawater reaches. It is common throughout coastal areas of Texas and Florida, and ranges as far north as southern Louisiana and northern Florida in the United States.
Like many other mangrove species, it reproduces by vivipary. Seeds are encased in a fruit, which reveals the germinated seedling when it falls into the water.
Unlike other mangrove species, it does not grow on prop roots, but possesses pneumatophores that allow its roots to breathe even when submerged. It is a hardy species and expels absorbed salt mainly from its leathery leaves.
The name "black mangrove" refers to the color of the trunk and heartwood. The leaves often appear whitish from the salt excreted at night and on cloudy days. It is often found in its native range with the red mangrove (Rhizophora mangle) and the white mangrove (Laguncularia racemosa). White mangroves grow inland from black mangroves, which themselves grow inland from red mangroves. The three species work together to stabilize the shoreline, provide buffers from storm surges, trap debris and detritus brought in by tides, and provide feeding, breeding, and nursery grounds for a great variety of fish, shellfish, birds, and other wildlife.
Habitat
The black mangrove grows just above the high tide in coastal areas. It is less tolerant of highly saline conditions than certain other species that occur in mangrove ecosystems. It can reach in height, although it is a small shrub in cooler regions of its range. The seeds germinate in midsummer, but may be seen all year on the trees. The seeds can remain viable for over a year once released.
Wood
The heartwood is dark-brown to black, while the sapwood is yellow-brown. It has the unusual property of having less dense heartwood than sapwood. The sapwood sinks in water while the heartwood floats. The wood is strong, heavy, and hard, but is difficult to work due to its interlocked grain, and is somewhat difficult to finish due to its oily texture. Uses include posts, pilings, charcoal, and fuel. Despite growing in a marine environment, the dry wood is subject to attack by marine borers and termites. Like many species, it contains tannins in the bark and has been used to tan leather products.
References
Further reading
External links
Interactive Distribution Map of Avicennia germinans
germinans
Mangroves
Halophytes
Flora of the Afrotropical realm
Flora of the Neotropical realm
Tropical Atlantic flora
Tropical Eastern Pacific flora
Trees of Africa
Trees of Northern America
Trees of Central America
Trees of South America
Trees of the Caribbean
Plants described in 1764
Taxa named by Carl Linnaeus | Avicennia germinans | [
"Chemistry"
] | 631 | [
"Halophytes",
"Salts"
] |
17,822,126 | https://en.wikipedia.org/wiki/Push-to-pull%20compression%20fittings | Push-to-pull, push-to-connect, push-in, push-fit, or instant fittings are a type of easily removed compression fitting or quick connect fitting that allows an air (or water) line to be attached, nominally without the use of tools (a tool is still usually required for cutting tubing to length and removal). These fittings act similar to the way regular compression fittings work, but use a resilient O-ring (normally EPDM) for sealing, and a grip ring (normally stainless steel) to hold the tube in place.
The main advantages of this technology over traditional soldered copper or glued plastic are that fittings can easily be unmounted and re-used, speed of assembly, assembly is possible when wet, and that the joints can still be rotated after connection.
These fittings can be used on all sorts of pipe of many sizes for many purposes.
History
British manufacturer Hepworth Building Products (founded 1936 in Doncaster) introduced these fittings under the brand Hep2O in 1980. It was a grey plastic material for the first couple of decades. There was a reusable fitting that could be unscrewed and a slimmer single-use fitting which could not. A new grab washer was required each time if a joint was reused. The fittings were designed for use with polybutylene pipe, while stainless steel pipe inserts were used for internal support.
Hepworth was acquired by Wavin in 2005. Hep2O changed material and design in the 2000s to a smooth white plastic and a push-to-demount design. This resulted in a physically smaller fitting that is easier to release, especially in a confined space.
John Guest (Established in 1961, West Drayton, UK) developed the Speedfit push-fit connector for compressed air use in 1974, and introduced plumbing fittings in 1987. These fittings are white plastic, and are unscrewable to replace components, like Hep2O, but also have a push-release mechanism. Speedfit uses plastic pipe support inserts.
Brass demountable push-fit fittings are manufactured by Pegler under the brand Tectite. In the US, several different brands are available: Sharkbite, PlumBite, Nibco Push, which are all brass, demountable, similar to the Pegler Tectite design.
Some fittings are only designed for plastic (PEX and PERT) pipe and are non-removable. In the US, Legend Valve make a single-use push-fit system, and Sharkbite have one called EvoPex.
Usage
Push fit connections are easier to make than soldered or glued connections; however, some knowledge is still needed to make them correctly. The defects that can cause joint failure are not pushing the pipe in far enough, not having a smooth round end to the pipe, too rough a pipe surface under the O-ring, and detritus in the mechanism.
Deburring the end of the pipe is vital to avoid damaging the O-ring on insertion, and the pipe surface under the ring must be smooth to get an adequate seal. The pipe end should be square so it sits against the stop in the fitting and does not cause turbulence in the flow of water.
It is quite easy to push the pipe in only as far as the grab ring, or the O-ring, and not all the way to the stop. This is the most common issue with inexperienced use of these connectors. Fitting designers have come up with a selection of methods to try and show the user when they have pushed it in far enough. Pipes are often marked to show the insertion depth, so if you cut at a mark, then insert the pipe up to the next mark, you will have a good connection. Hepworth pipe has always had this feature, for example. Making a mark before insertion is advised on pipe that is not pre-marked (e.g. copper pipe). Hepworth supply an insert with bulges on the end. When fully inserted this lines up with bulges in the fitting so that if you rotate the pipe in the fitting it "rumbles". Manufacturers of non-removable PEX/PERT fittings have included a coloured indicator ring which is pushed into a visible place on full insertion, or a spring-clip separator which is pushed out by the pipe so there is an audible "snap".
Notes and references
Plumbing | Push-to-pull compression fittings | [
"Engineering"
] | 920 | [
"Construction",
"Plumbing"
] |
17,823,510 | https://en.wikipedia.org/wiki/True%20vertical%20depth | True vertical depth is the measurement of a straight line perpendicularly downwards from a horizontal plane.
In the petroleum industry, true vertical depth, abbreviated as TVD, is the measurement from the surface to the bottom of the borehole (or anywhere along its length) in a straight perpendicular line represented by line (a) in the image.
Line (b) is the actual borehole and its length would be considered the "measured depth" in oil industry terminology. The TVD is always equal to or less than (≤) the measured depth. If one were to imagine line (b) to be a piece of string, and further were to imagine it being pulled straight down, one would observe it to be longer than line (a). This example oil well would be considered a directional well because it deviates from a straight vertical line.
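As a toy geometric illustration (not an industry survey-calculation method), the Python sketch below treats a deviated well as a few straight segments, each with a length and an inclination from vertical: the measured depth is simply the sum of the segment lengths, while the true vertical depth accumulates only their vertical components, so TVD ≤ measured depth. The example well geometry is made up.

```python
import math

def measured_and_vertical_depth(segments):
    """segments: list of (length_m, inclination_deg from vertical) straight sections."""
    md = sum(length for length, _ in segments)
    tvd = sum(length * math.cos(math.radians(inc)) for length, inc in segments)
    return md, tvd

# Hypothetical deviated well: 1000 m vertical, then 800 m at 30 degrees, then 500 m at 60 degrees
md, tvd = measured_and_vertical_depth([(1000, 0), (800, 30), (500, 60)])
print(md, round(tvd, 1))   # 2300  1942.8  (TVD is always <= measured depth)
```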
See also
Depth in a well
Driller's depth
References
Oilfield terminology
Petroleum geology
Vertical position | True vertical depth | [
"Physics",
"Chemistry"
] | 190 | [
"Vertical position",
"Physical quantities",
"Distance",
"Petroleum stubs",
"Petroleum",
"Petroleum geology"
] |
17,828,291 | https://en.wikipedia.org/wiki/Barkhausen%20stability%20criterion | In electronics, the Barkhausen stability criterion is a mathematical condition to determine when a linear electronic circuit will oscillate. It was put forth in 1921 by German physicist Heinrich Barkhausen (1881–1956). It is widely used in the design of electronic oscillators, and also in the design of general negative feedback circuits such as op amps, to prevent them from oscillating.
Limitations
Barkhausen's criterion applies to linear circuits with a feedback loop. It cannot be applied directly to active elements with negative resistance like tunnel diode oscillators.
The kernel of the criterion is that a complex pole pair must be placed on the imaginary axis of the complex frequency plane if steady state oscillations should take place. In the real world, it is impossible to balance on the imaginary axis; small errors will cause the poles to be either slightly to the right or left, resulting in infinite growth or decreasing to zero, respectively. Thus, in practice a steady-state oscillator is a non-linear circuit; the poles are manipulated to be slightly to the right, and a nonlinearity is introduced that reduces the loop gain when the output is high.
Criterion
It states that if A is the gain of the amplifying element in the circuit and β(jω) is the transfer function of the feedback path, so βA is the loop gain around the feedback loop of the circuit, the circuit will sustain steady-state oscillations only at frequencies for which:
The loop gain is equal to unity in absolute magnitude, that is, |βA| = 1, and
The phase shift around the loop is zero or an integer multiple of 2π: ∠βA = 2πn, n ∈ {0, 1, 2, 3, ...}.
Barkhausen's criterion is a necessary condition for oscillation but not a sufficient condition: some circuits satisfy the criterion but do not oscillate. Similarly, the Nyquist stability criterion also indicates instability but is silent about oscillation. Apparently there is not a compact formulation of an oscillation criterion that is both necessary and sufficient.
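As a numerical illustration of the criterion (not taken from the article), the Python sketch below sweeps frequency for a hypothetical loop consisting of three identical, ideally buffered RC high-pass sections followed by an ideal inverting amplifier, and reports the frequency at which the phase condition is met together with the resulting loop-gain magnitude. The component values and the gain of −8 are assumptions chosen so that both conditions can be satisfied simultaneously.

```python
import numpy as np

R, C = 10e3, 10e-9          # arbitrary component values (10 kOhm, 10 nF)
A = -8.0                    # ideal inverting amplifier gain (assumed value)

def loop_gain(f):
    """Loop gain beta(jw)*A for three buffered RC high-pass sections plus the amplifier."""
    jw = 2j * np.pi * f
    section = jw * R * C / (1 + jw * R * C)
    return A * section ** 3

freqs = np.logspace(2, 5, 20001)                # 100 Hz .. 100 kHz sweep
phase = np.angle(loop_gain(freqs))
idx = np.argmin(np.abs(phase))                  # frequency where the loop phase is ~0
f0 = freqs[idx]
print(f"phase condition near {f0:.0f} Hz, |beta*A| = {abs(loop_gain(f0)):.3f}")
# With A = -8 the magnitude comes out ~1 at that frequency, so both conditions are met.
```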
Erroneous version
Barkhausen's original "formula for self-excitation", intended for determining the oscillation frequencies of the feedback loop, involved an equality sign: |βA| = 1. At the time conditionally-stable nonlinear systems were poorly understood; it was widely believed that this gave the boundary between stability (|βA| < 1) and instability (|βA| ≥ 1), and this erroneous version found its way into the literature. However, sustained oscillations only occur at frequencies for which equality holds.
See also
Nyquist stability criterion
References
Oscillation
Electronic circuits | Barkhausen stability criterion | [
"Physics",
"Engineering"
] | 536 | [
"Electronic engineering",
"Electronic circuits",
"Mechanics",
"Oscillation"
] |
14,938,064 | https://en.wikipedia.org/wiki/Computational%20epigenetics | Computational epigenetics uses statistical methods and mathematical modelling in epigenetic research. Due to the recent explosion of epigenome datasets, computational methods play an increasing role in all areas of epigenetic research.
Research in computational epigenetics comprises the development and application of bioinformatics methods for solving epigenetic questions, as well as computational data analysis and theoretical modeling in the context of epigenetics. This includes modelling of the effects of histone and DNA CpG island methylation.
Current research areas
Importance
Computational methods and next-generation sequencing (NGS) technologies are being employed to study DNA methylation and histone modifications, which are essential in cancer research. High-throughput sequencing offers valuable insights into epigenetic changes, and the growing volume of these datasets drives the continuous development of bioinformatics techniques for their effective management and analysis.
There is a need for data integration tools that can merge various types of epigenetic modifications and -omics data (including transcriptomics, genomics, epigenomics, and proteomics) to gain a comprehensive understanding of biological processes. This requires the standardization, annotation, and harmonization of epigenetic data, along with the enhancement of computational and machine learning approaches.
Understanding the functional implications of epigenetics in diseases can be greatly advanced by using epigenetic editing tools, such as CRISPR-dCas9 technology. These tools enable precise modifications of epigenetic marks at specific loci, allowing researchers to assess the effects of these alterations in cellular and animal models, thus complementing insights obtained from computational analyses.
Data processing and analysis
Various experimental techniques have been developed for genome-wide mapping of epigenetic information, the most widely used being ChIP-on-chip, ChIP-seq and bisulfite sequencing. All of these methods generate large amounts of data and require efficient ways of data processing and quality control by bioinformatic methods.
Predictions
A substantial amount of bioinformatic research has been devoted to the prediction of epigenetic information from characteristics of the genome sequence. Such predictions serve a dual purpose. First, accurate epigenome predictions can substitute for experimental data, to some degree, which is particularly relevant for newly discovered epigenetic mechanisms and for species other than human and mouse. Second, prediction algorithms build statistical models of epigenetic information from training data and can therefore act as a first step toward quantitative modeling of an epigenetic mechanism. Successful computational prediction of DNA and lysine methylation and acetylation has been achieved by combinations of various features.
Applications in cancer epigenetics
The important role of epigenetic defects for cancer opens up new opportunities for improved diagnosis and therapy. These active areas of research give rise to two questions that are particularly amenable to bioinformatic analysis. First, given a list of genomic regions exhibiting epigenetic differences between tumor cells and controls (or between different disease subtypes), can we detect common patterns or find evidence of a functional relationship of these regions to cancer? Second, can we use bioinformatic methods in order to improve diagnosis and therapy by detecting and classifying important disease subtypes?
Emerging topics
The first wave of research in the field of computational epigenetics was driven by rapid progress of experimental methods for data generation, which required adequate computational methods for data processing and quality control, prompted epigenome prediction studies as a means of understanding the genomic distribution of epigenetic information, and provided the foundation for initial projects on cancer epigenetics. While these topics will continue to be major areas of research and the mere quantity of epigenetic data arising from epigenome projects poses a significant bioinformatic challenge, several additional topics are currently emerging.
Epigenetic regulatory circuitry: Reverse engineering the regulatory networks that read, write and execute epigenetic codes.
Population epigenetics: Distilling regulatory mechanisms from the integration of epigenome data with gene expression profiles and haplotype maps for a large sample from a heterogeneous population.
Evolutionary epigenetics: Learning about epigenome regulation in human (and its medical consequences) by cross-species comparisons.
Theoretical modeling: Testing our mechanistic and quantitative understanding of epigenetic mechanisms by in silico simulation.
Genome browsers: Developing a new blend of web services that enable biologists to perform sophisticated genome and epigenome analysis within an easy-to-use genome browser environment.
Medical epigenetics: Searching for epigenetic mechanisms that play a role in diseases other than cancer, as there is strong circumstantial evidence for epigenetic regulation being involved in mental disorders, autoimmune diseases and other complex diseases.
Data portals and projects
Databases
Sources and further reading
The original version of this article was based on a review paper on computational epigenetics that appeared in the January 2008 issue of the Bioinformatics journal. This review paper provides >100 references to scientific papers and extensive background information.
References
Epigenetics
Bioinformatics
Biophysics
Computational fields of study | Computational epigenetics | [
"Physics",
"Technology",
"Engineering",
"Biology"
] | 1,046 | [
"Biological engineering",
"Computational fields of study",
"Applied and interdisciplinary physics",
"Bioinformatics",
"Computing and society",
"Biophysics"
] |
14,941,864 | https://en.wikipedia.org/wiki/Reverberation%20mapping | Reverberation mapping (or Echo mapping) is an astrophysical technique for measuring the structure of the broad-line region (BLR) around a supermassive black hole at the center of an active galaxy, and thus estimating the hole's mass. It is considered a "primary" mass estimation technique, i.e., the mass is measured directly from the motion that its gravitational force induces in the nearby gas.
Newton's law of gravity defines a direct relation between the mass of a central object and the speed of a smaller object in orbit around the central mass. Thus, for matter orbiting a black hole, the black-hole mass M is related by the formula
M = f · RBLR · (ΔV)² / G
to the RMS velocity ΔV of gas moving near the black hole in the broad emission-line region, measured from the Doppler broadening of the gaseous emission lines. In this formula, RBLR is the radius of the broad-line region; G is the constant of gravitation; and f is a poorly known "form factor" that depends on the shape of the BLR.
While ΔV can be measured directly using spectroscopy, the necessary determination of RBLR is much less straightforward. This is where reverberation mapping comes into play. It utilizes the fact that the emission-line fluxes vary strongly in response to changes in the continuum, i.e., the light from the accretion disk near the black hole. Put simply, if the brightness of the accretion disk varies, the emission lines, which are excited in response to the accretion disk's light, will "reverberate", that is, vary in response. But it will take some time for light from the accretion disk to reach the broad-line region. Thus, the emission-line response is delayed with respect to changes in the continuum. Assuming that this delay is solely due to light travel times, the distance traveled by the light, corresponding to the radius of the broad emission-line region, can be measured.
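A minimal Python sketch of the estimate described above, assuming the measured lag is purely light-travel time so that RBLR = c·τ. The lag, line width and form factor in the example are placeholder values used only to show the arithmetic; they are not measurements of any particular object.

```python
C = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg

def virial_black_hole_mass(lag_days, delta_v_kms, form_factor):
    """M = f * R_BLR * (delta V)^2 / G, with R_BLR = c * lag."""
    r_blr = C * lag_days * 86400.0          # light-travel radius, m
    delta_v = delta_v_kms * 1e3             # RMS velocity, m/s
    return form_factor * r_blr * delta_v ** 2 / G

# Placeholder numbers: a 20-day lag, 3000 km/s line width, form factor f = 1
mass = virial_black_hole_mass(20.0, 3000.0, 1.0)
print(f"{mass / M_SUN:.2e} solar masses")   # ~3.5e7 for these inputs
```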
Only a small handful (less than 40) of active galactic nuclei have been accurately "mapped" in this way. An alternative approach is to use an empirical correlation between RBLR and the continuum luminosity.
Another uncertainty is the value of f. In principle, the response of the BLR to variations in the continuum could be used to map out the three-dimensional structure of the BLR. In practice, the amount and quality of data required to carry out such a deconvolution is prohibitive. Until about 2004, f was estimated ab initio based on simple models for the structure of the BLR. More recently, the value of f has been determined so as to bring the M–sigma relation for active galaxies into the best possible agreement with the M–sigma relation for quiescent galaxies. When f is determined in this way, reverberation mapping becomes a "secondary", rather than "primary", mass estimation technique.
References and notes
External links
Reverberation Mapping ppt-presentation (2005)
Active galaxies
Astronomical imaging
Concepts in astrophysics
Observational astronomy
Supermassive black holes
Signal processing | Reverberation mapping | [
"Physics",
"Astronomy",
"Technology",
"Engineering"
] | 646 | [
"Black holes",
"Telecommunications engineering",
"Concepts in astrophysics",
"Computer engineering",
"Signal processing",
"Unsolved problems in physics",
"Supermassive black holes",
"Astrophysics",
"Observational astronomy",
"Astronomical sub-disciplines"
] |
5,809,688 | https://en.wikipedia.org/wiki/Structural%20dynamics | Structural dynamics is a type of structural analysis which covers the behavior of a structure subjected to dynamic (actions having high acceleration) loading. Dynamic loads include people, wind, waves, traffic, earthquakes, and blasts. Any structure can be subjected to dynamic loading. Dynamic analysis can be used to find dynamic displacements, time history, and modal analysis.
Structural analysis is mainly concerned with finding out the behavior of a physical structure when subjected to force. This action can be in the form of load due to the weight of things such as people, furniture, wind, snow, etc. or some other kind of excitation such as an earthquake, shaking of the ground due to a blast nearby, etc. In essence all these loads are dynamic, including the self-weight of the structure because at some point in time these loads were not there. The distinction is made between the dynamic and the static analysis on the basis of whether the applied action has enough acceleration in comparison to the structure's natural frequency. If a load is applied sufficiently slowly, the inertia forces (Newton's first law of motion) can be ignored and the analysis can be simplified as static analysis.
A static load is one which varies very slowly. A dynamic load is one which changes with time fairly quickly in comparison to the structure's natural frequency. If it changes slowly, the structure's response may be determined with static analysis, but if it varies quickly (relative to the structure's ability to respond), the response must be determined with a dynamic analysis.
Dynamic analysis for simple structures can be carried out manually, but for complex structures finite element analysis can be used to calculate the mode shapes and frequencies.
Displacements
A dynamic load can have a significantly larger effect than a static load of the same magnitude due to the structure's inability to respond quickly to the loading (by deflecting). The increase in the effect of a dynamic load is given by the dynamic amplification factor (DAF) or dynamic load factor (DLF):
DAF = u_max / u_static
where u is the deflection of the structure due to the applied load, u_max is the maximum dynamic deflection and u_static is the deflection under the same load applied statically.
Graphs of dynamic amplification factors vs non-dimensional rise time (tr/T) exist for standard loading functions (for an explanation of rise time, see time history analysis below). Hence the DAF for a given loading can be read from the graph, the static deflection can be easily calculated for simple structures and the dynamic deflection found.
Time history analysis
A history will give the response of a structure over time during and after the application of a load. To find the history of a structure's response, you must solve the structure's equation of motion.
Example
A simple single degree of freedom system (a mass, M, on a spring of stiffness k, for example) has the following equation of motion:
M (d²x/dt²) + k x = F(t)
where d²x/dt² is the acceleration (the second derivative of the displacement) and x is the displacement.
If the loading F(t) is a Heaviside step function (the sudden application of a constant load F0), the solution to the equation of motion is:
x(t) = (F0/k) (1 − cos(ωt))
where ω = √(k/M) is the fundamental natural frequency (in radians per second).
The static deflection of a single degree of freedom system is:
x_static = F0/k
so we can write, by combining the above formulae:
x(t) = x_static (1 − cos(ωt))
This gives the (theoretical) time history of the structure due to a load F(t), where the false assumption is made that there is no damping.
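A short Python sketch of this undamped single-degree-of-freedom result, evaluating the closed-form solution x(t) = x_static (1 − cos ωt) derived above for arbitrary, made-up mass, stiffness and load values. It also shows that the peak of (1 − cos ωt) is 2, i.e. the DAF of an undamped structure under a suddenly applied constant load is 2.

```python
import numpy as np

M, k = 500.0, 2.0e6        # arbitrary mass (kg) and stiffness (N/m)
F0 = 10e3                  # suddenly applied constant load (N), Heaviside step

omega = np.sqrt(k / M)                               # natural angular frequency, rad/s
x_static = F0 / k                                    # static deflection, m
t = np.linspace(0.0, 5 * 2 * np.pi / omega, 1000)    # five natural periods
x = x_static * (1.0 - np.cos(omega * t))             # undamped time history

print(f"natural frequency {omega/(2*np.pi):.2f} Hz, static deflection {x_static*1e3:.2f} mm")
print(f"peak dynamic deflection {x.max()*1e3:.2f} mm -> DAF = {x.max()/x_static:.2f}")  # ~2.00
```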
Although this is too simplistic to apply to a real structure, the Heaviside step function is a reasonable model for the application of many real loads, such as the sudden addition of a piece of furniture, or the removal of a prop to a newly cast concrete floor. However, in reality loads are never applied instantaneously – they build up over a period of time (this may be very short indeed). This time is called the rise time.
As the number of degrees of freedom of a structure increases it very quickly becomes too difficult to calculate the time history manually – real structures are analysed using non-linear finite element analysis software.
Damping
Any real structure will dissipate energy (mainly through friction). This can be modelled by modifying the DAF to
DAF = 1 + e^(−cπ)
where c is the damping ratio, which is typically 2–10% depending on the type of construction:
Bolted steel ~6%
Reinforced concrete ~5%
Welded steel ~2%
Brick masonry ~10%
Methods to increase damping
One of the widely used methods to increase damping is to attach a layer of material with a high Damping Coefficient, for example rubber, to a vibrating structure.
Modal analysis
A modal analysis calculates the frequency modes or natural frequencies of a given system, but not necessarily its full-time history response to a given input. The natural frequency of a system is dependent only on the stiffness of the structure and the mass which participates with the structure (including self-weight). It is not dependent on the load function.
It is useful to know the modal frequencies of a structure as it allows you to ensure that the frequency of any applied periodic loading will not coincide with a modal frequency and hence cause resonance, which leads to large oscillations.
The method is:
Find the natural modes (the shape adopted by a structure) and natural frequencies
Calculate the response of each mode
Optionally superpose the response of each mode to find the full modal response to a given loading
Energy method
It is possible to calculate the frequency of different mode shape of system manually by the energy method. For a given mode shape of a multiple degree of freedom system you can find an "equivalent" mass, stiffness and applied force for a single degree of freedom system. For simple structures the basic mode shapes can be found by inspection, but it is not a conservative method. Rayleigh's principle states:
"The frequency ω of an arbitrary mode of vibration, calculated by the energy method, is always greater than – or equal to – the fundamental frequency ωn."
For an assumed mode shape of a structural system with mass M; bending stiffness, EI (Young's modulus, E, multiplied by the second moment of area, I); and applied force, F(x), the equivalent mass, stiffness and force can be evaluated, and the natural frequency and response then follow as above.
Modal response
The complete modal response to a given load F(x,t) is obtained by summing the contributions of the individual modes. The summation can be carried out by one of three common methods:
Superpose complete time histories of each mode (time consuming, but exact)
Superpose the maximum amplitudes of each mode (quick but conservative)
Superpose the square root of the sum of squares (good estimate for well-separated frequencies, but unsafe for closely spaced frequencies)
To superpose the individual modal responses manually, having calculated them by the energy method:
Assuming that the rise time tr is known (T = 2π/ω), it is possible to read the DAF from a standard graph. The static displacement can be calculated from the equivalent force and stiffness as above. The dynamic displacement for the chosen mode and applied force can then be found from u_max = DAF × u_static.
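The combination rules listed above can be compared with a small Python sketch; the modal peak displacements are made-up numbers used purely for illustration.

```python
import math

# Hypothetical peak displacements (mm) of the first few modes for some load case
modal_peaks = [12.0, 4.5, 1.8, 0.7]

abs_sum = sum(abs(p) for p in modal_peaks)             # conservative upper bound
srss = math.sqrt(sum(p * p for p in modal_peaks))      # square root of the sum of squares

print(f"absolute sum: {abs_sum:.1f} mm")   # 19.0 mm
print(f"SRSS:         {srss:.1f} mm")      # 13.0 mm (good estimate for well-separated modes)
```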
Modal participation factor
For real systems there is often mass participating in the forcing function (such as the mass of ground in an earthquake) and mass participating in inertia effects (the mass of the structure itself, Meq). The modal participation factor Γ is a comparison of these two masses. For a single degree of freedom system Γ = 1.
External links
Structural Dynamics and Vibration Laboratory of McGill University
Frequency response function from modal parameters
Structural Dynamics Tutorials & Matlab scripts
AIAA Exploring Structural Dynamics (http://www.exploringstructuraldynamics.org/ ) – Structural Dynamics in Aerospace Engineering: Interactive Demos, Videos & Interviews with Practicing Engineers
Structural analysis
Dynamics (mechanics) | Structural dynamics | [
"Physics",
"Engineering"
] | 1,566 | [
"Structural engineering",
"Physical phenomena",
"Structural analysis",
"Classical mechanics",
"Motion (physics)",
"Dynamics (mechanics)",
"Mechanical engineering",
"Aerospace engineering"
] |
193,294 | https://en.wikipedia.org/wiki/Aluminium%20hydroxide | Aluminium hydroxide, Al(OH)3, is found in nature as the mineral gibbsite (also known as hydrargillite) and its three much rarer polymorphs: bayerite, doyleite, and nordstrandite. Aluminium hydroxide is amphoteric, i.e., it has both basic and acidic properties. Closely related are aluminium oxide hydroxide, AlO(OH), and aluminium oxide or alumina (Al2O3), the latter of which is also amphoteric. These compounds together are the major components of the aluminium ore bauxite. Aluminium hydroxide also forms a gelatinous precipitate in water.
Structure
Al(OH)3 is built up of double layers of hydroxyl groups with aluminium ions occupying two-thirds of the octahedral holes between the two layers. Four polymorphs are recognized. All feature layers of octahedral aluminium hydroxide units, with hydrogen bonds between the layers. The polymorphs differ in terms of the stacking of the layers. All forms of the crystals are hexagonal:
gibbsite is also known as γ-Al(OH)3 or α-Al(OH)3
bayerite is also known as α- or β-alumina trihydrate
nordstrandite is also known as
doyleite
Hydrargillite, once thought to be aluminium hydroxide, is an aluminium phosphate. Nonetheless, both gibbsite and hydrargillite refer to the same polymorphism of aluminium hydroxide, with gibbsite used most commonly in the United States and hydrargillite used more often in Europe. Hydrargillite is named after the Greek words for water () and clay ().
Properties
Aluminium hydroxide is amphoteric. In acid, it acts as a Brønsted–Lowry base. It neutralizes the acid, yielding a salt and water, for example:
Al(OH)3 + 3 HCl → AlCl3 + 3 H2O
In bases, it acts as a Lewis acid by binding hydroxide ions:
Al(OH)3 + OH− → Al(OH)4−
Production
Virtually all the aluminium hydroxide used commercially is manufactured by the Bayer process which involves dissolving bauxite in sodium hydroxide at temperatures up to . The waste solid, bauxite tailings, is removed and aluminium hydroxide is precipitated from the remaining solution of sodium aluminate. This aluminium hydroxide can be converted to aluminium oxide or alumina by calcination.
The residue or bauxite tailings, which is mostly iron oxide, is highly caustic due to residual sodium hydroxide. It was historically stored in lagoons; this led to the Ajka alumina plant accident in 2010 in Hungary, in which a dam burst and nine people drowned. An additional 122 sought treatment for chemical burns. The mud contaminated the surrounding land and reached the Danube. While the mud was considered non-toxic due to low levels of heavy metals, the associated slurry had a pH of 13.
Uses
Filler and fire retardant
Aluminium hydroxide finds use as a fire retardant filler for polymer applications. It is selected for these applications because it is colorless (like most polymers), inexpensive, and has good fire retardant properties. Magnesium hydroxide and mixtures of huntite and hydromagnesite are used similarly. These mixtures start to decompose at temperatures around to (depending on the type of aluminium hydroxide used), absorbing a considerable amount of heat in the process and giving off water vapour. The decomposition rate of aluminium hydroxide increases with an increase in temperature, with a reported maximum rate at .
In addition to behaving as a fire retardant, it is very effective as a smoke suppressant in a wide range of polymers, most especially in polyesters, acrylics, ethylene vinyl acetate, epoxies, polyvinyl chloride (PVC) and rubber.
Aluminium hydroxide is used as filler in some artificial stone compound material, often in acrylic resin.
Precursor to Al compounds
Aluminium hydroxide is a feedstock for the manufacture of other aluminium compounds: calcined aluminas, aluminium sulfate, polyaluminium chloride, aluminium chloride, zeolites, sodium aluminate, activated alumina, and aluminium nitrate.
Freshly precipitated aluminium hydroxide forms gels, which are the basis for the application of aluminium salts as flocculants in water purification. This gel crystallizes with time. Aluminium hydroxide gels can be dehydrated (e.g. using water-miscible non-aqueous solvents like ethanol) to form an amorphous aluminium hydroxide powder, which is readily soluble in acids. Heating converts it to activated aluminas, which are used as desiccants, adsorbent in gas purification, and catalyst supports.
Pharmaceutical
Under the generic name "algeldrate", aluminium hydroxide is used as an antacid in humans and animals (mainly cats and dogs). It is preferred over other alternatives such as sodium bicarbonate because, being insoluble, it does not increase the pH of the stomach above 7, and hence does not trigger secretion of excess acid by the stomach. Brand names include Alu-Cap, Aludrox, Gaviscon or Pepsamar. It reacts with excess acid in the stomach, reducing the acidity of the stomach content, which may relieve the symptoms of ulcers, heartburn or dyspepsia. Such products can cause constipation, because the aluminium ions inhibit the contractions of smooth muscle cells in the gastrointestinal tract, slowing peristalsis and lengthening the time needed for stool to pass through the colon. Some such products are formulated to minimize such effects through the inclusion of equal concentrations of magnesium hydroxide or magnesium carbonate, which have counterbalancing laxative effects.
This compound is also used to control hyperphosphatemia (elevated phosphate, or phosphorus, levels in the blood) in people and animals suffering from kidney failure. Normally, the kidneys filter excess phosphate out from the blood, but kidney failure can cause phosphate to accumulate. The aluminium salt, when ingested, binds to phosphate in the intestines and reduces the amount of phosphorus that can be absorbed.
Precipitated aluminium hydroxide is included as an adjuvant in some vaccines (e.g. anthrax vaccine). One of the well-known brands of aluminium hydroxide adjuvant is Alhydrogel, made by Brenntag Biosector. Since it adsorbs protein well, it also functions to stabilize vaccines by preventing the proteins in the vaccine from precipitating or sticking to the walls of the container during storage. Aluminium hydroxide is sometimes called "alum", a term generally reserved for one of several sulfates.
Vaccine formulations containing aluminium hydroxide stimulate the immune system by inducing the release of uric acid, an immunological danger signal. This strongly attracts certain types of monocytes which differentiate into dendritic cells. The dendritic cells pick up the antigen, carry it to lymph nodes, and stimulate T cells and B cells. It appears to contribute to induction of a good Th2 response, so is useful for immunizing against pathogens that are blocked by antibodies. However, it has little capacity to stimulate cellular (Th1) immune responses, important for protection against many pathogens, nor is it useful when the antigen is peptide-based.
Safety
In the 1960s and 1970s it was speculated that aluminium was related to various neurological disorders, including Alzheimer's disease. Since then, multiple epidemiological studies have found no connection between exposure to environmental or swallowed aluminium and neurological disorders, though injected aluminium was not looked at in these studies.
Neurological effects have been reported in experiments on mice motivated by Gulf War illness (GWI): mice injected with aluminium hydroxide in doses equivalent to those administered to United States military personnel showed increased numbers of reactive astrocytes, increased apoptosis of motor neurons, and microglial proliferation within the spinal cord and cortex.
References
External links
International Chemical Safety Card 0373
"Some properties of aluminum hydroxide precipitated in the presence of clays", Soil Research Institute, R C Turner, Department of Agriculture, Ottawa
Effect of ageing on properties of polynuclear hydroxyaluminium cations
A second species of polynuclear hydroxyaluminium cation, its formation and some of its properties
Aluminium compounds
Amphoteric compounds
Antacids
Hydroxides
Inorganic compounds
Phosphate binders
Flame retardants | Aluminium hydroxide | [
"Chemistry"
] | 1,739 | [
"Amphoteric compounds",
"Acids",
"Inorganic compounds",
"Hydroxides",
"Bases (chemistry)"
] |
193,298 | https://en.wikipedia.org/wiki/Gibbsite | Gibbsite, Al(OH)3, is one of the mineral forms of aluminium hydroxide. It is often designated as γ-Al(OH)3 (but sometimes as α-Al(OH)3). It is also sometimes called hydrargillite (or hydrargyllite).
Gibbsite is an important ore of aluminium in that it is one of three main phases that make up the rock bauxite.
Gibbsite has three named structural polymorphs or polytypes: bayerite (designated often as α-Al(OH)3, but sometimes as β-Al(OH)3), doyleite, and nordstrandite. Gibbsite can be monoclinic or triclinic, while bayerite is monoclinic. Doyleite and nordstrandite are triclinic forms.
Structure
The structure of gibbsite is interesting and analogous to the basic structure of the micas. The basic structure forms stacked sheets of linked octahedra. Each octahedron is composed of an aluminium ion bonded to six hydroxide groups, and each hydroxide group is shared by two aluminium octahedra. One third of the potential octahedral spaces are missing a central aluminium. The result is a neutral sheet: with aluminium as a +3 ion and hydroxide a −1 ion, the net cationic charge of one aluminium per six hydroxides is (+3)/6 = +1/2, and likewise the net anionic charge of one hydroxide per two aluminium atoms is (−1)/2 = −1/2. The lack of a charge on the gibbsite sheets means that there is no charge to retain ions between the sheets and act as a "glue" to keep the sheets together. The sheets are only held together by weak residual bonds and this results in a very soft easily cleaved mineral.
Gibbsite's structure is closely related to the structure of brucite, Mg(OH)2. However the lower charge in brucite's magnesium (+2) as opposed to gibbsite's aluminium (+3) does not require that one third of the octahedrons be vacant of a central ion in order to maintain a neutral sheet. The different symmetry of gibbsite and brucite is due to the different way that the layers are stacked.
It is the gibbsite layer that in a way forms the "floor plan" for the mineral corundum, Al2O3. The basic structure of corundum is identical to gibbsite except the hydroxides are replaced by oxygen. Since oxygen has a charge of −2 the layers are not neutral and require that they must be bonded to other aluminiums above and below the initial layer producing the framework structure that is the structure of corundum.
Gibbsite is interesting for another reason because it is often found as a part of the structure of other minerals. The neutral aluminium hydroxide sheets are found sandwiched between silicate sheets in important clay groups: the illite, kaolinite, and montmorillonite/smectite groups. The individual aluminium hydroxide layers are identical to the individual layers of gibbsite and are referred to as the gibbsite layers.
The lattice parameters reported for gibbsite depend upon the particular method used to measure or calculate them and are therefore quoted as ranges. An Al-Al interlayer spacing of 0.484 or 0.494 nm has been reported.
Mineralogical properties
Etymology
Gibbsite is named after George Gibbs (1776–1833), an American mineral collector.
References
Further reading
Hurlbut, Cornelius S.; Klein, Cornelis (1985). Manual of Mineralogy, 20th ed.
Aluminium minerals
Hydroxide minerals
Monoclinic minerals
Minerals in space group 11
Luminescent minerals | Gibbsite | [
"Chemistry"
] | 790 | [
"Luminescence",
"Luminescent minerals"
] |
193,366 | https://en.wikipedia.org/wiki/Asafoetida | Asafoetida (; also spelled asafetida) is the dried latex (gum oleoresin) exuded from the rhizome or tap root of several species of Ferula, perennial herbs of the carrot family. It is produced in Iran, Afghanistan, Central Asia, northern India and Northwest China (Xinjiang). Different regions have different botanical sources.
Asafoetida has a pungent smell, as reflected in its name, lending it the common name of "stinking gum". The odour dissipates upon cooking; in cooked dishes, it delivers a smooth flavour reminiscent of leeks or other onion relatives. Asafoetida is also known colloquially as "devil's dung" in English (and similar expressions in many other languages).
Etymology and other names
The English name is derived from asa, a Latinised form of the Persian word for 'mastic', and the Latin foetida, 'stinking'.
Its pungent odour has given rise to many unpleasant names in various languages.
Composition
Typical asafoetida contains about 40–64% resin, 25% endogeneous gum, 10–17% volatile oil, and 1.5–10% ash. The resin portion contains asaresinotannols A and B, ferulic acid, umbelliferone, and four unidentified compounds. The volatile oil component is rich in various organosulfide compounds, such as 2-butyl-propenyl-disulfide, diallyl sulfide, diallyl disulfide (also present in garlic) and dimethyl trisulfide, which is also responsible for the odour of cooked onions. The organosulfides are primarily responsible for the odour and flavour of asafoetida.
Botanical sources
Many Ferula species are utilised as the sources of asafoetida. Most of them are characterised by abundant sulphur-containing compounds in the essential oil.
Ferula foetida is the source of asafoetida in Eastern Iran, western Afghanistan, western Pakistan and Central Asia (Karakum Desert, Kyzylkum Desert). It is one of the most widely distributed asafoetida-producing species and often mistaken for F. assa-foetida. It has sulphur-containing compounds in the essential oil.
Ferula assa-foetida is endemic to Southern Iran and is the source of asafoetida there. It has sulphur-containing compounds in the essential oil. Although it is often considered the main source of asafoetida on the international market, this notion is attributable to the fact that several Ferula species acting as the major sources are often misidentified as F. assa-foetida. In fact, the production of asafoetida from F. assa-foetida is confined to its native range, namely Southern Iran, outside which the sources of asafoetida are other species.
Ferula pseudalliacea and Ferula rubricaulis are endemic to western and southwestern Iran. They are sometimes considered conspecific with F. assa-foetida.
Ferula lutensis and Ferula alliacea are the sources of asafoetida in Eastern Iran. They have sulphur-containing compounds in the essential oil.
Ferula latisecta is the source of asafoetida in Eastern Iran and southern Turkmenistan. It has sulphur-containing compounds in the essential oil.
Ferula sinkiangensis and Ferula fukanensis are endemic to Xinjiang, China. They are the sources of asafoetida in China. They have sulphur-containing compounds in the essential oil.
Ferula narthex is native to Afghanistan, northern Pakistan and Kashmir. Although it is often listed as the source of asafoetida, one report states that it lacks sulphur-containing compounds in the essential oil.
Uses
Cooking
This spice is used as a digestive aid, in food as a condiment, and in pickling. It plays a critical flavouring role in Indian vegetarian cuisine by acting as a savory enhancer. Used along with turmeric, it is a standard component of lentil curries, such as dal, chickpea curries, and vegetable dishes, especially those based on potato and cauliflower. Asafoetida is quickly heated in hot oil before being sprinkled on the food. It is sometimes used to harmonise sweet, sour, salty, and spicy components in food. The spice is added to the food during tempering.
In its pure form, it is sold in the form of chunks of resin, small quantities of which are scraped off for use. The odour of the pure resin is so strong that the pungent smell will contaminate other spices stored nearby if it is not stored in an airtight container.
When adapting recipes for those with garlic allergy or intolerance, asafoetida can be used as a substitute.
Cultivation and manufacture
The resin-like gum comes from the dried sap extracted from the stem and roots, and is used as a spice. The resin is greyish-white when fresh, but dries to a dark amber colour. The asafoetida resin is difficult to grate and is traditionally crushed between stones or with a hammer. Today, the most commonly available form is compounded asafoetida, a fine powder containing 30% asafoetida resin, along with rice flour or maida (white wheat flour) and gum arabic.
Ferula assa-foetida is a monoecious, herbaceous, perennial plant of the family Apiaceae. It grows to high, with a circular mass of leaves. Stem leaves have wide sheathing petioles. Flowering stems are high and thick and hollow, with a number of schizogenous ducts in the cortex containing the resinous gum. Flowers are pale greenish yellow produced in large compound umbels. Fruits are oval, flat, thin, reddish brown and have a milky juice. Roots are thick, massive, and pulpy. They yield a resin similar to that of the stems. All parts of the plant have the distinctive fetid smell.
History
Asafoetida was familiar in the early Mediterranean, having come by land across Iran. It was brought to Europe by an expedition of Alexander the Great, who, after returning from a trip to northeastern ancient Persia, thought that he had found a plant almost identical to the famed silphium of Cyrene in North Africa—though less tasty. Dioscorides, in the first century, wrote, "the Cyrenaic kind, even if one just tastes it, at once arouses a humour throughout the body and has a very healthy aroma, so that it is not noticed on the breath, or only a little; but the Median [Iranian] is weaker in power and has a nastier smell." Nevertheless, it could be substituted for silphium in cooking, which was fortunate, because a few decades after Dioscorides' time, the true silphium of Cyrene became extinct, and asafoetida became more popular amongst physicians, as well as cooks.
Asafoetida is also mentioned numerous times in Jewish literature, such as the Mishnah. Maimonides also writes in the Mishneh Torah, "In the rainy season, one should eat warm food with much spice, but a limited amount of mustard and asafoetida."
While it is generally forgotten now in Europe, it is widely used in India. Asafoetida is mentioned in the Bhagavata Purana (7:5:23-24), which states that one must not have eaten hing before worshipping the deity. Asafoetida is eaten by Brahmins and Jains. Devotees of the Hare Krishna movement also use hing in their food, as they are not allowed to consume onions or garlic. Their food has to be presented to Lord Krishna for sanctification (to become Prasadam) before consumption and onions and garlic cannot be offered to Krishna.
Asafoetida was described by a number of Arab and Islamic scientists and pharmacists. Avicenna discussed the effects of asafoetida on digestion. Ibn al-Baitar and Fakhr al-Din al-Razi described some positive medicinal effects on the respiratory system.
After the fall of Rome and until the 16th century, asafoetida was rare in Europe, and if ever encountered, it was viewed as a medicine. "If used in cookery, it would ruin every dish because of its dreadful smell", asserted Garcia de Orta's European guest. "Nonsense", Garcia replied, "nothing is more widely used in every part of India, both in medicine and in cookery."
During the Italian Renaissance, asafoetida was used as part of the exorcism ritual.
See also
Ammoniacum
Chaat masala
Durian, a fruit with a pungent odour many find disagreeable
Muskroot
South Asian pickle
Turmeric
References
External links
Saudi Aramco World article on the history of asafoetida
Antiflatulents
Edible Apiaceae
Ferula
Indian spices
Medicinal plants of Asia
Resins
Spices | Asafoetida | [
"Physics"
] | 1,943 | [
"Amorphous solids",
"Unsolved problems in physics",
"Resins"
] |
193,735 | https://en.wikipedia.org/wiki/Poisson%27s%20equation | Poisson's equation is an elliptic partial differential equation of broad utility in theoretical physics. For example, the solution to Poisson's equation is the potential field caused by a given electric charge or mass density distribution; with the potential field known, one can then calculate the corresponding electrostatic or gravitational (force) field. It is a generalization of Laplace's equation, which is also frequently seen in physics. The equation is named after French mathematician and physicist Siméon Denis Poisson who published it in 1823.
Statement of the equation
Poisson's equation is
\[ \Delta \varphi = f, \]
where Δ is the Laplace operator, and f and φ are real or complex-valued functions on a manifold. Usually, f is given, and φ is sought. When the manifold is Euclidean space, the Laplace operator is often denoted as ∇², and so Poisson's equation is frequently written as
\[ \nabla^2 \varphi = f. \]
In three-dimensional Cartesian coordinates, it takes the form
\[ \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) \varphi(x,y,z) = f(x,y,z). \]
When f = 0 identically, we obtain Laplace's equation.
Poisson's equation may be solved using a Green's function:
\[ \varphi(\mathbf{r}) = -\iiint \frac{f(\mathbf{r}')}{4\pi\,|\mathbf{r}-\mathbf{r}'|}\, \mathrm{d}^3 r', \]
where the integral is over all of space. A general exposition of the Green's function for Poisson's equation is given in the article on the screened Poisson equation. There are various methods for numerical solution, such as the relaxation method, an iterative algorithm.
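The relaxation method mentioned above is easy to sketch. The following is a minimal illustration, not tied to any particular application in this article: it applies Jacobi iteration to ∇²φ = f on a square grid with zero boundary values, and the grid size, source term and iteration count are arbitrary choices for demonstration.

```python
import numpy as np

def solve_poisson_jacobi(f, h, n_iter=5000):
    """Approximate a solution of laplacian(phi) = f on a square grid
    with phi = 0 on the boundary, using Jacobi relaxation.
    f : 2-D array of source values, h : grid spacing."""
    phi = np.zeros_like(f)
    for _ in range(n_iter):
        # Average of the four neighbours, corrected by the source term
        phi[1:-1, 1:-1] = 0.25 * (
            phi[2:, 1:-1] + phi[:-2, 1:-1] +
            phi[1:-1, 2:] + phi[1:-1, :-2] -
            h**2 * f[1:-1, 1:-1]
        )
    return phi

# Example: point-like source in the middle of a 65 x 65 grid (demo values only)
n, h = 65, 1.0 / 64
f = np.zeros((n, n))
f[n // 2, n // 2] = 1.0 / h**2   # crude delta-function approximation
phi = solve_poisson_jacobi(f, h)
print(phi[n // 2, n // 2])
```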
Applications in physics and engineering
Newtonian gravity
In the case of a gravitational field g due to an attracting massive object of density ρ, Gauss's law for gravity in differential form can be used to obtain the corresponding Poisson equation for gravity. Gauss's law for gravity is
\[ \nabla \cdot \mathbf{g} = -4\pi G \rho. \]
Since the gravitational field is conservative (and irrotational), it can be expressed in terms of a scalar potential ϕ:
\[ \mathbf{g} = -\nabla \phi. \]
Substituting this into Gauss's law,
\[ \nabla \cdot (-\nabla \phi) = -4\pi G \rho, \]
yields Poisson's equation for gravity:
\[ \nabla^2 \phi = 4\pi G \rho. \]
If the mass density is zero, Poisson's equation reduces to Laplace's equation. The corresponding Green's function can be used to calculate the potential at distance r from a central point mass m (i.e., the fundamental solution). In three dimensions the potential is
\[ \phi(r) = -\frac{G m}{r}, \]
which is equivalent to Newton's law of universal gravitation.
Electrostatics
Many problems in electrostatics are governed by the Poisson equation, which relates the electric potential φ to the free charge density ρf, such as those found in conductors.
The mathematical details of Poisson's equation, commonly expressed in SI units (as opposed to Gaussian units), describe how the distribution of free charges generates the electrostatic potential in a given region.
Starting with Gauss's law for electricity (also one of Maxwell's equations) in differential form, one has
\[ \nabla \cdot \mathbf{D} = \rho_f, \]
where ∇⋅ is the divergence operator, D is the electric displacement field, and ρf is the free-charge density (describing charges brought from outside).
Assuming the medium is linear, isotropic, and homogeneous (see polarization density), we have the constitutive equation
\[ \mathbf{D} = \varepsilon \mathbf{E}, \]
where ε is the permittivity of the medium, and E is the electric field.
Substituting this into Gauss's law and assuming that ε is spatially constant in the region of interest yields
\[ \nabla \cdot \mathbf{E} = \frac{\rho_f}{\varepsilon}. \]
In electrostatics, we assume that there is no magnetic field (the argument that follows also holds in the presence of a constant magnetic field).
Then, we have that
\[ \nabla \times \mathbf{E} = 0, \]
where ∇× is the curl operator. This equation means that we can write the electric field as the gradient of a scalar function φ (called the electric potential), since the curl of any gradient is zero. Thus we can write
\[ \mathbf{E} = -\nabla \varphi, \]
where the minus sign is introduced so that φ is identified as the electric potential energy per unit charge.
The derivation of Poisson's equation under these circumstances is straightforward. Substituting the potential gradient for the electric field,
\[ \nabla \cdot \mathbf{E} = \nabla \cdot (-\nabla \varphi) = -\nabla^2 \varphi, \]
directly produces Poisson's equation for electrostatics, which is
\[ \nabla^2 \varphi = -\frac{\rho_f}{\varepsilon}. \]
Specifying the Poisson's equation for the potential requires knowing the charge density distribution. If the charge density is zero, then Laplace's equation results. If the charge density follows a Boltzmann distribution, then the Poisson–Boltzmann equation results. The Poisson–Boltzmann equation plays a role in the development of the Debye–Hückel theory of dilute electrolyte solutions.
Using a Green's function, the potential at distance r from a central point charge q (i.e., the fundamental solution) is
\[ \varphi(r) = \frac{q}{4\pi \varepsilon r}, \]
which is Coulomb's law of electrostatics. (For historical reasons, and unlike gravity's model above, the factor of 4π appears here and not in Gauss's law.)
The above discussion assumes that the magnetic field is not varying in time. The same Poisson equation arises even if it does vary in time, as long as the Coulomb gauge is used. In this more general class of cases, computing is no longer sufficient to calculate E, since E also depends on the magnetic vector potential A, which must be independently computed. See Maxwell's equation in potential formulation for more on and A in Maxwell's equations and how an appropriate Poisson's equation is obtained in this case.
Potential of a Gaussian charge density
If there is a static spherically symmetric Gaussian charge density
\[ \rho(r) = \frac{Q}{\sigma^3 (2\pi)^{3/2}}\, e^{-r^2 / (2\sigma^2)}, \]
where Q is the total charge, then the solution of Poisson's equation
\[ \nabla^2 \varphi = -\frac{\rho}{\varepsilon} \]
is given by
\[ \varphi(r) = \frac{1}{4\pi\varepsilon} \frac{Q}{r}\, \operatorname{erf}\!\left( \frac{r}{\sqrt{2}\,\sigma} \right), \]
where erf is the error function. This solution can be checked explicitly by evaluating ∇²φ.
Note that for r much greater than σ, the error function approaches unity, and the potential approaches the point-charge potential,
\[ \varphi(r) \approx \frac{1}{4\pi\varepsilon} \frac{Q}{r}, \]
as one would expect. Furthermore, the error function approaches 1 extremely quickly as its argument increases; in practice the relative error falls below one part in a thousand once r exceeds a few multiples of σ.
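As a quick numerical check of this statement, the ratio of the Gaussian-charge potential to the point-charge potential at the same radius is erf(r/(√2 σ)) under the solution reconstructed above; the short script below (an illustrative sketch, not from the original article) prints how fast the difference from unity falls off.

```python
import math

def potential_ratio(r_over_sigma):
    """Ratio of the Gaussian-charge potential to the point-charge potential
    at radius r, assuming phi(r) is proportional to erf(r / (sqrt(2) * sigma)) / r."""
    return math.erf(r_over_sigma / math.sqrt(2.0))

for x in (1, 2, 3, 4):
    print(f"r = {x} sigma: relative difference = {1.0 - potential_ratio(x):.2e}")
```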
Surface reconstruction
Surface reconstruction is an inverse problem. The goal is to digitally reconstruct a smooth surface based on a large number of points pi (a point cloud) where each point also carries an estimate of the local surface normal ni. Poisson's equation can be utilized to solve this problem with a technique called Poisson surface reconstruction.
The goal of this technique is to reconstruct an implicit function f whose value is zero at the points pi and whose gradient at the points pi equals the normal vectors ni. The set of (pi, ni) is thus modeled as a continuous vector field V. The implicit function f is found by integrating the vector field V. Since not every vector field is the gradient of a function, the problem may or may not have a solution: the necessary and sufficient condition for a smooth vector field V to be the gradient of a function f is that the curl of V must be identically zero. In case this condition is difficult to impose, it is still possible to perform a least-squares fit to minimize the difference between V and the gradient of f.
In order to effectively apply Poisson's equation to the problem of surface reconstruction, it is necessary to find a good discretization of the vector field V. The basic approach is to bound the data with a finite-difference grid. For a function valued at the nodes of such a grid, its gradient can be represented as valued on staggered grids, i.e. on grids whose nodes lie in between the nodes of the original grid. It is convenient to define three staggered grids, each shifted in one and only one direction corresponding to the components of the normal data. On each staggered grid we perform trilinear interpolation on the set of points. The interpolation weights are then used to distribute the magnitude of the associated component of ni onto the nodes of the particular staggered grid cell containing pi. Kazhdan and coauthors give a more accurate method of discretization using an adaptive finite-difference grid, i.e. the cells of the grid are smaller (the grid is more finely divided) where there are more data points. They suggest implementing this technique with an adaptive octree.
Fluid dynamics
For the incompressible Navier–Stokes equations, given by
\[ \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} = -\frac{1}{\rho}\nabla p + \nu\, \nabla^2 \mathbf{v} + \mathbf{g}, \qquad \nabla \cdot \mathbf{v} = 0, \]
the equation for the pressure field p is an example of a nonlinear Poisson equation:
\[ \nabla^2 p = -\rho\, \nabla \cdot \big( (\mathbf{v} \cdot \nabla)\mathbf{v} \big) = -\rho\, \operatorname{Tr}\big( (\nabla \mathbf{v})(\nabla \mathbf{v}) \big). \]
Notice that the above trace is not sign-definite.
See also
Discrete Poisson equation
Poisson–Boltzmann equation
Helmholtz equation
Uniqueness theorem for Poisson's equation
Weak formulation
Harmonic function
Heat equation
Potential theory
References
Further reading
External links
Poisson Equation at EqWorld: The World of Mathematical Equations
Eponymous equations of physics
Potential theory
Partial differential equations
Electrostatics
Mathematical physics
Electromagnetism | Poisson's equation | [
"Physics",
"Mathematics"
] | 1,705 | [
"Electromagnetism",
"Physical phenomena",
"Functions and mappings",
"Equations of physics",
"Applied mathematics",
"Theoretical physics",
"Eponymous equations of physics",
"Mathematical objects",
"Potential theory",
"Mathematical relations",
"Fundamental interactions",
"Mathematical physics"
] |
194,031 | https://en.wikipedia.org/wiki/Nuclear%20fuel%20cycle | The nuclear fuel cycle, also called nuclear fuel chain, is the progression of nuclear fuel through a series of differing stages. It consists of steps in the front end, which are the preparation of the fuel, steps in the service period in which the fuel is used during reactor operation, and steps in the back end, which are necessary to safely manage, contain, and either reprocess or dispose of spent nuclear fuel. If spent fuel is not reprocessed, the fuel cycle is referred to as an open fuel cycle (or a once-through fuel cycle); if the spent fuel is reprocessed, it is referred to as a closed fuel cycle.
Basic concepts
Nuclear power relies on fissionable material that can sustain a chain reaction with neutrons. Examples of such materials include uranium and plutonium. Most nuclear reactors use a moderator to lower the kinetic energy of the neutrons and increase the probability that fission will occur. This allows reactors to use material with far lower concentration of fissile isotopes than are needed for nuclear weapons. Graphite and heavy water are the most effective moderators, because they slow the neutrons through collisions without absorbing them. Reactors using heavy water or graphite as the moderator can operate using natural uranium.
A light water reactor (LWR) uses water in the form that occurs in nature, and requires fuel enriched to higher concentrations of fissile isotopes. Typically, LWRs use uranium enriched to 3–5% U-235, the only fissile isotope that is found in significant quantity in nature. One alternative to this low-enriched uranium (LEU) fuel is mixed oxide (MOX) fuel produced by blending plutonium with natural or depleted uranium, and these fuels provide an avenue to utilize surplus weapons-grade plutonium. Another type of MOX fuel involves mixing LEU with thorium, which generates the fissile isotope U-233. Both plutonium and U-233 are produced from the absorption of neutrons by irradiating fertile materials in a reactor, in particular the common uranium isotope U-238 and thorium, respectively, and can be separated from spent uranium and thorium fuels in reprocessing plants.
Some reactors do not use moderators to slow the neutrons. Like nuclear weapons, which also use unmoderated or "fast" neutrons, these fast-neutron reactors require much higher concentrations of fissile isotopes in order to sustain a chain reaction. They are also capable of breeding fissile isotopes from fertile materials; a breeder reactor is one that generates more fissile material in this way than it consumes.
During the nuclear reaction inside a reactor, the fissile isotopes in nuclear fuel are consumed, producing more and more fission products, most of which are considered radioactive waste. The buildup of fission products and consumption of fissile isotopes eventually stop the nuclear reaction, causing the fuel to become a spent nuclear fuel. When 3% enriched LEU fuel is used, the spent fuel typically consists of roughly 1% U-235, 95% U-238, 1% plutonium and 3% fission products. Spent fuel and other high-level radioactive waste is extremely hazardous, although nuclear reactors produce orders of magnitude smaller volumes of waste compared to other power plants because of the high energy density of nuclear fuel. Safe management of these byproducts of nuclear power, including their storage and disposal, is a difficult problem for any country using nuclear power.
Front end
Exploration
A deposit of uranium, such as uraninite, discovered by geophysical techniques, is evaluated and sampled to determine the amounts of uranium materials that are extractable at specified costs from the deposit. Uranium reserves are the amounts of ore that are estimated to be recoverable at stated costs.
Naturally occurring uranium consists primarily of two isotopes U-238 and U-235, with 99.28% of the metal being U-238 while 0.71% is U-235, and the remaining 0.01% is mostly U-234. The number in such names refers to the isotope's atomic mass number, which is the number of protons plus the number of neutrons in the atomic nucleus.
The atomic nucleus of U-235 will nearly always fission when struck by a free neutron, and the isotope is therefore said to be a "fissile" isotope. The nucleus of a U-238 atom on the other hand, rather than undergoing fission when struck by a free neutron, will nearly always absorb the neutron and yield an atom of the isotope U-239. This isotope then undergoes natural radioactive decay to yield Pu-239, which, like U-235, is a fissile isotope. The atoms of U-238 are said to be fertile, because, through neutron irradiation in the core, some eventually yield atoms of fissile Pu-239.
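Written out explicitly, the breeding sequence summarized above passes through a short-lived intermediate, neptunium-239; the commonly cited chain (half-lives approximate) is

\[ ^{238}\mathrm{U} + n \;\longrightarrow\; ^{239}\mathrm{U} \;\xrightarrow{\beta^-,\ \approx 23\ \text{min}}\; ^{239}\mathrm{Np} \;\xrightarrow{\beta^-,\ \approx 2.4\ \text{d}}\; ^{239}\mathrm{Pu}. \]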
Mining
Uranium ore can be extracted through conventional mining in open pit and underground methods similar to those used for mining other metals. In-situ leach mining methods also are used to mine uranium in the United States. In this technology, uranium is leached from the in-place ore through an array of regularly spaced wells and is then recovered from the leach solution at a surface plant. Uranium ores in the United States typically range from about 0.05 to 0.3% uranium oxide (U3O8). Some uranium deposits developed in other countries are of higher grade and are also larger than deposits mined in the United States. Uranium is also present in very low-grade amounts (50 to 200 parts per million) in some domestic phosphate-bearing deposits of marine origin. Because very large quantities of phosphate-bearing rock are mined for the production of wet-process phosphoric acid used in high analysis fertilizers and other phosphate chemicals, at some phosphate processing plants the uranium, although present in very low concentrations, can be economically recovered from the process stream.
Milling
When uranium is mined out of the ground it does not contain enough pure uranium per pound to be used directly. The process of milling is how the cycle extracts the usable uranium from the rest of the material, also known as tailings. To begin the milling process the ore is either ground into fine dust with water or crushed into dust without water. Once the material has been physically treated, it is chemically treated by being doused in acid. Acids used include hydrochloric and nitric acids, but the most common is sulfuric acid. Alternatively, if the ore is particularly resistant to acids, an alkali is used instead. After chemical treatment the uranium particles are dissolved into the solution used to treat them. This solution is then filtered until the remaining solids are separated from the liquid that contains the uranium. The undesirable solids are disposed of as tailings. Once the tailings have been removed, the uranium is extracted from the rest of the liquid solution in one of two ways: solvent extraction or ion exchange. In the first of these a solvent is mixed into the solution; the dissolved uranium binds to the solvent and floats to the top while the other dissolved materials remain in the mixture. During ion exchange a different material is mixed into the solution and the uranium binds to it. Once filtered, the material is panned out and washed off. The solution is put through this filtration repeatedly to pull as much usable uranium out as possible. The filtered uranium is then dried into U3O8. The milling process commonly yields dry powder-form material consisting of natural uranium, "yellowcake", which is sold on the uranium market as U3O8. Note that the material is not always yellow.
Uranium conversion
Usually milled uranium oxide, U3O8 (triuranium octoxide) is then processed into either of two substances depending on the intended use.
For use in most reactors, U3O8 is usually converted to uranium hexafluoride (UF6), the input stock for most commercial uranium enrichment facilities. A solid at room temperature, uranium hexafluoride becomes gaseous at 57 °C (134 °F). At this stage of the cycle, the uranium hexafluoride conversion product still has the natural isotopic mix (99.28% of U-238 plus 0.71% of U-235).
There are two ways to convert uranium oxide into its usable forms, uranium dioxide and uranium hexafluoride: the wet option and the dry option. In the wet option the yellowcake is dissolved in nitric acid and then extracted using tributyl phosphate. The resulting mixture is then dried and washed, giving uranium trioxide. The uranium trioxide is then reduced with hydrogen, yielding uranium dioxide and water. After that the uranium dioxide is reacted with four parts hydrogen fluoride, producing more water and uranium tetrafluoride. Finally, the end product, uranium hexafluoride, is created by reacting the uranium tetrafluoride with fluorine.
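A compressed sketch of the wet-route chemistry described above, showing overall stoichiometry only (the industrial process involves additional purification and intermediate steps not shown here):

\[ \mathrm{UO_3} + \mathrm{H_2} \rightarrow \mathrm{UO_2} + \mathrm{H_2O}; \qquad \mathrm{UO_2} + 4\,\mathrm{HF} \rightarrow \mathrm{UF_4} + 2\,\mathrm{H_2O}; \qquad \mathrm{UF_4} + \mathrm{F_2} \rightarrow \mathrm{UF_6}. \]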
For use in reactors such as CANDU which do not require enriched fuel, the U3O8 may instead be converted to uranium dioxide (UO2) which can be included in ceramic fuel elements.
In the current nuclear industry, the volume of material converted directly to UO2 is typically quite small compared to that converted to UF6.
Enrichment
The natural concentration (0.71%) of the fissile isotope U-235 is less than that required to sustain a nuclear chain reaction in light water reactor cores. Accordingly, UF6 produced from natural uranium sources must be enriched to a higher concentration of the fissionable isotope before being used as nuclear fuel in such reactors. The level of enrichment for a particular nuclear fuel order is specified by the customer according to the application they will use it for: light-water reactor fuel normally is enriched to 3.5% U-235, but uranium enriched to lower concentrations is also required. Enrichment is accomplished using any of several methods of isotope separation. Gaseous diffusion and gas centrifuge are the commonly used uranium enrichment methods, but new enrichment technologies are currently being developed.
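The material balance behind enrichment can be made concrete with the standard separative-work calculation. The sketch below uses the usual value function and mass balance; the tails assay of 0.25% is an assumed operating choice, not a figure from this article.

```python
import math

def value_fn(x):
    """Value function used in separative work (SWU) calculations."""
    return (2.0 * x - 1.0) * math.log(x / (1.0 - x))

def enrichment_balance(product_kg, x_p, x_f=0.00711, x_w=0.0025):
    """Feed, tails and separative work needed to produce `product_kg` of uranium
    at product assay x_p from feed assay x_f (natural uranium, 0.711% U-235)
    with an assumed tails assay x_w. Assays are weight fractions of U-235."""
    feed_kg = product_kg * (x_p - x_w) / (x_f - x_w)
    tails_kg = feed_kg - product_kg
    swu = (product_kg * value_fn(x_p)
           + tails_kg * value_fn(x_w)
           - feed_kg * value_fn(x_f))
    return feed_kg, tails_kg, swu

# Example: 1000 kg of 3.5%-enriched uranium, the typical light-water-reactor assay quoted above
feed, tails, swu = enrichment_balance(1000.0, 0.035)
print(f"feed {feed:.0f} kg, tails {tails:.0f} kg, separative work {swu:.0f} SWU")
```

With these assumptions roughly seven kilograms of natural uranium feed are needed per kilogram of product, most of it ending up as depleted tails, which is consistent with the large depleted uranium stockpiles described next.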
The bulk (96%) of the byproduct from enrichment is depleted uranium (DU), which can be used for armor, kinetic energy penetrators, radiation shielding and ballast. As of 2008 there are vast quantities of depleted uranium in storage. The United States Department of Energy alone has 470,000 tonnes. About 95% of depleted uranium is stored as uranium hexafluoride (UF6).
Fabrication
For use as nuclear fuel, enriched uranium hexafluoride is converted into uranium dioxide (UO2) powder that is then processed into pellet form. The pellets are then fired in a high temperature sintering furnace to create hard, ceramic pellets of enriched uranium. The cylindrical pellets then undergo a grinding process to achieve a uniform pellet size. The pellets are stacked, according to each nuclear reactor core's design specifications, into tubes of corrosion-resistant metal alloy. The tubes are sealed to contain the fuel pellets: these tubes are called fuel rods. The finished fuel rods are grouped in special fuel assemblies that are then used to build up the nuclear fuel core of a power reactor.
The alloy used for the tubes depends on the design of the reactor. Stainless steel was used in the past, but most reactors now use a zirconium alloy. For the most common types of reactors, boiling water reactors (BWR) and pressurized water reactors (PWR), the tubes are assembled into bundles with the tubes spaced precise distances apart. These bundles are then given a unique identification number, which enables them to be tracked from manufacture through use and into disposal.
Service period
Transport of radioactive materials
Transport is an integral part of the nuclear fuel cycle. There are nuclear power reactors in operation in several countries but uranium mining is viable in only a few areas. Also, in the course of over forty years of operation by the nuclear industry, a number of specialized facilities have been developed in various locations around the world to provide fuel cycle services and there is a need to transport nuclear materials to and from these facilities. Most transports of nuclear fuel material occur between different stages of the cycle, but occasionally a material may be transported between similar facilities. With some exceptions, nuclear fuel cycle materials are transported in solid form, the exception being uranium hexafluoride (UF6) which is considered a gas. Most of the material used in nuclear fuel is transported several times during the cycle. Transports are frequently international, and are often over large distances. Nuclear materials are generally transported by specialized transport companies.
Since nuclear materials are radioactive, it is important to ensure that radiation exposure of those involved in the transport of such materials and of the general public along transport routes is limited. Packaging for nuclear materials includes, where appropriate, shielding to reduce potential radiation exposures. In the case of some materials, such as fresh uranium fuel assemblies, the radiation levels are negligible and no shielding is required. Other materials, such as spent fuel and high-level waste, are highly radioactive and require special handling. To limit the risk in transporting highly radioactive materials, containers known as spent nuclear fuel shipping casks are used which are designed to maintain integrity under normal transportation conditions and during hypothetical accident conditions.
While transport casks vary in design, material, size, and purpose, they are typically long tubes made of stainless steel or concrete with the ends sealed shut to prevent leaks. Frequently the casks' shell will have at least one layer of radiation-resistant material, such as lead. The inside of the tube will also vary depending on what is being transported. For example casks that are transporting depleted or unused fuel rods will have sleeves that keep the rods separate, while casks that transport uranium hexafluoride typically have no internal organization. Depending on the purpose and radioactivity of the materials some casks have systems of ventilation, thermal protection, impact protection, and other features more specific to the route and cargo.
In-core fuel management
A nuclear reactor core is composed of a few hundred "assemblies", arranged in a regular array of cells, each cell being formed by a fuel or control rod surrounded, in most designs, by a moderator and coolant, which is water in most reactors.
Because of the fission process that consumes the fuels, the old fuel rods must be replaced periodically with fresh ones (this is called a (replacement) cycle). During a given replacement cycle only some of the assemblies (typically one-third) are replaced since fuel depletion occurs at different rates at different places within the reactor core. Furthermore, for efficiency reasons, it is not a good policy to put the new assemblies exactly at the location of the removed ones. Even bundles of the same age will have different burn-up levels due to their previous positions in the core. Thus the available bundles must be arranged in such a way that the yield is maximized, while safety limitations and operational constraints are satisfied. Consequently, reactor operators are faced with the so-called optimal fuel reloading problem, which consists of optimizing the rearrangement of all the assemblies, the old and fresh ones, while still maximizing the reactivity of the reactor core so as to maximise fuel burn-up and minimise fuel-cycle costs.
This is a discrete optimization problem, and computationally infeasible by current combinatorial methods, due to the huge number of permutations and the complexity of each computation. Many numerical methods have been proposed for solving it and many commercial software packages have been written to support fuel management. This is an ongoing issue in reactor operations as no definitive solution to this problem has been found. Operators use a combination of computational and empirical techniques to manage this problem.
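One family of heuristics applied to this problem is swap-based local search; the toy code below is a purely illustrative sketch of the idea. The scoring matrix is a made-up placeholder, since a real tool would evaluate reactivity, burn-up and power-peaking constraints with a core simulator.

```python
import random

def local_search_reload(scores, n_steps=10_000, seed=0):
    """Toy swap-based local search for arranging fuel assemblies.
    scores[a][p] is an invented figure of merit for putting assembly a
    in core position p; higher totals are taken to be better."""
    rng = random.Random(seed)
    n = len(scores)
    layout = list(range(n))                       # layout[p] = assembly placed at position p
    total = sum(scores[layout[p]][p] for p in range(n))
    for _ in range(n_steps):
        p, q = rng.randrange(n), rng.randrange(n)
        a, b = layout[p], layout[q]
        gain = (scores[b][p] + scores[a][q]) - (scores[a][p] + scores[b][q])
        if gain > 0:                              # greedy: keep only improving swaps
            layout[p], layout[q] = b, a
            total += gain
    return layout, total

rng = random.Random(1)
scores = [[rng.random() for _ in range(8)] for _ in range(8)]  # placeholder data
print(local_search_reload(scores, n_steps=2000))
```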
The study of used fuel
Used nuclear fuel is studied in post-irradiation examination, where used fuel is examined to learn more about the processes that occur in fuel during use, and how these might alter the outcome of an accident. For example, during normal use, the fuel expands due to thermal expansion, which can cause cracking. Most nuclear fuel is uranium dioxide, which is a cubic solid with a structure similar to that of calcium fluoride. In used fuel the solid state structure of most of the solid remains the same as that of pure cubic uranium dioxide. SIMFUEL is the name given to the simulated spent fuel which is made by mixing finely ground metal oxides, grinding as a slurry, spray drying it before heating in hydrogen/argon to 1700 °C. In SIMFUEL, 4.1% of the volume of the solid was in the form of metal nanoparticles which are made of molybdenum, ruthenium, rhodium and palladium. Most of these metal particles are of the ε phase (hexagonal) of Mo-Ru-Rh-Pd alloy, while smaller amounts of the α (cubic) and σ (tetragonal) phases of these metals were found in the SIMFUEL. Also present within the SIMFUEL was a cubic perovskite phase which is a barium strontium zirconate (BaxSr1−xZrO3).
Uranium dioxide is minimally soluble in water, but after oxidation it can be converted to uranium trioxide or another uranium(VI) compound which is much more soluble. Uranium dioxide (UO2) can be oxidised to an oxygen-rich hyperstoichiometric oxide (UO2+x) which can be further oxidised to U4O9, U3O7, U3O8 and UO3·2H2O.
Because used fuel contains alpha emitters (plutonium and the minor actinides), the effect of adding an alpha emitter (238Pu) to uranium dioxide on the leaching rate of the oxide has been investigated. For the crushed oxide, adding 238Pu tended to increase the rate of leaching, but the difference in the leaching rate between 0.1 and 10% 238Pu was very small.
The concentration of carbonate in the water which is in contact with the used fuel has a considerable effect on the rate of corrosion, because uranium(VI) forms soluble anionic carbonate complexes such as [UO2(CO3)2]2− and [UO2(CO3)3]4−. When carbonate ions are absent, and the water is not strongly acidic, the hexavalent uranium compounds which form on oxidation of uranium dioxide often form insoluble hydrated uranium trioxide phases.
Thin films of uranium dioxide can be deposited upon gold surfaces by ‘sputtering’ using uranium metal and an argon/oxygen gas mixture. These gold surfaces modified with uranium dioxide have been used for both cyclic voltammetry and AC impedance experiments, and these offer an insight into the likely leaching behaviour of uranium dioxide.
Fuel cladding interactions
The study of the nuclear fuel cycle includes the study of the behaviour of nuclear materials both under normal conditions and under accident conditions. For example, there has been much work on how uranium dioxide based fuel interacts with the zirconium alloy tubing used to cover it. During use, the fuel swells due to thermal expansion and then starts to react with the surface of the zirconium alloy, forming a new layer which contains both fuel and zirconium (from the cladding). Then, on the fuel side of this mixed layer, there is a layer of fuel which has a higher caesium to uranium ratio than most of the fuel. This is because xenon isotopes are formed as fission products that diffuse out of the lattice of the fuel into voids such as the narrow gap between the fuel and the cladding. After diffusing into these voids, the xenon decays to caesium isotopes. Because of the thermal gradient which exists in the fuel during use, the volatile fission products tend to be driven from the centre of the pellet to the rim area. Comparing the temperatures of uranium metal, uranium nitride and uranium dioxide as a function of distance from the centre of a 20 mm diameter pellet with a rim temperature of 200 °C, the uranium dioxide (because of its poor thermal conductivity) will overheat at the centre of the pellet, while the other, more thermally conductive forms of uranium remain below their melting points.
Normal and abnormal conditions
The nuclear chemistry associated with the nuclear fuel cycle can be divided into two main areas; one area is concerned with operation under the intended conditions while the other area is concerned with maloperation conditions where some alteration from the normal operating conditions has occurred or (more rarely) an accident is occurring.
The releases of radioactivity from normal operations are the small planned releases from uranium ore processing, enrichment, power reactors, reprocessing plants and waste stores. These can be in different chemical/physical form from releases which could occur under accident conditions. In addition the isotope signature of a hypothetical accident may be very different from that of a planned normal operational discharge of radioactivity to the environment.
Just because a radioisotope is released it does not mean it will enter a human and then cause harm. For instance, the migration of radioactivity can be altered by the binding of the radioisotope to the surfaces of soil particles. For example, caesium (Cs) binds tightly to clay minerals such as illite and montmorillonite, hence it remains in the upper layers of soil where it can be accessed by plants with shallow roots (such as grass). Hence grass and mushrooms can carry a considerable amount of 137Cs which can be transferred to humans through the food chain. But 137Cs is not able to migrate quickly through most soils and thus is unlikely to contaminate well water. Colloids of soil minerals can migrate through soil so simple binding of a metal to the surfaces of soil particles does not completely fix the metal.
According to Jiří Hála's textbook, the distribution coefficient Kd is the ratio of the soil's radioactivity (Bq g−1) to that of the soil water (Bq ml−1). If the radioisotope is tightly bound to the minerals in the soil, then less radioactivity can be absorbed by crops and grass growing on the soil; a small numerical sketch of this partitioning follows the list of values below.
Cs-137 Kd = 1000
Pu-239 Kd = 10000 to 100000
Sr-90 Kd = 80 to 150
I-131 Kd = 0.007 to 50
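As a rough numerical sketch of what these Kd values imply, the fraction of activity remaining dissolved in soil water at equilibrium can be estimated from the definition above; the soil-to-water ratio used below is an assumed illustrative figure, and the Sr-90 and I-131 values are picked from within the ranges listed.

```python
def fraction_in_soil_water(kd, soil_g_per_ml_water):
    """Fraction of total activity dissolved in soil water at equilibrium,
    for kd = (Bq per g of soil) / (Bq per mL of soil water).
    soil_g_per_ml_water is the assumed soil-to-water ratio."""
    return 1.0 / (1.0 + kd * soil_g_per_ml_water)

# Assumed ratio of 5 g of soil per mL of pore water (illustrative only)
for name, kd in [("Cs-137", 1000.0), ("Sr-90", 100.0), ("I-131", 1.0)]:
    print(f"{name}: {fraction_in_soil_water(kd, 5.0):.3%} of activity in solution")
```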
In dairy farming, one of the best countermeasures against 137Cs is to mix up the soil by deeply ploughing the soil. This has the effect of putting the 137Cs out of reach of the shallow roots of the grass, hence the level of radioactivity in the grass will be lowered. Also after a nuclear war or serious accident, the removal of top few cm of soil and its burial in a shallow trench will reduce the long-term gamma dose to humans due to 137Cs, as the gamma photons will be attenuated by their passage through the soil.
Even after the radioactive element arrives at the roots of the plant, the metal may be rejected by the biochemistry of the plant. The details of the uptake of 90Sr and 137Cs into sunflowers grown under hydroponic conditions has been reported. The caesium was found in the leaf veins, in the stem and in the apical leaves. It was found that 12% of the caesium entered the plant, and 20% of the strontium. This paper also reports details of the effect of potassium, ammonium and calcium ions on the uptake of the radioisotopes.
In livestock farming, an important countermeasure against 137Cs is to feed animals a small amount of Prussian blue. This iron potassium cyanide compound acts as an ion-exchanger. The cyanide is so tightly bonded to the iron that it is safe for a human to eat several grams of Prussian blue per day. The Prussian blue reduces the biological half-life (different from the nuclear half-life) of the caesium. The physical or nuclear half-life of 137Cs is about 30 years. This is a constant which cannot be changed, but the biological half-life is not a constant. It will change according to the nature and habits of the organism for which it is expressed. Caesium in humans normally has a biological half-life of between one and four months. An added advantage of the Prussian blue is that the caesium which is stripped from the animal in the droppings is in a form which is not available to plants. Hence it prevents the caesium from being recycled. The form of Prussian blue required for the treatment of humans or animals is a special grade. Attempts to use the pigment grade used in paints have not been successful. A source of data on caesium in Chernobyl fallout is the Ukrainian Research Institute for Agricultural Radiology.
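The physical and biological half-lives mentioned above combine into an effective half-life through the usual reciprocal relation; a quick worked example (taking 3 months as an assumed value within the one-to-four-month range quoted) is

\[ \frac{1}{T_{\text{eff}}} = \frac{1}{T_{\text{phys}}} + \frac{1}{T_{\text{bio}}} \approx \frac{1}{360\ \text{months}} + \frac{1}{3\ \text{months}} \;\Rightarrow\; T_{\text{eff}} \approx 3.0\ \text{months}, \]

so for caesium in the body the effective half-life is set almost entirely by the biological term, which is exactly what countermeasures such as Prussian blue exploit.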
Release of radioactivity from fuel during normal use and accidents
The IAEA assume that under normal operation the coolant of a water-cooled reactor will contain some radioactivity but during a reactor accident the coolant radioactivity level may rise. The IAEA states that under a series of different conditions different amounts of the core inventory can be released from the fuel, the four conditions the IAEA consider are normal operation, a spike in coolant activity due to a sudden shutdown/loss of pressure (core remains covered with water), a cladding failure resulting in the release of the activity in the fuel/cladding gap (this could be due to the fuel being uncovered by the loss of water for 15–30 minutes where the cladding reached a temperature of 650–1250 °C) or a melting of the core (the fuel will have to be uncovered for at least 30 minutes, and the cladding would reach a temperature in excess of 1650 °C).
Based upon the assumption that a Pressurized water reactor contains 300 tons of water, and that the activity of the fuel of a 1 GWe reactor is as the IAEA predicts, then the coolant activity after an accident such as the Three Mile Island accident (where a core is uncovered and then recovered with water) can be predicted.
Releases from reprocessing under normal conditions
It is normal to allow used fuel to stand after the irradiation to allow the short-lived and radiotoxic iodine isotopes to decay away. In one experiment in the US, fresh fuel which had not been allowed to decay was reprocessed (the "Green Run") to investigate the effects of a large iodine release from the reprocessing of short-cooled fuel. It is normal in reprocessing plants to scrub the off gases from the dissolver to prevent the emission of iodine. In addition to the emission of iodine the noble gases and tritium are released from the fuel when it is dissolved. It has been proposed that by voloxidation (heating the fuel in a furnace under oxidizing conditions) the majority of the tritium can be recovered from the fuel.
A paper was written on the radioactivity in oysters found in the Irish Sea. These were found by gamma spectroscopy to contain 141Ce, 144Ce, 103Ru, 106Ru, 137Cs, 95Zr and 95Nb. Additionally, a zinc activation product (65Zn) was found, which is thought to be due to the corrosion of magnox fuel cladding in spent fuel pools. It is likely that the modern releases of all these isotopes from the Windscale site are smaller.
On-load reactors
Some reactor designs, such as RBMKs or CANDU reactors, can be refueled without being shut down. This is achieved through the use of many small pressure tubes to contain the fuel and coolant, as opposed to one large pressure vessel as in pressurized water reactor (PWR) or boiling water reactor (BWR) designs. Each tube can be individually isolated and refueled by an operator-controlled fueling machine, typically at a rate of up to 8 channels per day out of roughly 400 in CANDU reactors. On-load refueling allows for the optimal fuel reloading problem to be dealt with continuously, leading to more efficient use of fuel. This increase in efficiency is partially offset by the added complexity of having hundreds of pressure tubes and the fueling machines to service them.
Interim storage
After its operating cycle, the reactor is shut down for refueling. The fuel discharged at that time (spent fuel) is stored either at the reactor site (commonly in a spent fuel pool) or potentially in a common facility away from reactor sites. If on-site pool storage capacity is exceeded, it may be desirable to store the now cooled aged fuel in modular dry storage facilities known as Independent Spent Fuel Storage Installations (ISFSI) at the reactor site or at a facility away from the site. The spent fuel rods are usually stored in water or boric acid, which provides both cooling (the spent fuel continues to generate decay heat as a result of residual radioactive decay) and shielding to protect the environment from residual ionizing radiation, although after at least a year of cooling they may be moved to dry cask storage.
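The decay heat that motivates this cooling period falls off in a roughly predictable way. The sketch below uses a Way–Wigner style correlation (a rough engineering approximation; coefficients between about 0.06 and 0.07 appear in the literature), with an assumed two-year irradiation period for illustration.

```python
def decay_heat_fraction(t_s, t_op_s, coeff=0.066):
    """Decay heat as an approximate fraction of pre-shutdown thermal power,
    using P/P0 ~ coeff * (t**-0.2 - (t + t_op)**-0.2), times in seconds.
    This is a rough correlation, not a substitute for summing
    fission-product decay chains."""
    return coeff * (t_s ** -0.2 - (t_s + t_op_s) ** -0.2)

ONE_YEAR = 3.156e7  # seconds
for label, t in [("1 hour", 3.6e3), ("1 day", 8.64e4), ("1 year", ONE_YEAR)]:
    frac = decay_heat_fraction(t, 2 * ONE_YEAR)
    print(f"{label} after shutdown: ~{frac:.2%} of full thermal power")
```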
Transportation
Reprocessing
Spent fuel discharged from reactors contains appreciable quantities of fissile (U-235 and Pu-239), fertile (U-238), and other radioactive materials, including reaction poisons, which is why the fuel had to be removed. These fissile and fertile materials can be chemically separated and recovered from the spent fuel. The recovered uranium and plutonium can, if economic and institutional conditions permit, be recycled for use as nuclear fuel. This is currently not done for civilian spent nuclear fuel in the United States, however it is done in Russia. Russia aims to maximise recycling of fissile materials from used fuel. Hence reprocessing used fuel is a basic practice, with reprocessed uranium being recycled and plutonium used in MOX, at present only for fast reactors.
Mixed oxide, or MOX fuel, is a blend of reprocessed uranium and plutonium and depleted uranium which behaves similarly, although not identically, to the enriched uranium feed for which most nuclear reactors were designed. MOX fuel is an alternative to low-enriched uranium (LEU) fuel used in the light water reactors which predominate nuclear power generation.
Currently, plants in Europe are reprocessing spent fuel from utilities in Europe and Japan. Reprocessing of spent commercial-reactor nuclear fuel is currently not permitted in the United States due to the perceived danger of nuclear proliferation. The Bush Administration's Global Nuclear Energy Partnership proposed that the U.S. form an international partnership to see spent nuclear fuel reprocessed in a way that renders the plutonium in it usable for nuclear fuel but not for nuclear weapons.
Partitioning and transmutation
As an alternative to the disposal of the PUREX raffinate in glass or Synroc matrix, the most radiotoxic elements could be removed through advanced reprocessing. After separation, the minor actinides and some long-lived fission products could be converted to short-lived or stable isotopes by either neutron or photon irradiation. This is called transmutation. Strong and long-term international cooperation, many decades of research, and huge investments remain necessary before partitioning and transmutation (P&T) can reach a mature industrial scale at which its safety and economic feasibility could be demonstrated.
Waste disposal
A current concern in the nuclear power field is the safe disposal and isolation of either spent fuel from reactors or, if the reprocessing option is used, wastes from reprocessing plants. These materials must be isolated from the biosphere until the radioactivity contained in them has diminished to a safe level. In the U.S., under the Nuclear Waste Policy Act of 1982 as amended, the Department of Energy has responsibility for the development of the waste disposal system for spent nuclear fuel and high-level radioactive waste. Current plans call for the ultimate disposal of the wastes in solid form in a licensed deep, stable geologic structure called a deep geological repository. The Department of Energy chose Yucca Mountain as the location for the repository. Its opening has been repeatedly delayed. Since 1999 thousands of nuclear waste shipments have been stored at the Waste Isolation Pilot Plant in New Mexico.
Fast-neutron reactors can fission all actinides, while the thorium fuel cycle produces low levels of transuranics. Unlike LWRs, in principle these fuel cycles could recycle their plutonium and minor actinides and leave only fission products and activation products as waste. The highly radioactive medium-lived fission products Cs-137 and Sr-90 diminish by a factor of 10 each century; while the long-lived fission products have relatively low radioactivity, often compared favorably to that of the original uranium ore.
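The "factor of 10 each century" figure follows directly from the roughly 30-year half-lives of Cs-137 and Sr-90:

\[ \left(\tfrac{1}{2}\right)^{100/30} \approx 0.099 \approx \tfrac{1}{10}. \]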
Horizontal drillhole disposal describes proposals to drill over one kilometer vertically, and two kilometers horizontally in the Earth's crust, for the purpose of disposing of high-level waste forms such as spent nuclear fuel, Caesium-137, or Strontium-90. After the emplacement and the retrievability period, drillholes would be backfilled and sealed. A series of tests of the technology were carried out in November 2018 and then again publicly in January 2019 by a U.S. based private company. The test demonstrated the emplacement of a test-canister in a horizontal drillhole and retrieval of the same canister. There was no actual high-level waste used in this test.
Fuel cycles
Although the most common terminology is fuel cycle, some argue that the term fuel chain is more accurate, because the spent fuel is never fully recycled. Spent fuel includes fission products, which generally must be treated as waste, as well as uranium, plutonium, and other transuranic elements. Where plutonium is recycled, it is normally reused once in light water reactors, although fast reactors could lead to more complete recycling of plutonium.
Once-through nuclear fuel cycle
Not a cycle per se, fuel is used once and then sent to storage without further processing save additional packaging to provide for better isolation from the biosphere. This method is favored by six countries: the United States, Canada, Sweden, Finland, Spain and South Africa. Some countries, notably Finland, Sweden and Canada, have designed repositories to permit future recovery of the material should the need arise, while others plan for permanent sequestration in a geological repository like the Yucca Mountain nuclear waste repository in the United States.
Plutonium cycle
Several countries, including Japan, Switzerland, and previously Spain and Germany, are using or have used the reprocessing services offered by Areva NC and previously THORP. Fission products, minor actinides, activation products, and reprocessed uranium are separated from the reactor-grade plutonium, which can then be fabricated into MOX fuel. Because the proportion of the non-fissile even-mass isotopes of plutonium rises with each pass through the cycle, there are currently no plans to reuse plutonium from used MOX fuel for a third pass in a thermal reactor. If fast reactors become available, they may be able to burn these, or almost any other actinide isotopes.
The use of a medium-scale reprocessing facility on site, combined with pyroprocessing rather than the present-day aqueous reprocessing, is claimed to be able to considerably reduce the nuclear proliferation potential and the possible diversion of fissile material, because the processing facility is in situ. Because plutonium is not separated on its own in the pyroprocessing cycle, but rather all actinides are "electro-won" or "refined" from the spent fuel together, the plutonium passes into the new fuel mixed with gamma- and alpha-emitting actinides, species that "self-protect" it in numerous possible theft scenarios.
Beginning in 2016, Russia has been testing, and is now deploying, REMIX fuel, in which the spent nuclear fuel is put through a process, similar to pyroprocessing, that separates the reactor-grade plutonium and the remaining uranium from the fission products and fuel cladding. This mixed metal is then combined with a small quantity of medium-enriched uranium of approximately 17% U-235 concentration to make a new combined metal-oxide fuel with 1% reactor-grade plutonium and a U-235 concentration of 4%. These fuel rods are suitable for use in standard PWRs because the plutonium content is no higher than that present at the end of cycle in spent nuclear fuel. As of February 2020, Russia was deploying this fuel in some of its fleet of VVER reactors.
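The blend described above can be checked with a simple mass balance. The sketch below assumes, purely for illustration, that the reprocessed uranium-plutonium stream contains roughly 0.8% U-235 and 1.2% plutonium; only the 17% make-up enrichment and the 4% U-235 target come from the description above.

```python
# Hypothetical REMIX-style blend calculation (illustrative numbers only).
u235_spent, pu_spent = 0.008, 0.012   # assumed composition of the reprocessed stream
u235_makeup = 0.17                    # medium-enriched uranium make-up (from the text)
u235_target = 0.04                    # target U-235 content of the blended fuel (from the text)

# Solve x*u235_spent + (1 - x)*u235_makeup = u235_target for the reprocessed fraction x
x = (u235_makeup - u235_target) / (u235_makeup - u235_spent)
pu_blend = x * pu_spent               # plutonium carried into the blended fuel

print(f"reprocessed-stream fraction: {x:.2f}")        # ~0.80
print(f"plutonium content of blend: {pu_blend:.2%}")  # ~1%, consistent with the description
```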
Minor actinides recycling
It has been proposed that in addition to the use of plutonium, the minor actinides could be used in a critical power reactor. Tests are already being conducted in which americium is being used as a fuel.
A number of reactor designs, like the Integral Fast Reactor, have been designed for this rather different fuel cycle. In principle, it should be possible to derive energy from the fission of any actinide nucleus. With a careful reactor design, all the actinides in the fuel can be consumed, leaving only lighter elements with short half-lives. Although this has been done in prototype plants, no such reactor has ever been operated on a large scale.
It so happens that the neutron cross-section of many actinides decreases with increasing neutron energy, but the ratio of fission to simple activation (neutron capture) changes in favour of fission as the neutron energy increases. Thus with a sufficiently high neutron energy, it should be possible to destroy even curium without the generation of the transcurium metals. This could be very desirable as it would make it significantly easier to reprocess and handle the actinide fuel.
One promising alternative from this perspective is an accelerator-driven sub-critical reactor / subcritical reactor. Here a beam of either protons (United States and European designs) or electrons (Japanese design) is directed into a target. In the case of protons, very fast neutrons will spall off the target, while in the case of the electrons, very high energy photons will be generated. These high-energy neutrons and photons will then be able to cause the fission of the heavy actinides.
Such reactors compare very well to other neutron sources in terms of neutron energy:
Thermal 0 to 100 eV
Epithermal 100 eV to 100 keV
Fast (from nuclear fission) 100 keV to 3 MeV
DD fusion 2.5 MeV
DT fusion 14 MeV
Accelerator driven core 200 MeV (lead driven by 1.6 GeV protons)
Muon-catalyzed fusion 7 GeV.
As an alternative, the curium-244, with a half-life of 18 years, could be left to decay into plutonium-240 before being used in fuel in a fast reactor.
Fuel or targets for this actinide transmutation
To date, the nature of the fuel (targets) for actinide transmutation has not been chosen.
If actinides are transmuted in a subcritical reactor, the fuel will likely have to tolerate more thermal cycles than conventional fuel. Because current particle accelerators are not optimized for long continuous operation, at least the first generation of accelerator-driven subcritical reactors is unlikely to maintain constant operation for periods as long as a critical reactor, and each time the accelerator stops the fuel will cool down.
On the other hand, if actinides are destroyed using a fast reactor, such as an Integral Fast Reactor, then the fuel will most likely not be exposed to many more thermal cycles than in a normal power station.
Depending on the matrix, the process can generate additional transuranics from the matrix itself. This could either be viewed as good (generating more fuel) or as bad (generating more radiotoxic transuranic elements). A series of different matrices exists which can control this production of heavy actinides.
Fissile nuclei (such as 233U, 235U, and 239Pu) respond well to delayed neutrons and are thus important to keep a critical reactor stable; this limits the amount of minor actinides that can be destroyed in a critical reactor. As a consequence, it is important that the chosen matrix allows the reactor to keep the ratio of fissile to non-fissile nuclei high, as this enables it to destroy the long-lived actinides safely. In contrast, the power output of a sub-critical reactor is limited by the intensity of the driving particle accelerator, and thus it need not contain any uranium or plutonium at all. In such a system, it may be preferable to have an inert matrix that does not produce additional long-lived isotopes. Having a low fraction of delayed neutrons is not only not a problem in a subcritical reactor, it may even be slightly advantageous as criticality can be brought closer to unity, while still staying subcritical.
Actinides in an inert matrix
The actinides will be mixed with a metal which will not form more actinides; for instance, an alloy of actinides in a solid such as zirconia could be used.
The raison d’être of the Initiative for Inert Matrix Fuel (IMF) is to contribute to research and development on inert matrix fuels that could be used to utilise, reduce and dispose of both weapons-grade and light-water-reactor-grade plutonium excesses. In addition to plutonium, the amounts of the minor actinides are also increasing, and these consequently have to be disposed of in a safe, ecological and economical way. The promising strategy of consuming plutonium and minor actinides in a once-through fuel approach within existing commercial nuclear power reactors, e.g. US, European, Russian or Japanese light water reactors (LWR), Canadian pressurised heavy water reactors, or in future transmutation units, has been emphasised since the beginning of the initiative. This approach, which makes use of inert matrix fuel, is now studied by several groups around the world. It has the advantage of reducing the plutonium amounts and potentially the minor actinide contents prior to geological disposal. The second option is based on using a uranium-free fuel that is leachable for reprocessing, following a multi-recycling strategy. In both cases, the advanced fuel material produces energy while consuming plutonium or the minor actinides. This material must, however, be robust. The selected material must be the result of a careful system study including an inert matrix, a burnable absorber and fissile material as minimum components, with the addition of a stabiliser. This yields a single-phase solid solution or, if this option is not selected, a composite of inert matrix and fissile component. In screening studies, pre-selected elements were identified as suitable. In the 1990s an IMF once-through strategy was adopted considering the following properties:
neutron properties i.e. low absorption cross-section, optimal constant reactivity, suitable Doppler coefficient,
phase stability, chemical inertness, and compatibility,
acceptable thermo-physical properties i.e. heat capacity, thermal conductivity,
good behaviour under irradiation i.e. phase stability, minimum swelling,
retention of fission products or residual actinides, and
optimal properties after irradiation, i.e. insolubility for the once-through-then-out option.
This once-through-then-out strategy may be adapted as a last cycle after multi-recycling if the fission yield is not large enough, in which case the following property is required: good leaching properties for reprocessing and multi-recycling.
Actinides in a thorium matrix
Upon neutron bombardment, thorium can be converted to uranium-233. 233U is fissile, and has a larger fission cross section than both 235U and 238U, and thus it is far less likely to produce higher actinides through neutron capture.
Actinides in a uranium matrix
If the actinides are incorporated into a uranium-metal or uranium-oxide matrix, then the neutron capture of 238U is likely to generate new plutonium-239. An advantage of mixing the actinides with uranium and plutonium is that the large fission cross sections of 235U and 239Pu for the less energetic delayed neutrons could make the reaction stable enough to be carried out in a critical fast reactor, which is likely to be both cheaper and simpler than an accelerator driven system.
Mixed matrix
It is also possible to create a matrix made from a mix of the above-mentioned materials. This is most commonly done in fast reactors where one may wish to keep the breeding ratio of new fuel high enough to keep powering the reactor, but still low enough that the generated actinides can be safely destroyed without transporting them to another site. One way to do this is to use fuel where actinides and uranium is mixed with inert zirconium, producing fuel elements with the desired properties.
Uranium cycle in renewable mode
To fulfill the conditions required for a nuclear renewable energy concept, one has to explore a combination of processes going from the front end of the nuclear fuel cycle to fuel production and energy conversion using specific fluid fuels and reactors, as reported by Degueldre et al. (2019). Extraction of uranium from a diluted fluid ore such as seawater has been studied in various countries worldwide. This extraction should be carried out parsimoniously, as suggested by Degueldre (2017). An extraction rate of kilotons of uranium per year over centuries would not significantly modify the equilibrium concentration of uranium in the oceans (3.3 ppb). This equilibrium results from the input of 10 kilotons of uranium per year by river waters and its scavenging on the sea floor from the 1.37 exatons of water in the oceans. For renewable uranium extraction, the use of a specific biomass material is suggested to adsorb uranium and subsequently other transition metals. The uranium loading on the biomass would be around 100 mg per kg. After the contact time, the loaded material would be dried and burned (CO2 neutral) with heat conversion into electricity. 'Burning' the uranium in a molten salt fast reactor helps to optimise the energy conversion by burning all actinide isotopes with an excellent yield, producing a maximum amount of thermal energy from fission and converting it into electricity. This optimisation can be reached by reducing the moderation and the fission-product concentration in the liquid fuel/coolant. These effects can be achieved by using a maximum amount of actinides and a minimum amount of alkaline/alkaline-earth elements, yielding a harder neutron spectrum. Under these optimal conditions the consumption of natural uranium would be 7 tons per year and per gigawatt (GW) of electricity produced. The coupling of uranium extraction from the sea with its optimal utilisation in a molten salt fast reactor should allow nuclear energy to gain the label renewable. In addition, the amount of seawater used by a nuclear power plant to cool the last coolant fluid and the turbine would be about 2.1 gigatons per year for a fast molten salt reactor, corresponding to 7 tons of natural uranium extractable per year, which further supports the renewable label.
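The quoted figures can be cross-checked with elementary arithmetic. The sketch below only multiplies the numbers given in the paragraph above; the 3-kiloton-per-year extraction rate is an illustrative assumption, not a figure from the text.

```python
# Order-of-magnitude check of the seawater-uranium figures quoted above.
ocean_water_t = 1.37e18        # tonnes of seawater in the oceans
u_concentration = 3.3e-9       # 3.3 ppb uranium by mass
river_input_t_per_y = 10_000   # tonnes of uranium delivered by rivers per year

u_inventory_t = ocean_water_t * u_concentration
print(f"dissolved uranium inventory: {u_inventory_t:.2e} tonnes")   # ~4.5e9 t

extraction_t_per_y = 3_000     # an illustrative "kilotons per year" extraction rate
print(f"extraction as a share of the inventory: {extraction_t_per_y / u_inventory_t:.1e} per year")
print(f"extraction as a share of the river input: {extraction_t_per_y / river_input_t_per_y:.0%}")

# Uranium carried by the cooling-water flow of a single fast molten salt reactor
cooling_water_t_per_y = 2.1e9
print(f"uranium in the cooling flow: {cooling_water_t_per_y * u_concentration:.1f} t/year")  # ~7 t
```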
Thorium cycle
In the thorium fuel cycle thorium-232 absorbs a neutron in either a fast or thermal reactor. The thorium-233 beta decays to protactinium-233 and then to uranium-233, which in turn is used as fuel. Hence, like uranium-238, thorium-232 is a fertile material.
\mathrm{n} + {}^{232}_{90}\mathrm{Th} \longrightarrow {}^{233}_{90}\mathrm{Th} \xrightarrow{\beta^-} {}^{233}_{91}\mathrm{Pa} \xrightarrow{\beta^-} {}^{233}_{92}\mathrm{U}\ (\text{fuel})
After starting the reactor with existing U-233 or some other fissile material such as U-235 or Pu-239, a breeding cycle similar to but more efficient than that with U-238 and plutonium can be created. The Th-232 absorbs a neutron to become Th-233 which quickly decays to protactinium-233. Protactinium-233 in turn decays with a half-life of 27 days to U-233. In some molten salt reactor designs, the Pa-233 is extracted and protected from neutrons (which could transform it to Pa-234 and then to U-234), until it has decayed to U-233. This is done in order to improve the breeding ratio which is low compared to fast reactors.
Thorium is at least 4-5 times more abundant in nature than all uranium isotopes combined, and it is fairly evenly spread around the Earth, with many countries having huge supplies of it. Preparation of thorium fuel does not require difficult and expensive enrichment processes. The thorium fuel cycle creates mainly uranium-233 contaminated with uranium-232, which makes it harder to use in a normal, pre-assembled nuclear weapon that is stable over long periods of time (unfortunately, the drawbacks are much smaller for immediate-use weapons or where final assembly occurs just prior to usage time). Elimination of at least the transuranic portion of the nuclear waste problem is possible in MSRs and other breeder reactor designs.
One of the earliest efforts to use a thorium fuel cycle took place at Oak Ridge National Laboratory in the 1960s. An experimental reactor was built based on molten salt reactor technology to study the feasibility of such an approach, using thorium fluoride salt kept hot enough to be liquid, thus eliminating the need for fabricating fuel elements. This effort culminated in the Molten-Salt Reactor Experiment that used 232Th as the fertile material and 233U as the fissile fuel. Due to a lack of funding, the MSR program was discontinued in 1976.
Thorium was first used commercially in the Indian Point Unit 1 reactor which began operation in 1962. The cost of recovering U-233 from the spent fuel was deemed uneconomical, since less than 1% of the thorium was converted to U-233. The plant's owner switched to uranium fuel, which was used until the reactor was permanently shut down in 1974.
Current industrial activity
Currently the only isotopes used as nuclear fuel are uranium-235 (U-235), uranium-238 (U-238) and plutonium-239, although the proposed thorium fuel cycle has advantages. Some modern reactors, with minor modifications, can use thorium. Thorium is approximately three times more abundant in the Earth's crust than uranium (and 550 times more abundant than uranium-235). There has been little exploration for thorium resources, and thus the proven reserves are comparatively small. Thorium is more plentiful than uranium in some countries, notably India. The main thorium-bearing mineral, monazite, is currently mostly of interest for its content of rare-earth elements, and most of the thorium is simply dumped on spoil tips, similar to uranium mine tailings. As mining for rare-earth elements occurs mainly in China, and as it is not associated in the public consciousness with the nuclear fuel cycle, thorium-containing mine tailings - despite their radioactivity - are not commonly seen as a nuclear waste issue and are not treated as such by regulators.
Virtually all heavy water reactors ever deployed and some graphite-moderated reactors can use natural uranium, but the vast majority of the world's reactors require enriched uranium, in which the ratio of U-235 to U-238 is increased. In civilian reactors the enrichment is increased to 3-5% U-235 with the remainder essentially U-238, whereas naval reactors use as much as 93% U-235. The fissile content in spent fuel from most light water reactors is high enough to allow its use as fuel for reactors capable of using natural-uranium-based fuel. However, this would require at least mechanical and/or thermal reprocessing (forming the spent fuel into a new fuel assembly) and is thus not currently widely done.
The term nuclear fuel is not normally used in respect to fusion power, which fuses isotopes of hydrogen into helium to release energy.
See also
Horizontal Drillhole
Deep Geological Repository
Deep Borehole
References
External links
World Nuclear Transport Institute
Hazardous materials
Nuclear fuel infrastructure
Nuclear fuels
Nuclear reprocessing
Nuclear technology
Radioactive waste | Nuclear fuel cycle | [
"Physics",
"Chemistry",
"Technology"
] | 10,858 | [
"Radioactive waste",
"Nuclear technology",
"Materials",
"Environmental impact of nuclear power",
"Hazardous waste",
"Radioactivity",
"Nuclear physics",
"Hazardous materials",
"Matter"
] |
194,227 | https://en.wikipedia.org/wiki/Heat%20capacity | Heat capacity or thermal capacity is a physical property of matter, defined as the amount of heat to be supplied to an object to produce a unit change in its temperature. The SI unit of heat capacity is joule per kelvin (J/K).
Heat capacity is an extensive property. The corresponding intensive property is the specific heat capacity, found by dividing the heat capacity of an object by its mass. Dividing the heat capacity by the amount of substance in moles yields its molar heat capacity. The volumetric heat capacity measures the heat capacity per volume. In architecture and civil engineering, the heat capacity of a building is often referred to as its thermal mass.
Definition
Basic definition
The heat capacity of an object, denoted by C, is the limit
C = \lim_{\Delta T \to 0} \frac{\Delta Q}{\Delta T}
where \Delta Q is the amount of heat that must be added to the object (of mass M) in order to raise its temperature by \Delta T.
The value of this parameter usually varies considerably depending on the starting temperature of the object and the pressure applied to it. In particular, it typically varies dramatically with phase transitions such as melting or vaporization (see enthalpy of fusion and enthalpy of vaporization). Therefore, it should be considered a function of those two variables.
Variation with temperature
The variation can be ignored in contexts when working with objects in narrow ranges of temperature and pressure. For example, the heat capacity of a block of iron weighing one pound is about 204 J/K when measured from a starting temperature T = 25 °C and P = 1 atm of pressure. That approximate value is adequate for temperatures between 15 °C and 35 °C, and surrounding pressures from 0 to 10 atmospheres, because the exact value varies very little in those ranges. One can trust that the same heat input of 204 J will raise the temperature of the block from 15 °C to 16 °C, or from 34 °C to 35 °C, with negligible error.
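Treating the heat capacity as a constant over such a narrow range makes the arithmetic trivial, as the short sketch below illustrates; the 204 J/K figure is the value quoted above for the one-pound block.

```python
# Heat needed to warm the one-pound iron block, assuming a constant heat capacity.
C_block = 204.0   # J/K, the value quoted above for the block near room conditions

def heat_needed(delta_T, heat_capacity=C_block):
    """Return the heat in joules for a temperature rise of delta_T kelvin."""
    return heat_capacity * delta_T

print(heat_needed(1.0))    # 204 J for any one-degree step between ~15 °C and ~35 °C
print(heat_needed(20.0))   # about 4.1 kJ to go from 15 °C to 35 °C
```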
Heat capacities of a homogeneous system undergoing different thermodynamic processes
At constant pressure, δQ = dU + pdV (isobaric process)
At constant pressure, heat supplied to the system contributes to both the work done and the change in internal energy, according to the first law of thermodynamics. The heat capacity is called C_p and defined as:
C_p = \left(\frac{\delta Q}{dT}\right)_{p = \text{const}}
From the first law of thermodynamics follows \delta Q = dU + p\,dV, and the internal energy as a function of p and T is:
dU = \left(\frac{\partial U}{\partial T}\right)_p dT + \left(\frac{\partial U}{\partial p}\right)_T dp
For constant pressure (dp = 0) the equation simplifies to:
C_p = \left(\frac{\partial U}{\partial T}\right)_p + p \left(\frac{\partial V}{\partial T}\right)_p = \left(\frac{\partial H}{\partial T}\right)_p
where the final equality follows from the definition of the enthalpy, H = U + pV, and is commonly used as the definition of the isobaric heat capacity.
At constant volume, dV = 0, δQ = dU (isochoric process)
A system undergoing a process at constant volume implies that no expansion work is done, so the heat supplied contributes only to the change in internal energy. The heat capacity obtained this way is denoted C_V. The value of C_V is always less than the value of C_p (C_V < C_p).
Expressing the internal energy as a function of the variables T and V gives:
dU = \left(\frac{\partial U}{\partial T}\right)_V dT + \left(\frac{\partial U}{\partial V}\right)_T dV
For a constant volume (dV = 0) the heat capacity reads:
C_V = \left(\frac{\delta Q}{dT}\right)_V = \left(\frac{\partial U}{\partial T}\right)_V
The relation between C_p and C_V is then:
C_p - C_V = \left[\left(\frac{\partial U}{\partial V}\right)_T + p\right] \left(\frac{\partial V}{\partial T}\right)_p
Calculating Cp and CV for an ideal gas
Mayer's relation:
C_p - C_V = nR
and the heat capacity ratio:
\gamma = \frac{C_p}{C_V}
where:
n is the number of moles of the gas,
R is the universal gas constant,
\gamma is the heat capacity ratio (which can be calculated by knowing the number of degrees of freedom of the gas molecule).
Using the above two relations, the specific heats can be deduced as follows:
C_V = \frac{nR}{\gamma - 1}, \qquad C_p = \gamma \frac{nR}{\gamma - 1}
Following from the equipartition of energy, it is deduced that an ideal gas has the isochoric heat capacity
C_V = n R \frac{f}{2} = n R \frac{3 + n_{\text{int}}}{2}
where f is the number of degrees of freedom of each individual particle in the gas, and n_{\text{int}} is the number of internal degrees of freedom; the number 3 comes from the three translational degrees of freedom (for a gas in 3D space). This means that a monoatomic ideal gas (with zero internal degrees of freedom) will have isochoric heat capacity C_V = \frac{3}{2} n R.
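A short sketch of this bookkeeping, using Mayer's relation from the previous section; the rigid-diatomic case (five active degrees of freedom) is an added illustration, not a case discussed above.

```python
# Molar heat capacities of an ideal gas from the equipartition theorem.
R = 8.314  # J/(mol*K), universal gas constant

def ideal_gas_heat_capacities(f):
    """Return (C_V, C_p, gamma) per mole for f active degrees of freedom."""
    c_v = f / 2 * R       # R/2 per degree of freedom
    c_p = c_v + R         # Mayer's relation, per mole
    return c_v, c_p, c_p / c_v

for label, f in [("monatomic, f = 3", 3), ("rigid diatomic, f = 5", 5)]:
    c_v, c_p, gamma = ideal_gas_heat_capacities(f)
    print(f"{label}: C_V = {c_v:.2f} J/(mol*K), C_p = {c_p:.2f} J/(mol*K), gamma = {gamma:.3f}")
```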
At constant temperature (Isothermal process)
No change in internal energy (as the temperature of the system is constant throughout the process) leads to only work done by the total supplied heat, and thus an infinite amount of heat is required to increase the temperature of the system by a unit temperature, leading to infinite or undefined heat capacity of the system.
At the time of phase change (Phase transition)
Heat capacity of a system undergoing phase transition is infinite, because the heat is utilized in changing the state of the material rather than raising the overall temperature.
Heterogeneous objects
The heat capacity may be well-defined even for heterogeneous objects, with separate parts made of different materials; such as an electric motor, a crucible with some metal, or a whole building. In many cases, the (isobaric) heat capacity of such objects can be computed by simply adding together the (isobaric) heat capacities of the individual parts.
However, this computation is valid only when all parts of the object are at the same external pressure before and after the measurement. That may not be possible in some cases. For example, when heating an amount of gas in an elastic container, its volume and pressure will both increase, even if the atmospheric pressure outside the container is kept constant. Therefore, the effective heat capacity of the gas, in that situation, will have a value intermediate between its isobaric and isochoric capacities and .
For complex thermodynamic systems with several interacting parts and state variables, or for measurement conditions that are neither constant pressure nor constant volume, or for situations where the temperature is significantly non-uniform, the simple definitions of heat capacity above are not useful or even meaningful. The heat energy that is supplied may end up as kinetic energy (energy of motion) and potential energy (energy stored in force fields), both at macroscopic and atomic scales. Then the change in temperature will depend on the particular path that the system followed through its phase space between the initial and final states. Namely, one must somehow specify how the positions, velocities, pressures, volumes, etc. changed between the initial and final states; and use the general tools of thermodynamics to predict the system's reaction to a small energy input. The "constant volume" and "constant pressure" heating modes are just two among infinitely many paths that a simple homogeneous system can follow.
Measurement
The heat capacity can usually be measured by the method implied by its definition: start with the object at a known uniform temperature, add a known amount of heat energy to it, wait for its temperature to become uniform, and measure the change in its temperature. This method can give moderately accurate values for many solids; however, it cannot provide very precise measurements, especially for gases.
Units
International system (SI)
The SI unit for heat capacity of an object is joule per kelvin (J/K or J⋅K−1). Since an increment of temperature of one degree Celsius is the same as an increment of one kelvin, that is the same unit as J/°C.
The heat capacity of an object is an amount of energy divided by a temperature change, which has the dimension L2⋅M⋅T−2⋅Θ−1. Therefore, the SI unit J/K is equivalent to kilogram meter squared per second squared per kelvin (kg⋅m2⋅s−2⋅K−1 ).
English (Imperial) engineering units
Professionals in construction, civil engineering, chemical engineering, and other technical disciplines, especially in the United States, may use the so-called English Engineering units, which include the pound (lb = 0.45359237 kg) as the unit of mass, the degree Fahrenheit or degree Rankine (°F or °R, about 0.55556 K) as the unit of temperature increment, and the British thermal unit (BTU ≈ 1055.06 J) as the unit of heat. In those contexts, the unit of heat capacity is 1 BTU/°R ≈ 1900 J/K. The BTU was in fact defined so that the average heat capacity of one pound of water would be 1 BTU/°F. In this regard, with respect to mass, note the conversion 1 Btu/lb⋅°R ≈ 4187 J/kg⋅K and the calorie (below).
Calories
In chemistry, heat amounts are often measured in calories. Confusingly, two units with that name, denoted "cal" or "Cal", have been commonly used to measure amounts of heat:
The "small calorie" (or "gram-calorie", "cal") is exactly 4.184 J. It was originally defined so that the heat capacity of 1 gram of liquid water would be 1 cal/°C.
The "grand calorie" (also "kilocalorie", "kilogram-calorie", or "food calorie"; "kcal" or "Cal") is 1000 cal, that is, exactly 4184 J. It was originally defined so that the heat capacity of 1 kg of water would be 1 kcal/°C.
With these units of heat energy, the units of heat capacity are
1 cal/°C = 4.184 J/K ;
1 kcal/°C = 4184 J/K.
Physical basis
Negative heat capacity
Most physical systems exhibit a positive heat capacity; constant-volume and constant-pressure heat capacities, rigorously defined as partial derivatives, are always positive for homogeneous bodies. However, even though it can seem paradoxical at first, there are some systems for which the heat capacity \delta Q / dT is negative. Examples include a reversibly and nearly adiabatically expanding ideal gas, which cools, dT < 0, while a small amount of heat \delta Q > 0 is put in, or combusting methane with increasing temperature, dT > 0, while giving off heat, \delta Q < 0. Others are inhomogeneous systems that do not meet the strict definition of thermodynamic equilibrium. They include gravitating objects such as stars and galaxies, and also some nano-scale clusters of a few tens of atoms close to a phase transition. A negative heat capacity can result in a negative temperature.
Stars and black holes
According to the virial theorem, for a self-gravitating body like a star or an interstellar gas cloud, the average potential energy Upot and the average kinetic energy Ukin are locked together in the relation
U_{\text{pot}} = -2\, U_{\text{kin}}
The total energy U (= Upot + Ukin) therefore obeys
U = -\, U_{\text{kin}}
If the system loses energy, for example, by radiating energy into space, the average kinetic energy actually increases. If a temperature is defined by the average kinetic energy, then the system therefore can be said to have a negative heat capacity.
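The sign of the heat capacity can be made explicit with a two-line calculation; the sketch below assumes an idealized self-gravitating cloud of N monatomic particles whose temperature is defined through the average kinetic energy.

```latex
% Virial theorem: the bound system satisfies U_pot = -2 U_kin, so its
% total energy and heat capacity follow directly:
\begin{align}
  \langle U_{\mathrm{pot}} \rangle &= -2\,\langle U_{\mathrm{kin}} \rangle ,\\
  U &= \langle U_{\mathrm{pot}} \rangle + \langle U_{\mathrm{kin}} \rangle
     = -\langle U_{\mathrm{kin}} \rangle = -\tfrac{3}{2} N k_B T ,\\
  C &= \frac{\mathrm{d}U}{\mathrm{d}T} = -\tfrac{3}{2} N k_B \; < \; 0 .
\end{align}
```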
A more extreme version of this occurs with black holes. According to black-hole thermodynamics, the more mass and energy a black hole absorbs, the colder it becomes. In contrast, if it is a net emitter of energy, through Hawking radiation, it will become hotter and hotter until it boils away.
Consequences
According to the second law of thermodynamics, when two systems with different temperatures interact via a purely thermal connection, heat will flow from the hotter system to the cooler one (this can also be understood from a statistical point of view). Therefore, if such systems have equal temperatures, they are at thermal equilibrium. However, this equilibrium is stable only if the systems have positive heat capacities. For such systems, when heat flows from a higher-temperature system to a lower-temperature one, the temperature of the first decreases and that of the latter increases, so that both approach equilibrium. In contrast, for systems with negative heat capacities, the temperature of the hotter system will further increase as it loses heat, and that of the colder will further decrease, so that they will move farther from equilibrium. This means that the equilibrium is unstable.
For example, according to theory, the smaller (less massive) a black hole is, the smaller its Schwarzschild radius will be, and therefore the greater the curvature of its event horizon will be, as well as its temperature. Thus, the smaller the black hole, the more thermal radiation it will emit and the more quickly it will evaporate by Hawking radiation.
See also
Quantum statistical mechanics
Heat capacity ratio
Statistical mechanics
Thermodynamic equations
Thermodynamic databases for pure substances
Heat equation
Heat transfer coefficient
Enthalpy of mixing
Latent heat
Thermodynamic properties of materials
Joback method (estimation of heat capacities)
Specific heat of fusion (enthalpy of fusion)
Specific heat of vaporization (enthalpy of vaporization)
Volumetric heat capacity
Thermal mass
R-value (insulation)
Storage heater
Frenkel line
Table of specific heat capacities
Thermodynamics
References
Further reading
Encyclopædia Britannica, 2015, "Heat capacity (Alternate title: thermal capacity)".
Thermodynamic properties | Heat capacity | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,630 | [
"Thermodynamic properties",
"Quantity",
"Thermodynamics",
"Physical quantities"
] |
194,465 | https://en.wikipedia.org/wiki/Delta-v | Delta-v (also known as "change in velocity"), symbolized as and pronounced , as used in spacecraft flight dynamics, is a measure of the impulse per unit of spacecraft mass that is needed to perform a maneuver such as launching from or landing on a planet or moon, or an in-space orbital maneuver. It is a scalar that has the units of speed. As used in this context, it is not the same as the physical change in velocity of said spacecraft.
A simple example might be the case of a conventional rocket-propelled spacecraft, which achieves thrust by burning fuel. Such a spacecraft's delta-v, then, would be the change in velocity that spacecraft can achieve by burning its entire fuel load.
Delta-v is produced by reaction engines, such as rocket engines, and is proportional to the thrust per unit mass and the burn time. It is used to determine the mass of propellant required for the given maneuver through the Tsiolkovsky rocket equation.
For multiple maneuvers, delta-v sums linearly.
For interplanetary missions, delta-v is often plotted on a porkchop plot, which displays the required mission delta-v as a function of launch date.
Definition
\Delta v = \int_{t_0}^{t_1} \frac{|T(t)|}{m(t)}\, dt
where
T(t) is the instantaneous thrust at time t,
m(t) is the instantaneous mass at time t.
Change in velocity is useful in many cases, such as determining the change in momentum (impulse), where \Delta p = m \Delta v, with p the momentum and m the mass.
Specific cases
In the absence of external forces:
\Delta v = \int_{t_0}^{t_1} |a(t)|\, dt
where a(t) is the coordinate acceleration.
When thrust is applied in a constant direction (\mathbf{a}/|\mathbf{a}| is constant) this simplifies to:
\Delta v = |v_1 - v_0|
which is simply the magnitude of the change in velocity. However, this relation does not hold in the general case: if, for instance, a constant, unidirectional acceleration is reversed after then the velocity difference is 0, but delta-v is the same as for the non-reversed thrust.
For rockets, "absence of external forces" is taken to mean the absence of gravity drag and atmospheric drag, as well as the absence of aerostatic back pressure on the nozzle, and hence the vacuum I is used for calculating the vehicle's delta-v capacity via the rocket equation. In addition, the costs for atmospheric losses and gravity drag are added into the delta-v budget when dealing with launches from a planetary surface.
Orbital maneuvers
Orbit maneuvers are made by firing a thruster to produce a reaction force acting on the spacecraft. The size of this force will be
F = V_{exh}\, \rho
where
V_{exh} is the velocity of the exhaust gas in the rocket frame,
\rho is the propellant flow rate to the combustion chamber.
The acceleration of the spacecraft caused by this force will be
a = \frac{F}{m} = \frac{V_{exh}\, \rho}{m}
where m is the mass of the spacecraft.
During the burn the mass of the spacecraft will decrease due to use of fuel, the time derivative of the mass being
\dot{m} = -\rho
If now the direction of the force, i.e. the direction of the nozzle, is fixed during the burn, one gets the velocity increase from the thruster force of a burn starting at time t_0 and ending at t_1 as
\Delta v = \int_{t_0}^{t_1} \frac{V_{exh}\, \rho}{m}\, dt
Changing the integration variable from time t to the spacecraft mass m one gets
\Delta v = -\int_{m_0}^{m_1} \frac{V_{exh}}{m}\, dm
Assuming V_{exh} to be a constant not depending on the amount of fuel left, this relation is integrated to
\Delta v = V_{exh} \ln\frac{m_0}{m_1}
which is the Tsiolkovsky rocket equation.
If for example 20% of the launch mass is fuel, giving a constant V_{exh} of 2100 m/s (a typical value for a hydrazine thruster), the capacity of the reaction control system is
\Delta v = 2100 \ln\frac{1}{0.8} \approx 469 \text{ m/s}
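The same figure follows from a couple of lines of code; the sketch below simply evaluates the rocket equation with the exhaust velocity and the 20% propellant fraction stated above.

```python
import math

def delta_v(v_exhaust, m_initial, m_final):
    """Tsiolkovsky rocket equation: delta-v = v_e * ln(m0 / m1)."""
    return v_exhaust * math.log(m_initial / m_final)

v_e = 2100.0   # m/s, the hydrazine-thruster value quoted above
print(delta_v(v_e, m_initial=1.0, m_final=0.8))   # ~469 m/s
```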
If V_{exh} is a non-constant function of the amount of fuel left,
the capacity of the reaction control system is computed by the integral above.
The acceleration caused by the thruster force is just an additional acceleration to be added to the other accelerations (force per unit mass) affecting the spacecraft, and the orbit can easily be propagated with a numerical algorithm including also this thruster force. But for many purposes, typically for studies or for maneuver optimization, they are approximated by impulsive maneuvers as illustrated in figure 1, with a \Delta v as given above. Like this one can for example use a "patched conics" approach modeling the maneuver as a shift from one Kepler orbit to another by an instantaneous change of the velocity vector.
This approximation with impulsive maneuvers is in most cases very accurate, at least when chemical propulsion is used. For low thrust systems, typically electrical propulsion systems, this approximation is less accurate. But even for geostationary spacecraft using electrical propulsion for out-of-plane control with thruster burn periods extending over several hours around the nodes this approximation is fair.
Production
Delta-v is typically provided by the thrust of a rocket engine, but can be created by other engines. The time-rate of change of delta-v is the magnitude of the acceleration caused by the engines, i.e., the thrust per total vehicle mass. The actual acceleration vector would be found by adding thrust per mass on to the gravity vector and the vectors representing any other forces acting on the object.
The total delta-v needed is a good starting point for early design decisions since consideration of the added complexities are deferred to later times in the design process.
The rocket equation shows that the required amount of propellant increases exponentially with increasing delta-v. Therefore, in modern spacecraft propulsion systems considerable study is put into reducing the total delta-v needed for a given spaceflight, as well as designing spacecraft that are capable of producing larger delta-v.
Increasing the delta-v provided by a propulsion system can be achieved by:
staging
increasing specific impulse
improving propellant mass fraction
Multiple maneuvers
Because the mass ratios apply to any given burn, when multiple maneuvers are performed in sequence, the mass ratios multiply.
Thus it can be shown that, provided the exhaust velocity is fixed, this means that delta-v can be summed:
When are the mass ratios of the maneuvers, and are the delta-v of the first and second maneuvers
where and . This is just the rocket equation applied to the sum of the two maneuvers.
This is convenient since it means that delta-v can be calculated and simply added and the mass ratio calculated only for the overall vehicle for the entire mission. Thus delta-v is commonly quoted rather than mass ratios which would require multiplication.
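A one-line derivation shows why the mass ratios multiply while the delta-v values add; the sketch below assumes a fixed exhaust velocity V_exh across both burns, as stated above.

```latex
% Two successive burns with the same exhaust velocity:
\begin{align}
  \Delta v_1 + \Delta v_2
    &= V_{exh} \ln\frac{m_0}{m_1} + V_{exh} \ln\frac{m_1}{m_2} \\
    &= V_{exh} \ln\!\left(\frac{m_0}{m_1}\cdot\frac{m_1}{m_2}\right)
     = V_{exh} \ln\frac{m_0}{m_2} ,
\end{align}
% so the overall mass ratio is the product of the individual ratios,
% while the delta-v values simply add.
```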
Delta-v budgets
When designing a trajectory, delta-v budget is used as a good indicator of how much propellant will be required. Propellant usage is an exponential function of delta-v in accordance with the rocket equation, it will also depend on the exhaust velocity.
It is not possible to determine delta-v requirements from conservation of energy by considering only the total energy of the vehicle in the initial and final orbits since energy is carried away in the exhaust (see also below). For example, most spacecraft are launched in an orbit with inclination fairly near to the latitude at the launch site, to take advantage of the Earth's rotational surface speed. If it is necessary, for mission-based reasons, to put the spacecraft in an orbit of different inclination, a substantial delta-v is required, though the specific kinetic and potential energies in the final orbit and the initial orbit are equal.
When rocket thrust is applied in short bursts the other sources of acceleration may be negligible, and the magnitude of the velocity change of one burst may be simply approximated by the delta-v. The total delta-v to be applied can then simply be found by addition of each of the delta-vs needed at the discrete burns, even though between bursts the magnitude and direction of the velocity changes due to gravity, e.g. in an elliptic orbit.
For examples of calculating delta-v, see Hohmann transfer orbit, gravitational slingshot, and Interplanetary Transport Network. It is also notable that large thrust can reduce gravity drag.
Delta-v is also required to keep satellites in orbit and is expended in propulsive orbital stationkeeping maneuvers. Since the propellant load on most satellites cannot be replenished, the amount of propellant initially loaded on a satellite may well determine its useful lifetime.
Oberth effect
From power considerations, it turns out that when applying delta-v in the direction of the velocity the specific orbital energy gained per unit delta-v is equal to the instantaneous speed. This is called the Oberth effect.
For example, a satellite in an elliptical orbit is boosted more efficiently at high speed (that is, small altitude) than at low speed (that is, high altitude).
Another example is that when a vehicle is making a pass of a planet, burning the propellant at closest approach rather than further out gives significantly higher final speed, and this is even more so when the planet is a large one with a deep gravity field, such as Jupiter.
Porkchop plot
Due to the relative positions of planets changing over time, different delta-vs are required at different launch dates. A diagram that shows the required delta-v plotted against time is sometimes called a porkchop plot. Such a diagram is useful since it enables calculation of a launch window, since launch should only occur when the mission is within the capabilities of the vehicle to be employed.
Around the Solar System
Delta-v needed for various orbital manoeuvers using conventional rockets; red arrows show where optional aerobraking can be performed in that particular direction, black numbers give delta-v in km/s that apply in either direction. Gives figures of 8.6 from Earth's surface to LEO, 4.1 and 3.8 for LEO to lunar orbit (or L5) and GEO respectively, 0.7 for L5 to lunar orbit, and 2.2 for lunar orbit to lunar surface. Figures are said to come from Chapter 2 of Space Settlements: A Design Study on the NASA website. Lower-delta-v transfers than shown can often be achieved, but involve rare transfer windows or take significantly longer.
C3 Escape orbit
GEO Geosynchronous orbit
GTO Geostationary transfer orbit
L4/5 Earth–Moon Lagrangian point
LEO Low Earth orbit
LEO reentry
For example, the Soyuz spacecraft makes a de-orbit from the ISS in two steps. First, it needs a delta-v of 2.18 m/s for a safe separation from the space station. Then it needs another 128 m/s for reentry.
See also
Delta-v budget
Gravity drag
Orbital maneuver
Orbital stationkeeping
Spacecraft propulsion
Orbital propellant depot
Specific impulse
Tsiolkovsky rocket equation
Delta-v (physics)
References
Astrodynamics
Spacecraft propulsion
Physical quantities | Delta-v | [
"Physics",
"Mathematics",
"Engineering"
] | 2,130 | [
"Physical phenomena",
"Astrodynamics",
"Physical quantities",
"Quantity",
"Aerospace engineering",
"Physical properties"
] |
194,619 | https://en.wikipedia.org/wiki/Morale | Morale ( , ) is the capacity of a group's members to maintain belief in an institution or goal, particularly in the face of opposition or hardship. Morale is often referenced by authority figures as a generic value judgment of the willpower, obedience, and self-discipline of a group tasked with performing duties assigned by a superior. According to Alexander H. Leighton, "morale is the capacity of a group of people to pull together persistently and consistently in pursuit of a common purpose". With good morale, a force will be less likely to give up or surrender. Morale is usually assessed at a collective, rather than an individual level. In wartime, civilian morale is also important.
Definition
Military history experts have not agreed on a precise definition of "morale". Clausewitz's comments on the subject have been described as "deliberately vague" by modern scholars. George Francis Robert Henderson, a widely read military author of the pre-World War I era, viewed morale as related to the instinct of self-preservation, the suppression of which he said was "the moral fear of turning back", in other words, that a willingness to fight was bolstered by a strong sense of duty. Henderson wrote:
Human nature must be the basis of every leader's calculations. To sustain the moral[e] of his own men; to break down the moral[e] of his enemy—these are the great objects which, if he be ambitious of success, he must always keep in view.
During the proceedings of the Southborough Committee inquiry concerning shellshock, testimony by Colonel J. F. C. Fuller defined morale as "the acquired quality which in highly-trained troops counterbalances the influence of the instinct of self-preservation." Henderson's "moral fear" - the soldier's sense of duty - is thus contrasted with the fear of death, and controlling one's troops required of a commander more than authoritarian force; other strategies had to be deployed to that purpose.
Military
In military science, there are two meanings to morale: individual perseverance and unit cohesion. Morale is often highly dependent on soldier effectiveness, health, comfort, safety, and belief-in-purpose, and therefore an army with good supply lines, sound air cover, and a clear objective will typically possess, as a whole, better morale than one without. "Will to fight" is the single most important factor in war. Will to fight helps determine whether a military unit stays in the fight and also how well it fights.
Historically, elite military units such as special operations forces have "high morale" due to their training and pride in their unit. When a unit's morale is said to be "depleted", it means it is close to cracking and surrendering. Generally speaking, most commanders do not look at the morale of specific individuals but rather at the "fighting spirit" of squadrons, divisions, battalions, ships, etc.
In August 2012, an article entitled "Army morale declines in survey" states that "only a quarter of the [US] Army's officers and enlisted soldiers believe the nation's largest military branch is headed in the right direction." The "... most common reasons cited for the bleak outlook were "ineffective leaders at senior levels," a fear of losing the best and the brightest after a decade of war, and the perception, especially among senior enlisted soldiers, that "the Army is too soft" and lacks sufficient discipline."
Employee morale
Employee morale is proven to have a direct effect on productivity; it is one of the cornerstones of business.
See also
Military psychology
Collective identity
Committee for National Morale
Demoralization (warfare)
Information warfare
Motivation
Pre-work assembly
Psychological warfare
Rank theory of depression
References
External links
Matteo Ermacora: Civilian Morale, in: 1914-1918-online. International Encyclopedia of the First World War.
Adi Sherzer & Samuel Boumendil, The Morale Component of the Russia–Ukraine War, in: Uzi Ben-Shalom & others (eds.), Military Heroism in a Post-Heroic Era (Springer, 2024).
Psychological warfare
Psychological attitude
Motivation
Group processes | Morale | [
"Biology"
] | 850 | [
"Ethology",
"Behavior",
"Motivation",
"Human behavior"
] |
194,690 | https://en.wikipedia.org/wiki/Enzyme%20Commission%20number | The Enzyme Commission number (EC number) is a numerical classification scheme for enzymes, based on the chemical reactions they catalyze. As a system of enzyme nomenclature, every EC number is associated with a recommended name for the corresponding enzyme-catalyzed reaction.
EC numbers do not specify enzymes but enzyme-catalyzed reactions. If different enzymes (for instance from different organisms) catalyze the same reaction, then they receive the same EC number. Furthermore, through convergent evolution, completely different protein folds can catalyze an identical reaction (these are sometimes called non-homologous isofunctional enzymes) and therefore would be assigned the same EC number. By contrast, UniProt identifiers uniquely specify a protein by its amino acid sequence.
Format of number
Every enzyme code consists of the letters "EC" followed by four numbers separated by periods. Those numbers represent a progressively finer classification of the enzyme. Preliminary EC numbers exist and have an 'n' as part of the fourth (serial) digit (e.g. EC 3.5.1.n3).
For example, the tripeptide aminopeptidases have the code "EC 3.4.11.4", whose components indicate the following groups of enzymes:
EC 3 enzymes are hydrolases (enzymes that use water to break up some other molecule)
EC 3.4 are hydrolases that act on peptide bonds
EC 3.4.11 are those hydrolases that cleave off the amino-terminal amino acid from a polypeptide
EC 3.4.11.4 are those that cleave off the amino-terminal end from a tripeptide
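The hierarchical format lends itself to simple parsing, as in the sketch below. This is an illustrative parser, not an official tool; the top-level class names are the standard IUBMB classes, and the finer descriptions (for EC 3.4, EC 3.4.11, and so on) are not encoded here.

```python
# Illustrative parser for EC numbers such as "EC 3.4.11.4".
EC_TOP_LEVEL = {
    1: "oxidoreductases", 2: "transferases", 3: "hydrolases",
    4: "lyases", 5: "isomerases", 6: "ligases", 7: "translocases",
}

def parse_ec(code: str) -> dict:
    """Split a four-level EC code into its progressively finer components."""
    body = code.strip()
    if body.upper().startswith("EC"):
        body = body[2:].strip()
    parts = body.split(".")
    if len(parts) != 4:
        raise ValueError(f"not a full four-level EC number: {code!r}")
    top = int(parts[0])
    return {
        "class": f"EC {parts[0]} ({EC_TOP_LEVEL.get(top, 'unknown')})",
        "subclass": "EC " + ".".join(parts[:2]),
        "sub-subclass": "EC " + ".".join(parts[:3]),
        "serial": parts[3],   # may be preliminary, e.g. "n3"
    }

print(parse_ec("EC 3.4.11.4"))
```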
Top level codes
NB:The enzyme classification number is different from the 'FORMAT NUMBER'
Reaction similarity
Similarity between enzymatic reactions can be calculated by using bond changes, reaction centres or substructure metrics (formerly EC-BLAST, now the EMBL-EBI Enzyme Portal).
History
Before the development of the EC number system, enzymes were named in an arbitrary fashion, and names like old yellow enzyme and malic enzyme that give little or no clue as to what reaction was catalyzed were in common use. Most of these names have fallen into disuse, though a few, especially proteolytic enzymes with very low specificity, such as pepsin and papain, are still used, as rational classification on the basis of specificity has been very difficult.
By the 1950s the chaos was becoming intolerable, and after Hoffman-Ostenhof and Dixon and Webb had proposed somewhat similar schemes for classifying enzyme-catalyzed reactions, the International Congress of Biochemistry in Brussels set up the Commission on Enzymes under the chairmanship of Malcolm Dixon in 1955. The first version was published in 1961, and the Enzyme Commission was dissolved at that time, though its name lives on in the term EC Number. The current sixth edition, published by the International Union of Biochemistry and Molecular Biology in 1992 as the last version published as a printed book, contains 3196 different enzymes. Supplements 1-4 were published 1993–1999. Subsequent supplements have been published electronically, at the website of the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology. In August 2018, the IUBMB modified the system by adding the top-level EC 7 category containing translocases.
See also
List of EC numbers
List of enzymes
TC number (classification of membrane transport proteins)
References
External links
Enzyme Nomenclature, authoritative website by the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology, maintained by G.P. Moss
Enzyme nomenclature database — by ExPASy
List of all EC numbers — by BRENDA
Browse PDB structures by EC number
Browse SCOP domains by EC number — by dcGO
Compare EC numbers using EC-Blast
Bioinformatics
Cheminformatics
Chemical numbering schemes | Enzyme Commission number | [
"Chemistry",
"Mathematics",
"Engineering",
"Biology"
] | 788 | [
"Biological engineering",
"Mathematical objects",
"Chemical numbering schemes",
"Bioinformatics",
"Computational chemistry",
"nan",
"Cheminformatics",
"Numbers"
] |
195,351 | https://en.wikipedia.org/wiki/Jacobian%20matrix%20and%20determinant | In vector calculus, the Jacobian matrix (, ) of a vector-valued function of several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant. Both the matrix and (if applicable) the determinant are often referred to simply as the Jacobian in literature. They are named after Carl Gustav Jacob Jacobi.
Motivation
The Jacobian can be understood by considering a unit area in the new coordinate space; and examining how that unit area transforms when mapped into xy coordinate space in which the integral is visually understood. The process involves taking partial derivatives with respect to the new coordinates, then applying the determinant and hence obtaining the Jacobian.
Definition
Suppose f : R^n \to R^m is a function such that each of its first-order partial derivatives exists on R^n. This function takes a point x \in R^n as input and produces the vector f(x) \in R^m as output. Then the Jacobian matrix of f, denoted J_f, is defined such that its (i, j) entry is
(J_f)_{ij} = \frac{\partial f_i}{\partial x_j},
or explicitly
J_f = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \nabla^{\mathsf{T}} f_1 \\ \vdots \\ \nabla^{\mathsf{T}} f_m \end{bmatrix}
where \nabla^{\mathsf{T}} f_i is the transpose (row vector) of the gradient of the i-th component.
The Jacobian matrix, whose entries are functions of x, is denoted in various ways; other common notations include Df, \nabla f, and \frac{\partial(f_1, \ldots, f_m)}{\partial(x_1, \ldots, x_n)}. Some authors define the Jacobian as the transpose of the form given above.
The Jacobian matrix represents the differential of f at every point where f is differentiable. In detail, if h is a displacement vector represented by a column matrix, the matrix product J(x) \cdot h is another displacement vector, that is the best linear approximation of the change of f in a neighborhood of x, if f is differentiable at x. This means that the function that maps y to f(x) + J(x)(y - x) is the best linear approximation of f(y) for all points y close to x. The linear map h \mapsto J(x) h is known as the derivative or the differential of f at x.
When m = n, the Jacobian matrix is square, so its determinant is a well-defined function of x, known as the Jacobian determinant of f. It carries important information about the local behavior of f. In particular, the function f has a differentiable inverse function in a neighborhood of a point x if and only if the Jacobian determinant is nonzero at x (see inverse function theorem for an explanation of this and Jacobian conjecture for a related problem of global invertibility). The Jacobian determinant also appears when changing the variables in multiple integrals (see substitution rule for multiple variables).
When m = 1, that is when f : R^n \to R is a scalar-valued function, the Jacobian matrix reduces to the row vector \nabla^{\mathsf{T}} f; this row vector of all first-order partial derivatives of f is the transpose of the gradient of f, i.e.
J_f = \nabla^{\mathsf{T}} f. Specializing further, when m = n = 1, that is when f : R \to R is a scalar-valued function of a single variable, the Jacobian matrix has a single entry; this entry is the derivative of the function f.
These concepts are named after the mathematician Carl Gustav Jacob Jacobi (1804–1851).
Jacobian matrix
The Jacobian of a vector-valued function in several variables generalizes the gradient of a scalar-valued function in several variables, which in turn generalizes the derivative of a scalar-valued function of a single variable. In other words, the Jacobian matrix of a scalar-valued function in several variables is (the transpose of) its gradient and the gradient of a scalar-valued function of a single variable is its derivative.
At each point where a function is differentiable, its Jacobian matrix can also be thought of as describing the amount of "stretching", "rotating" or "transforming" that the function imposes locally near that point. For example, if f(x, y) is used to smoothly transform an image, the Jacobian matrix J_f(x, y) describes how the image in the neighborhood of (x, y) is transformed.
If a function is differentiable at a point, its differential is given in coordinates by the Jacobian matrix. However, a function does not need to be differentiable for its Jacobian matrix to be defined, since only its first-order partial derivatives are required to exist.
If f is differentiable at a point p in R^n, then its differential is represented by J_f(p). In this case, the linear transformation represented by J_f(p) is the best linear approximation of f near the point p, in the sense that
f(x) = f(p) + J_f(p)(x - p) + o(\lVert x - p \rVert) \qquad (x \to p),
where o(\lVert x - p \rVert) is a quantity that approaches zero much faster than the distance between x and p does as x approaches p. This approximation specializes to the approximation of a scalar function of a single variable by its Taylor polynomial of degree one, namely
f(x) = f(p) + f'(p)(x - p) + o(|x - p|) \qquad (x \to p).
In this sense, the Jacobian may be regarded as a kind of "first-order derivative" of a vector-valued function of several variables. In particular, this means that the gradient of a scalar-valued function of several variables may too be regarded as its "first-order derivative".
Composable differentiable functions f : R^n \to R^m and g : R^m \to R^k satisfy the chain rule, namely J_{g \circ f}(x) = J_g(f(x))\, J_f(x) for x in R^n.
The Jacobian of the gradient of a scalar function of several variables has a special name: the Hessian matrix, which in a sense is the "second derivative" of the function in question.
Jacobian determinant
If m = n, then f is a function from R^n to itself and the Jacobian matrix is a square matrix. We can then form its determinant, known as the Jacobian determinant. The Jacobian determinant is sometimes simply referred to as "the Jacobian".
The Jacobian determinant at a given point gives important information about the behavior of f near that point. For instance, the continuously differentiable function f is invertible near a point p if the Jacobian determinant at p is non-zero. This is the inverse function theorem. Furthermore, if the Jacobian determinant at p is positive, then f preserves orientation near p; if it is negative, f reverses orientation. The absolute value of the Jacobian determinant at p gives us the factor by which the function f expands or shrinks volumes near p; this is why it occurs in the general substitution rule.
The Jacobian determinant is used when making a change of variables when evaluating a multiple integral of a function over a region within its domain. To accommodate for the change of coordinates the magnitude of the Jacobian determinant arises as a multiplicative factor within the integral. This is because the -dimensional element is in general a parallelepiped in the new coordinate system, and the -volume of a parallelepiped is the determinant of its edge vectors.
The Jacobian can also be used to determine the stability of equilibria for systems of differential equations by approximating behavior near an equilibrium point.
Inverse
According to the inverse function theorem, the matrix inverse of the Jacobian matrix of an invertible function is the Jacobian matrix of the inverse function. That is, the Jacobian matrix of the inverse function at a point p is
J_{f^{-1}}(p) = \left[ J_f\!\left(f^{-1}(p)\right) \right]^{-1}
and the Jacobian determinant is
\det\!\left(J_{f^{-1}}(p)\right) = \frac{1}{\det\!\left(J_f\!\left(f^{-1}(p)\right)\right)}
If the Jacobian is continuous and nonsingular at the point p in R^n, then f is invertible when restricted to some neighbourhood of p. In other words, if the Jacobian determinant is not zero at a point, then the function is locally invertible near this point.
The (unproved) Jacobian conjecture is related to global invertibility in the case of a polynomial function, that is a function defined by n polynomials in n variables. It asserts that, if the Jacobian determinant is a non-zero constant (or, equivalently, that it does not have any complex zero), then the function is invertible and its inverse is a polynomial function.
Critical points
If f : R^n \to R^m is a differentiable function, a critical point of f is a point where the rank of the Jacobian matrix is not maximal. This means that the rank at the critical point is lower than the rank at some neighbour point. In other words, let k be the maximal dimension of the open balls contained in the image of f; then a point is critical if all the minors of rank k of J_f are zero.
In the case where m = n = k, a point is critical if the Jacobian determinant is zero.
Examples
Example 1
Consider a function with given by
Then we have
and
The Jacobian matrix of is
and the Jacobian determinant is
Example 2: polar-Cartesian transformation
The transformation from polar coordinates (r, φ) to Cartesian coordinates (x, y) is given by the function F : ℝ⁺ × [0, 2π) → ℝ² with components
x = r cos φ
y = r sin φ
The Jacobian determinant is equal to r. This can be used to transform integrals between the two coordinate systems:
∬ f(x, y) dx dy = ∬ f(r cos φ, r sin φ) r dr dφ.
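As an illustration (a minimal sketch using the SymPy library; the symbol names are chosen here for illustration and are not from the article), the Jacobian matrix and determinant of the polar-to-Cartesian map can be computed symbolically and then used as the area factor for a disc:

    from sympy import symbols, cos, sin, pi, Matrix, integrate, simplify

    r, phi, R = symbols('r phi R', positive=True)

    # Polar-to-Cartesian map (x, y) = (r*cos(phi), r*sin(phi))
    F = Matrix([r * cos(phi), r * sin(phi)])

    J = F.jacobian([r, phi])        # 2x2 Jacobian matrix
    detJ = simplify(J.det())        # simplifies to r

    # Use |det J| = r as the area factor: area of a disc of radius R
    area = integrate(integrate(detJ, (r, 0, R)), (phi, 0, 2 * pi))
    print(detJ, area)               # prints: r  pi*R**2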
Example 3: spherical-Cartesian transformation
The transformation from spherical coordinates (ρ, φ, θ) to Cartesian coordinates (x, y, z) is given by the function F : ℝ⁺ × [0, π) × [0, 2π) → ℝ³ with components
x = ρ sin φ cos θ
y = ρ sin φ sin θ
z = ρ cos φ
The Jacobian matrix for this coordinate change is
[ sin φ cos θ   ρ cos φ cos θ   −ρ sin φ sin θ ]
[ sin φ sin θ   ρ cos φ sin θ    ρ sin φ cos θ ]
[ cos φ         −ρ sin φ          0            ]
The determinant is ρ² sin φ. Since dV = dx dy dz is the volume for a rectangular differential volume element (because the volume of a rectangular prism is the product of its sides), we can interpret dV = ρ² sin φ dρ dφ dθ as the volume of the spherical differential volume element. Unlike the volume of a rectangular differential volume element, this differential volume element's volume is not a constant, and varies with coordinates (ρ and φ). It can be used to transform integrals between the two coordinate systems:
∭ f(x, y, z) dx dy dz = ∭ f(ρ sin φ cos θ, ρ sin φ sin θ, ρ cos φ) ρ² sin φ dρ dφ dθ.
Example 4
The Jacobian matrix of the function with components
is
This example shows that the Jacobian matrix need not be a square matrix.
Example 5
The Jacobian determinant of the function with components
is
From this we see that reverses orientation near those points where and have the same sign; the function is locally invertible everywhere except near points where or . Intuitively, if one starts with a tiny object around the point and applies the function to that object, one will get a resulting object with approximately times the volume of the original one, with orientation reversed.
Other uses
Dynamical systems
Consider a dynamical system of the form ẋ = F(x), where ẋ is the (component-wise) derivative of x with respect to the evolution parameter t (time), and F is differentiable. If F(x₀) = 0, then x₀ is a stationary point (also called a steady state). By the Hartman–Grobman theorem, the behavior of the system near a stationary point is related to the eigenvalues of J_F(x₀), the Jacobian of F at the stationary point. Specifically, if the eigenvalues all have real parts that are negative, then the system is stable near the stationary point. If any eigenvalue has a real part that is positive, then the point is unstable. If the largest real part of the eigenvalues is zero, the Jacobian matrix does not allow for an evaluation of the stability.
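A minimal numerical sketch of this stability test (using NumPy; the example system, starting point and function names are illustrative, not from the article): linearize a damped oscillator at its equilibrium and inspect the eigenvalues of the Jacobian.

    import numpy as np

    def F(state):
        # Damped harmonic oscillator written as a first-order system.
        x, v = state
        return np.array([v, -x - 0.5 * v])

    def numerical_jacobian(f, x0, eps=1e-6):
        """Central finite-difference approximation of the Jacobian of f at x0."""
        x0 = np.asarray(x0, dtype=float)
        n = x0.size
        J = np.zeros((n, n))
        for j in range(n):
            dx = np.zeros(n)
            dx[j] = eps
            J[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)
        return J

    J = numerical_jacobian(F, [0.0, 0.0])   # equilibrium at the origin
    eigvals = np.linalg.eigvals(J)
    print(eigvals)                          # real parts are negative -> stable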
Newton's method
A square system of coupled nonlinear equations can be solved iteratively by Newton's method. This method uses the Jacobian matrix of the system of equations.
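A minimal sketch of Newton's method for a 2×2 nonlinear system (the example equations and starting guess are chosen here for illustration; in practice the Jacobian may also be approximated by finite differences):

    import numpy as np

    def G(z):
        x, y = z
        # Example system: points on the circle x^2 + y^2 = 4 with x*y = 1.
        return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

    def J(z):
        x, y = z
        # Analytic Jacobian of G.
        return np.array([[2 * x, 2 * y],
                         [y, x]])

    z = np.array([2.0, 0.5])                 # initial guess
    for _ in range(20):
        step = np.linalg.solve(J(z), G(z))   # solve J(z) * step = G(z)
        z = z - step
        if np.linalg.norm(step) < 1e-12:
            break
    print(z)                                 # approx. [1.93185, 0.51764]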
Regression and least squares fitting
The Jacobian serves as a linearized design matrix in statistical regression and curve fitting; see non-linear least squares. The Jacobian is also used in random matrices, moments, local sensitivity and statistical diagnostics.
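As a hedged illustration of the Jacobian acting as a linearized design matrix (a Gauss–Newton sketch with illustrative names and synthetic, noiseless data, not taken from the article), consider fitting y ≈ a·exp(b·t):

    import numpy as np

    t = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(1.5 * t)                      # synthetic data

    def residuals(p):
        a, b = p
        return a * np.exp(b * t) - y

    def jacobian(p):
        a, b = p
        # Columns: partial derivatives of each residual w.r.t. a and b.
        return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

    p = np.array([1.0, 1.0])                        # starting guess
    for _ in range(50):
        r, Jm = residuals(p), jacobian(p)
        step, *_ = np.linalg.lstsq(Jm, r, rcond=None)  # Gauss-Newton step
        p = p - step
        if np.linalg.norm(step) < 1e-12:
            break
    print(p)                                        # approx. [2.0, 1.5]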
See also
Center manifold
Hessian matrix
Pushforward (differential)
Notes
References
Further reading
External links
Mathworld A more technical explanation of Jacobians
Multivariable calculus
Differential calculus
Generalizations of the derivative
Determinants
Matrices
Differential operators | Jacobian matrix and determinant | [
"Mathematics"
] | 2,249 | [
"Mathematical analysis",
"Calculus",
"Mathematical objects",
"Matrices (mathematics)",
"Differential calculus",
"Multivariable calculus",
"Differential operators"
] |
195,407 | https://en.wikipedia.org/wiki/Einstein%20notation | In mathematics, especially the usage of linear algebra in mathematical physics and differential geometry, Einstein notation (also known as the Einstein summation convention or Einstein summation notation) is a notational convention that implies summation over a set of indexed terms in a formula, thus achieving brevity. As part of mathematics it is a notational subset of Ricci calculus; however, it is often used in physics applications that do not distinguish between tangent and cotangent spaces. It was introduced to physics by Albert Einstein in 1916.
Introduction
Statement of convention
According to this convention, when an index variable appears twice in a single term and is not otherwise defined (see Free and bound variables), it implies summation of that term over all the values of the index. So where the indices can range over the set {1, 2, 3, …, n},
y = Σ_{i=1}^{n} c_i x^i = c_1 x^1 + c_2 x^2 + ⋯ + c_n x^n
is simplified by the convention to:
y = c_i x^i
The upper indices are not exponents but are indices of coordinates, coefficients or basis vectors. That is, in this context x^2 should be understood as the second component of x rather than the square of x (this can occasionally lead to ambiguity). The upper index position in x^i is because, typically, an index occurs once in an upper (superscript) and once in a lower (subscript) position in a term (see below). Typically, (x^1 x^2 x^3) would be equivalent to the traditional (x y z).
In general relativity, a common convention is that
the Greek alphabet is used for space and time components, where indices take on values 0, 1, 2, or 3 (frequently used letters are μ, ν, …),
the Latin alphabet is used for spatial components only, where indices take on values 1, 2, or 3 (frequently used letters are i, j, …),
In general, indices can range over any indexing set, including an infinite set. This should not be confused with a typographically similar convention used to distinguish between tensor index notation and the closely related but distinct basis-independent abstract index notation.
An index that is summed over is a summation index, in this case "i". It is also called a dummy index since any symbol can replace "i" without changing the meaning of the expression (provided that it does not collide with other index symbols in the same term).
An index that is not summed over is a free index and should appear only once per term. If such an index does appear, it usually also appears in every other term in an equation. An example of a free index is the "" in the equation , which is equivalent to the equation .
Application
Einstein notation can be applied in slightly different ways. Typically, each index occurs once in an upper (superscript) and once in a lower (subscript) position in a term; however, the convention can be applied more generally to any repeated indices within a term. When dealing with covariant and contravariant vectors, where the position of an index indicates the type of vector, the first case usually applies; a covariant vector can only be contracted with a contravariant vector, corresponding to summation of the products of coefficients. On the other hand, when there is a fixed coordinate basis (or when not considering coordinate vectors), one may choose to use only subscripts; see below.
Vector representations
Superscripts and subscripts versus only subscripts
In terms of covariance and contravariance of vectors,
upper indices represent components of contravariant vectors (vectors),
lower indices represent components of covariant vectors (covectors).
They transform contravariantly or covariantly, respectively, with respect to change of basis.
In recognition of this fact, the following notation uses the same symbol both for a vector or covector and its components, as in:
where v is the vector and v^i are its components (not the i-th covector v), w is the covector and w_i are its components. The basis vector elements e_i are each column vectors, and the covector basis elements e^i are each row covectors. (See also duality, below, and the examples.)
In the presence of a non-degenerate form (an isomorphism V → V*, for instance a Riemannian metric or Minkowski metric), one can raise and lower indices.
A basis gives such a form (via the dual basis), hence when working on ℝⁿ with a Euclidean metric and a fixed orthonormal basis, one has the option to work with only subscripts.
However, if one changes coordinates, the way that coefficients change depends on the variance of the object, and one cannot ignore the distinction; see Covariance and contravariance of vectors.
Mnemonics
In the above example, vectors are represented as n × 1 matrices (column vectors), while covectors are represented as 1 × n matrices (row covectors).
When using the column vector convention:
"Upper indices go up to down; lower indices go left to right."
"Covariant tensors are row vectors that have indices that are below (co-row-below)."
Covectors are row vectors: Hence the lower index indicates which column you are in.
Contravariant vectors are column vectors: Hence the upper index indicates which row you are in.
Abstract description
The virtue of Einstein notation is that it represents the invariant quantities with a simple notation.
In physics, a scalar is invariant under transformations of basis. In particular, a Lorentz scalar is invariant under a Lorentz transformation. The individual terms in the sum are not. When the basis is changed, the components of a vector change by a linear transformation described by a matrix. This led Einstein to propose the convention that repeated indices imply the summation is to be done.
As for covectors, they change by the inverse matrix. This is designed to guarantee that the linear function associated with the covector, the sum above, is the same no matter what the basis is.
The value of the Einstein convention is that it applies to other vector spaces built from V using the tensor product and duality. For example, V ⊗ V, the tensor product of V with itself, has a basis consisting of tensors of the form e_{ij} = e_i ⊗ e_j. Any tensor T in V ⊗ V can be written as:
T = T^{ij} e_{ij}
V*, the dual of V, has a basis e^1, e^2, …, e^n which obeys the rule
e^i(e_j) = δ^i_j
where δ is the Kronecker delta. As a linear map may be identified with an element of V* ⊗ W,
the row/column coordinates on a matrix correspond to the upper/lower indices on the tensor product.
Common operations in this notation
In Einstein notation, the usual element reference A_{mn} for the m-th row and n-th column of matrix A becomes A^m_n. We can then write the following operations in Einstein notation as follows.
Inner product
The inner product of two vectors is the sum of the products of their corresponding components, with the indices of one vector lowered (see Raising and lowering indices below):
⟨u, v⟩ = u_j v^j
In the case of an orthonormal basis, we have u^j = u_j, and the expression simplifies to:
⟨u, v⟩ = u_j v_j
Vector cross product
In three dimensions, the cross product of two vectors with respect to a positively oriented orthonormal basis, meaning that e_1 × e_2 = e_3, can be expressed as:
u × v = ε^i_{jk} u^j v^k e_i
Here, ε^i_{jk} is the Levi-Civita symbol. Since the basis is orthonormal, raising the index does not alter the value of ε^i_{jk}, when treated as a tensor.
Matrix-vector multiplication
The product of a matrix A^i_j with a column vector v^j is:
u^i = (A v)^i = A^i_j v^j
equivalent to u = A v.
This is a special case of matrix multiplication.
Matrix multiplication
The matrix product of two matrices A^i_j and B^j_k is:
C^i_k = (A B)^i_k = A^i_j B^j_k
equivalent to C = A B.
Trace
For a square matrix A^i_j, the trace is the sum of the diagonal elements, hence the sum over a common index i: tr(A) = A^i_i.
Outer product
The outer product of the column vector u^i by the row vector v_j yields an m × n matrix A:
A^i_j = u^i v_j
Since i and j represent two different indices, there is no summation and the indices are not eliminated by the multiplication.
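The operations above map almost verbatim onto NumPy's einsum function, whose subscript strings follow the same repeated-index convention (a minimal sketch; the arrays are random placeholders chosen here for illustration):

    import numpy as np

    A = np.random.rand(3, 3)
    B = np.random.rand(3, 3)
    u = np.random.rand(3)
    v = np.random.rand(3)

    inner = np.einsum('j,j->', u, v)      # u_j v^j       (inner product)
    Av    = np.einsum('ij,j->i', A, v)    # A^i_j v^j     (matrix-vector product)
    AB    = np.einsum('ij,jk->ik', A, B)  # A^i_j B^j_k   (matrix multiplication)
    trA   = np.einsum('ii->', A)          # A^i_i         (trace)
    outer = np.einsum('i,j->ij', u, v)    # u^i v_j       (outer product, no summation)

    # Cross product via the Levi-Civita symbol epsilon_{ijk}:
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    cross = np.einsum('ijk,j,k->i', eps, u, v)
    assert np.allclose(cross, np.cross(u, v))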
Raising and lowering indices
Given a tensor, one can raise an index or lower an index by contracting the tensor with the metric tensor, g_{μν}. For example, taking the tensor T^{αβ}, one can lower an index:
g_{μσ} T^{σβ} = T_μ^β
Or one can raise an index:
g^{μσ} T_σ^β = T^{μβ}
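A minimal sketch of this contraction in code (illustrative only; the Minkowski metric with signature (+, −, −, −) is assumed here as the choice of g):

    import numpy as np

    g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric g_{mu nu}
    T = np.random.rand(4, 4)               # components T^{alpha beta}

    # Lower the first index:  T_mu^beta = g_{mu sigma} T^{sigma beta}
    T_lower = np.einsum('ms,sb->mb', g, T)

    # Raise it again with the inverse metric and recover T.
    g_inv = np.linalg.inv(g)
    T_back = np.einsum('ms,sb->mb', g_inv, T_lower)
    assert np.allclose(T_back, T)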
See also
Tensor
Abstract index notation
Bra–ket notation
Penrose graphical notation
Levi-Civita symbol
DeWitt notation
Notes
This applies only for numerical indices. The situation is the opposite for abstract indices. Then, vectors themselves carry upper abstract indices and covectors carry lower abstract indices, as per the example in the introduction of this article. Elements of a basis of vectors may carry a lower numerical index and an upper abstract index.
References
Bibliography
.
External links
Mathematical notation
Multilinear algebra
Tensors
Riemannian geometry
Mathematical physics
Albert Einstein | Einstein notation | [
"Physics",
"Mathematics",
"Engineering"
] | 1,708 | [
"Tensors",
"Applied mathematics",
"Theoretical physics",
"nan",
"Mathematical physics"
] |
195,693 | https://en.wikipedia.org/wiki/X-ray%20burster | X-ray bursters are one class of X-ray binary stars exhibiting X-ray bursts, periodic and rapid increases in luminosity (typically a factor of 10 or greater) that peak in the X-ray region of the electromagnetic spectrum. These astrophysical systems are composed of an accreting neutron star and a main sequence companion 'donor' star. There are two types of X-ray bursts, designated I and II. Type I bursts are caused by thermonuclear runaway, while type II arise from the release of gravitational (potential) energy liberated through accretion. For type I (thermonuclear) bursts, the mass transferred from the donor star accumulates on the surface of the neutron star until it ignites and fuses in a burst, producing X-rays. The behaviour of X-ray bursters is similar to the behaviour of recurrent novae. In the latter case the compact object is a white dwarf that accretes hydrogen that finally undergoes explosive burning.
The compact object of the broader class of X-ray binaries is either a neutron star or a black hole; however, with the emission of an X-ray burst, the compact object can immediately be classified as a neutron star, since black holes do not have a surface and all of the accreting material disappears past the event horizon. X-ray binaries hosting a neutron star can be further subdivided based on the mass of the donor star: either a high-mass (above 10 solar masses) or low-mass (less than about one solar mass) X-ray binary, abbreviated as HMXB and LMXB, respectively.
X-ray bursts typically exhibit a sharp rise time (1–10 seconds) followed by spectral softening (a property of cooling black bodies). Individual burst energetics are characterized by an integrated flux of 10³²–10³³ joules, compared to the steady luminosity which is of the order 10³⁰ W for steady accretion onto a neutron star. As such the ratio α of the burst flux to the persistent flux ranges from 10 to 1000 but is typically on the order of 100. The X-ray bursts emitted from most of these systems recur on timescales ranging from hours to days, although more extended recurrence times are exhibited in some systems, and weak bursts with recurrence times between 5–20 minutes have yet to be explained but are observed in some less usual cases. The abbreviation XRB can refer either to the object (X-ray burster) or to the associated emission (X-ray burst).
Thermonuclear burst astrophysics
When a star in a binary fills its Roche lobe (either due to being very close to its companion or having a relatively large radius), it begins to lose matter, which streams towards its neutron star companion. The star may also undergo mass loss by exceeding its Eddington luminosity, or through strong stellar winds, and some of this material may become gravitationally attracted to the neutron star. In the circumstance of a short orbital period and a massive partner star, both of these processes may contribute to the transfer of material from the companion to the neutron star. In both cases, the falling material originates from the surface layers of the partner star and is thus rich in hydrogen and helium. The matter streams from the donor into the accretor at the intersection of the two Roche lobes, which is also the location of the first Lagrange point, L1. Because of the revolution of the two stars around a common centre of gravity, the material then forms a jet travelling towards the accretor. Because compact stars have high gravitational fields, the material falls with a high velocity and angular momentum towards the neutron star. The angular momentum prevents it from immediately joining the surface of the accreting star. It continues to orbit the accretor in the orbital plane, colliding with other accreting material en route, thereby losing energy, and in so doing forming an accretion disk, which also lies in the orbital plane.
In an X-ray burster, this material accretes onto the surface of the neutron star, where it forms a dense layer. After mere hours of accumulation and gravitational compression, nuclear fusion starts in this matter. This begins as a stable process, the hot CNO cycle. However, continued accretion creates a degenerate shell of matter, in which the temperature rises (greater than 10⁹ kelvin) but this does not alleviate thermodynamic conditions. This causes the triple-α cycle to quickly become favored, resulting in a helium flash. The additional energy provided by this flash allows the CNO burning to break out into thermonuclear runaway. The early phase of the burst is powered by the alpha-p process, which quickly yields to the rp-process. Nucleosynthesis can proceed as high as mass number 100, but was shown to end definitively at isotopes of tellurium that undergo alpha decay, such as ¹⁰⁷Te. Within seconds, most of the accreted material is burned, powering a bright X-ray flash that is observable with X-ray (or gamma ray) telescopes. Theory suggests that there are several burning regimes which cause variations in the burst, such as ignition condition, energy released, and recurrence, with the regimes caused by the nuclear composition, both of the accreted material and the burst ashes. This is mostly dependent on hydrogen, helium, or carbon content. Carbon ignition may also be the cause of the extremely rare "superbursts".
Observation of bursts
Because an enormous amount of energy is released in a short period of time, much of it is released as high energy photons in accordance with the theory of black-body radiation, in this case X-rays. This release of energy powers the X-ray burst, and may be observed as an increase in the star's luminosity with a space telescope. These bursts cannot be observed on Earth's surface because our atmosphere is opaque to X-rays. Most X-ray bursting stars exhibit recurrent bursts because the bursts are not powerful enough to disrupt the stability or orbit of either star, and the whole process may begin again.
Most X-ray bursters have irregular burst periods, which can be on the order of a few hours to many months, depending on factors such as the masses of the stars, the distance between the two stars, the rate of accretion, and the exact composition of the accreted material. Observationally, the X-ray burst categories exhibit different features. A Type I X-ray burst has a sharp rise followed by a slow and gradual decline of the luminosity profile. A Type II X-ray burst exhibits a quick pulse shape and may have many fast bursts separated by minutes. Most observed X-ray bursts are of Type I, as Type II X-ray bursts have been observed from only two sources.
More finely detailed variations in burst observation have been recorded as the X-ray imaging telescopes improve. Within the familiar burst lightcurve shape, anomalies such as oscillations (called quasi-periodic oscillations) and dips have been observed, with various nuclear and physical explanations being offered, though none yet has been proven.
X-ray spectroscopy has revealed in bursts from EXO 0748-676 a 4 keV absorption feature and H- and He-like absorption lines in Fe. The subsequent derivation of a redshift of z = 0.35 implies a constraint for the mass-radius equation of the neutron star, a relationship which is still a mystery but is a major priority for the astrophysics community. However, the narrow line profiles are inconsistent with the rapid (552 Hz) spin of the neutron star in this object, and it seems more likely that the line features arise from the accretion disc.
Applications to astronomy
Luminous X-ray bursts can be considered standard candles, since the mass of the neutron star determines the luminosity of the burst. Therefore, comparing the observed X-ray flux to the predicted value yields relatively accurate distances. Observations of X-ray bursts also allow the determination of the radius of the neutron star.
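As a rough illustration of the standard-candle idea (a sketch, not from the article: it assumes the burst peaks at the Eddington luminosity of roughly 2×10³⁸ erg/s for a typical neutron star, uses a hypothetical observed flux, and ignores anisotropy and absorption), the distance follows from the inverse-square law:

    import math

    L_edd = 2.0e38          # assumed peak luminosity, erg/s (Eddington, ~1.4 solar masses)
    F_peak = 3.0e-8         # hypothetical observed peak flux, erg/s/cm^2

    # Inverse-square law: F = L / (4 pi d^2)  ->  d = sqrt(L / (4 pi F))
    d_cm = math.sqrt(L_edd / (4.0 * math.pi * F_peak))
    d_kpc = d_cm / 3.086e21  # centimetres per kiloparsec
    print(f"distance ~ {d_kpc:.1f} kpc")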
See also
Gamma-ray burst
Soft X-ray transient
References
Stellar phenomena
Neutron stars
Variable stars
Standard candles
X-ray burster
Semidetached binaries
Nucleosynthesis | X-ray burster | [
"Physics",
"Chemistry"
] | 1,714 | [
"Nuclear fission",
"Physical phenomena",
"Standard candles",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion",
"Stellar phenomena"
] |
195,729 | https://en.wikipedia.org/wiki/Period%203%20element | A period 3 element is one of the chemical elements in the third row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behavior of the elements as their atomic number increases: a new row is begun when chemical behavior begins to repeat, meaning that elements with similar behavior fall into the same vertical columns. The third period contains eight elements: sodium, magnesium, aluminium, silicon, phosphorus, sulfur, chlorine and argon. The first two, sodium and magnesium, are members of the s-block of the periodic table, while the others are members of the p-block. All of the period 3 elements occur in nature and have at least one stable isotope.
Atomic structure
In a quantum mechanical description of atomic structure, this period corresponds to the buildup of electrons in the third (n = 3) shell, more specifically filling its 3s and 3p subshells. There is a 3d subshell, but—in compliance with the Aufbau principle—it is not filled until period 4. This makes all eight elements analogs of the period 2 elements, in exactly the same sequence. The octet rule generally applies to period 3 in the same way as to period 2 elements, because the 3d subshell normally does not come into play.
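The filling of the 3s and 3p subshells across the period can be tabulated directly (a small illustrative script; configurations are written on top of the neon core):

    # Ground-state electron configurations of the period 3 elements,
    # showing how the 3s subshell fills first and the 3p subshell afterwards.
    period3 = [
        ("Na", 11, "[Ne] 3s1"),
        ("Mg", 12, "[Ne] 3s2"),
        ("Al", 13, "[Ne] 3s2 3p1"),
        ("Si", 14, "[Ne] 3s2 3p2"),
        ("P",  15, "[Ne] 3s2 3p3"),
        ("S",  16, "[Ne] 3s2 3p4"),
        ("Cl", 17, "[Ne] 3s2 3p5"),
        ("Ar", 18, "[Ne] 3s2 3p6"),
    ]

    for symbol, z, configuration in period3:
        print(f"{symbol:2} (Z = {z}): {configuration}")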
Elements
Sodium
Sodium (symbol Na) is a soft, silvery-white, highly reactive metal and is a member of the alkali metals; its only stable isotope is ²³Na. It is an abundant element that exists in numerous minerals such as feldspars, sodalite and rock salt. Many salts of sodium are highly soluble in water and are thus present in significant quantities in the Earth's bodies of water, most abundantly in the oceans as sodium chloride.
Many sodium compounds are useful, such as sodium hydroxide (lye) for soapmaking, and sodium chloride for use as a deicing agent and a nutrient. The same ion is also a component of many minerals, such as sodium nitrate.
The free metal, elemental sodium, does not occur in nature but must be prepared from sodium compounds. Elemental sodium was first isolated by Humphry Davy in 1807 by the electrolysis of sodium hydroxide.
Magnesium
Magnesium (symbol Mg) is an alkaline earth metal and has common oxidation number +2. It is the eighth most abundant element in the Earth's crust and the ninth in the known universe as a whole. Magnesium is the fourth most common element in the Earth as a whole (behind iron, oxygen and silicon), making up 13% of the planet's mass and a large fraction of the planet's mantle. It is relatively abundant because it is easily built up in supernova stars by sequential additions of three helium nuclei to carbon (which in turn is made from three helium nuclei). Due to the magnesium ion's high solubility in water, it is the third most abundant element dissolved in seawater.
The free element (metal) is not found naturally on Earth, as it is highly reactive (though once produced, it is coated in a thin layer of oxide [see passivation], which partly masks this reactivity). The free metal burns with a characteristic brilliant white light, making it a useful ingredient in flares. The metal is now mainly obtained by electrolysis of magnesium salts obtained from brine. Commercially, the chief use for the metal is as an alloying agent to make aluminium-magnesium alloys, sometimes called "magnalium" or "magnelium". Since magnesium is less dense than aluminium, these alloys are prized for their relative lightness and strength.
Magnesium ions are sour to the taste, and in low concentrations help to impart a natural tartness to fresh mineral waters.
Aluminium
Aluminium (symbol Al) or aluminum (American English) is a silvery white member of the boron group of chemical elements and a p-block metal classified by some chemists as a post-transition metal. It is not soluble in water under normal circumstances. Aluminium is the third most abundant element (after oxygen and silicon), and the most abundant metal, in the Earth's crust. It makes up about 8% by weight of the Earth's solid surface. Aluminium metal is too reactive chemically to occur natively. Instead, it is found combined in over 270 different minerals. The chief ore of aluminium is bauxite.
Aluminium is remarkable for the metal's low density and for its ability to resist corrosion due to the phenomenon of passivation. Structural components made from aluminium and its alloys are vital to the aerospace industry and are important in other areas of transportation and structural materials. The most useful compounds of aluminium, at least on a weight basis, are the oxides and sulfates.
Silicon
Silicon (symbol Si) is a group 14 metalloid. It is less reactive than its chemical analog carbon, the nonmetal directly above it in the periodic table, but more reactive than germanium, the metalloid directly below it in the table. Controversy about silicon's character dates from its discovery: silicon was first prepared and characterized in pure form in 1824, and given the name silicium (from Latin silex, flint), with an -ium word-ending to suggest a metal. However, its final name, suggested in 1831, reflects the more chemically similar elements carbon and boron.
Silicon is the eighth most common element in the universe by mass, but very rarely occurs as the pure free element in nature. It is most widely distributed in dusts, sands, planetoids and planets as various forms of silicon dioxide (silica) or silicates. Over 90% of the Earth's crust is composed of silicate minerals, making silicon the second most abundant element in the Earth's crust (about 28% by mass) after oxygen.
Most silicon is used commercially without being separated, and indeed often with little processing of compounds from nature. These include direct industrial building use of clays, silica sand and stone. Silica is used in ceramic brick. Silicate goes into Portland cement for mortar and stucco, and combined with silica sand and gravel, to make concrete. Silicates are also in whiteware ceramics such as porcelain, and in traditional quartz-based soda–lime glass. More modern silicon compounds such as silicon carbide form abrasives and high-strength ceramics. Silicon is the basis of the ubiquitous synthetic silicon-based polymers called silicones.
Elemental silicon also has a large impact on the modern world economy. Although most free silicon is used in the steel refining, aluminum-casting, and fine chemical industries (often to make fumed silica), the relatively small portion of very highly purified silicon that is used in semiconductor electronics (< 10%) is perhaps even more critical. Because of wide use of silicon in integrated circuits, the basis of most computers, a great deal of modern technology depends on it.
Phosphorus
Phosphorus (symbol P) is a multivalent nonmetal of the nitrogen group. As a mineral, phosphorus is almost always present in its maximally oxidized (pentavalent) state, as inorganic phosphate rocks. Elemental phosphorus exists in two major forms—white phosphorus and red phosphorus—but due to its high reactivity, phosphorus is never found as a free element on Earth.
The first form of elemental phosphorus to be produced (white phosphorus, in 1669) emits a faint glow upon exposure to oxygen – hence its name given from Greek mythology, meaning "light-bearer" (Latin: Lucifer), referring to the "Morning Star", the planet Venus. Although the term "phosphorescence", meaning glow after illumination, derives from this property of phosphorus, the glow of phosphorus originates from oxidation of the white (but not red) phosphorus and should be called chemiluminescence. It is also the lightest element to easily produce stable exceptions to the octet rule.
The vast majority of phosphorus compounds are consumed as fertilizers. Other applications include the role of organophosphorus compounds in detergents, pesticides and nerve agents and matches.
Sulfur
Sulfur (symbol S) is an abundant multivalent nonmetal, one of chalcogens. Under normal conditions, sulfur atoms form cyclic octatomic molecules with chemical formula S8. Elemental sulfur is a bright yellow crystalline solid when at room temperature. Chemically, sulfur can react as either an oxidant or a reducing agent. It oxidizes most metals and several nonmetals, including carbon, which leads to its negative charge in most organosulfur compounds, but it reduces several strong oxidants, such as oxygen and fluorine.
In nature, sulfur can be found as the pure element and as sulfide and sulfate minerals. Elemental sulfur crystals are commonly sought after by mineral collectors for their brightly colored polyhedron shapes. Being abundant in native form, sulfur was known in ancient times, mentioned for its uses in ancient Greece, China and Egypt. Sulfur fumes were used as fumigants, and sulfur-containing medicinal mixtures were used as balms and antiparasitics. Sulfur is referenced in the Bible as brimstone in English, with this name still used in several nonscientific terms. Sulfur was considered important enough to receive its own alchemical symbol. It was needed to make the best quality of black gunpowder, and the bright yellow powder was hypothesized by alchemists to contain some of the properties of gold, which they sought to synthesize from it. In 1777, Antoine Lavoisier helped convince the scientific community that sulfur was a basic element, rather than a compound.
Elemental sulfur was once extracted from salt domes, where it sometimes occurs in nearly pure form, but this method has been obsolete since the late 20th century. Today, almost all elemental sulfur is produced as a byproduct of removing sulfur-containing contaminants from natural gas and petroleum. The element's commercial uses are primarily in fertilizers, because of the relatively high requirement of plants for it, and in the manufacture of sulfuric acid, a primary industrial chemical. Other well-known uses for the element are in matches, insecticides and fungicides. Many sulfur compounds are odiferous, and the smell of odorized natural gas, skunk scent, grapefruit, and garlic is due to sulfur compounds. Hydrogen sulfide produced by living organisms imparts the characteristic odor to rotting eggs and other biological processes.
Chlorine
Chlorine (symbol Cl) is the second-lightest halogen. The element forms diatomic molecules under standard conditions, called dichlorine. It has the highest electron affinity and one of the highest electronegativities of all the elements; thus chlorine is a strong oxidizing agent.
The most common compound of chlorine, sodium chloride (table salt), has been known since ancient times; however, around 1630, chlorine gas was obtained by the Belgian chemist and physician Jan Baptist van Helmont. The synthesis and characterization of elemental chlorine occurred in 1774 by Swedish chemist Carl Wilhelm Scheele, who called it "dephlogisticated muriatic acid air", as he thought he synthesized the oxide obtained from the hydrochloric acid, because acids were thought at the time to necessarily contain oxygen. A number of chemists, including Claude Berthollet, suggested that Scheele's "dephlogisticated muriatic acid air" must be a combination of oxygen and the yet undiscovered element, and Scheele named the supposed new element within this oxide as muriaticum. The suggestion that this newly discovered gas was a simple element was made in 1809 by Joseph Louis Gay-Lussac and Louis-Jacques Thénard. This was confirmed in 1810 by Sir Humphry Davy, who named it chlorine, from the Greek word χλωρός (chlōros), meaning "green-yellow".
Chlorine is a component of many other compounds. It is the second most abundant halogen and 21st most abundant element in Earth's crust. The great oxidizing power of chlorine led it to its bleaching and disinfectant uses, as well as being an essential reagent in the chemical industry. As a common disinfectant, chlorine compounds are used in swimming pools to keep them clean and sanitary. In the upper atmosphere, chlorine-containing molecules such as chlorofluorocarbons have been implicated in ozone depletion.
Argon
Argon (symbol Ar) is the third element in group 18, the noble gases. Argon is the third most common gas in the Earth's atmosphere, at 0.93%, making it more common than carbon dioxide. Nearly all of this argon is radiogenic argon-40 derived from the decay of potassium-40 in the Earth's crust. In the universe, argon-36 is by far the most common argon isotope, being the preferred argon isotope produced by stellar nucleosynthesis.
The name "argon" is derived from the Greek neuter adjective ἀργόν, meaning "lazy" or "the inactive one", as the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990.
Argon is produced industrially by the fractional distillation of liquid air. Argon is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily non-reactive substances become reactive: for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. Argon gas also has uses in incandescent and fluorescent lighting, and other types of gas discharge tubes. Argon makes a distinctive blue–green gas laser.
Biological roles
Sodium is an essential element for all animals and some plants. In animals, sodium ions are used against potassium ions to build up charges on cell membranes, allowing transmission of nerve impulses when the charge is dissipated; it is therefore classified as a dietary inorganic macromineral.
Magnesium is the eleventh most abundant element by mass in the human body; its ions are essential to all living cells, where they play a major role in manipulating important biological polyphosphate compounds like ATP, DNA, and RNA. Hundreds of enzymes thus require magnesium ions to function. Magnesium is also the metallic ion at the center of chlorophyll, and is thus a common additive to fertilizers. Magnesium compounds are used medicinally as common laxatives, antacids (e.g., milk of magnesia), and in a number of situations where stabilization of abnormal nerve excitation and blood vessel spasm is required (e.g., to treat eclampsia).
Despite its prevalence in the environment, aluminium salts are not known to be used by any form of life. In keeping with its pervasiveness, it is well tolerated by plants and animals. Because of their prevalence, potential beneficial (or otherwise) biological roles of aluminium compounds are of continuing interest.
Silicon is an essential element in biology, although only tiny traces of it appear to be required by animals, though various sea sponges need silicon in order to have structure. It is much more important to the metabolism of plants, particularly many grasses, and silicic acid (a type of silica) forms the basis of the striking array of protective shells of the microscopic diatoms.
Phosphorus is essential for life. As phosphate, it is a component of DNA, RNA, ATP, and also the phospholipids that form all cell membranes. Demonstrating the link between phosphorus and life, elemental phosphorus was historically first isolated from human urine, and bone ash was an important early phosphate source. Phosphate minerals are fossils. Low phosphate levels are an important limit to growth in some aquatic systems. Today, the most important commercial use of phosphorus-based chemicals is the production of fertilizers, to replace the phosphorus that plants remove from the soil.
Sulfur is an essential element for all life, and is widely used in biochemical processes. In metabolic reactions, sulfur compounds serve as both fuels and respiratory (oxygen-replacing) materials for simple organisms. Sulfur in organic form is present in the vitamins biotin and thiamine, the latter being named for the Greek word for sulfur. Sulfur is an important part of many enzymes and in antioxidant molecules like glutathione and thioredoxin. Organically bonded sulfur is a component of all proteins, as the amino acids cysteine and methionine. Disulfide bonds are largely responsible for the mechanical strength and insolubility of the protein keratin, found in outer skin, hair, and feathers, and the element contributes to their pungent odor when burned.
Elemental chlorine is extremely dangerous and poisonous for all lifeforms, and is used as a pulmonary agent in chemical warfare; however, chlorine is necessary to most forms of life, including humans, in the form of chloride ions.
Argon has no biological role. Like any gas besides oxygen, argon is an asphyxiant.
Table of elements
Notes
References
Periods (periodic table)
"Chemistry"
] | 3,577 | [
"Periodic table",
"Periods (periodic table)"
] |
1,096,102 | https://en.wikipedia.org/wiki/Z%20Pulsed%20Power%20Facility | The Z Pulsed Power Facility, informally known as the Z machine or simply Z, is the largest high frequency electromagnetic wave generator in the world, operated by Sandia National Laboratories in Albuquerque, New Mexico.
It has primarily been used as an inertial confinement fusion (ICF) research facility, including the magnetized liner inertial fusion (MagLIF) approach, and for testing materials in conditions of extreme temperature and pressure.
In particular, it gathers data to aid in computer modeling of nuclear weapons and eventual fusion pulsed power plants.
History
The Z machine's origins can be traced to the Department of Energy (DoE) needing to replicate the fusion reactions of a thermonuclear bomb in a lab environment to better understand the physics involved.
Since the 1970s, the DoE has also been looking into ways to generate electricity from fusion reactions.
The first research at Sandia, headed by Gerold Yonas – the particle-beam fusion program – dates back to 1971. This program tried to generate fusion by compressing fuel with beams of charged particles. Electrons were the first particles to be thought of, because the pulsed power accelerators at the time had already concentrated them at high power in small areas. However, shortly thereafter it was realized that electrons can not possibly heat the fusion fuel rapidly enough for the purpose. The program then moved away from electrons in favor of protons. These turned out to be too light to control well enough to concentrate onto a target, and the program moved on to light ions, lithium. The accelerators' names reflect the change in emphasis: first the accelerator's name was EBFA-I (electron beam fusion accelerator), shortly thereafter PBFA-I, which became Saturn. Protons demanded another accelerator, PBFA-II, which became Z.
The November 1978 issue of Scientific American carried Yonas' first general-public article, "Fusion power with particle beams".
In 1985, the PBFA-II was created. Sandia continued to target heavy ion fusion at a slow pace despite the National Academies report.
Meanwhile, defense-related research was also ongoing at Sandia with the Hermes III machine and Saturn (1987), upgraded from PBFA-I, which operated at lower total power than PBFA-II but advanced Sandia's knowledge in high voltage and high current and was therefore a useful predecessor to the Z machine.
In 1996, the PBFA-II machine was once again upgraded into PBFA-Z or simply "Z machine", described for the first time to the general public in August 1998 in Scientific American.
Physics of the Z machine
The Z machine uses the well known principle of Z-pinch to produce hot short-lived plasmas. The plasma can be used as a source of x-rays, as a surrogate for the inside of a thermonuclear weapon, or as a surrogate for the core of a fusion power plant.
In a Z-pinch, the fast discharge of current through a column of plasma causes it to be compressed towards its axis by the resulting Lorentz forces, thus heating it. Willard Harrison Bennett successfully researched the application of Z-pinches to plasma compression. The Z machine layout is cylindrical. On the outside it houses huge capacitors discharging through Marx generators which generate a one microsecond high-voltage pulse. This pulse is then compressed by a factor of 10 to enable the creation of 100 ns discharges.
Most experiments on the Z machine run the current discharge through a conductive tube (called a liner) filled with gas. This approach is known as magnetized liner inertial fusion, or MagLIF. The compression of a MagLIF Z-pinch is limited because the current flow is highly unstable and rotates along the cylinder which causes twisting of the imploding tube therefore decreasing the quality of the compression.
The Z machine has also conducted experiments with arrays of tungsten wires rather than liners. The space inside the wire array was filled with polystyrene, which helps homogenize the X-ray flux. By removing the polystyrene core, Sandia was able to obtain a thin 1.5 mm plasma cord in which 10 million amperes flowed with 90 megabars of pressure.
Early operation 1996–2006
The key attributes of Sandia's Z machine are its 18 million amperes of current and a discharge time of less than 100 nanoseconds. This current discharge was initially run through an array of tungsten wires.
In 1999, Sandia tested the idea of nested wire arrays; the second array, out of phase with the first, compensates for Rayleigh-Taylor instabilities.
In 2001, Sandia introduced the Z-Beamlet laser (from surplus equipment of the National Ignition Facility) as a tool to better image the compressing pellet. This confirmed the shaping uniformity of pellets compressed by the Z machine.
In 1999, Sandia started the Z-IFE project, which aimed to solve the practical difficulties in harnessing fusion power. Major problems included producing energy in a single Z-pinch shot, and quickly reloading the reactor after each shot. By their early estimates, an implosion of a fuel capsule every 10 seconds could economically produce 300 MW of fusion energy.
Sandia announced the fusing of small amounts of deuterium in the Z machine on April 7, 2003.
Besides being used as an X-ray generator, the Z machine propelled small plates at 34 kilometers a second, faster than the 30 kilometers per second that Earth travels in its orbit around the Sun, and about three times Earth's escape velocity at sea level. It also successfully created a special, hyperdense "hot ice" known as ice VII, by quickly compressing water to pressures of 70,000 to 120,000 atmospheres (7 to 12 GPa). Mechanical shock from impacting Z-machine-accelerated projectiles is able to melt diamonds.
During this period the X-ray power produced jumped from 10 to 300 TW. In order to target the next milestone of fusion breakeven, another upgrade was then necessary.
After refurbishment (2007–)
A $60 million (raised to $90 million) retrofit program called ZR (Z Refurbished) was announced in 2004 to increase its power by 50%. The Z machine was dismantled in July 2006 for this upgrade, including the installation of newly designed hardware and components and more powerful Marx generators. The deionized water section of the machine has been reduced to about half the previous size while the oil section has been expanded significantly in order to house larger intermediate storage lines (i-stores) and new laser towers, which used to sit in the water section. The refurbishment was completed in October 2007.
The newer Z machine can now shoot around 26 million amperes (instead of 18 million amperes previously) in 95 nanoseconds. The radiated power has been raised to 350 terawatts and the X-ray energy output to 2.7 megajoules. In 2006 wire array experiments reached ultra-high temperatures (2.66 to 3.7 billion kelvins).
Sandia's roadmap for the future includes another Z machine version called ZN (Z Neutron) to test higher yields in fusion power and automation systems. ZN is planned to give between 20 and 30 MJ of hydrogen fusion power with a shot per hour using a Russian Linear Transformer Driver (LTD) replacing the current Marx generators. After 8 to 10 years of operation, ZN would become a transmutation pilot plant capable of a fusion shot every 100 seconds.
The next step planned would be the Z-IFE (Z-inertial fusion energy) test facility, the first true z-pinch driven prototype fusion power plant. It is suggested it would integrate Sandia's latest designs using LTDs. Sandia Labs recently proposed a conceptual 1 petawatt (10¹⁵ watts) LTD Z-pinch power plant, where the electric discharge would reach 70 million amperes. As of 2012, fusion shot simulations at 60 to 70 million amperes are showing a 100 to 1000 fold return on input energy. Tests at the Z machine's current design maximum of 26-27 million amperes were set to begin in 2013.
See also
Stockpile stewardship
Pulsed power
Inertial fusion power plant
References
External links
Home Page Z Pulsed Power Facility
"A machine called Z", an article from The Observer
Physics News Update, February 28, 2006
Inertial confinement fusion
Nuclear research institutes
Nuclear stockpile stewardship
Military research of the United States
Plasma physics facilities
Sandia National Laboratories | Z Pulsed Power Facility | [
"Physics",
"Engineering"
] | 1,766 | [
"Nuclear research institutes",
"Nuclear organizations",
"Plasma physics facilities",
"Plasma physics"
] |
1,096,396 | https://en.wikipedia.org/wiki/Categorical%20theory | In mathematical logic, a theory is categorical if it has exactly one model (up to isomorphism). Such a theory can be viewed as defining its model, uniquely characterizing the model's structure.
In first-order logic, only theories with a finite model can be categorical. Higher-order logic contains categorical theories with an infinite model. For example, the second-order Peano axioms are categorical, having a unique model whose domain is the set of natural numbers.
In model theory, the notion of a categorical theory is refined with respect to cardinality. A theory is κ-categorical (or categorical in κ) if it has exactly one model of cardinality κ up to isomorphism. Morley's categoricity theorem is a theorem of Michael Morley stating that if a first-order theory in a countable language is categorical in some uncountable cardinality, then it is categorical in all uncountable cardinalities.
Saharon Shelah extended Morley's theorem to uncountable languages: if the language has cardinality κ and a theory is categorical in some uncountable cardinal greater than or equal to κ then it is categorical in all cardinalities greater than κ.
History and motivation
Oswald Veblen in 1904 defined a theory to be categorical if all of its models are isomorphic. It follows from the definition above and the Löwenheim–Skolem theorem that any first-order theory with a model of infinite cardinality cannot be categorical. One is then immediately led to the more subtle notion of κ-categoricity, which asks: for which cardinals κ is there exactly one model of cardinality κ of the given theory T up to isomorphism? This is a deep question and significant progress was only made in 1954 when Jerzy Łoś noticed that, at least for complete theories T over countable languages with at least one infinite model, he could only find three ways for T to be κ-categorical at some κ:
T is totally categorical, i.e. T is κ-categorical for all infinite cardinals κ.
T is uncountably categorical, i.e. T is κ-categorical if and only if κ is an uncountable cardinal.
T is countably categorical, i.e. T is κ-categorical if and only if κ is a countable cardinal.
In other words, he observed that, in all the cases he could think of, κ-categoricity at any one uncountable cardinal implied κ-categoricity at all other uncountable cardinals. This observation spurred a great amount of research into the 1960s, eventually culminating in Michael Morley's famous result that these are in fact the only possibilities. The theory was subsequently extended and refined by Saharon Shelah in the 1970s and beyond, leading to stability theory and Shelah's more general programme of classification theory.
Examples
There are not many natural examples of theories that are categorical in some uncountable cardinal. The known examples include:
Pure identity theory (with no functions, constants, predicates other than "=", or axioms).
The classic example is the theory of algebraically closed fields of a given characteristic. Categoricity does not say that all algebraically closed fields of characteristic 0 as large as the complex numbers C are the same as C; it only asserts that they are isomorphic as fields to C. It follows that although the completed p-adic closures Cp are all isomorphic as fields to C, they may (and in fact do) have completely different topological and analytic properties. The theory of algebraically closed fields of a given characteristic is not categorical in ℵ₀ (the countable infinite cardinal); there are models of transcendence degree 0, 1, 2, ..., ℵ₀.
Vector spaces over a given countable field. This includes abelian groups of given prime exponent (essentially the same as vector spaces over a finite field) and divisible torsion-free abelian groups (essentially the same as vector spaces over the rationals).
The theory of the set of natural numbers with a successor function.
There are also examples of theories that are categorical in ℵ₀ but not categorical in uncountable cardinals.
The simplest example is the theory of an equivalence relation with exactly two equivalence classes, both of which are infinite. Another example is the theory of dense linear orders with no endpoints; Cantor proved that any such countable linear order is isomorphic to the rational numbers: see Cantor's isomorphism theorem.
Properties
Every categorical theory is complete. However, the converse does not hold.
Any theory T categorical in some infinite cardinal κ is very close to being complete. More precisely, the Łoś–Vaught test states that if a satisfiable theory has no finite models and is categorical in some infinite cardinal κ at least equal to the cardinality of its language, then the theory is complete. The reason is that all infinite models are first-order equivalent to some model of cardinality κ by the Löwenheim–Skolem theorem, and so are all equivalent as the theory is categorical in κ. Therefore, the theory is complete as all models are equivalent. The assumption that the theory have no finite models is necessary.
See also
Spectrum of a theory
Notes
References
Hodges, Wilfrid, "First-order Model Theory", The Stanford Encyclopedia of Philosophy (Summer 2005 Edition), Edward N. Zalta (ed.).
Mathematical logic
Model theory
Theorems in the foundations of mathematics | Categorical theory | [
"Mathematics"
] | 1,135 | [
"Foundations of mathematics",
"Mathematical logic",
"Model theory",
"Mathematical problems",
"Mathematical theorems",
"Theorems in the foundations of mathematics"
] |
1,097,925 | https://en.wikipedia.org/wiki/Conserved%20current | In physics a conserved current is a current, , that satisfies the continuity equation . The continuity equation represents a conservation law, hence the name.
Indeed, integrating the continuity equation over a volume , large enough to have no net currents through its surface, leads to the conservation lawwhere is the conserved quantity.
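Written out (a minimal sketch, with ρ = j⁰ the density and j the spatial part of the current, and assuming the flux through the boundary ∂V vanishes), the step uses the divergence theorem:

\[
\frac{dQ}{dt} = \int_V \frac{\partial \rho}{\partial t}\, dV
= -\int_V \nabla \cdot \mathbf{j}\, dV
= -\oint_{\partial V} \mathbf{j} \cdot d\mathbf{S}
= 0 .
\]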
In gauge theories the gauge fields couple to conserved currents. For example, the electromagnetic field couples to the conserved electric current.
Conserved quantities and symmetries
Conserved current is the flow of the canonical conjugate of a quantity possessing a continuous translational symmetry. The continuity equation for the conserved current is a statement of a conservation law. Examples of canonical conjugate quantities are:
Time and energy - the continuous translational symmetry of time implies the conservation of energy
Space and momentum - the continuous translational symmetry of space implies the conservation of momentum
Space and angular momentum - the continuous rotational symmetry of space implies the conservation of angular momentum
Wave function phase and electric charge - the continuous phase angle symmetry of the wave function implies the conservation of electric charge
Conserved currents play an extremely important role in theoretical physics, because Noether's theorem connects the existence of a conserved current to the existence of a symmetry of some quantity in the system under study. In practical terms, all conserved currents are the Noether currents, as the existence of a conserved current implies the existence of a symmetry. Conserved currents play an important role in the theory of partial differential equations, as the existence of a conserved current points to the existence of constants of motion, which are required to define a foliation and thus an integrable system. The conservation law is expressed as the vanishing of a 4-divergence, where the Noether charge forms the zeroth component of the 4-current.
Examples
Electromagnetism
The conservation of charge, for example, in the notation of Maxwell's equations reads
∂ρ/∂t + ∇ · J = 0
where
ρ is the free electric charge density (in units of C/m³)
J = ρv is the current density, with v as the velocity of the charges.
The equation would apply equally to masses (or other conserved quantities), where the word mass is substituted for the words electric charge above.
Complex scalar field
The Lagrangian density
ℒ = ∂_μφ* ∂^μφ − m² φ*φ
of a complex scalar field φ is invariant under the symmetry transformation
φ → e^{iα} φ,  φ* → e^{−iα} φ*.
Defining the associated Noether current as
j^μ = i(φ* ∂^μφ − φ ∂^μφ*),
one finds that it satisfies the continuity equation ∂_μ j^μ = 0.
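As a consistency check (a sketch; the overall sign and normalization of j^μ are convention-dependent), the divergence of this current vanishes once the equation of motion □φ = −m²φ and its complex conjugate are used:

\[
\partial_\mu j^\mu
= i\left(\partial_\mu\varphi^*\,\partial^\mu\varphi + \varphi^*\Box\varphi
       - \partial_\mu\varphi\,\partial^\mu\varphi^* - \varphi\,\Box\varphi^*\right)
= i\left(\varphi^*\Box\varphi - \varphi\,\Box\varphi^*\right)
= i\left(-m^2\varphi^*\varphi + m^2\varphi\varphi^*\right)
= 0 .
\]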
See also
Conservation law (physics)
Noether's theorem
References
Electromagnetism
Theoretical physics
Conservation equations
Symmetry | Conserved current | [
"Physics",
"Mathematics"
] | 500 | [
"Electromagnetism",
"Physical phenomena",
"Conservation laws",
"Theoretical physics",
"Mathematical objects",
"Equations",
"Fundamental interactions",
"Geometry",
"Conservation equations",
"Symmetry",
"Physics theorems"
] |
1,098,915 | https://en.wikipedia.org/wiki/Bradford%20protein%20assay | The Bradford protein assay (also known as the Coomassie protein assay) was developed by Marion M. Bradford in 1976. It is a quick and accurate spectroscopic analytical procedure used to measure the concentration of protein in a solution. The reaction is dependent on the amino acid composition of the measured proteins.
Principle
The Bradford assay, a colorimetric protein assay, is based on an absorbance shift of the dye Coomassie brilliant blue G-250. The Coomassie brilliant blue G-250 dye exists in three forms: anionic (blue), neutral (green), and cationic (red). Under acidic conditions, the red form of the dye is converted into its blue form, binding to the protein being assayed. If there is no protein to bind, the solution will remain brown. The dye forms a strong, noncovalent complex with the protein's carboxyl groups through van der Waals forces and with its amino groups through electrostatic interactions. During the formation of this complex, the red form of Coomassie dye first donates its free electron to the ionizable groups on the protein, which causes a disruption of the protein's native state, consequently exposing its hydrophobic pockets. These pockets in the protein's tertiary structure bind non-covalently to the non-polar region of the dye via the first bond interaction (van der Waals forces), which positions the positive amine groups in proximity with the negative charge of the dye. The bond is further strengthened by the second bond interaction between the two, the ionic interaction. When the dye binds to the protein, it causes a shift of the dye's absorption maximum from 465 nm to 595 nm, which is why the absorbance readings are taken at 595 nm.
The cationic (unbound) form is green / red and has an absorption spectrum maximum historically held to be at 465 nm. The anionic bound form of the dye which is held together by hydrophobic and ionic interactions, has an absorption spectrum maximum historically held to be at 595 nm. The increase of absorbance at 595 nm is proportional to the amount of bound dye, and thus to the amount (concentration) of protein present in the sample.
Unlike other protein assays, the Bradford protein assay is less susceptible to interference by various chemical compounds such as sodium, potassium or even carbohydrates like sucrose, that may be present in protein samples. An exception of note is elevated concentrations of detergent. Sodium dodecyl sulfate (SDS), a common detergent, may be found in protein extracts because it is used to lyse cells by disrupting the membrane lipid bilayer and to denature proteins for SDS-PAGE. While other detergents interfere with the assay at high concentration, the interference caused by SDS is of two different modes, and each occurs at a different concentration. When SDS concentrations are below critical micelle concentration (known as CMC, 0.00333%W/V to 0.0667%) in a Coomassie dye solution, the detergent tends to bind strongly with the protein, inhibiting the protein binding sites for the dye reagent. This can cause underestimations of protein concentration in solution. When SDS concentrations are above CMC, the detergent associates strongly with the green form of the Coomassie dye, causing the equilibrium to shift, thereby producing more of the blue form. This causes an increase in the absorbance at 595 nm independent of protein presence.
Other interference may come from the buffer used when preparing the protein sample. A high buffer concentration will cause an overestimate of the protein concentration, because the conjugate base from the buffer depletes free protons from the solution. This is not a problem if a low concentration of protein (and consequently of buffer) is used.
In order to measure the concentration of a colorless compound, an assay such as the Bradford assay must be performed. Some colorless compounds such as proteins can be quantified at an optical density of 280 nm owing to aromatic amino acids such as tryptophan, tyrosine and phenylalanine, but if none of these amino acids are present, the absorbance cannot be measured at 280 nm.
Advantages
Many protein-containing solutions have their highest absorption at 280 nm, in the UV range. This requires spectrophotometers capable of measuring in the UV range, which many cannot. Additionally, measuring the absorption maximum at 280 nm requires that proteins contain aromatic amino acids such as tyrosine (Y), phenylalanine (F) and/or tryptophan (W). Not all proteins contain these amino acids, a fact which will skew the concentration measurements. If nucleic acids are present in the sample, they would also absorb light at 280 nm, skewing the results further. By using the Bradford protein assay, one can avoid all of these complications by simply mixing the protein samples with the Coomassie brilliant blue G-250 dye (Bradford reagent) and measuring their absorbances at 595 nm, which is in the visible range and may be accurately measured by the use of a mobile smartphone camera.
The procedure for the Bradford protein assay is easy and simple to follow. It is done in one step: the Bradford reagent is added to a test tube along with the sample. After mixing well, the mixture almost immediately changes to a blue color. When the dye binds to the proteins, through a process that takes about 2 minutes, the absorption maximum of the dye shifts from 465 nm to 595 nm in acidic solution. Additionally, protein binding triggers a metachromatic reaction, evidenced by the emergence of a species that absorbs light around 595 nm, indicative of the unprotonated form. The dye creates strong noncovalent bonds with the proteins, via electrostatic interactions with the amino and carboxyl groups as well as van der Waals interactions. Only the dye molecules that bind to the proteins in solution exhibit this change in absorption, which eliminates the concern that unbound molecules of the dye might contribute to the experimentally obtained absorbance reading. This method is attractive because it is less expensive than other methods, easy to use, and highly sensitive owing to the dye's affinity for protein.
After 5 minutes of incubation, the absorbance can be read at 595 nm using a spectrophotometer or a mobile smartphone camera (RGBradford method).
This assay is one of the fastest assays performed on proteins. The total time it takes to set up and complete the assay is under 30 minutes. The entire experiment is done at room temperature.
The Bradford protein assay can measure protein quantities as little as 1 to 20 μg. It is an extremely sensitive technique.
The dye reagent is a stable, ready-to-use product prepared in phosphoric acid. It can remain at room temperature for up to 2 weeks before it starts to degrade.
Protein samples usually contain salts, solvents, buffers, preservatives, reducing agents and metal chelating agents. These molecules are frequently used for solubilizing and stabilizing proteins. Other protein assays, such as the BCA and Lowry assays, are ineffective in such samples because molecules like reducing agents interfere with them. The Bradford assay is advantageous here because it is compatible with these molecules, which do not interfere with it.
The linear graph acquired from the assay (absorbance versus protein concentration in μg/mL) can easily be used to determine the concentration of proteins from the slope of the line.
It is a sensitive technique. It is also very simple: measuring the OD at 595 nm after 5 minutes of incubation. This method can also make use of a Vis spectrophotometer or a mobile smartphone camera (RGBradford method).
Disadvantages
The Bradford assay is linear over a short range, typically from 0 μg/mL to 2000 μg/mL, often making dilutions of a sample necessary before analysis. In making these dilutions, error in one dilution is compounded in further dilutions resulting in a linear relationship that may not always be accurate.
Basic conditions and detergents, such as SDS, can interfere with the dye's ability to bind to the protein through its side chains.
The reagents in this method tend to stain the test tubes. The same test tubes cannot be reused, since the stain would affect the absorbance reading. This method is also time-sensitive: when more than one solution is tested, it is important to make sure every sample is incubated for the same amount of time for accurate comparison.
A limiting factor in using Coomassie-based protein determination dyes stems from the significant variation in color yield observed across different proteins. This limiting factor is notably evident in collagen-rich protein samples, like pancreatic extracts, where both the Lowry and Bradford methods tend to underestimate protein content.
It is also inhibited by the presence of detergents, although this problem can be alleviated by the addition of cyclodextrins to the assay mixture.
Much of the non-linearity stems from the equilibrium between two different forms of the dye, which is perturbed by adding the protein. The assay can be linearized by measuring the ratio of the absorbances at 595 and 450 nm. This modified Bradford assay is approximately 10 times more sensitive than the conventional one.
The Coomassie Blue G250 dye used to bind to the proteins in the original Bradford method readily binds to arginine and lysine groups of proteins. This is a disadvantage because the preference of the dye for these amino acids can result in a varied response of the assay between different proteins. Changes to the original method, such as increasing the pH by adding NaOH or adding more dye, have been made to correct this variation. Although these modifications result in a less sensitive assay, the modified method becomes sensitive to detergents that can interfere with the sample.
Future of Bradford Protein Assay
New modifications for an improved Bradford protein assay are underway that specifically focus on enhancing detection accuracy for collagen proteins. One notable modification involves incorporating small amounts, approximately 0.0035%, of sodium dodecyl sulfate (SDS). This inclusion of SDS has been shown to result in a fourfold increase in color response for three key collagen proteins—collagen types I, III, and IV—while simultaneously decreasing the absorbance of non-collagen proteins.
This simple modification in the preparation of the reagent resulted in Bradford Assays to produce similar response curves for both collagen and non-collagen proteins, expanding the use of Bradford Assays in samples containing high collagen proteins.
Sample Bradford procedure
Materials
Lyophilized bovine plasma gamma globulin
Coomassie brilliant blue
0.15 M NaCl
Spectrophotometer and cuvettes or a mobile smartphone camera (RGBradford method).
Micropipettes
Procedure (Standard Assay, 20-150 μg protein; 200-1500 μg/mL)
Prepare a series of standards diluted with 0.15 M NaCl to final concentrations of 0 (blank = No protein), 250, 500, 750 and 1500 μg/mL. Also prepare serial dilutions of the unknown sample to be measured.
Add 100 μL of each of the above to a separate test tube (or spectrophotometer tube if using a Spectronic 20).
Add 5.0 mL of Coomassie Blue to each tube and mix by vortex, or inversion.
Adjust the spectrophotometer to a wavelength of 595 nm, using the tube which contains no protein (blank).
Wait 5 minutes and read each of the standards and each of the samples at 595 nm wavelength.
Plot the absorbance of the standards vs. their concentration. Compute the extinction coefficient and calculate the concentrations of the unknown samples.
Procedure (Micro Assay, 1-10 μg protein/mL)
Prepare standard concentrations of protein of 1, 5, 7.5 and 10 μg/mL. Prepare a blank of NaCl only. Prepare a series of sample dilutions.
Add 100 μL of each of the above to separate tubes (use microcentrifuge tubes) and add 1.0 mL of Coomassie Blue to each tube.
Turn on and adjust a spectrophotometer to a wavelength of 595 nm, and blank the spectrophotometer using 1.5 mL cuvettes or use a mobile smartphone camera (RGBradford method).
Wait 2 minutes and read the absorbance of each standard and sample at 595 nm.
Plot the absorbance of the standards vs. their concentration. Compute the extinction coefficient and calculate the concentrations of the unknown samples.
Using data obtained to find concentration of unknown
In summary, to build a standard curve, one must use varying concentrations of BSA (bovine serum albumin), plotting concentration on the x-axis and absorbance on the y-axis. Only a narrow range of BSA concentrations (2-10 μg/mL) is used in order to create an accurate standard curve; using a broad range of protein concentrations makes it harder to determine the concentration of the unknown protein. This standard curve is then used to determine the concentration of the unknown protein. The following elaborates on how one goes from the standard curve to the concentration of the unknown.
First, add a line of best fit (linear regression) and display its equation on the chart. Ideally, the R² value will be as close to 1 as possible; R² reflects how much of the variation in the data is explained by the fitted line. Therefore, if R² is much less than one, consider redoing the experiment to obtain more reliable data.
The equation displayed on the chart gives a means for calculating the concentration of the unknown samples from their measured absorbance. In Graph 1, x is concentration and y is absorbance, so one must rearrange the equation to solve for x and enter the absorbance of the measured unknown. It is likely that the unknown will have absorbance values outside the range of the standards. These should not be included in calculations, as the equation given cannot be applied to values outside of its limitations.
For the standard (larger-scale) assay, one computes the extinction coefficient using the Beer-Lambert law, A = εLC, in which A is the measured absorbance, ε is the slope of the standard curve, L is the path length of the cuvette, and C is the concentration being determined. For the micro-scale assay, a cuvette may not be used, and therefore one only has to rearrange the standard-curve equation to solve for x.
In order to obtain a concentration that is consistent with the data, the dilutions, concentrations, and units of the unknown must be normalized (Table 1). To do this, one divides the concentration by the volume of protein used, to normalize the concentration, and multiplies by the dilution factor to correct for any dilution made to the protein before performing the assay.
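As an illustration of the procedure just described, the following is a minimal sketch in Python. All absorbance readings, standard concentrations and the dilution factor are hypothetical values chosen for illustration, and the helper name is invented for this example; the sketch simply fits the standard curve by linear regression and inverts it for an unknown, applying a dilution correction.

```python
import numpy as np

# Hypothetical standard curve data (concentrations in ug/mL, absorbance at 595 nm).
standard_conc = np.array([0, 250, 500, 750, 1500], dtype=float)
standard_a595 = np.array([0.00, 0.17, 0.33, 0.48, 0.95])

# Fit A = slope * C + intercept; the slope plays the role of epsilon*L in A = eps*L*C.
slope, intercept = np.polyfit(standard_conc, standard_a595, 1)
r = np.corrcoef(standard_conc, standard_a595)[0, 1]
print(f"slope={slope:.5f} mL/ug, intercept={intercept:.3f}, R^2={r**2:.4f}")

def unknown_concentration(a595, dilution_factor=1.0):
    """Invert the standard curve and correct for any dilution of the sample."""
    conc_diluted = (a595 - intercept) / slope   # ug/mL in the diluted sample
    return conc_diluted * dilution_factor       # ug/mL in the original sample

# Example: an unknown diluted 1:10 that reads 0.40 at 595 nm.
print(unknown_concentration(0.40, dilution_factor=10))
```

As noted above, readings that fall outside the standard range should be re-diluted and re-measured rather than extrapolated from the fitted line.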
Alternative assays
Alternative protein assays include:
Ultraviolet–visible spectroscopy
RGBradford
Biuret protein assay
Lowry protein assay
BCA protein assay
Amido black protein assay
Colloidal gold protein assay
References
Further reading
External links
Bradford assay chemistry
Variable Pathlength Spectroscopy
OpenWetWare
Biochemistry detection reactions
Protein methods
Analytical chemistry
Chemical tests | Bradford protein assay | [
"Chemistry",
"Biology"
] | 3,220 | [
"Biochemistry methods",
"Protein methods",
"Biochemistry detection reactions",
"Protein biochemistry",
"Chemical tests",
"Biochemical reactions",
"Microbiology techniques",
"nan"
] |
1,098,962 | https://en.wikipedia.org/wiki/Ferromanganese | Ferromanganese is an alloy of iron and manganese, with other elements such as silicon, carbon, sulfur, nitrogen and phosphorus. The primary use of ferromanganese is as a type of processed manganese source to add to different types of steel, such as stainless steel. Global production of low-carbon ferromanganese (i.e. alloys with less than 2% carbon content) reached 1.5 megatons in 2010.
Physical and chemical properties
The properties of ferromanganese vary considerably with the precise type and composition of the alloy. The melting point is generally between and . The density of the alloy depends slightly on the types of impurities present, but is generally around .
Production
Sources of manganese ore generally also contain iron oxides. As manganese is harder to reduce than iron, during the reduction of manganese ore, iron is also reduced and mixed with the manganese in the melt, unlike other oxides such as SiO2, Al2O3 and CaO.
Reduction is achieved using a submerged arc furnace. There are two main industrial procedures for performing the reduction, the discard slag method (or flux method) and the duplex method (or fluxless method). Despite the names, the difference between the methods is not whether flux is added, but rather the number of stages required. In the flux method, basic fluxes such as CaO are added and the manganese ore is reduced with carbon:
2MnO + C -> 2Mn + CO2
The remaining slag after the reduction process has approximately 15-20% manganese content, which is usually discarded.
In the fluxless method, carbon reduction is also used in the first stage, but the fluxes added do not necessarily increase the activity of the manganese. As a result, the remaining slag has a concentration of 30% to 50% of the manganese. This is then reprocessed with quartzite to make silicomanganese alloys. The resultant discarded slag has a manganese content of less than 5%, increasing the yield. As a result, this method is used more often in industry.
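To make the yield difference concrete, here is a back-of-the-envelope mass balance in Python. The tonnages and grades below are invented for illustration (they are not plant data), and manganese carried into intermediate silicomanganese is ignored; the point is only that a leaner final discard slag translates into a higher overall manganese recovery.

```python
# Hypothetical mass balance: how much of the ore's manganese reports to the alloy
# if the only Mn loss is the discarded slag.  All figures are assumptions.

def mn_recovery(ore_t, ore_mn_frac, slag_t, slag_mn_frac):
    """Fraction of the manganese in the ore that ends up in the alloy."""
    mn_in_ore = ore_t * ore_mn_frac
    mn_in_slag = slag_t * slag_mn_frac
    return (mn_in_ore - mn_in_slag) / mn_in_ore

# Discard slag (flux) method: slag carrying ~15-20% Mn is thrown away.
print(mn_recovery(ore_t=100, ore_mn_frac=0.40, slag_t=30, slag_mn_frac=0.18))  # ~0.865

# Duplex method: the Mn-rich slag is re-smelted, so the final discard holds < 5% Mn.
print(mn_recovery(ore_t=100, ore_mn_frac=0.40, slag_t=30, slag_mn_frac=0.04))  # ~0.97
```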
In both methods, due to the addition of carbon as a reducing agent, the alloy produced is referred to as high-carbon ferromanganese (HCFM), with a carbon content of up to 6%.
A correct mix of coke, flux and ore composition is required to give a high yield and reliable furnace operation, by achieving the desired chemical properties, viscosity and smelting temperature in the resulting melt. Since the iron-to-manganese ratio of natural manganese sources varies greatly, ores from several sources are sometimes blended to give a certain desired ratio.
In the manufacture of steel, low-carbon ferromanganese (LCFM) is preferred due to the ability to accurately control the amount of carbon in the resultant steel. To arrive at LCFM from HCFM, there are also two main methods: silicothermal reduction and oxygen refinement.
In silicothermal reduction, silicomanganese from the second step of the duplex process is used as a reductant. After a variety of mixing and melting steps to reduce the silicon content, a low-carbon alloy with less than 0.8% carbon and 1% silicon by weight can be obtained.
In the oxygen refinement method, HCFM is melted and heated to a high temperature of . Oxygen is then blown in to oxidise the carbon into CO and CO2. The disadvantage of this process is that the metal is also oxidised at these high temperatures. Manganese oxide collects mainly in the form of Mn3O4 in the dust blown out from the crucible.
History
In 1856, Robert Forester Mushet "used manganese to improve the ability of steel produced by the Bessemer process to withstand rolling and forging at elevated temperatures."
In 1860, Henry Bessemer invented the use of ferromanganese as a method of introducing manganese in controlled proportions during the production of steel. The advantage of combining powdered iron oxide and manganese oxide together is the lower melting point of the combined alloy compared to pure manganese oxide.
In 1872, Lambert von Pantz produced ferromanganese in a blast furnace, with significantly higher manganese content than was previously possible (37% instead of the previous 12%). This won his company international recognition, including a gold medal at the 1873 World Exposition in Vienna and a certificate of award at the 1876 Centennial Exposition in Pennsylvania.
In an 1876 article, MF Gautier explained that the magnetic oxide needs to be slagged off by the addition of manganese (then in the form of spiegel iron) in order to make it fit for rolling.
Gallery
References
Ferroalloys
Deoxidizers
Manganese | Ferromanganese | [
"Chemistry",
"Materials_science"
] | 1,009 | [
"Deoxidizers",
"Metallurgy"
] |
1,099,348 | https://en.wikipedia.org/wiki/Morphology%20%28biology%29 | Morphology in biology is the study of the form and structure of organisms and their specific structural features.
This includes aspects of the outward appearance (shape, structure, color, pattern, size), i.e. external morphology (or eidonomy), as well as the form and structure of internal parts like bones and organs, i.e. internal morphology (or anatomy). This is in contrast to physiology, which deals primarily with function. Morphology is a branch of life science dealing with the study of the gross structure of an organism or taxon and its component parts.
History
The etymology of the word "morphology" is from the Ancient Greek μορφή (morphḗ), meaning "form", and λόγος (lógos), meaning "word, study, research".
While the concept of form in biology, opposed to function, dates back to Aristotle (see Aristotle's biology), the field of morphology was developed by Johann Wolfgang von Goethe (1790) and independently by the German anatomist and physiologist Karl Friedrich Burdach (1800).
Among other important theorists of morphology are Lorenz Oken, Georges Cuvier, Étienne Geoffroy Saint-Hilaire, Richard Owen, Carl Gegenbaur and Ernst Haeckel.
In 1830, Cuvier and Saint-Hilaire engaged in a famous debate, which is said to exemplify the two major divisions in biological thinking at the time – whether animal structure was determined by function or by evolution.
Divisions of morphology
Comparative morphology is an analysis of the patterns of the locus of structures within the body plan of an organism, and forms the basis of taxonomical categorization.
Functional morphology is the study of the relationship between the structure and function of morphological features.
Experimental morphology is the study of the effects of external factors upon the morphology of organisms under experimental conditions, such as the effect of genetic mutation.
Anatomy is a "branch of morphology that deals with the structure of organisms".
Molecular morphology is a rarely used term, usually referring to the superstructure of polymers such as fiber formation or to larger composite assemblies. The term is commonly not applied to the spatial structure of individual molecules.
Gross morphology refers to the collective structures of an organism as a whole as a general description of the form and structure of an organism, taking into account all of its structures without specifying an individual structure.
Morphology and classification
Most taxa differ morphologically from other taxa. Typically, closely related taxa differ much less than more distantly related ones, but there are exceptions to this. Cryptic species are species which look very similar, or perhaps even outwardly identical, but are reproductively isolated. Conversely, sometimes unrelated taxa acquire a similar appearance as a result of convergent evolution or even mimicry. In addition, there can be morphological differences within a species, such as in Apoica flavissima where queens are significantly smaller than workers. A further problem with relying on morphological data is that what may appear morphologically to be two distinct species may in fact be shown by DNA analysis to be a single species. The significance of these differences can be examined through the use of allometric engineering in which one or both species are manipulated to phenocopy the other species.
A step relevant to the evaluation of morphology between traits/features within species, includes an assessment of the terms: homology and homoplasy. Homology between features indicates that those features have been derived from a common ancestor. Alternatively, homoplasy between features describes those that can resemble each other, but derive independently via parallel or convergent evolution.
3D cell morphology: classification
The invention and development of microscopy enabled the observation of 3-D cell morphology with both high spatial and temporal resolution. The dynamic processes of cell morphology, which are controlled by a complex system, play an important role in many important biological processes, such as immune and invasive responses.
See also
Comparative anatomy
Computational anatomy
Insect morphology
Morphometrics
Neuromorphology
Phenetics
Phenotype
Phenotypic plasticity
Plant morphology
References
External links
Branches of biology
Comparative anatomy | Morphology (biology) | [
"Biology"
] | 815 | [
"nan",
"Morphology (biology)"
] |
1,099,391 | https://en.wikipedia.org/wiki/Polypharmacy | Polypharmacy (polypragmasia) is an umbrella term to describe the simultaneous use of multiple medicines by a patient for their conditions. The term polypharmacy is often defined as regularly taking five or more medicines but there is no standard definition and the term has also been used in the context of when a person is prescribed 2 or more medications at the same time. Polypharmacy may be the consequence of having multiple long-term conditions, also known as multimorbidity and is more common in people who are older. In some cases, an excessive number of medications at the same time is worrisome, especially for people who are older with many chronic health conditions, because this increases the risk of an adverse event in that population. In many cases, polypharmacy cannot be avoided, but 'appropriate polypharmacy' practices are encouraged to decrease the risk of adverse effects. Appropriate polypharmacy is defined as the practice of prescribing for a person who has multiple conditions or complex health needs by ensuring that medications prescribed are optimized and follow 'best evidence' practices.
The prevalence of polypharmacy is estimated to be between 10% and 90% depending on the definition used, the age group studied, and the geographic location. Polypharmacy continues to grow in importance because of aging populations. Many countries are experiencing a fast growth of the older population, 65 years and older. This growth is a result of the baby-boomer generation getting older and an increased life expectancy as a result of ongoing improvement in health care services worldwide. About 21% of adults with intellectual disability are also exposed to polypharmacy. The level of polypharmacy has been increasing in the past decades. Research in the USA shows that the percentage of patients older than 65 years using more than 5 medications increased from 24% to 39% between 1999 and 2012. Similarly, research in the UK found that the proportion of older people taking five or more medications had quadrupled from 12% to nearly 50% between 1994 and 2011.
Polypharmacy is not necessarily ill-advised, but in many instances can lead to negative outcomes or poor treatment effectiveness, often being more harmful than helpful or presenting too much risk for too little benefit. Therefore, health professionals consider it a situation that requires monitoring and review to validate whether all of the medications are still necessary. Concerns about polypharmacy include increased adverse drug reactions, drug interactions, prescribing cascade, and higher costs. A prescribing cascade occurs when a person is prescribed a drug and experiences an adverse drug effect that is misinterpreted as a new medical condition, so the patient is prescribed another drug. Polypharmacy also increases the burden of medication taking particularly in older people and is associated with medication non-adherence.
Polypharmacy is often associated with a decreased quality of life, including decreased mobility and cognition. Patient factors that influence the number of medications a patient is prescribed include a high number of chronic conditions requiring a complex drug regimen. Other systemic factors that impact the number of medications a patient is prescribed include a patient having multiple prescribers and multiple pharmacies that may not communicate.
Whether or not the advantages of polypharmacy (over taking single medications or monotherapy) outweigh the disadvantages or risks depends upon the particular combination and diagnosis involved in any given case. The use of multiple drugs, even in fairly straightforward illnesses, is not an indicator of poor treatment and is not necessarily overmedication. Moreover, it is well accepted in pharmacology that it is impossible to accurately predict the side effects or clinical effects of a combination of drugs without studying that particular combination of drugs in test subjects. Knowledge of the pharmacologic profiles of the individual drugs in question does not assure accurate prediction of the side effects of combinations of those drugs; and effects also vary among individuals because of genome-specific pharmacokinetics. Therefore, deciding whether and how to reduce a list of medications (deprescribe) is often not simple and requires the experience and judgment of a practicing clinician, as the clinician must weigh the pros and cons of keeping the patient on the medication. However, such thoughtful and wise review is an ideal that too often does not happen, owing to problems such as poorly handled care transitions (poor continuity of care, usually because of siloed information), overworked physicians and other clinical staff, and interventionism.
Appropriate medical uses
While polypharmacy is typically regarded as undesirable, prescription of multiple medications can be appropriate and therapeutically beneficial in some circumstances. “Appropriate polypharmacy” is described as prescribing for complex or multiple conditions in such a way that necessary medicines are used based on the best available evidence at the time to preserve safety and well-being. Polypharmacy is clinically indicated in some chronic conditions, for example in diabetes mellitus, but should be discontinued when evidence of benefit from the prescribed drugs no longer outweighs potential for harm (described below in Contraindications).
Often certain medications can interact with others in a positive way specifically intended when prescribed together, to achieve a greater effect than any of the single agents alone. This is particularly prominent in the field of anesthesia and pain management – where atypical agents such as antiepileptics, antidepressants, muscle relaxants, NMDA antagonists, and other medications are combined with more typical analgesics such as opioids, prostaglandin inhibitors, NSAIDS and others. This practice of pain management drug synergy is known as an analgesia sparing effect.
Examples
A legitimate treatment regimen in the first year after a myocardial infarction may include: a statin, an ACE inhibitor, a beta-blocker, aspirin, paracetamol and an antidepressant.
In anesthesia (particularly IV anesthesia and general anesthesia) multiple agents are almost always required – including hypnotics or analgesic inducing/maintenance agents such as midazolam or propofol, usually an opioid analgesic such as morphine or fentanyl, a paralytic such as vecuronium, and in inhaled general anesthesia generally a halogenated ether anesthetic such as sevoflurane or desflurane.
Special populations
People who are at greatest risk for negative polypharmacy consequences include elderly people, people with psychiatric conditions, patients with intellectual or developmental disabilities, people taking five or more drugs at the same time, those with multiple physicians and pharmacies, people who have been recently hospitalized, people who have concurrent comorbidities, people who live in rural communities, people with inadequate access to education, and those with impaired vision or dexterity. Marginalized populations may have a greater degrees of polypharmacy, which can occur more frequently in younger age groups.
It is not uncommon for people who are dependent or addicted to substances to enter or remain in a state of polypharmacy misuse. About 84% of prescription drug misusers reported using multiple drugs. Note, however, that the term polypharmacy and its variants generally refer to legal drug use as-prescribed, even when used in a negative or critical context.
Measures can be taken to limit polypharmacy to its truly legitimate and appropriate needs. This is an emerging area of research, frequently called deprescribing. Reducing the number of medications, as part of a clinical review, can be an effective healthcare intervention. Clinical pharmacists can perform drug therapy reviews and teach physicians and their patients about drug safety and polypharmacy, as well as collaborating with physicians and patients to correct polypharmacy problems. Similar programs are likely to reduce the potentially deleterious consequences of polypharmacy such as adverse drug events, non-adherence, hospital admissions, drug-drug interactions, geriatric syndromes, and mortality. Such programs hinge upon patients and doctors informing pharmacists of other medications being prescribed, as well as herbal, over-the-counter substances and supplements that occasionally interfere with prescription-only medication. Staff at residential aged care facilities have a range of views and attitudes towards polypharmacy that, in some cases, may contribute to an increase in medication use.
Risks of polypharmacy
The risk of polypharmacy increases with age, although there is some evidence that it may decrease slightly after age 90 years. Poorer health is a strong predictor of polypharmacy at any age, although it is unclear whether the polypharmacy causes the poorer health or if polypharmacy is used because of the poorer health. It appears possible that the risk factors for polypharmacy may be different for younger and middle-aged people compared to older people.
The use of polypharmacy is correlated to the use of potentially inappropriate medications. Potentially inappropriate medications are generally taken to mean those that have been agreed upon by expert consensus, such as by the Beers Criteria. These medications are generally inappropriate for older adults because the risks outweigh the benefits. Examples of these include urinary anticholinergics used to treat incontinence; the associated risks, with anticholinergics, include constipation, blurred vision, dry mouth, impaired cognition, and falls. Many older people living in long term care facilities experience polypharmacy, and under-prescribing of potentially indicated medicines and use of high risk medicines can also occur. Medicine use rises from 6.0 ± 3.8 regular medicines on average when people enter long term care to 8.9 ± 4.1 regular medicines after two years.
Polypharmacy is associated with an increased risk of falls in elderly people. Certain medications are well known to be associated with the risk of falls, including cardiovascular and psychoactive medications. There is some evidence that the risk of falls increases cumulatively with the number of medications. Although often not practical to achieve, withdrawing all medicines associated with falls risk can halve an individual's risk of future falls.
Every medication has potential adverse side-effects. With every drug added, there is an additive risk of side-effects. Also, some medications have interactions with other substances, including foods, other medications, and herbal supplements. 15% of older adults are potentially at risk for a major drug-drug interaction. Older adults are at a higher risk for a drug-drug interaction due to the increased number of medications prescribed and metabolic changes that occur with aging. When a new drug is prescribed, the risk of interactions increases exponentially. Doctors and pharmacists aim to avoid prescribing medications that interact; often, adjustments in the dose of medications need to be made to avoid interactions. For example, warfarin interacts with many medications and supplements that can cause it to lose its effect.
Pill burden
Pill burden is the number of pills (tablets or capsules, the most common dosage forms) that a person takes on a regular basis, along with all associated efforts that increase with that number - like storing, organizing, consuming, and understanding the various medications in one's regimen. The use of individual medications is growing faster than pill burden. A recent study found that older adults in long term care are taking an average of 14 to 15 tablets every day.
Poor medical adherence is a common challenge among individuals who have increased pill burden and are subject to polypharmacy. It also increases the possibility of adverse medication reactions (side effects) and drug-drug interactions. High pill burden has also been associated with an increased risk of hospitalization, medication errors, and increased costs for both the pharmaceuticals themselves and for the treatment of adverse events. Finally, pill burden is a source of dissatisfaction for many patients and family carers.
High pill burden was commonly associated with antiretroviral drug regimens to control HIV, and is also seen in other patient populations. For instance, adults with multiple common chronic conditions such as diabetes, hypertension, lymphedema, hypercholesterolemia, osteoporosis, constipation, inflammatory bowel disease, and clinical depression may be prescribed more than a dozen different medications daily. The combination of multiple drugs has been associated with an increased risk of adverse drug events.
Reducing pill burden is recognized as a way to improve medication compliance, also referred to as adherence. This is done through "deprescribing", where the risks and benefits are weighed when considering whether to continue a medication. This includes drugs such as bisphosphonates (for osteoporosis), which are often taken indefinitely although there is only evidence to use it for five to ten years. Patient educational programs, reminder messages, medication packaging, and the use of memory tricks has also been seen to improve adherence and reduce pill burden in several countries. These include associating medications with mealtimes, recording the dosage on the box, storing the medication in a special place, leaving it in plain sight in the living room, or putting the prescription sheet on the refrigerator. The development of applications has also shown some benefit in this regard. The use of a polypill regimen, such as combination pill for HIV treatment, as opposed to a multi-pill regimen, also alleviates pill burden and increases adherence.
The selection of long-acting active ingredients over short-acting ones may also reduce pill burden. For instance, ACE inhibitors are used in the management of hypertension. Both captopril and lisinopril are examples of ACE inhibitors. However, lisinopril is dosed once a day, whereas captopril may be dosed 2-3 times a day. Assuming that there are no contraindications or potential for drug interactions, using lisinopril instead of captopril may be an appropriate way to limit pill burden.
Interventions
The most common intervention to help people who are struggling with polypharmacy is deprescribing. Deprescribing can be confused with medication simplification, which does not attempt to reduce the number of medicines but rather reduce the number of dose forms and administration times. Deprescribing refers to reducing the number of medications that a person is prescribed and includes the identification and discontinuance of medications when the benefit no longer outweighs the harm. In elderly patients, this can commonly be done as a patient becomes more frail and treatment focus needs to shift from preventative to palliative. Deprescribing is feasible and effective in many settings including residential care, communities and hospitals. This preventative measure should be considered for anyone who exhibits one of the following: (1) a new symptom or adverse event arises, (2) when the person develops an end-stage disease, (3) if the combination of drugs is risky, or (4) if stopping the drug does not alter the disease trajectory.
Several tools exist to help physicians decide when to deprescribe and what medications can be added to a pharmaceutical regimen. The Beers Criteria and the STOPP/START criteria help identify medications that have the highest risk of adverse drug events (ADE) and drug-drug interactions. The Medication appropriateness tool for comorbid health conditions during dementia (MATCH-D) is the only tool available specifically for people with dementia, and also cautions against polypharmacy and complex medication regimens.
Barriers faced by both physicians and people taking the medications have made it challenging to apply deprescribing strategies in practice. For physicians, these include fear of consequences of deprescribing, the prescriber's own confidence in their skills and knowledge to deprescribe, reluctance to alter medications that are prescribed by specialists, the feasibility of deprescribing, lack of access to all of patients' clinical notes, and the complexity of having multiple providers. For patients who are prescribed or require the medication, barriers include attitudes or beliefs about the medications, inability to communicate with physicians, fears and uncertainties surrounding deprescribing, and influence of physicians, family, and the media. Barriers can include other health professionals or carers, such as in residential care, believing that the medicines are required.
In people with multiple long-term conditions (multimorbidity) and polypharmacy, deprescribing represents a complex challenge, as clinical guidelines are usually developed for single conditions. In these cases tools and guidelines like the Beers Criteria and STOPP/START can be used safely by clinicians, but not all patients might benefit from stopping their medication. Clarity about how far clinicians can go beyond the guidelines, and about the responsibility they need to take, could help them prescribe and deprescribe in complex cases. Further factors that can help clinicians tailor their decisions to the individual are: access to detailed data on the people in their care (including their backgrounds and personal medical goals), discussing plans to stop a medicine as early as when it is first prescribed, and a good relationship that involves mutual trust and regular discussions on progress. Furthermore, longer appointments for prescribing and deprescribing would allow time to explain the process of deprescribing, explore related concerns, and support making the right decisions.
The effectiveness of specific interventions to improve the appropriate use of polypharmacy, such as pharmaceutical care and computerised decision support, is unclear. This is due to the low quality of current evidence surrounding these interventions. High quality evidence is needed to make any conclusions about the effects of such interventions in any environment, including in care homes. Deprescribing is not influenced by whether medicines are prescribed through a paper-based or an electronic system. Deprescribing rounds have been proposed as a potentially successful methodology in reducing polypharmacy. Sharing of positive outcomes from physicians who have implemented deprescribing, increased communication between all practitioners involved in patient care, higher compensation for time spent deprescribing, and clear deprescribing guidelines can help enable the practice of deprescribing. Despite the difficulties, a recent blinded study of deprescribing reported that participants used an average of two fewer medicines each after 12 months, showing again that deprescribing is feasible.
See also
Adverse effect
Classification of Pharmaco-Therapeutic Referrals
Compliance
Deprescribing
Multimorbidity
References
Further reading
External links
Pharmacokinetics
Pharmacy
Unnecessary health care | Polypharmacy | [
"Chemistry"
] | 3,793 | [
"Pharmacology",
"Drug safety",
"Pharmacokinetics",
"Pharmacy"
] |
1,099,396 | https://en.wikipedia.org/wiki/Drug%20interaction | In pharmaceutical sciences, drug interactions occur when a drug's mechanism of action is affected by the concomitant administration of substances such as foods, beverages, or other drugs. A popular example of drug–food interaction is the effect of grapefruit on the metabolism of drugs.
Interactions may occur by simultaneous targeting of receptors, directly or indirectly. For example, both Zolpidem and alcohol affect GABAA receptors, and their simultaneous consumption results in the overstimulation of the receptor, which can lead to loss of consciousness. When two drugs affect each other, it is a drug–drug interaction (DDI). The risk of a DDI increases with the number of drugs used.
A large share of elderly people regularly use five or more medications or supplements, with a significant risk of side-effects from drug–drug interactions.
Drug interactions can be of three kinds:
additive (the result is what you expect when you add together the effect of each drug taken independently),
synergistic (combining the drugs leads to a larger effect than expected), or
antagonistic (combining the drugs leads to a smaller effect than expected).
It may be difficult to distinguish between synergistic or additive interactions, as individual effects of drugs may vary.
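As a rough quantitative illustration of these three categories, the sketch below compares an observed combined effect with an expected one. Bliss independence is used here as the reference model for the "expected" effect; this is one common choice rather than something the text above specifies, and the effect values and tolerance are hypothetical.

```python
# Classify a drug-drug interaction as additive, synergistic or antagonistic.
# Effects are fractional responses between 0 and 1; Bliss independence gives
# the expected combined effect of two independently acting drugs.

def classify_interaction(effect_a, effect_b, effect_combined, tol=0.05):
    expected = effect_a + effect_b - effect_a * effect_b   # Bliss independence
    if effect_combined > expected + tol:
        return "synergistic", expected
    if effect_combined < expected - tol:
        return "antagonistic", expected
    return "additive", expected

print(classify_interaction(0.30, 0.40, 0.58))  # ('additive', 0.58)
print(classify_interaction(0.30, 0.40, 0.80))  # ('synergistic', 0.58)
print(classify_interaction(0.30, 0.40, 0.35))  # ('antagonistic', 0.58)
```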
Direct interactions between drugs are also possible and may occur when two drugs are mixed before intravenous injection. For example, mixing thiopentone and suxamethonium can lead to the precipitation of thiopentone.
Interactions based on pharmacodynamics
Pharmacodynamic interactions are the drug–drug interactions that occur at a biochemical level and depend mainly on the biological processes of organisms. These interactions occur due to action on the same targets; for example, the same receptor or signaling pathway.
Pharmacodynamic interactions can occur on protein receptors. Two drugs can be considered to be homodynamic if they act on the same receptor. Homodynamic effects include drugs that act as (1) pure agonists, if they bind to the main locus of the receptor, causing a similar effect to that of the main drug, (2) partial agonists if, on binding to a secondary site, they have the same effect as the main drug, but with a lower intensity, and (3) antagonists, if they bind directly to the receptor's main locus but their effect is opposite to that of the main drug. These may be competitive antagonists, if they compete with the main drug to bind with the receptor, or uncompetitive antagonists, when the antagonist binds to the receptor irreversibly. The drugs can be considered heterodynamic competitors if they act on distinct receptors with similar downstream pathways.
The interaction may also occur via signal transduction mechanisms. For example, low blood glucose leads to a release of catecholamines, triggering warning symptoms that prompt the person to take action, such as consuming sugary foods. If a patient is on insulin, which reduces blood sugar, and also beta-blockers, which blunt this catecholamine response, the body is less able to cope with an insulin overdose.
Interactions based on pharmacokinetics
Pharmacokinetics is the field of research studying the chemical and biochemical factors that directly affect dosage and the half-life of drugs in an organism, including absorption, transport, distribution, metabolism and excretion. Compounds may affect any of these processes, ultimately interfering with the flux of drugs through the human body and increasing or reducing drug availability.
Based on absorption
Drugs that change intestinal motility may impact the level of other drugs taken. For example, prokinetic agents increase the intestinal motility, which may cause drugs to go through the digestive system too fast, reducing absorption.
The pharmacological modification of pH can affect other compounds. Drugs can be present in ionized or non-ionized forms depending on their pKa and the surrounding pH, and neutral (non-ionized) compounds are usually better absorbed across membranes. Medications such as antacids can increase pH and inhibit the absorption of other drugs such as zalcitabine, tipranavir and amprenavir. The opposite is more common, with, for example, the acid-suppressing drug cimetidine stimulating the absorption of didanosine. Some resources state that a gap of two to four hours between taking the two drugs is needed to avoid the interaction.
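The pH dependence of ionization follows the Henderson-Hasselbalch relationship, which the following sketch applies to a hypothetical weakly acidic drug; the pKa and pH values are assumed for illustration only.

```python
# Henderson-Hasselbalch sketch: how a change in gastro-intestinal pH (e.g. after
# an antacid) shifts the un-ionized, better-absorbed fraction of a drug.

def unionized_fraction(pka, ph, acid=True):
    """Fraction of the drug in the neutral (non-ionized) form."""
    if acid:                      # HA <-> H+ + A-
        return 1.0 / (1.0 + 10 ** (ph - pka))
    else:                         # BH+ <-> B + H+
        return 1.0 / (1.0 + 10 ** (pka - ph))

# A weakly acidic drug (assumed pKa 3.5) in normal gastric fluid vs. after an antacid:
print(unionized_fraction(3.5, ph=1.5))   # ~0.99  mostly neutral, well absorbed
print(unionized_fraction(3.5, ph=5.0))   # ~0.03  mostly ionized, poorly absorbed
```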
Factors such as food with a high fat content may also alter the solubility of drugs and affect their absorption; this is the case for oral anticoagulants and avocado. Non-absorbable complexes may also form via chelation, when cations make certain drugs harder to absorb, for example between tetracycline or the fluoroquinolones and dairy products, due to the presence of calcium ions. Other drugs, such as sucralfate, bind to proteins, especially those with high bioavailability; for this reason its administration is contraindicated in enteral feeding.
Some drugs also alter absorption by acting on the P-glycoprotein of the enterocytes. This appears to be one of the mechanisms by which grapefruit juice increases the bioavailability of various drugs beyond its inhibitory activity on first pass metabolism.
Based on transport and distribution
Drugs also may affect each other by competing for transport proteins in plasma, such as albumin. In these cases the drug that arrives first binds with the plasma protein, leaving the other drug dissolved in the plasma, modifying its expected concentration. The organism has mechanisms to counteract these situations (by, for example, increasing plasma clearance), and thus they are not usually clinically relevant. They may become relevant if other problems are present, such as issues with drug excretion.
Based on metabolism
Many drug interactions are due to alterations in drug metabolism. Further, human drug-metabolizing enzymes are typically activated through the engagement of nuclear receptors. One notable system involved in metabolic drug interactions is the enzyme system comprising the cytochrome P450 oxidases.
CYP450
Cytochrome P450 is a very large family of haemoproteins (hemoproteins) that are characterized by their enzymatic activity and their role in the metabolism of a large number of drugs. Of the various families that are present in humans, the most interesting in this respect are the 1, 2 and 3, and the most important enzymes are CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP2E1 and CYP3A4.
The majority of the enzymes are also involved in the metabolism of endogenous substances, such as steroids or sex hormones, which is also important should there be interference with these substances. The function of the enzymes can either be stimulated (enzyme induction) or inhibited (enzyme inhibition).
Through enzymatic inhibition and induction
If drug A is metabolized by a CYP450 enzyme and drug B blocks the activity of that enzyme, pharmacokinetic alterations of drug A can result: drug A remains in the bloodstream for an extended duration and its concentration eventually increases.
In some instances, the inhibition may instead reduce the therapeutic effect, if the metabolites of the drug are responsible for the effect.
Compounds that increase the efficiency of the enzymes, on the other hand, may have the opposite effect and increase the rate of metabolism.
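A simple first-order pharmacokinetic sketch shows why this matters: if inhibition lowers the clearance of "drug A", its average steady-state concentration and half-life rise in proportion. All doses, clearances and volumes below are hypothetical.

```python
# First-order pharmacokinetics: effect of enzyme inhibition (lower clearance)
# on the average steady-state concentration and half-life of a drug.
import math

def steady_state(dose_mg, interval_h, bioavailability, clearance_l_per_h):
    """Average steady-state plasma concentration (mg/L) for repeated oral dosing."""
    return (bioavailability * dose_mg / interval_h) / clearance_l_per_h

def half_life(volume_l, clearance_l_per_h):
    return math.log(2) * volume_l / clearance_l_per_h

baseline_cl, inhibited_cl = 10.0, 4.0   # L/h; inhibition assumed to cut clearance by 60%
for cl in (baseline_cl, inhibited_cl):
    css = steady_state(dose_mg=200, interval_h=12, bioavailability=0.8, clearance_l_per_h=cl)
    print(f"CL={cl:4.1f} L/h  Css,avg={css:.2f} mg/L  t1/2={half_life(50, cl):.1f} h")
```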
Examples of metabolism-based interactions
An example of this is shown in the following table for the CYP1A2 enzyme, showing the substrates (drugs metabolized by this enzyme) and some inductors and inhibitors of its activity:
Some foods also act as inductors or inhibitors of enzymatic activity. The following table shows the most common:
Based on excretion
Renal and biliary excretion
Drugs tightly bound to proteins (i.e. not in the free fraction) are not available for renal excretion.
Filtration depends on a number of factors including the pH of the urine. Drug interactions may affect those points.
With herbal medicines
Herb-drug interactions are drug interactions that occur between herbal medicines and conventional drugs. These types of interactions may be more common than drug-drug interactions because herbal medicines often contain multiple pharmacologically active ingredients, while conventional drugs typically contain only one. Some such interactions are clinically significant, although most herbal remedies are not associated with drug interactions causing serious consequences. Most catalogued herb-drug interactions are moderate in severity. The most commonly implicated conventional drugs in herb-drug interactions are warfarin, insulin, aspirin, digoxin, and ticlopidine, due to their narrow therapeutic indices. The most commonly implicated herbs involved in such interactions are those containing St. John’s Wort, magnesium, calcium, iron, or ginkgo.
Examples
Examples of herb-drug interactions include, but are not limited to:
St. John's wort affects the clearance of numerous drugs, including cyclosporin, SSRI antidepressants, digoxin, indinavir, and phenprocoumon. It may also interact with the anti-cancer drugs irinotecan and imatinib.
Salvia miltiorrhiza may enhance anticoagulation and bleeding among people taking warfarin.
Allium sativum has been found to decrease the plasma concentration of saquinavir, and may cause hypoglycemia when taken with chlorpropamide.
Ginkgo biloba can cause bleeding when combined with warfarin or aspirin.
Concomitant Ephedra and caffeine use has been reported to, in rare cases, cause fatalities.
Mechanisms
The mechanisms underlying most herb-drug interactions are not fully understood. Interactions between herbal medicines and anticancer drugs typically involve enzymes that metabolize cytochrome P450. For example, St. John's Wort has been shown to induce CYP3A4 and P-glycoprotein in vitro and in vivo.
Underlying factors
The factors or conditions that predispose to the appearance of interactions include old age: human physiology changes with age, which can alter how drugs are handled. For example, liver metabolism, kidney function, nerve transmission, and the functioning of bone marrow all decrease with age. In addition, old age brings a sensory decline that increases the chance of errors being made in the administration of drugs. The elderly are also more vulnerable to polypharmacy, and the more drugs a patient takes, the higher the chance of an interaction.
Genetic factors may also affect the enzymes and receptors, thus altering the possibilities of interactions.
Patients with hepatic or renal diseases already may have difficulties metabolizing and excreting drugs, which may exacerbate the effect of interactions.
Some drugs present an intrinsic increased risk for a harmful interaction, including drugs with a narrow therapeutic index, where the difference between the effective dose and the toxic dose is small. The drug digoxin is an example of this type of drug.
Risks are also increased when the drug presents a steep dose-response curve, and small changes in the dosage produce large changes in the drug's concentration in the blood plasma.
Epidemiology
As of 2008, among adults in the United States of America older than 56, 4% were taking medication and/or supplements that put them at risk of a major drug interaction. Potential drug-drug interactions have increased over time and are more common in the less-educated elderly even after controlling for age, sex, place of residence, and comorbidity.
See also
Deprescribing
Cytochrome P450
Classification of Pharmaco-Therapeutic Referrals
Notes
References
Bibliography
MA Cos. Interacciones de fármacos y sus implicancias clínicas. In: Farmacología Humana. Chap. 10, pp. 165–176. (J. Flórez y col. Eds). Masson SA, Barcelona. 1997.
External links
Drug Interactions: What You Should Know. U.S. Food and Drug Administration, Center for Drug Evaluation and Research, September 2013
COVID 19 Drug interaction check tool University of Liverpool
Clinical pharmacology
Pharmacokinetics | Drug interaction | [
"Chemistry"
] | 2,541 | [
"Pharmacology",
"Drug safety",
"Pharmacokinetics",
"Clinical pharmacology"
] |
1,099,709 | https://en.wikipedia.org/wiki/Hilbert%27s%20Theorem%2090 | In abstract algebra, Hilbert's Theorem 90 (or Satz 90) is an important result on cyclic extensions of fields (or to one of its generalizations) that leads to Kummer theory. In its most basic form, it states that if L/K is an extension of fields with cyclic Galois group G = Gal(L/K) generated by an element and if is an element of L of relative norm 1, that isthen there exists in L such thatThe theorem takes its name from the fact that it is the 90th theorem in David Hilbert's Zahlbericht , although it is originally due to .
Often a more general theorem due to Emmy Noether is given the name, stating that if L/K is a finite Galois extension of fields with arbitrary Galois group G = Gal(L/K), then the first cohomology group of G, with coefficients in the multiplicative group of L, is trivial: H^1(G, L×) = {1}.
Examples
Let L/K be the quadratic extension Q(i)/Q. The Galois group is cyclic of order 2, its generator σ acting via conjugation: σ(x + yi) = x − yi.
An element a = x + yi in Q(i) has norm N(a) = a·σ(a) = x² + y². An element of norm one thus corresponds to a rational solution of the equation x² + y² = 1, or in other words, a point with rational coordinates on the unit circle. Hilbert's Theorem 90 then states that every such element a of norm one can be written as
a = (c + di)/(c − di) = (c² − d²)/(c² + d²) + (2cd)/(c² + d²)·i,
where b = c + di is as in the conclusion of the theorem, and c and d are both integers. This may be viewed as a rational parametrization of the rational points on the unit circle. Rational points on the unit circle correspond to Pythagorean triples, i.e. triples of integers (p, q, r) satisfying p² + q² = r².
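The computation behind this parametrization, and its link to Pythagorean triples, can be written out explicitly; in LaTeX, with b = c + di as above:

```latex
% Explicit form of Hilbert 90 for L = \mathbb{Q}(i), with \sigma = complex conjugation.
\[
  a \;=\; \frac{b}{\sigma(b)} \;=\; \frac{c+di}{c-di}
    \;=\; \frac{(c+di)^2}{c^2+d^2}
    \;=\; \frac{c^2-d^2}{c^2+d^2} \;+\; \frac{2cd}{c^2+d^2}\,i ,
\]
\[
  \text{so}\quad
  \left(\frac{c^2-d^2}{c^2+d^2}\right)^{2} + \left(\frac{2cd}{c^2+d^2}\right)^{2} = 1
  \quad\Longleftrightarrow\quad
  (c^2-d^2)^2 + (2cd)^2 = (c^2+d^2)^2 .
\]
```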
Cohomology
The theorem can be stated in terms of group cohomology: if L× is the multiplicative group of any (not necessarily finite) Galois extension L of a field K with corresponding Galois group G, then H^1(G, L×) = {1}.
Specifically, group cohomology is the cohomology of the complex whose i-cochains are arbitrary functions from i-tuples of group elements to the multiplicative coefficient group, C^i(G, L×) = {φ : G^i → L×}, with differentials in dimensions 0 and 1 defined by
(d^0(b))(σ) = σ(b)/b  and  (d^1(φ))(σ, τ) = σ(φ(τ)) · φ(στ)^(−1) · φ(σ),
where σ(x) denotes the image of the G-module element x under the action of the group element σ ∈ G.
Note that in the first of these we have identified a 0-cochain γ : G^0 = {1} → L× with its unique image value b = γ(1).
The triviality of the first cohomology group is then equivalent to the 1-cocycles being equal to the 1-coboundaries, viz.:
Z^1 = {φ : G → L× | φ(στ) = σ(φ(τ))·φ(σ)} equals B^1 = {φ : G → L× | φ(σ) = σ(b)/b for some fixed b ∈ L×}.
For cyclic G = {1, σ, …, σ^(n−1)}, a 1-cocycle is determined by a = φ(σ) ∈ L×, with φ(σ^i) = a·σ(a) ⋯ σ^(i−1)(a); the cocycle condition at σ^n = 1 forces N(a) = a·σ(a) ⋯ σ^(n−1)(a) = 1. On the other hand, a 1-coboundary is determined by φ(σ) = σ(b)/b. Equating these gives the original version of the Theorem (replacing b by b^(−1) turns a = σ(b)/b into a = b/σ(b)).
A further generalization is to cohomology with non-abelian coefficients: if H is either the general or special linear group over L, including GL(1, L) = L×, then H^1(G, H) = {1}. Another generalization is to a scheme X: H^1_ét(X, Gm) = Pic(X),
where Pic(X) is the group of isomorphism classes of locally free sheaves of OX-modules of rank 1 for the Zariski topology, and Gm is the sheaf defined by the affine line without the origin considered as a group under multiplication.
There is yet another generalization to Milnor K-theory which plays a role in Voevodsky's proof of the Milnor conjecture.
Proof
Let L/K be cyclic of degree n, and let σ generate Gal(L/K). Pick any a ∈ L of norm N(a) := a·σ(a)·σ^2(a) ⋯ σ^(n−1)(a) = 1.
By clearing denominators, solving a = x/σ(x) is the same as showing that the K-linear map a·σ(⋅) : L → L has 1 as an eigenvalue. We extend this to a map of L-vector spaces by base change, via l ⊗ m ↦ l ⊗ a·σ(m) on L ⊗_K L (with L acting through the left factor).
The primitive element theorem gives L = K(α) for some α. Since α has minimal polynomial f(t) = (t − α)(t − σ(α)) ⋯ (t − σ^(n−1)(α)) over K,
we can identify L ⊗_K L with L[t]/(f(t)) ≅ L × L × ⋯ × L (one factor for each root σ^i(α))
via l ⊗ p(α) ↦ (l · p(σ^i(α)))_i.
Here we wrote the second factor as a K-polynomial in α.
Under this identification, our map becomes the twisted shift
(x_0, x_1, …, x_(n−1)) ↦ (a·x_1, σ(a)·x_2, …, σ^(n−2)(a)·x_(n−1), σ^(n−1)(a)·x_0).
That is to say, under this map the vector with coordinates x_i = σ^i(a)·σ^(i+1)(a) ⋯ σ^(n−1)(a)
is an eigenvector with eigenvalue 1 iff a has norm 1.
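For comparison, the classical telescoping ("resolvent") construction proves the same statement without tensor products; the following LaTeX sketch is offered as a supplement to, not a restatement of, the argument above.

```latex
% Classical construction: given N_{L/K}(a)=1, define for c \in L the resolvent
\[
  b \;=\; c \;+\; a\,\sigma(c) \;+\; a\,\sigma(a)\,\sigma^{2}(c)
        \;+\;\cdots\;+\; a\,\sigma(a)\cdots\sigma^{n-2}(a)\,\sigma^{n-1}(c).
\]
% Applying \sigma and multiplying by a telescopes the sum, since \sigma^{n}=\mathrm{id}:
\[
  a\,\sigma(b) \;=\; a\,\sigma(c) \;+\; a\,\sigma(a)\,\sigma^{2}(c) \;+\;\cdots\;+\;
      \underbrace{a\,\sigma(a)\cdots\sigma^{n-1}(a)}_{=\,N_{L/K}(a)\,=\,1}\; c
  \;=\; b .
\]
% By linear independence of the characters \mathrm{id},\sigma,\dots,\sigma^{n-1},
% some c \in L makes b \neq 0, and then a = b/\sigma(b).
```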
References
Chapter II of J.S. Milne, Class Field Theory, available at his website .
External links
Theorems in algebraic number theory | Hilbert's Theorem 90 | [
"Mathematics"
] | 799 | [
"Theorems in algebraic number theory",
"Theorems in number theory"
] |
1,099,759 | https://en.wikipedia.org/wiki/Strongly%20interacting%20massive%20particle | A strongly interacting massive particle (SIMP) is a hypothetical particle that interacts strongly between themselves and weakly with ordinary matter, but could form the inferred dark matter despite this.
Strongly interacting massive particles have been proposed as a solution for the ultra-high-energy cosmic-ray problem and the absence of cooling flows in galactic clusters.
Various experiments and observations have set constraints on SIMP dark matter from 1990 onward.
SIMP annihilations would produce significant heat. DAMA set limits with NaI(Tl) crystals.
Measurements of Uranus's heat excess exclude SIMPs with masses from 150 MeV to 10^4 GeV. Earth's heat flow significantly constrains the allowed cross section.
See also
References
Further reading
Dark matter
Astroparticle physics
Hypothetical particles | Strongly interacting massive particle | [
"Physics",
"Astronomy"
] | 157 | [
"Dark matter",
"Hypothetical particles",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Astroparticle physics",
"Unsolved problems in physics",
"Astrophysics",
"Subatomic particles",
"Particle physics",
"Exotic matter",
"Physics beyond the Standard Model",
"Matter"
] |
1,100,001 | https://en.wikipedia.org/wiki/Einselection | In quantum mechanics, einselection, short for "environment-induced superselection", is a name coined by Wojciech H. Zurek
for a process which is claimed to explain the appearance of wavefunction collapse and the emergence of classical descriptions of reality from quantum descriptions. In this approach, classicality is described as an emergent property induced in open quantum systems by their environments. Due to the interaction with the environment, the vast majority of states in the Hilbert space of a quantum open system become highly unstable due to entangling interaction with the environment, which in effect monitors selected observables of the system. After a decoherence time, which for macroscopic objects is typically many orders of magnitude shorter than any other dynamical timescale, a generic quantum state decays into an uncertain state which can be expressed as a mixture of simple pointer states. In this way the environment induces effective superselection rules. Thus, einselection precludes stable existence of pure superpositions of pointer states. These 'pointer states' are stable despite environmental interaction. The einselected states lack coherence, and therefore do not exhibit the quantum behaviours of entanglement and superposition.
Advocates of this approach argue that since only quasi-local, essentially classical states survive the decoherence process, einselection can in many ways explain the emergence of a (seemingly) classical reality in a fundamentally quantum universe (at least to local observers). However, the basic program has been criticized as relying on a circular argument (e.g. by Ruth Kastner). So the question of whether the 'einselection' account can really explain the phenomenon of wave function collapse remains unsettled.
Definition
Zurek has defined einselection as follows: "Decoherence leads to einselection when the states of the environment corresponding to different pointer states become orthogonal: ⟨ε_i|ε_j⟩ = δ_ij".
Details
Einselected pointer states are distinguished by their ability to persist in spite of the environmental monitoring and therefore are the ones in which quantum open systems are observed. Understanding the nature of these states and the process of their dynamical selection is of fundamental importance. This process has been studied first in a measurement situation: When the system is an apparatus whose intrinsic dynamics can be neglected, pointer states turn out to be eigenstates of the interaction Hamiltonian between the apparatus and its environment. In more general situations, when the system's dynamics is relevant, einselection is more complicated. Pointer states result from the interplay between self-evolution and environmental monitoring.
To study einselection, an operational definition of pointer states has been introduced.
This is the "predictability sieve" criterion, based on an intuitive idea: Pointer states can be defined as the ones which become minimally entangled with the environment in the course of their evolution. The predictability sieve criterion is a way to quantify this idea by using the following algorithmic procedure: For every initial pure state , one measures the entanglement generated dynamically between the system and the environment by computing the entropy:
or some other measure of predictability from the reduced density matrix of the system (which is initially ).
The entropy S_Ψ(t) is a function of time t and a functional of the initial state |Ψ⟩. Pointer states are obtained by minimizing S_Ψ(t) over |Ψ⟩ and demanding that the answer be robust when varying the time t.
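As a toy illustration of the predictability-sieve idea (a minimal sketch, not from the article, assuming a single system qubit coupled to one environment qubit through a measurement-type interaction), the following code computes the entanglement entropy generated for different initial system states; the pointer states of this toy interaction stay pure while superpositions decohere:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and a measurement-type interaction H = sigma_z (system) x sigma_x (environment)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.kron(sz, sx)

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def system_entropy(theta, t):
    """Entropy of the reduced system state after evolving |psi_sys> (x) |0_env> for time t."""
    psi_sys = np.array([np.cos(theta / 2), np.sin(theta / 2)])   # system state on the Bloch sphere
    psi = np.kron(psi_sys, np.array([1.0, 0.0]))                 # environment starts in |0>
    psi_t = expm(-1j * H * t) @ psi
    rho = np.outer(psi_t, psi_t.conj()).reshape(2, 2, 2, 2)
    rho_sys = np.trace(rho, axis1=1, axis2=3)                    # partial trace over the environment
    return von_neumann_entropy(rho_sys)

# sigma_z eigenstates (theta = 0) remain pure; superpositions (theta = pi/2) get entangled.
for theta in (0.0, np.pi / 4, np.pi / 2):
    print(f"theta = {theta:.2f}  ->  S(t=1) = {system_entropy(theta, 1.0):.3f} bits")
```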
The nature of pointer states has been investigated using the predictability sieve criterion only for a limited number of examples. Apart from the already mentioned case of the measurement situation (where pointer states are simply eigenstates of the interaction Hamiltonian) the most notable example is that of a quantum Brownian particle coupled through its position with a bath of independent harmonic oscillators. In such case pointer states are localized in phase space, even though the interaction Hamiltonian involves the position of the particle. Pointer states are the result of the interplay between self-evolution and interaction with the environment and turn out to be coherent states.
There is also a quantum limit of decoherence: When the spacing between energy levels of the system is large compared to the frequencies present in the environment, energy eigenstates are einselected nearly independently of the nature of the system-environment coupling.
Collisional decoherence
There has been significant work on correctly identifying the pointer states in the case of a massive particle decohered by collisions with a fluid environment, often known as collisional decoherence. In particular, Busse and Hornberger have identified certain solitonic wavepackets as being unusually stable in the presence of such decoherence.
See also
Mott problem
References
Quantum mechanics
Emergence | Einselection | [
"Physics"
] | 949 | [
"Theoretical physics",
"Quantum mechanics"
] |
1,100,094 | https://en.wikipedia.org/wiki/Branching%20fraction | In particle physics and nuclear physics, the branching fraction (or branching ratio) for a decay is the fraction of particles which decay by an individual decay mode with respect to the total number of particles which decay. It applies to either the radioactive decay of atoms or the decay of elementary particles. It is equal to the ratio of the partial decay constant of the decay mode to the overall decay constant. Sometimes a partial half-life is given, but this term is misleading; due to competing modes, it is not true that half of the particles will decay through a particular decay mode after its partial half-life. The partial half-life is merely an alternate way to specify the partial decay constant λ, the two being related through:
t_(1/2,partial) = ln(2) / λ_partial.
For example, for decays of 132Cs, 98.13% are ε (electron capture) or β+ (positron) decays, and 1.87% are β− (electron) decays. The half-life of this isotope is 6.480 days, which corresponds to a total decay constant of 0.1070 d−1. Then the partial decay constants, as computed from the branching fractions, are 0.1050 d−1 for ε/β+ decays, and 2.0×10−3 d−1 for β− decays. Their respective partial half-lives are 6.603 d and 347 d.
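The bookkeeping in this example is easy to reproduce; the short calculation below simply applies the definitions to the numbers quoted above:

```python
import math

half_life_days = 6.480                               # total half-life of the isotope
branching = {"EC/beta+": 0.9813, "beta-": 0.0187}    # branching fractions

lambda_total = math.log(2) / half_life_days          # total decay constant, per day
for mode, fraction in branching.items():
    lambda_partial = fraction * lambda_total         # partial decay constant for this mode
    partial_half_life = math.log(2) / lambda_partial
    print(f"{mode}: lambda = {lambda_partial:.4g} /d, partial half-life = {partial_half_life:.1f} d")
```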
Isotopes with significant branching of decay modes include copper-64, arsenic-74, rhodium-102, indium-112, iodine-126 and holmium-164.
Branching fractions of atomic states
In the field of atomic, molecular, and optical physics, a branching fraction refers to the probability of decay to a specific lower-lying energy state from some excited state. Suppose we drive a transition in an atomic system to an excited state |e⟩, which can decay into either the ground state |g⟩ or a long-lived state |d⟩. If the probability to decay (the branching fraction) into the state |g⟩ is p, then the probability to decay into the other state |d⟩ would be 1 − p. Further possible decays would split appropriately, with their probabilities summing to 1.
In some instances, instead of a branching fraction, a branching ratio is used. In this case, the branching ratio is just the ratio of the branching fractions between two states. To use our example from before, if the branching fraction to state |g⟩ is p, then the branching ratio comparing the transition rates to |g⟩ and |d⟩ would be p/(1 − p).
Branching fractions can be measured in a variety of ways, including time-resolved recording of the atom's fluorescence during a series of population transfers in the relevant states.
References
External links
LBNL Isotopes Project
Particle Data Group (listings for particle physics)
Nuclear Structure and Decay Data - IAEA for nuclear decays
Particle physics
Nuclear physics
Radioactivity
Ratios | Branching fraction | [
"Physics",
"Chemistry",
"Mathematics"
] | 561 | [
"Arithmetic",
"Particle physics",
"Radioactivity",
"Nuclear physics",
"Ratios"
] |
1,100,216 | https://en.wikipedia.org/wiki/Sphingolipid | Sphingolipids are a class of lipids containing a backbone of sphingoid bases, which are a set of aliphatic amino alcohols that includes sphingosine. They were discovered in brain extracts in the 1870s and were named after the mythological sphinx because of their enigmatic nature. These compounds play important roles in signal transduction and cell recognition. Sphingolipidoses, or disorders of sphingolipid metabolism, have particular impact on neural tissue. A sphingolipid with a terminal hydroxyl group is a ceramide. Other common groups bonded to the terminal oxygen atom include phosphocholine, yielding a sphingomyelin, and various sugar monomers or dimers, yielding cerebrosides and globosides, respectively. Cerebrosides and globosides are collectively known as glycosphingolipids.
Structure
The long-chain bases, sometimes simply known as sphingoid bases, are the first non-transient products of de novo sphingolipid synthesis in both yeast and mammals. These compounds, specifically known as phytosphingosine and dihydrosphingosine (also known as sphinganine, although this term is less common), are mainly C18 compounds, with somewhat lower levels of C20 bases. Ceramides and glycosphingolipids are N-acyl derivatives of these compounds.
The sphingosine backbone is O-linked to a (usually) charged head group such as ethanolamine, serine, or choline.
The backbone is also amide-linked to an acyl group, such as a fatty acid.
Types
Simple sphingolipids, which include the sphingoid bases and ceramides, make up the early products of the sphingolipid synthetic pathways.
Sphingoid bases are the fundamental building blocks of all sphingolipids. The main mammalian sphingoid bases are dihydrosphingosine and sphingosine, while dihydrosphingosine and phytosphingosine are the principal sphingoid bases in yeast. Sphingosine, dihydrosphingosine, and phytosphingosine may be phosphorylated.
Ceramides, as a general class, are N-acylated sphingoid bases lacking additional head groups.
Dihydroceramide is produced by N-acylation of dihydrosphingosine. Dihydroceramide is found in both yeast and mammalian systems.
Ceramide is produced in mammalian systems by desaturation of dihydroceramide by dihydroceramide desaturase 1 (DES1). This highly bioactive molecule may also be phosphorylated to form ceramide-1-phosphate.
Phytoceramide is produced in yeast by hydroxylation of dihydroceramide at C-4.
Complex sphingolipids may be formed by addition of head groups to ceramide or phytoceramide:
Sphingomyelins have a phosphocholine or phosphoethanolamine molecule with an ester linkage to the 1-hydroxy group of a ceramide.
Glycosphingolipids are ceramides with one or more sugar residues joined in a β-glycosidic linkage at the 1-hydroxyl position (see image).
Cerebrosides have a single glucose or galactose at the 1-hydroxy position.
Sulfatides are sulfated cerebrosides.
Gangliosides have at least three sugars, one of which must be sialic acid.
Inositol-containing ceramides, which are derived from phytoceramide, are produced in yeast. These include inositol phosphorylceramide, mannose inositol phosphorylceramide, and mannose diinositol phosphorylceramide.
Mammalian sphingolipid metabolism
De novo sphingolipid synthesis begins with formation of 3-keto-dihydrosphingosine by serine palmitoyltransferase. The preferred substrates for this reaction are palmitoyl-CoA and serine. However, studies have demonstrated that serine palmitoyltransferase has some activity toward other species of fatty acyl-CoA and alternative amino acids, and the diversity of sphingoid bases has recently been reviewed. Next, 3-keto-dihydrosphingosine is reduced to form dihydrosphingosine. Dihydrosphingosine is acylated by one of six (dihydro)-ceramide synthase, CerS - originally termed LASS - to form dihydroceramide. The six CerS enzymes have different specificity for acyl-CoA substrates, resulting in the generation of dihydroceramides with differing chain lengths (ranging from C14-C26). Dihydroceramides are then desaturated to form ceramide.
De novo generated ceramide is the central hub of the sphingolipid network and subsequently has several fates. It may be phosphorylated by ceramide kinase to form ceramide-1-phosphate. Alternatively, it may be glycosylated by glucosylceramide synthase or galactosylceramide synthase. Additionally, it can be converted to sphingomyelin by the addition of a phosphorylcholine headgroup by sphingomyelin synthase. Diacylglycerol is generated by this process. Finally, ceramide may be broken down by a ceramidase to form sphingosine. Sphingosine may be phosphorylated to form sphingosine-1-phosphate. This may be dephosphorylated to reform sphingosine.
Breakdown pathways allow the reversion of these metabolites to ceramide. The complex glycosphingolipids are hydrolyzed to glucosylceramide and galactosylceramide. These lipids are then hydrolyzed by beta-glucosidases and beta-galactosidases to regenerate ceramide. Similarly, sphingomyelin may be broken down by sphingomyelinase to form ceramide.
The only route by which sphingolipids are converted to non-sphingolipids is through sphingosine-1-phosphate lyase. This forms ethanolamine phosphate and hexadecenal.
Functions of mammalian sphingolipids
Sphingolipids are commonly believed to protect the cell surface against harmful environmental factors by forming a mechanically stable and chemically resistant outer leaflet of the plasma membrane lipid bilayer. Certain complex glycosphingolipids were found to be involved in specific functions, such as cell recognition and signaling. Cell recognition depends mainly on the physical properties of the sphingolipids, whereas signaling involves specific interactions of the glycan structures of glycosphingolipids with similar lipids present on neighboring cells or with proteins.
Recently, simple sphingolipid metabolites, such as ceramide and sphingosine-1-phosphate, have been shown to be important mediators in the signaling cascades involved in apoptosis, proliferation, stress responses, necrosis, inflammation, autophagy, senescence, and differentiation. Ceramide-based lipids self-aggregate in cell membranes and form separate phases less fluid than the bulk phospholipids. These sphingolipid-based microdomains, or "lipid rafts" were originally proposed to sort membrane proteins along the cellular pathways of membrane transport. At present, most research focuses on the organizing function during signal transduction.
Sphingolipids are synthesized in a pathway that begins in the ER and is completed in the Golgi apparatus, but these lipids are enriched in the plasma membrane and in endosomes, where they perform many of their functions. Transport occurs via vesicles and monomeric transport in the cytosol. Sphingolipids are virtually absent from mitochondria and the ER, but constitute a 20–35 mol% fraction of plasma membrane lipids.
In experimental animals, feeding sphingolipids inhibits colon carcinogenesis, reduces LDL cholesterol and elevates HDL cholesterol.
Other sphingolipids
Sphingolipids are universal in eukaryotes but are rare in bacteria and archaea, meaning that they are evolutionally very old. Bacteria that do produce sphingolipids are found in some members of the superphylum FCB group (Sphingobacteria), particularly family Sphingomonadaceae, some members of the Bdellovibrionota, and some members of the Myxococcota.
Yeast sphingolipids
Because of the incredible complexity of mammalian systems, yeast are often used as a model organism for working out new pathways. These single-celled organisms are often more genetically tractable than mammalian cells, and strain libraries are available to supply strains harboring almost any non-lethal open reading frame single deletion. The two most commonly used yeasts are Saccharomyces cerevisiae and Schizosaccharomyces pombe, although research is also done in the pathogenic yeast Candida albicans.
In addition to the important structural functions of complex sphingolipids (inositol phosphorylceramide and its mannosylated derivatives), the sphingoid bases phytosphingosine and dihydrosphingosine (sphinganine) play vital signaling roles in S. cerevisiae. These effects include regulation of endocytosis, ubiquitin-dependent proteolysis (and, thus, regulation of nutrient uptake), cytoskeletal dynamics, the cell cycle, translation, posttranslational protein modification, and the heat stress response. Additionally, modulation of sphingolipid metabolism by phosphatidylinositol (4,5)-bisphosphate signaling via Slm1p and Slm2p and calcineurin has recently been described. Furthermore, a substrate-level interaction has been shown between complex sphingolipid synthesis and cycling of phosphatidylinositol 4-phosphate by the phosphatidylinositol kinase Stt4p and the lipid phosphatase Sac1p.
Plant sphingolipids
Higher plants contain a wider variety of sphingolipids than animals and fungi.
Disorders
There are several disorders of sphingolipid metabolism, known as sphingolipidoses. The main members of this group are Niemann-Pick disease, Fabry disease, Krabbe disease, Gaucher disease, Tay–Sachs disease and Metachromatic leukodystrophy. They are generally inherited in an autosomal recessive fashion, but notably Fabry disease is X-linked. Taken together, sphingolipidoses have an incidence of approximately 1 in 10,000, but substantially more in certain populations such as Ashkenazi Jews. Enzyme replacement therapy is available to treat mainly Fabry disease and Gaucher disease, and people with these types of sphingolipidoses may live well into adulthood. The other types are generally fatal by age 1 to 5 years for infantile forms, but progression may be mild for juvenile- or adult-onset forms.
Sphingolipids have also been implicated with the frataxin protein (Fxn), the deficiency of which is associated with Friedreich's ataxia (FRDA). Loss of Fxn in the nervous system in mice also activates an iron/sphingolipid/PDK1/Mef2 pathway, indicating that the mechanism is evolutionarily conserved. Furthermore, sphingolipid levels and PDK1 activity are also increased in hearts of FRDA patients, suggesting that a similar pathway is affected in FRDA. Other research has demonstrated that iron accumulation in the nervous systems of flies enhances the synthesis of sphingolipids, which in turn activates 3-phosphoinositide dependent protein kinase-1 (Pdk1) and myocyte enhancer factor-2 (Mef2) to trigger neurodegeneration of adult photoreceptors.
Sphingolipids play a key role in neuronal survival in Parkinson's disease (PD); alterations of their catabolic pathway in the brain are partly reflected in cerebrospinal fluid and blood, and have diagnostic potential.
Additional images
See also
Sphingosyl phosphatide
References
External links
Lipids
Cell biology | Sphingolipid | [
"Chemistry",
"Biology"
] | 2,701 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Cell biology",
"Lipids"
] |
1,100,516 | https://en.wikipedia.org/wiki/Model%20predictive%20control | Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized, while keeping future timeslots in account. This is achieved by optimizing a finite time-horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). Also MPC has the ability to anticipate future events and can take control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.
Generalized predictive control (GPC) and dynamic matrix control (DMC) are classical examples of MPC.
Overview
The models used in MPC are generally intended to represent the behavior of complex and simple dynamical systems. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics.
MPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control element (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are used as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints.
MPC uses the current plant measurements, the current dynamic state of the process, the MPC models, and the process variable targets and limits to calculate future changes in the dependent variables. These changes are calculated to hold the dependent variables close to target while honoring constraints on both independent and dependent variables. The MPC typically sends out only the first change in each independent variable to be implemented, and repeats the calculation when the next change is required.
While many real processes are not linear, they can often be considered to be approximately linear over a small operating range. Linear MPC approaches are used in the majority of applications with the feedback mechanism of the MPC compensating for prediction errors due to structural mismatch between the model and the process. In model predictive controllers that consist only of linear models, the superposition principle of linear algebra enables the effect of changes in multiple independent variables to be added together to predict the response of the dependent variables. This simplifies the control problem to a series of direct matrix algebra calculations that are fast and robust.
When linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used. In some cases, the process variables can be transformed before and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear model may be linearized to derive a Kalman filter or specify a model for linear MPC.
An algorithmic study by El-Gherwi, Budman, and El Kamel shows that utilizing a dual-mode approach can provide significant reduction in online computations while maintaining comparative performance to a non-altered implementation. The proposed algorithm solves N convex optimization problems in parallel based on exchange of information among controllers.
Theory behind MPC
MPC is based on iterative, finite-horizon optimization of a plant model. At time t the current plant state is sampled and a cost minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future: [t, t + T]. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of Euler–Lagrange equations) a cost-minimizing control strategy until time t + T. Only the first step of the control strategy is implemented, then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and new predicted state path. The prediction horizon keeps being shifted forward and for this reason MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method.
Principles of MPC
Model predictive control is a multivariable control algorithm that uses:
an internal dynamic model of the process
a cost function J over the receding horizon
an optimization algorithm minimizing the cost function J using the control input u
An example of a quadratic cost function for optimization (illustrated by the numerical sketch after the symbol list below) is given by:
J = Σ_(i=1..N) w_xi · (r_i − x_i)² + Σ_(i=1..N) w_ui · (Δu_i)²
without violating constraints (low/high limits) with
x_i: i-th controlled variable (e.g. measured temperature)
r_i: i-th reference variable (e.g. required temperature)
u_i: i-th manipulated variable (e.g. control valve)
w_xi: weighting coefficient reflecting the relative importance of x_i
w_ui: weighting coefficient penalizing relatively big changes in u_i
etc.
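The following minimal sketch (not from the article; the scalar plant model, weights, and bounds are illustrative assumptions) shows how a cost of this form is minimized over a receding horizon, with only the first computed move applied at each step:

```python
import numpy as np
from scipy.optimize import minimize

# Toy plant: x[k+1] = a*x[k] + b*u[k]  (assumed model; values are illustrative only)
a, b = 0.9, 0.5
N = 10                  # prediction horizon
w_x, w_u = 1.0, 0.1     # weights on tracking error and input moves
r = 1.0                 # setpoint
u_min, u_max = -1.0, 1.0

def cost(u_seq, x0, u_prev):
    """Quadratic MPC cost over the horizon for a candidate input sequence."""
    J, x, u_last = 0.0, x0, u_prev
    for u in u_seq:
        x = a * x + b * u
        J += w_x * (r - x) ** 2 + w_u * (u - u_last) ** 2
        u_last = u
    return J

x, u_prev = 0.0, 0.0
for k in range(30):                       # receding-horizon loop
    res = minimize(cost, np.zeros(N), args=(x, u_prev),
                   bounds=[(u_min, u_max)] * N, method="L-BFGS-B")
    u0 = res.x[0]                         # apply only the first move
    x = a * x + b * u0                    # plant responds; re-optimize at the next step
    u_prev = u0
    if k % 5 == 0:
        print(f"k={k:2d}  u={u0:+.3f}  x={x:.3f}")
```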
Nonlinear MPC
Nonlinear model predictive control, or NMPC, is a variant of model predictive control that is characterized by the use of nonlinear system models in the prediction. As in linear MPC, NMPC requires the iterative solution of optimal control problems on a finite prediction horizon. While these problems are convex in linear MPC, in nonlinear MPC they are not necessarily convex anymore. This poses challenges for both NMPC stability theory and numerical solution.
The numerical solution of the NMPC optimal control problems is typically based on direct optimal control methods using Newton-type optimization schemes, in one of the variants: direct single shooting, direct multiple shooting methods, or direct collocation. NMPC algorithms typically exploit the fact that consecutive optimal control problems are similar to each other. This allows the Newton-type solution procedure to be initialized efficiently by a suitably shifted guess from the previously computed optimal solution, saving considerable amounts of computation time. The similarity of subsequent problems is even further exploited by path following algorithms (or "real-time iterations") that never attempt to iterate any optimization problem to convergence, but instead only take a few iterations towards the solution of the most current NMPC problem, before proceeding to the next one, which is suitably initialized. Another promising candidate for the nonlinear optimization problem is to use a randomized optimization method. Optimum solutions are found by generating random samples that satisfy the constraints in the solution space and finding the optimum one based on the cost function.
While NMPC applications have in the past been mostly used in the process and chemical industries with comparatively slow sampling rates, NMPC is being increasingly applied, with advancements in controller hardware and computational algorithms, e.g., preconditioning, to applications with high sampling rates, e.g., in the automotive industry, or even when the states are distributed in space (Distributed parameter systems). As an application in aerospace, recently, NMPC has been used to track optimal terrain-following/avoidance trajectories in real-time.
Explicit MPC
Explicit MPC (eMPC) allows fast evaluation of the control law for some systems, in stark contrast to the online MPC. Explicit MPC is based on the parametric programming technique, where the solution to the MPC control problem formulated as an optimization problem is pre-computed offline. This offline solution, i.e., the control law, is often in the form of a piecewise affine function (PWA), hence the eMPC controller stores the coefficients of the PWA for each subset (control region) of the state space, where the PWA is constant, as well as coefficients of some parametric representations of all the regions. Every region turns out geometrically to be a convex polytope for linear MPC, commonly parameterized by coefficients for its faces, requiring quantization accuracy analysis. Obtaining the optimal control action is then reduced to first determining the region containing the current state and second a mere evaluation of the PWA using the PWA coefficients stored for all regions. If the total number of the regions is small, the implementation of the eMPC does not require significant computational resources (compared to the online MPC) and is uniquely suited to control systems with fast dynamics. A serious drawback of eMPC is exponential growth of the total number of the control regions with respect to some key parameters of the controlled system, e.g., the number of states, thus dramatically increasing controller memory requirements and making the first step of PWA evaluation, i.e. searching for the current control region, computationally expensive.
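A sketch of the online part of explicit MPC, point location followed by evaluation of the stored affine law, might look as follows; the region data here are hypothetical placeholders rather than the output of any real offline parametric solver:

```python
import numpy as np

# Each control region i is {x : A_i x <= b_i}; inside it the law is u = F_i x + g_i.
# These arrays are placeholder data standing in for an offline parametric solution.
regions = [
    {"A": np.array([[ 1.0], [-1.0]]), "b": np.array([0.0, 2.0]),
     "F": np.array([[-0.5]]),         "g": np.array([0.1])},
    {"A": np.array([[-1.0], [ 1.0]]), "b": np.array([0.0, 2.0]),
     "F": np.array([[-0.8]]),         "g": np.array([0.0])},
]

def empc_control(x):
    """Explicit MPC online step: locate the region containing x, then evaluate the PWA law."""
    for reg in regions:
        if np.all(reg["A"] @ x <= reg["b"] + 1e-9):
            return reg["F"] @ x + reg["g"]
    raise ValueError("state outside the stored regions (infeasible for this controller)")

print(empc_control(np.array([-0.5])))   # falls in region 0 -> u = -0.5*(-0.5) + 0.1
print(empc_control(np.array([ 1.5])))   # falls in region 1 -> u = -0.8*1.5
```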
Robust MPC
Robust variants of model predictive control are able to account for set bounded disturbance while still ensuring state constraints are met. Some of the main approaches to robust MPC are given below.
Min-max MPC. In this formulation, the optimization is performed with respect to all possible evolutions of the disturbance. This is the optimal solution to linear robust control problems, however it carries a high computational cost. The basic idea behind the min/max MPC approach is to modify the on-line "min" optimization to a "min-max" problem, minimizing the worst case of the objective function, maximized over all possible plants from the uncertainty set.
Constraint Tightening MPC. Here the state constraints are enlarged by a given margin so that a trajectory can be guaranteed to be found under any evolution of disturbance.
Tube MPC. This uses an independent nominal model of the system, and uses a feedback controller to ensure the actual state converges to the nominal state. The amount of separation required from the state constraints is determined by the robust positively invariant (RPI) set, which is the set of all possible state deviations that may be introduced by disturbance with the feedback controller.
Multi-stage MPC. This uses a scenario-tree formulation by approximating the uncertainty space with a set of samples and the approach is non-conservative because it takes into account that the measurement information is available at every time stage in the prediction and the decisions at every stage can be different and can act as recourse to counteract the effects of uncertainties. The drawback of the approach however is that the size of the problem grows exponentially with the number of uncertainties and the prediction horizon.
Tube-enhanced multi-stage MPC. This approach synergizes multi-stage MPC and tube-based MPC. It provides high degrees of freedom to choose the desired trade-off between optimality and simplicity by the classification of uncertainties and the choice of control laws in the predictions.
Commercially available MPC software
Commercial MPC packages are available and typically contain tools for model identification and analysis, controller design and tuning, as well as controller performance evaluation.
A survey of commercially available packages has been provided by S.J. Qin and T.A. Badgwell in Control Engineering Practice 11 (2003) 733–764.
MPC vs. LQR
Model predictive control and linear-quadratic regulators are both expressions of optimal control, with different schemes of setting up optimisation costs.
While a model predictive controller often looks at fixed-length, often gradually weighted sets of error functions, the linear-quadratic regulator looks at all linear system inputs and provides the transfer function that will reduce the total error across the frequency spectrum, trading off state error against input frequency.
Due to these fundamental differences, LQR has better global stability properties, but MPC often achieves more locally optimal performance at the price of greater complexity.
The main differences between MPC and LQR are that LQR optimizes across the entire time window (horizon) whereas MPC optimizes in a receding time window, and that with MPC a new solution is computed often whereas LQR uses the same single (optimal) solution for the whole time horizon. Therefore, MPC typically solves the optimization problem in a smaller time window than the whole horizon and hence may obtain a suboptimal solution. However, because MPC makes no assumptions about linearity, it can handle hard constraints as well as migration of a nonlinear system away from its linearized operating point, both of which are major drawbacks to LQR.
This means that LQR can become weak when operating away from stable fixed points. MPC can chart a path between these fixed points, but convergence of a solution is not guaranteed, especially if consideration of the convexity and complexity of the problem space has been neglected.
See also
Control engineering
Control theory
Feed-forward
System identification
References
Further reading
Rawlings, James B.; Mayne, David Q.; and Diehl, Moritz M.; Model Predictive Control: Theory, Computation, and Design (2nd Ed.), Nob Hill Publishing, LLC, (Oct. 2017)
Geyer, Tobias; Model predictive control of high power converters and industrial drives, Wiley, London, , Nov. 2016
External links
Case Study. Lancaster Waste Water Treatment Works, optimisation by means of Model Predictive Control from Perceptive Engineering
acados - Open-source framework for (nonlinear) model predictive control providing fast and embedded solvers for nonlinear optimization. (C, MATLAB and Python interface available)
μAO-MPC - Open Source Software package that generates tailored code for model predictive controllers on embedded systems in highly portable C code.
GRAMPC - Open source software framework for embedded nonlinear model predictive control using a gradient-based augmented Lagrangian method. (Plain C code, no code generation, MATLAB interface)
jMPC Toolbox - Open Source MATLAB Toolbox for Linear MPC.
Study on application of NMPC to superfluid cryogenics (PhD Project).
Nonlinear Model Predictive Control Toolbox for MATLAB and Python
Model Predictive Control Toolbox from MathWorks for design and simulation of model predictive controllers in MATLAB and Simulink
Pulse step model predictive controller - virtual simulator
Tutorial on MPC with Excel and MATLAB Examples
GEKKO: Model Predictive Control in Python
Control theory | Model predictive control | [
"Mathematics"
] | 3,052 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
9,105,867 | https://en.wikipedia.org/wiki/FitzHugh%E2%80%93Nagumo%20model | The FitzHugh–Nagumo model (FHN) describes a prototype of an excitable system (e.g., a neuron).
It is an example of a relaxation oscillator because, if the external stimulus I_ext exceeds a certain threshold value, the system will exhibit a characteristic excursion in phase space, before the variables v and w relax back to their rest values.
This behaviour is a sketch of neural spike generation, with a short, nonlinear elevation of membrane voltage v, diminished over time by a slower, linear recovery variable w representing sodium channel reactivation and potassium channel deactivation, after stimulation by an external input current I_ext.
The equations for this dynamical system read
dv/dt = v − v³/3 − w + R·I_ext,
τ·dw/dt = v + a − b·w.
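A minimal numerical integration of these equations (parameter values are common illustrative choices, not taken from the article) shows the threshold behaviour described above:

```python
import numpy as np

def simulate_fhn(I_ext, a=0.7, b=0.8, tau=12.5, R=1.0, dt=0.01, t_end=200.0):
    """Forward-Euler integration of the FitzHugh-Nagumo equations above."""
    v, w = -1.2, -0.625          # start near the resting state of the unstimulated system
    v_trace = []
    for _ in range(int(t_end / dt)):
        dv = v - v ** 3 / 3 - w + R * I_ext
        dw = (v + a - b * w) / tau
        v, w = v + dt * dv, w + dt * dw
        v_trace.append(v)
    return np.array(v_trace)

for I in (0.0, 0.5):             # below and above the spiking threshold
    print(f"I_ext = {I}: peak membrane voltage v = {simulate_fhn(I).max():.2f}")
```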
The FitzHugh–Nagumo model is a simplified 2D version of the Hodgkin–Huxley model which models in a detailed manner activation and deactivation dynamics of a spiking neuron.
In turn, the Van der Pol oscillator is a special case of the FitzHugh–Nagumo model, with a = b = 0.
History
It was named after Richard FitzHugh (1922–2007) who suggested the system in 1961 and Jinichi Nagumo et al. who created the equivalent circuit the following year.
In the original papers of FitzHugh, this model was called Bonhoeffer–Van der Pol oscillator (named after Karl-Friedrich Bonhoeffer and Balthasar van der Pol) because it contains the Van der Pol oscillator as a special case for a = b = 0. The equivalent circuit was suggested by Jin-ichi Nagumo, Suguru Arimoto, and Shuji Yoshizawa.
Qualitative analysis
Qualitatively, the dynamics of this system is determined by the relation between the three branches of the cubic nullcline and the linear nullcline.
The cubic nullcline is defined by dv/dt = 0, i.e. w = v − v³/3 + R·I_ext.
The linear nullcline is defined by dw/dt = 0, i.e. w = (v + a)/b.
In general, the two nullclines intersect at one or three points, each of which is an equilibrium point. At large values of , far from origin, the flow is a clockwise circular flow, consequently the sum of the index for the entire vector field is +1. This means that when there is one equilibrium point, it must be a clockwise spiral point or a node. When there are three equilibrium points, they must be two clockwise spiral points and one saddle point.
If the linear nullcline pierces the cubic nullcline from downwards then it is a clockwise spiral point or a node.
If the linear nullcline pierces the cubic nullcline from upwards in the middle branch, then it is a saddle point.
The type and stability of the equilibrium with index +1 can be determined by computing the trace and determinant of its Jacobian. The point is stable iff the trace is negative, that is, 1 − v² − b/τ < 0.
The point is a spiral point iff its eigenvalues are complex, that is, iff (trace)² − 4·(determinant) < 0.
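This classification is easy to check numerically for a given parameter set (a sketch; the equilibrium is located by a generic root finder and the parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import fsolve

a, b, tau, R, I = 0.7, 0.8, 12.5, 1.0, 0.5

def rhs(state):
    v, w = state
    return [v - v ** 3 / 3 - w + R * I, (v + a - b * w) / tau]

v_eq, w_eq = fsolve(rhs, [0.0, 0.0])       # the single equilibrium for these parameters
# Jacobian of the vector field evaluated at the equilibrium
J = np.array([[1 - v_eq ** 2, -1.0],
              [1.0 / tau, -b / tau]])
tr, det = np.trace(J), np.linalg.det(J)
kind = "spiral" if tr ** 2 < 4 * det else "node/saddle"
print(f"v* = {v_eq:.3f}, trace = {tr:.3f}, det = {det:.3f} -> "
      f"{'stable' if tr < 0 else 'unstable'} {kind}")
```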
The limit cycle is born when a stable spiral point becomes unstable by Hopf bifurcation.
Only when the linear nullcline pierces the cubic nullcline at three points, the system has a separatrix, being the two branches of the stable manifold of the saddle point in the middle.
If the separatrix is a curve, then trajectories to the left of the separatrix converge to the left sink, and similarly for the right.
If the separatrix is a cycle around the left intersection, then trajectories inside the separatrix converge to the left spiral point. Trajectories outside the separatrix converge to the right sink. The separatrix itself is the limit cycle of the lower branch of the stable manifold for the saddle point in the middle. Similarly for the case where the separatrix is a cycle around the right intersection.
Between the two cases, the system undergoes a homoclinic bifurcation.
Gallery figures: FitzHugh-Nagumo model, with , and varying . (They are animated. Open them to see the animation.)
See also
Autowave
Biological neuron model
Computational neuroscience
Hodgkin–Huxley model
Morris–Lecar model
Reaction–diffusion
Theta model
Chialvo map
References
Further reading
FitzHugh R. (1955) "Mathematical models of threshold phenomena in the nerve membrane". Bull. Math. Biophysics, 17:257—278
FitzHugh R. (1961) "Impulses and physiological states in theoretical models of nerve membrane". Biophysical J. 1:445–466
FitzHugh R. (1969) "Mathematical models of excitation and propagation in nerve". Chapter 1 (pp. 1–85 in H. P. Schwan, ed. Biological Engineering, McGraw–Hill Book Co., N.Y.)
Nagumo J., Arimoto S., and Yoshizawa S. (1962) "An active pulse transmission line simulating nerve axon". Proc. IRE. 50:2061–2070.
External links
FitzHugh–Nagumo model on Scholarpedia
Interactive FitzHugh-Nagumo. Java applet, includes phase space and parameters can be changed at any time.
Interactive FitzHugh–Nagumo in 1D. Java applet to simulate 1D waves propagating in a ring. Parameters can also be changed at any time.
Interactive FitzHugh–Nagumo in 2D. Java applet to simulate 2D waves including spiral waves. Parameters can also be changed at any time.
Java applet for two coupled FHN systems Options include time delayed coupling, self-feedback, noise induced excursions, data export to file. Source code available (BY-NC-SA license).
Nonlinear systems
Electrophysiology
Computational neuroscience
Biophysics
Articles containing video clips | FitzHugh–Nagumo model | [
"Physics",
"Mathematics",
"Biology"
] | 1,182 | [
"Nonlinear systems",
"Applied and interdisciplinary physics",
"Biophysics",
"Dynamical systems"
] |
9,105,950 | https://en.wikipedia.org/wiki/Global%20Ocean%20Data%20Analysis%20Project | The Global Ocean Data Analysis Project (GLODAP) is a synthesis project bringing together oceanographic data, featuring two major releases as of 2018. The central goal of GLODAP is to generate a global climatology of the World Ocean's carbon cycle for use in studies of both its natural and anthropogenically forced states. GLODAP is funded by the National Oceanic and Atmospheric Administration, the U.S. Department of Energy, and the National Science Foundation.
The first GLODAP release (v1.1) was produced from data collected during the 1990s by research cruises on the World Ocean Circulation Experiment, Joint Global Ocean Flux Study and Ocean-Atmosphere Exchange Study programmes. The second GLODAP release (v2) extended the first using data from cruises from 2000 to 2013. The data are available both as individual "bottle data" from sample sites, and as interpolated fields on a standard longitude, latitude, depth grid.
Dataset
The GLODAPv1.1 climatology contains analysed fields of "present day" (1990s) dissolved inorganic carbon (DIC), alkalinity, carbon-14 (14C), CFC-11 and CFC-12. The fields consist of three-dimensional, objectively-analysed global grids at 1° horizontal resolution, interpolated onto 33 standardised vertical intervals from the surface (0 m) to the abyssal seafloor (5500 m). In terms of temporal resolution, the relative scarcity of the source data mean that, unlike the World Ocean Atlas, averaged fields are only produced for the annual time-scale. The GLODAP climatology is missing data in certain oceanic provinces including the Arctic Ocean, the Caribbean Sea, the Mediterranean Sea and Maritime Southeast Asia.
Additionally, analysis has attempted to separate natural from anthropogenic DIC, to produce fields of pre-industrial (18th century) DIC and "present day" anthropogenic CO2. This separation allows estimation of the magnitude of the ocean sink for anthropogenic CO2, and is important for studies of phenomena such as ocean acidification. However, as anthropogenic DIC is chemically and physically identical to natural DIC, this separation is difficult. GLODAP used a mathematical technique known as C* (C-star) to deconvolute anthropogenic CO2 from natural DIC (there are a number of alternative methods). This uses information about ocean biogeochemistry and surface disequilibrium together with other ocean tracers including carbon-14, CFC-11 and CFC-12 (which indicate water mass age) to try to separate out natural CO2 from that added during the ongoing anthropogenic transient. The technique is not straightforward and has associated errors, although it is gradually being refined to improve it. Its findings are generally supported by independent predictions made by dynamic models.
The GLODAPv2 climatology largely repeats the earlier format, but makes use of the large number of observations of the ocean's carbon cycle made over the intervening period (2000–2013). The analysed "present-day" fields in the resulting dataset are normalised to year 2002. Anthropogenic carbon was estimated in GLODAPv2 using a "transit-time distribution" (TTD) method (an approach using a Green's function). In addition to updated fields of DIC (total and anthropogenic) and alkalinity, GLODAPv2 includes fields of seawater pH and calcium carbonate saturation state (Ω; omega). The latter is a non-dimensional number calculated by dividing the local carbonate ion concentration by the ambient saturation concentration for calcium carbonate (for the biomineral polymorphs calcite and aragonite), and relates to an oceanographic property, the carbonate compensation depth. Values of this below 1 indicate undersaturation, and potential dissolution, while values above 1 indicate supersaturation, and relative stability.
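For orientation, the saturation state calculation itself is straightforward once the carbonate ion concentration and its saturation value are known; the numbers below are illustrative, not GLODAP values:

```python
def saturation_state(co3, co3_sat):
    """Omega = [CO3 2-] / [CO3 2-]_sat; > 1 means supersaturated, < 1 undersaturated."""
    return co3 / co3_sat

# Illustrative surface-seawater numbers (mol/kg); the saturation concentration is a rough
# aragonite figure (solubility product divided by calcium concentration), not a GLODAP value.
co3 = 2.0e-4
co3_sat_aragonite = 6.65e-7 / 0.0103
omega = saturation_state(co3, co3_sat_aragonite)
print(f"Omega_aragonite = {omega:.2f} ->", "supersaturated" if omega > 1 else "undersaturated")
```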
Gallery
The following panels show sea surface concentrations of fields prepared by GLODAPv1.1. The "pre-industrial" is the 18th century, while "present-day" is approximately the 1990s.
The following panels show sea surface concentrations of fields prepared by GLODAPv2. The "pre-industrial" is the 18th century, while "present-day" is normalised to 2002. Note that these properties are shown in mass units (per kilogram of seawater) rather than the volume units (per cubic metre of seawater) used in the GLODAPv1.1 panels.
See also
Biogeochemical cycle
Biological pump
Continental shelf pump
Geochemical Ocean Sections Study
Joint Global Ocean Flux Study
Ocean acidification
Solubility pump
World Ocean Atlas
World Ocean Circulation Experiment
References
External links
GLODAP website, Bjerknes Climate Data Centre
GLODAP v1.1 website, National Oceanic and Atmospheric Administration
GLODAP v2 website, National Oceanic and Atmospheric Administration
Biological oceanography
Carbon
Chemical oceanography
Chlorofluorocarbons
Oceanography
Physical oceanography | Global Ocean Data Analysis Project | [
"Physics",
"Chemistry",
"Environmental_science"
] | 1,063 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Chemical oceanography",
"Physical oceanography"
] |
9,107,270 | https://en.wikipedia.org/wiki/UTOPIA%20%28bioinformatics%20tools%29 | UTOPIA (User-friendly Tools for Operating Informatics Applications) is a suite of free tools for visualising and analysing bioinformatics data. Based on an ontology-driven data model, it contains applications for viewing and aligning protein sequences, rendering complex molecular structures in 3D, and for finding and using resources such as web services and data objects. There are two major components, the protein analysis suite and UTOPIA documents.
Utopia Protein Analysis suite
The Utopia Protein Analysis suite is a collection of interactive tools for analysing protein sequence and protein structure. Up front are user-friendly and responsive visualisation applications, behind the scenes a sophisticated model that allows these to work together and hides much of the tedious work of dealing with file formats and web services.
Utopia Documents
Utopia Documents brings a fresh new perspective to reading the scientific literature, combining the convenience and reliability of the Portable Document Format (pdf) with the flexibility and power of the web.
History
Between 2003 and 2005 work on UTOPIA was funded via The e-Science North West Centre based at The University of Manchester by the Engineering and Physical Sciences Research Council, UK Department of Trade And Industry, and the European Molecular Biology Network (EMBnet). Since 2005 work continues under the EMBRACE European Network of Excellence.
UTOPIA's CINEMA (Colour INteractive Editor for Multiple Alignments), a tool for Sequence Alignment, is the latest incarnation of software originally developed at The University of Leeds to aid the analysis of G protein-coupled receptors (GPCRs). SOMAP, a Screen Oriented Multiple Alignment Procedure was developed in the late 1980s on the VMS computer operating system, used a monochrome text-based VT100 video terminal, and featured context-sensitive help and pulldown menus some time before these were standard operating system features.
SOMAP was followed by a Unix tool called VISTAS (VIsualizing STructures And Sequences) which included the ability to render 3D molecular structure and generate plots and statistical representations of sequence properties.
The first tool under the CINEMA banner developed at The University of Manchester was a Java-based applet launched via web pages, which is still available but is no longer maintained. A standalone Java version, called CINEMA-MX, was also released but is no longer readily available.
A C++ version of CINEMA, called CINEMA5 was developed early on as part of the UTOPIA project, and was released as a stand-alone sequence alignment application. It has now been replaced by a version of the tool integrated with UTOPIA's other visualisation applications, and its name has reverted simply to CINEMA.
References
Bioinformatics software
Computational science
Engineering and Physical Sciences Research Council
Department of Computer Science, University of Manchester
Science and technology in Greater Manchester | UTOPIA (bioinformatics tools) | [
"Mathematics",
"Biology"
] | 547 | [
"Computational science",
"Applied mathematics",
"Bioinformatics",
"Bioinformatics software"
] |
9,111,849 | https://en.wikipedia.org/wiki/Shape-memory%20polymer | Shape-memory polymers (SMPs) are polymeric smart materials that have the ability to return from a deformed state (temporary shape) to their original (permanent) shape when induced by an external stimulus (trigger), such as temperature change.
Properties of shape-memory polymers
SMPs can retain two or sometimes three shapes, and the transition between those is often induced by temperature change. In addition to temperature change, the shape change of SMPs can also be triggered by an electric or magnetic field, light or solution. Like polymers in general, SMPs cover a wide range of properties from stable to biodegradable, from soft to hard, and from elastic to rigid, depending on the structural units that constitute the SMP. SMPs include thermoplastic and thermoset (covalently cross-linked) polymeric materials. SMPs are known to be able to store up to three different shapes in memory. SMPs have demonstrated recoverable strains of above 800%.
Two important quantities that are used to describe shape-memory effects are the strain recovery rate (Rr) and strain fixity rate (Rf). The strain recovery rate describes the ability of the material to memorize its permanent shape, while the strain fixity rate describes the ability of switching segments to fix the mechanical deformation:
Rr(N) = (εm − εp(N)) / (εm − εp(N−1)),   Rf(N) = εu(N) / εm,
where N is the cycle number, εm is the maximum strain imposed on the material, εu(N) is the strain after unloading, and εp(N) and εp(N−1) are the strains of the sample in two successive cycles in the stress-free state before yield stress is applied.
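A worked example of these two quantities (with made-up strain data, purely for illustration):

```python
def strain_fixity(eps_u, eps_m):
    """Rf(N) = eps_u(N) / eps_m : how well the temporary shape is fixed."""
    return eps_u / eps_m

def strain_recovery(eps_m, eps_p_now, eps_p_prev):
    """Rr(N) = (eps_m - eps_p(N)) / (eps_m - eps_p(N-1)) : how well the permanent shape returns."""
    return (eps_m - eps_p_now) / (eps_m - eps_p_prev)

eps_m = 1.0                        # programmed (maximum) strain, 100%
eps_u = [0.97, 0.96, 0.96]         # strain after unloading in cycles 1..3
eps_p = [0.0, 0.02, 0.03, 0.035]   # residual strain before each cycle (eps_p(0) = 0)

for n in range(1, 4):
    rf = strain_fixity(eps_u[n - 1], eps_m)
    rr = strain_recovery(eps_m, eps_p[n], eps_p[n - 1])
    print(f"cycle {n}: Rf = {rf:.0%}, Rr = {rr:.1%}")
```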
Shape-memory effect can be described briefly as the following mathematical model:
where is the glassy modulus, is the rubbery modulus, is viscous flow strain and is strain for .
Triple-shape memory
While most traditional shape-memory polymers can only hold a permanent and temporary shape, recent technological advances have allowed the introduction of triple-shape-memory materials. Much as a traditional double-shape-memory polymer will change from a temporary shape back to a permanent shape at a particular temperature, triple-shape-memory polymers will switch from one temporary shape to another at the first transition temperature, and then back to the permanent shape at another, higher activation temperature. This is usually achieved by combining two double-shape-memory polymers with different glass transition temperatures or when heating a programmed shape-memory polymer first above the glass transition temperature and then above the melting transition temperature of the switching segment.
Description of the thermally induced shape-memory effect
Polymers exhibiting a shape-memory effect have both a visible, current (temporary) form and a stored (permanent) form. Once the latter has been manufactured by conventional methods, the material is changed into another, temporary form by processing through heating, deformation, and finally, cooling. The polymer maintains this temporary shape until the shape change into the permanent form is activated by a predetermined external stimulus. The secret behind these materials lies in their molecular network structure, which contains at least two separate phases. The phase showing the highest thermal transition, Tperm, is the temperature that must be exceeded to establish the physical crosslinks responsible for the permanent shape. The switching segments, on the other hand, are the segments with the ability to soften past a certain transition temperature (Ttrans) and are responsible for the temporary shape. In some cases this is the glass transition temperature (Tg) and others the melting temperature (Tm). Exceeding Ttrans (while remaining below Tperm) activates the switching by softening these switching segments and thereby allowing the material to resume its original (permanent) form. Below Ttrans, flexibility of the segments is at least partly limited. If Tm is chosen for programming the SMP, strain-induced crystallization of the switching segment can be initiated when it is stretched above Tm and subsequently cooled below Tm. These crystallites form covalent netpoints which prevent the polymer from reforming its usual coiled structure. The hard to soft segment ratio is often between 5/95 and 95/5, but ideally this ratio is between 20/80 and 80/20. The shape-memory polymers are effectively viscoelastic and many models and analysis methods exist.
Thermodynamics of the shape-memory effect
In the amorphous state, polymer chains assume a completely random distribution within the matrix. W represents the probability of a strongly coiled conformation, which is the conformation with maximum entropy, and is the most likely state for an amorphous linear polymer chain. This relationship is represented mathematically by Boltzmann's entropy formula S = k ln W, where S is the entropy and k is the Boltzmann constant.
In the transition from the glassy state to a rubber-elastic state by thermal activation, the rotations around segment bonds become increasingly unimpeded. This allows chains to assume other possibly, energetically equivalent conformations with a small amount of disentangling. As a result, the majority of SMPs will form compact, random coils because this conformation is entropically favored over a stretched conformation.
Polymers in this elastic state with number average molecular weight greater than 20,000 stretch in the direction of an applied external force. If the force is applied for a short time, the entanglement of polymer chains with their neighbors will prevent large movement of the chain and the sample recovers its original conformation upon removal of the force. If the force is applied for a longer period of time, however, a relaxation process takes place whereby a plastic, irreversible deformation of the sample takes place due to the slipping and disentangling of the polymer chains.
To prevent the slipping and flow of polymer chains, cross-linking can be used, both chemical and physical.
Physically crosslinked SMPs
Linear block copolymers
Representative shape-memory polymers in this category are polyurethanes, polyurethanes with ionic or mesogenic components made by prepolymer method. Other block copolymers also show the shape-memory effect, such as, block copolymer of polyethylene terephthalate (PET) and polyethyleneoxide (PEO), block copolymers containing polystyrene and poly(1,4-butadiene), and an ABA triblock copolymer made from poly(2-methyl-2-oxazoline) and polytetrahydrofuran.
Other thermoplastic polymers
A linear, amorphous polynorbornene (Norsorex, developed by CdF Chemie/Nippon Zeon) or organic-inorganic hybrid polymers consisting of polynorbornene units that are partially substituted by polyhedral oligosilsesquioxane (POSS) also have shape-memory effect.
Another example reported in the literature is a copolymer consisting of polycyclooctene (PCOE) and a norbornene-derived comonomer (PNBEDCA), which was synthesized through ring-opening metathesis polymerization (ROMP). The obtained copolymer was then readily modified by a grafting reaction of the NBEDCA units with polyhedral oligomeric silsesquioxanes (POSS) to afford a functionalized copolymer. It exhibits a shape-memory effect.
Chemically crosslinked SMPs
The main limitation of physically crosslinked polymers for the shape-memory application is irreversible deformation during memory programming due to the creep. The network polymer can be synthesized by either polymerization with multifunctional (3 or more) crosslinker or by subsequent crosslinking of a linear or branched polymer. They form insoluble materials which swell in certain solvents.
Crosslinked polyurethane
This material can be made by using excess diisocyanate or by using a crosslinker such as glycerin or trimethylol propane. Introduction of covalent crosslinking improves creep behavior and increases the recovery temperature and the recovery window.
PEO based crosslinked SMPs
The PEO-PET block copolymers can be crosslinked by using maleic anhydride, glycerin or dimethyl 5-isophthalates as a crosslinking agent. The addition of 1.5 wt% maleic anhydride increased shape recovery from 35% to 65% and tensile strength from 3 to 5 MPa.
Thermoplastic shape-memory
While shape-memory effects are traditionally limited to thermosetting plastics, some thermoplastic polymers, most notably PEEK, can be used as well.
Light-induced SMPs
Light-activated shape-memory polymers (LASMP) use processes of photo-crosslinking and photo-cleaving to change Tg. Photo-crosslinking is achieved by using one wavelength of light, while a second wavelength of light reversibly cleaves the photo-crosslinked bonds. The effect achieved is that the material may be reversibly switched between an elastomer and a rigid polymer. Light does not change the temperature, only the cross-linking density within the material. For example, it has been reported that polymers containing cinnamic groups can be fixed into predetermined shapes by UV light illumination (> 260 nm) and then recover their original shape when exposed to UV light of a different wavelength (< 260 nm). Examples of photoresponsive switches include cinnamic acid and cinnamylidene acetic acid.
Electro-active SMPs
The use of electricity to activate the shape-memory effect of polymers is desirable for applications where it would not be possible to use heat and is another active area of research. Some current efforts use conducting SMP composites with carbon nanotubes, short carbon fibers (SCFs), carbon black, or metallic Ni powder. These conducting SMPs are produced by chemically surface-modifying multi-walled carbon nanotubes (MWNTs) in a mixed solvent of nitric acid and sulfuric acid, with the purpose of improving the interfacial bonding between the polymers and the conductive fillers. The shape-memory effect in these types of SMPs has been shown to be dependent on the filler content and the degree of surface modification of the MWNTs, with the surface-modified versions exhibiting good energy conversion efficiency and improved mechanical properties.
Another technique being investigated involves the use of surface-modified superparamagnetic nanoparticles. When introduced into the polymer matrix, remote actuation of shape transitions is possible. An example of this involves the use of a composite containing between 2 and 12% magnetite nanoparticles. Nickel and hybrid fibers have also been used with some degree of success.
Shape-memory polymers vs. shape-memory alloys
Shape-memory polymers differ from shape-memory alloys (SMAs) in that a glass transition or melting transition from a hard to a soft phase is responsible for the shape-memory effect, whereas in shape-memory alloys martensitic/austenitic transitions are responsible for the shape-memory effect.
There are numerous advantages that make SMPs more attractive than shape memory alloys. They have a high capacity for elastic deformation (up to 200% in most cases), much lower cost, lower density, a broad range of application temperatures which can be tailored, easy processing, potential biocompatibility and biodegradability, and probably exhibit superior mechanical properties to those of SMAs.
Applications
Industrial applications
One of the first conceived industrial applications was in robotics where shape-memory (SM) foams were used to provide initial soft pretension in gripping. These SM foams could be subsequently hardened by cooling, making a shape adaptive grip. Since this time, the materials have seen widespread usage in, for example, the building industry (foam which expands with warmth to seal window frames), sports wear (helmets, judo and karate suits) and in some cases with thermochromic additives for ease of thermal profile observation. Polyurethane SMPs are also applied as an autochoke element for engines.
Application in photonics
One field in which SMPs are having a significant impact is photonics. Due to the shape changing capability, SMPs enable the production of functional and responsive photonic gratings. By using modern soft lithography techniques such as replica molding, it is possible to imprint periodic nanostructures, with sizes of the order of magnitude of visible light, onto the surface of shape memory polymeric blocks. As a result of the refractive index periodicity, these systems diffract light. By taking advantage of the polymer's shape memory effect, it is possible to reprogram the lattice parameter of the structure and consequently tune its diffractive behavior. Another application of SMPs in photonics is shape changing random lasers. By doping SMPs with highly scattering particles such as titania it is possible to tune the light transport properties of the composite. Additionally, optical gain may be introduced by adding a molecular dye to the material. By tuning both the amount of scatterers and the amount of organic dye, a light amplification regime may be observed when the composites are optically pumped. Shape memory polymers have also been used in conjunction with nanocellulose to fabricate composites exhibiting both chiroptical properties and a thermo-activated shape memory effect.
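To illustrate why changing the imprinted period retunes the optical response (a generic textbook relation, not a result from the cited work), a surface grating of period d obeys the grating equation
d (sin θ_m − sin θ_i) = m λ,
so reprogramming d through the shape-memory effect shifts the angle θ_m into which light of wavelength λ is diffracted in order m.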
Medical applications
Most medical applications of SMP have yet to be developed, but devices with SMP are now beginning to hit the market. Recently, this technology has expanded to applications in orthopedic surgery.
Additionally, SMPs are now being used in various ophthalmic devices including punctal plugs, glaucoma shunts and intraocular lenses.
Potential medical applications
SMPs are smart materials with potential applications as, e.g., intravenous cannulae, self-adjusting orthodontic wires, and selectively pliable tools for small-scale surgical procedures where currently metal-based shape-memory alloys such as Nitinol are widely used. Another application of SMP in the medical field could be its use in implants: for example minimally invasive, through small incisions or natural orifices, implantation of a device in its small temporary shape. Shape-memory technologies have shown great promise for cardiovascular stents, since they allow a small stent to be inserted along a vein or artery and then expanded to prop it open. After activating the shape memory by temperature increase or mechanical stress, it would assume its permanent shape. Certain classes of shape-memory polymers possess an additional property: biodegradability. This offers the option to develop temporary implants. In the case of biodegradable polymers, after the implant has fulfilled its intended use, e.g. healing/tissue regeneration has occurred, the material degrades into substances which can be eliminated by the body. Thus full functionality would be restored without the necessity for a second surgery to remove the implant. Examples of this development are vascular stents and surgical sutures. When used in surgical sutures, the shape-memory property of SMPs enables wound closure with self-adjusting optimal tension, which avoids tissue damage due to overtightened sutures and supports healing and regeneration. SMPs also have potential for use as compression garments and hands-free door openers, whereby the latter can be produced via so-called 4D printing.
Potential industrial applications
Further potential applications include self-repairing structural components, such as automobile fenders in which dents are repaired by application of temperature. After an undesired deformation, such as a dent in the fender, these materials "remember" their original shape. Heating them activates their "memory". In the example of the dent, the fender could be repaired with a heat source, such as a hair-dryer. The impact results in a temporary form, which changes back to the original form upon heating, so that, in effect, the plastic repairs itself. SMPs may also be useful in the production of aircraft which would morph during flight. Currently, the Defense Advanced Research Projects Agency (DARPA) is testing wings which would change shape by 150%.
The realization of better control over the switching behavior of polymers is seen as a key factor in implementing new technical concepts. For instance, an accurate setting of the onset temperature of shape recovery can be exploited to tune the release temperature of information stored in a shape-memory polymer. This may pave the way for the monitoring of temperature abuse of food or pharmaceuticals.
Recently, a new manufacturing process, mnemosynation, was developed at Georgia Tech to enable mass production of crosslinked SMP devices, which would otherwise be cost-prohibitive using traditional thermoset polymerization techniques. Mnemosynation was named for the Greek goddess of memory, Mnemosyne, and is the controlled imparting of memory on amorphous thermoplastic materials utilizing radiation-induced covalent crosslinking, much like vulcanization imparts recoverable elastomeric behavior on rubbers using sulfur crosslinks. Mnemosynation combines advances in ionizing radiation and tuning the mechanical properties of SMPs to enable traditional plastics processing (extrusion, blow molding, injection molding, resin transfer molding, etc.) and allows thermoset SMPs in complex geometries. The customizable mechanical properties of traditional SMPs are achievable with high throughput plastics processing techniques to enable mass producible plastic products with thermosetting shape-memory properties: low residual strains, tunable recoverable force and adjustable glass transition temperatures.
Brand protection and anti-counterfeiting
Shape memory polymers may serve as a technology platform for the safe storage and release of information. Overt anti-counterfeiting labels have been constructed that display a visual symbol or code when exposed to specific chemicals. Multifunctional labels may even make counterfeiting increasingly difficult. Shape memory polymers have already been extruded into shape memory films carrying covert and overt 3D embossed patterns internally; on heating, the 3D pattern is irreversibly revealed (embossed) or erased within seconds. Shape memory film can be used as a label substrate or face stock for anti-counterfeiting, brand protection, tamper-evident seals, anti-pilferage seals, etc.
Multifunctional composites
Using shape memory polymers as matrices, multifunctional composite materials can be produced. Such composites can have temperature-dependent shape morphing (i.e. shape memory) characteristics. This phenomenon allows these composites to be potentially used to create deployable structures such as booms, hinges, wings, etc. While using SMPs can help produce one-way shape morphing structures, it has been reported that using SMPs in combination with shape memory alloys allows creation of more complex shape memory composites that are capable of two-way shape memory deformation.
See also
Memory foam
Smart material
Shape-memory alloy
References
Polymer material properties
Smart materials | Shape-memory polymer | [
"Chemistry",
"Materials_science",
"Engineering"
] | 3,900 | [
"Polymer material properties",
"Smart materials",
"Materials science",
"Polymer chemistry"
] |
9,113,053 | https://en.wikipedia.org/wiki/Compatibility%20testing | Compatibility testing is a part of non-functional testing conducted on application software to ensure the application's compatibility with different computing environments.
The ISO 25010 standard (System and Software Quality Models) defines compatibility as the degree to which a software system can exchange information with other systems while sharing the same software and hardware environment. The degree to which a product can perform its required functions efficiently while sharing a common environment and resources with other products, without detrimental impact on any other product, is known as co-existence, while interoperability is the degree to which two or more systems, products, or components can exchange information and use the information that has been exchanged. In these contexts, compatibility testing gathers information about a product or software system to determine the extent of co-existence and interoperability exhibited by the system under test.
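In practice such testing is often organized as a matrix of target environments; the following short Python sketch (all environment names are hypothetical examples, not drawn from the standard) enumerates the combinations a test plan would have to cover:

from itertools import product

# Hypothetical environment axes for a compatibility test plan.
operating_systems = ["Windows 11", "Ubuntu 22.04", "macOS 14"]
browsers = ["Chrome", "Firefox", "Safari"]
databases = ["PostgreSQL 15", "MySQL 8"]

for os_name, browser, db in product(operating_systems, browsers, databases):
    print(f"test case: {os_name} + {browser} + {db}")
# 3 * 3 * 2 = 18 combinations; pairwise-selection techniques are often used to trim this list.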
See also
List of International Organization for Standardization standards, 24000-25999
ISO/IEC 25010:2011 Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models
References
Software testing | Compatibility testing | [
"Engineering"
] | 216 | [
"Software engineering",
"Software testing"
] |
9,115,050 | https://en.wikipedia.org/wiki/Department%20of%20Electrical%20Engineering%20and%20Computer%20Science%20at%20MIT | The Department of Electrical Engineering and Computer Science at MIT is an engineering department of the Massachusetts Institute of Technology in Cambridge, Massachusetts. It is regarded as one of the most prestigious in the world, and offers degrees of Master of Science, Master of Engineering, Doctor of Philosophy, and Doctor of Science.
History
The curriculum for the electrical engineering program was created in 1882, and was the first such program in the country. It was initially taught by the physics faculty. In 1902, the Institute set up a separate Electrical Engineering department. The department was renamed to Electrical Engineering and Computer Science in 1975, to highlight the new addition of computer science to the program.
Current faculty
Professors
Silvio Micali
Harold Abelson
Anant Agarwal
Akintunde I. Akinwande
Dimitri A. Antoniadis
Arvind
Arthur B. Baggeroer
Hari Balakrishnan
Dimitri P. Bertsekas
Robert C. Berwick
Duane S. Boning
Louis D. Braida
Rodney A. Brooks
Vincent W. S. Chan
Anantha P. Chandrakasan
Shafrira Goldwasser
Paul E. Gray (S.B. 1954, S.M. 1955, Ph.D. 1960)
Pablo A. Parrilo
L. Rafael Reif
Jerome H. Saltzer (Sc.D. 1966)
Kenneth N. Stevens (Sc.D. 1952)
Gerald J. Sussman (S.B. 1968, Ph.D. 1973, both in Mathematics)
Patrick H. Winston
Regina Barzilay
Associate professors
Saman P. Amarasinghe
Krste Asanovic
Marc Baldo
Sangeeta Bhatia
Vladimir Bulovic
Isaac L. Chuang
Michael Collins
Karl K. Berggren
Elfar Adalsteinsson
Tomas Palacios
Professors emeriti
Michael Athans
Abraham Bers
Amar Bose (S.B. 1951, S.M. 1952, Sc.D. 1956)
James D. Bruce
Fernando J. Corbató
Shaoul Ezekiel
Robert Fano (S.B. 1941, Sc.D. 1947)
Former faculty
Leo Beranek
Gordon S. Brown (S.B. 1931, S.M. 1934, Ph.D. 1938)
Vannevar Bush (Eng.D. 1916)
Jack Dennis (S.B. 1953, S.M. 1954, Sc.D. 1958)
Harold Edgerton (S.M. 1927, Sc.D. 1931)
Jay Wright Forrester (S.M. 1945)
Irwin M. Jacobs (S.M. 1957, Sc.D. 1959)
William B. Lenoir (S.B. 1961, S.M. 1962, Ph.D. 1965)
John McCarthy
Marvin Minsky
Julius Stratton (S.B. 1923, S.M. 1926)
Notable alumni
References
External links
MIT Department of Electrical Engineering and Computer Science
Electrical Engineering and Computer Science Department
Computer science departments in the United States
Science and technology in Massachusetts
Electrical and computer engineering departments | Department of Electrical Engineering and Computer Science at MIT | [
"Engineering"
] | 605 | [
"Electrical and computer engineering departments",
"Electrical and computer engineering",
"Engineering universities and colleges"
] |
9,115,966 | https://en.wikipedia.org/wiki/Decabromodiphenyl%20ether | Decabromodiphenyl ether (also referred to as decaBDE, DBDE, BDE-209) is a brominated flame retardant which belongs to the group of polybrominated diphenyl ethers (PBDEs). It was commercialised in the 1970s and was initially thought to be safe, but is now recognised as a hazardous and persistent pollutant. It was added to Annex A of the Stockholm Convention on Persistent Organic Pollutants in 2017, which means that treaty members must take measures to eliminate its production and use. The plastics industry started switching to decabromodiphenyl ethane as an alternative in the 1990s, but this is now also coming under regulatory pressure due to concerns over human health.
Composition, uses, and production
Commercial decaBDE is a technical mixture of various PBDE congeners (related compounds). Congener number 209 (decabromodiphenyl ether) and nonabromodiphenyl ether are the main components. The term decaBDE alone refers to only decabromodiphenyl ether, the single "fully brominated" PBDE.
DecaBDE is a flame retardant. The chemical "is always used in conjunction with antimony trioxide" in polymers, mainly in "high impact polystyrene (HIPS) which is used in the television industry for cabinet backs." DecaBDE is also used for "polypropylene drapery and upholstery fabric" by means of backcoating and "may also be used in some synthetic carpets."
The annual demand worldwide was estimated as 56,100 tonnes in 2001, of which the Americas accounted for 24,500 tonnes, Asia 23,000 tonnes, and Europe 7,600 tonnes. In 2012 between 2500 and 5000 metric tonnes of Deca-BDE was sold in Europe. As of 2007, Albemarle in the U.S., Chemtura in the U.S., ICL-IP in Israel, and Tosoh Corporation in Japan are the main manufacturers of DecaBDE.
Despite its listing in Annex A to the Stockholm Convention, decaBDE is still produced in China, namely in the provinces Shandong and Jiangsu.
Environmental chemistry
As stated in a 2006 review, "Deca-BDE has long been characterized as an environmentally stable and inert product that was not capable of degradation in the environment, not toxic, and therefore of no concern." However, "some scientists had not particularly believed that Deca-BDE was so benign, particularly as evidence to this effect came largely from the industry itself." One problem in studying the chemical was that "the detection of Deca-BDE in environmental samples is difficult and problematic"; only in the late 1990s did "analytical advances... allow detection at much lower concentrations."
DecaBDE is released by diverse processes into the environment, such as emissions from manufacture of decaBDE-containing products and from the products themselves. Elevated concentrations can be found in air, water, soil, food, sediment, sludge, and dust. A 2006 study concluded "in general, environmental concentrations of BDE-209 [i.e., decaBDE] appear to be increasing."
The question of debromination
An important scientific issue is whether decaBDE debrominates in the environment to PBDE congeners with fewer bromine atoms, since such PBDE congeners may be more toxic than decaBDE itself. Debromination may be "biotic" (caused by biological means) or "abiotic" (caused by nonbiological means). The European Union (EU) in May 2004 stated "the formation of PBT/vPvB (Persistent, Bioaccumulative, and Toxic / very Persistent, very Bioaccumulative) substances in the environment as a result of degradation [of decaBDE] is a possibility that cannot be quantified based on current knowledge." In September 2004 an Agency for Toxic Substances and Disease Registry (ATSDR) report asserted that "DecaBDE seems to be largely resistant to environmental degradation."
In May 2006, the EPHA Environment Network (now The Health and Environment Alliance) released a report reviewing the available scientific literature and concluding the following:
"It is difficult to assess the degree of BDE 209 photolytic debromination in house dust, soils and sediments when exposed to light. However, in cars debromination can be expected to occur more significantly."
"In sewage anaerobic bacteria can initiate debromination of BDE 209, albeit at a slower rate than photolytic debromination, but due to the large volumes of DecaBDE in sewage sludge this may be significant."
"Some fish appear capable of debrominating BDE 209 through metabolism. The extent of the metabolism varies among fish and it is difficult to determine the extent of debromination that would occur in the wild."
Subsequently, many studies have been published concerning decaBDE debromination. Common anaerobic soil bacteria debrominated decaBDE and octaBDE in a 2006 study. In 2006-2007 studies, metabolic debromination of decaBDE was demonstrated in fish, birds, cows, and rats. A 2007 study by La Guardia and colleagues measured PBDE congeners "from a wastewater treatment plant (sludge) to receiving stream sediments and associated aquatic biota"; it "support[ed] the hypothesis that metabolic debromination of BDE-209 [i.e., decaBDE] does occur in the aquatic environment under realistic conditions." In another 2007 study, Stapleton and Dodder exposed "both a natural and a BDE 209 spiked [house] dust material" to sunlight, and found "nonabrominated congeners" and "octabrominated congeners" consistent with debromination of decaBDE in the environment.
In March 2007 the Illinois Environmental Protection Agency concluded "it can be questioned how much abiotic and microbial degradation [of decaBDE] occurs under normal environmental conditions, and it is not clear whether the more toxic lower-brominated PBDEs are produced in significant quantities by any of these pathways." In September 2010, the UK Advisory Committee on Hazardous Substances issued an opinion that ‘there is strong but incomplete, scientific evidence indicating that Deca-BDE has the potential to undergo transformation to lower brominated congeners in the environment'.
Pharmacokinetics
Exposure to decaBDE is thought to occur by means of ingestion. Humans and animals do not absorb decaBDE well; at most, perhaps 2% of an oral dose is absorbed. It is believed that "the small amount of decaBDE that is absorbed can be metabolized".
Once in the body, decaBDE "might leave unchanged or as metabolites, mainly in the feces and in very small amounts in the urine, within a few days," in contrast with "lower brominated PBDEs... [which] might stay in your body for many years, stored mainly in body fat." In workers with occupational exposure to PBDEs, the calculated apparent half-life for decaBDE was 15 days, as opposed to (for example) an octaBDE congener with a half-life of 91 days.
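For a sense of scale (simple arithmetic, not data from the cited study), first-order elimination with half-life T½ leaves a fraction (1/2)^(t/T½) after time t: after 60 days roughly (1/2)^4 ≈ 6% of an absorbed decaBDE dose would remain (T½ = 15 days), compared with about (1/2)^(60/91) ≈ 63% for the octaBDE congener with a 91-day half-life.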
Detection in humans
In the general population, decaBDE has been found in blood and breast milk, but at lower levels than other PBDE congeners such as 47, 99, and 153. A 2004 investigation carried out by the WWF detected decaBDE in blood samples from 3 of 14 ministers of health and environment of European Union countries, while (for example) PBDE-153 was found in all 14.
Possible health effects in humans
In 2004, ATSDR wrote "Nothing definite is known about the health effects of PBDEs in people. Practically all of the available information is from studies of laboratory animals. Animal studies indicate that commercial decaBDE mixtures are generally much less toxic than the products containing lower brominated PBDEs. DecaBDE is expected to have relatively little effect on the health of humans." Based on animal studies, the possible health effects of decaBDE in humans involve the liver, thyroid, reproductive/developmental effects, and neurological effects.
Liver
ATSDR stated in 2004 "We don’t know if PBDEs can cause cancer in people, although liver tumors developed in rats and mice that ate extremely large amounts of decaBDE throughout their lifetime. On the basis of evidence for cancer in animals, decaBDE is classified as a possible human carcinogen by EPA [i.e., the United States Environmental Protection Agency]."
Thyroid
One 2006 review concluded "Decreases in thyroid hormone levels have been reported in several studies, and thyroid gland enlargement (an early sign of hypothyroidism) has been shown in studies of longer duration exposure." A 2007 experiment giving decaBDE to pregnant mice found that decaBDE "is likely an endocrine disrupter in male mice following exposure during development" based on results such as decreased serum triiodothyronine.
Reproductive/developmental effects
"Significant data gaps" exist in the scientific literature on a possible relationship between decaBDE and reproductive/developmental effects. A 2006 study of mice found that decaBDE decreased some "sperm functions."
Neurological effects
EPA has determined that daily Deca exposures should be less than 7 μg/kg-d (micrograms per kilogram bodyweight per day) to minimize the chance of brain and nervous system toxicity. EPA based their assessment on a study in 2003 on neurotoxicity in mice, which some have "criticized for certain procedural and statistical problems." A 2007 study in mice "suggest[ed] that decaBDE is a developmental neurotoxicant that can produce long-term behavioral changes following a discrete period of neonatal exposure." Administration of decaBDE to male rats at 3 days of age in another 2007 study "was shown to disrupt normal spontaneous behaviour at 2 months of age."
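For scale (simple arithmetic, not from the cited assessment), 7 μg/kg-d corresponds to roughly 490 μg per day for a 70 kg adult or about 70 μg per day for a 10 kg child.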
Overall risks and benefits
In 2002–2003 the American Chemistry Council's Brominated Flame Retardant Industry Panel, citing an unpublished 1997 study, estimated that 280 deaths due to fires are prevented each year in the U.S. because of the use of decaBDE. The industry advocacy group American Council on Science and Health, in a 2006 report largely concerning decaBDE, said that "the benefits of PBDE flame retardants, in terms of lives saved and injuries prevented, far outweigh any demonstrated or likely negative health effects from their use." A 2006 study concluded "current levels of Deca in the United States are unlikely to represent an adverse health risk for children." A report from the Swedish National Testing and Research Institute concerning the costs and benefits of decaBDE in television sets that was funded by BSEF assumed "no cost for injuries (either to humans or the environment) due to exposure to flame retardants... as there was no indication that such costs exist for DecaBDE"; it found that decaBDE's benefits exceeded its costs.
Voluntary and governmental actions
Europe
In Germany, plastics manufacturers and the textile additives industry "declared in 1986 a voluntary phase-out of the use of PBDEs, including Deca-BDE." Although decaBDE was to be phased out of electrical and electronic equipment in the EU by 2006 under the EU's Restriction of Hazardous Substances Directive (RoHS), decaBDE use has been exempted from RoHS during 2005–2010. A case in the European Court of Justice against the RoHS exemption was decided against Deca-BDE and its use must be phased out by July 1, 2008. Sweden, an EU member, banned decaBDE as of 2007. The former European Brominated Flame Retardant Industry Panel (EBFRIP), now merged with EFRA, the European Flame Retardant Association, stated that Sweden's ban on DecaBDE "was a serious breach of EU law". The European Commission then started an infringement procedure against Sweden, which led to the Swedish Government repealing this restriction on 1 July 2008. The environment agency of Norway, which is a member of the European Free Trade Association but is not a member of the EU, recommended that decaBDE be banned from electronic products in 2008.
DecaBDE has been the subject of a ten-year evaluation under the EU Risk Assessment procedure which has reviewed over 1100 studies. The Risk Assessment was published on the EU Official Journal in May 2008. Deca was registered under the EU's REACH Regulation at the end of August 2010.
The UK's Advisory Committee on Hazardous Substances (ACHS) presented their conclusions following a review of the emerging studies on Deca-BDE on 14 September 2010.
On 5 July ECHA withdrew Deca-BDE from its list of priority substances for Authorisation under REACH, therefore closing the public consultation. On 1 August 2014, ECHA submitted a restriction proposal for Deca-BDE. The agency is proposing a restriction on the manufacture, use and placing on the market of the substance and of mixtures and articles containing it. On 17 September 2014, ECHA submitted the restriction report, which initiated a six-month public consultation. On 9 February 2017, the European Commission adopted Regulation EU 2017/227. Article 1 of this regulation states that Regulation (EC) No 1907/2006 is amended to include a ban on the use of decaBDE in quantities greater than 0.1% by weight, effective from 2 March 2019. Products placed on the market prior to 2 March 2019 are exempt. Furthermore, the use of decaBDE in aircraft is permissible until 2 March 2027.
This EU process is running in parallel with a UNEP review to determine whether Deca-BDE should be listed as a Persistent Organic Pollutant (POP) under the Stockholm Convention.
United States
As of mid-2007 two states had instituted measures to phase out decaBDE. In April 2007 the state of Washington passed a law banning the manufacture, sale, and use of decaBDE in mattresses as of 2008; the ban "could be extended to TVs, computers and upholstered residential furniture in 2011 provided an alternative flame retardant is approved." In June 2007 the state of Maine passed a law "ban[ning] the use of deca-BDE in mattresses and furniture on January 1, 2008 and phas[ing] out its use in televisions and other plastic-cased electronics by January 1, 2010." As of 2007, other states considering restrictions on decaBDE include California, Connecticut, Hawaii, Illinois, Massachusetts, Michigan, Minnesota, Montana, New York, and Oregon.
On December 17, 2009, as the result of negotiations with EPA, the two U.S. producers of decabromodiphenyl ether (decaBDE), Albemarle Corporation and Chemtura Corporation, and the largest U.S. importer, ICL Industrial Products, Inc., announced commitments to voluntarily phase out decaBDE in the United States by the end of 2013.
Alternatives
A number of reports have examined alternatives to decaBDE as a flame retardant. At least three U.S. states have evaluated decaBDE alternatives:
Washington concluded in 2006 that "there do not appear to be any obvious alternatives to Deca-BDE that are less toxic, persistent and bioaccumulative and have enough data available for making a robust assessment" and that "there is much more data available on Deca-BDE than for any of the alternatives."
Maine in January 2007 stated that bisphenol A diphenyl phosphate (also known as BDP, BPADP, bisphenol A diphosphate, or BAPP) "is not a suitable alternative to decaBDE" because "one of the degradation products is bisphenol A, a potent endocrine disruptor." The report listed resorcinol bis(diphenyl phosphate) (also known as RDP), magnesium hydroxide, and other chemicals as alternatives to decaBDE that are "most likely to be used."
A March 2007 report from Illinois categorized decaBDE alternatives as "Potentially Unproblematic," "Potentially Problematic," "Insufficient Data," and "Not Recommended." The "Potentially Unproblematic" alternatives were BAPP, RDP, aluminum trihydroxide, and magnesium hydroxide.
References
Flame retardants
Bromobenzene derivatives
PBT substances
Persistent organic pollutants under the Stockholm Convention
Diphenyl ethers | Decabromodiphenyl ether | [
"Chemistry"
] | 3,467 | [
"Persistent organic pollutants under the Stockholm Convention"
] |
656,099 | https://en.wikipedia.org/wiki/De%20Casteljau%27s%20algorithm | In the mathematical field of numerical analysis, De Casteljau's algorithm is a recursive method to evaluate polynomials in Bernstein form or Bézier curves, named after its inventor Paul de Casteljau. De Casteljau's algorithm can also be used to split a single Bézier curve into two Bézier curves at an arbitrary parameter value.
The algorithm is numerically stable when compared to direct evaluation of polynomials. The computational complexity of this algorithm is O(dn²), where d is the number of dimensions, and n is the number of control points. There exist faster alternatives.
Definition
A Bézier curve B (of degree n, with control points β_0, β_1, …, β_n) can be written in Bernstein form as follows
B(t) = Σ_{i=0}^{n} β_i b_{i,n}(t),
where b_{i,n}(t) is a Bernstein basis polynomial
b_{i,n}(t) = (n choose i) (1 − t)^{n−i} t^i.
The curve at point t_0 can be evaluated with the recurrence relation
β_i^{(0)} := β_i, for i = 0, …, n
β_i^{(j)} := β_i^{(j−1)} (1 − t_0) + β_{i+1}^{(j−1)} t_0, for i = 0, …, n − j and j = 1, …, n.
Then, the evaluation of B at point t_0 can be performed in O(n²) operations. The result is given by
B(t_0) = β_0^{(n)}.
Moreover, the Bézier curve can be split at point t_0 into two curves with respective control points
β_0^{(0)}, β_0^{(1)}, …, β_0^{(n)}   and   β_0^{(n)}, β_1^{(n−1)}, …, β_n^{(0)}.
Geometric interpretation
The geometric interpretation of De Casteljau's algorithm is straightforward.
Consider a Bézier curve with control points P_0, …, P_n. Connecting the consecutive points we create the control polygon of the curve.
Subdivide now each line segment of this polygon with the ratio t : (1 − t) and connect the points you get. This way you arrive at the new polygon having one fewer segment.
Repeat the process until you arrive at the single point – this is the point of the curve corresponding to the parameter t.
The following picture shows this process for a cubic Bézier curve:
Note that the intermediate points that were constructed are in fact the control points for two new Bézier curves, both exactly coincident with the old one. This algorithm not only evaluates the curve at t, but splits the curve into two pieces at t, and provides the equations of the two sub-curves in Bézier form.
The interpretation given above is valid for a nonrational Bézier curve. To evaluate a rational Bézier curve in R^n, we may project the point into R^{n+1}; for example, a curve in three dimensions may have its control points (x_i, y_i, z_i) and weights w_i projected to the weighted control points (w_i x_i, w_i y_i, w_i z_i, w_i). The algorithm then proceeds as usual, interpolating in R^4. The resulting four-dimensional points may be projected back into three-space with a perspective divide.
In general, operations on a rational curve (or surface) are equivalent to operations on a nonrational curve in a projective space. This representation as the "weighted control points" and weights is often convenient when evaluating rational curves.
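The homogeneous-coordinate procedure described above can be sketched in a few lines of Python (this example is not part of the article's implementations; the function name and structure are illustrative only):

def rational_de_casteljau(t, points, weights):
    """Evaluate a rational Bezier curve at parameter t.

    points  -- control points, each a tuple of coordinates
    weights -- one positive weight per control point
    """
    # Lift each control point to homogeneous coordinates: (w*x, w*y, ..., w).
    beta = [tuple(w * c for c in p) + (w,) for p, w in zip(points, weights)]
    n = len(beta)
    # Ordinary De Casteljau recurrence, applied coordinate-wise in the lifted space.
    for j in range(1, n):
        for k in range(n - j):
            beta[k] = tuple((1 - t) * a + t * b
                            for a, b in zip(beta[k], beta[k + 1]))
    # Perspective divide: drop the weight coordinate and divide by it.
    *coords, w = beta[0]
    return tuple(c / w for c in coords)

For example, rational_de_casteljau(0.5, [(0, 0), (1, 2), (2, 0)], [1, 2, 1]) returns (1.0, 1.333...), the midpoint of a weighted quadratic curve pulled toward its middle control point.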
Notation
When doing the calculation by hand it is useful to write down the coefficients in a triangle scheme as
β_0 = β_0^{(0)}
β_1 = β_1^{(0)}   β_0^{(1)}
β_2 = β_2^{(0)}   β_1^{(1)}   β_0^{(2)}
⋮
β_n = β_n^{(0)}   β_{n−1}^{(1)}   …   β_0^{(n)}
When choosing a point t_0 to evaluate a Bernstein polynomial we can use the two diagonals of the triangle scheme to construct a division of the polynomial
B(t) = Σ_{i=0}^{n} β_i^{(0)} b_{i,n}(t),   t ∈ [0, 1]
into
B_1(t) = Σ_{i=0}^{n} β_0^{(i)} b_{i,n}(t / t_0),   t ∈ [0, t_0]
and
B_2(t) = Σ_{i=0}^{n} β_i^{(n−i)} b_{i,n}((t − t_0) / (1 − t_0)),   t ∈ [t_0, 1].
Bézier curve
When evaluating a Bézier curve of degree n in 3-dimensional space with n + 1 control points Pi
B(t) = Σ_{i=0}^{n} P_i b_{i,n}(t),   t ∈ [0, 1]
with
P_i := (x_i, y_i, z_i),
we split the Bézier curve into three separate equations
B_x(t) = Σ_{i=0}^{n} x_i b_{i,n}(t),   B_y(t) = Σ_{i=0}^{n} y_i b_{i,n}(t),   B_z(t) = Σ_{i=0}^{n} z_i b_{i,n}(t),
which we evaluate individually using De Casteljau's algorithm.
Example
We want to evaluate the Bernstein polynomial of degree 2 with the Bernstein coefficients
β_0^{(0)}, β_1^{(0)}, β_2^{(0)}
at the point t_0.
We start the recursion with
β_0^{(1)} = β_0^{(0)} (1 − t_0) + β_1^{(0)} t_0
β_1^{(1)} = β_1^{(0)} (1 − t_0) + β_2^{(0)} t_0
and with the second iteration the recursion stops with
β_0^{(2)} = β_0^{(1)} (1 − t_0) + β_1^{(1)} t_0
        = β_0^{(0)} (1 − t_0)² + 2 β_1^{(0)} t_0 (1 − t_0) + β_2^{(0)} t_0²
which is the expected Bernstein polynomial of degree 2.
Implementations
Here are example implementations of De Casteljau's algorithm in various programming languages.
Haskell
deCasteljau :: Double -> [(Double, Double)] -> (Double, Double)
deCasteljau t [b] = b
deCasteljau t coefs = deCasteljau t reduced
where
reduced = zipWith (lerpP t) coefs (tail coefs)
lerpP t (x0, y0) (x1, y1) = (lerp t x0 x1, lerp t y0 y1)
lerp t a b = t * b + (1 - t) * a
Python
def de_casteljau(t: float, coefs: list[float]) -> float:
"""De Casteljau's algorithm."""
beta = coefs.copy() # values in this list are overridden
n = len(beta)
for j in range(1, n):
for k in range(n - j):
beta[k] = beta[k] * (1 - t) + beta[k + 1] * t
return beta[0]
Java
public double deCasteljau(double t, double[] coefficients) {
double[] beta = coefficients;
int n = beta.length;
for (int i = 1; i < n; i++) {
for (int j = 0; j < (n - i); j++) {
beta[j] = beta[j] * (1 - t) + beta[j + 1] * t;
}
}
return beta[0];
}
JavaScript
The following JavaScript function applies De Casteljau's algorithm to an array of control points, or "poles" as De Casteljau originally named them, reducing them step by step until reaching a point on the curve for a given t between 0 (the first point of the curve) and 1 (the last one).
function crlPtReduceDeCasteljau(points, t) {
let retArr = [ points.slice () ];
while (points.length > 1) {
let midpoints = [];
for (let i = 0; i+1 < points.length; ++i) {
let ax = points[i][0];
let ay = points[i][1];
let bx = points[i+1][0];
let by = points[i+1][1];
// a * (1-t) + b * t = a + (b - a) * t
midpoints.push([
ax + (bx - ax) * t,
ay + (by - ay) * t,
]);
}
retArr.push (midpoints)
points = midpoints;
}
return retArr;
}
For example,
var poles = [ [0, 128], [128, 0], [256, 0], [384, 128] ]
crlPtReduceDeCasteljau (poles, .5)
returns the array
[ [ [0, 128], [128, 0], [256, 0], [384, 128 ] ],
[ [64, 64], [192, 0], [320, 64] ],
[ [128, 32], [256, 32]],
[ [192, 32]],
]
which yields the points and segments plotted below:
See also
Bézier curves
De Boor's algorithm
Horner scheme to evaluate polynomials in monomial form
Clenshaw algorithm to evaluate polynomials in Chebyshev form
References
External links
Piecewise linear approximation of Bézier curves – description of De Casteljau's algorithm, including a criterion to determine when to stop the recursion
Bézier Curves and Picasso — Description and illustration of De Casteljau's algorithm applied to cubic Bézier curves.
de Casteljau's algorithm - Implementation help and interactive demonstration of the algorithm.
Splines (mathematics)
Numerical analysis
Articles with example Haskell code | De Casteljau's algorithm | [
"Mathematics"
] | 1,525 | [
"Computational mathematics",
"Mathematical relations",
"Approximations",
"Numerical analysis"
] |
656,692 | https://en.wikipedia.org/wiki/Zeitschrift%20f%C3%BCr%20Physikalische%20Chemie | Zeitschrift für Physikalische Chemie (English: Journal of Physical Chemistry) is a monthly peer-reviewed scientific journal covering physical chemistry that is published by Oldenbourg Wissenschaftsverlag. Its English subtitle is "International Journal of Research in Physical Chemistry and Chemical Physics". It was established in 1887 by Wilhelm Ostwald, Jacobus Henricus van 't Hoff, and Svante August Arrhenius as the first scientific journal for publications specifically in the field of physical chemistry. The editor-in-chief is Klaus Rademann (Humboldt University of Berlin).
Abstracting and indexing
The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.408.
References
External links
Physical chemistry journals
Publications established in 1887
Monthly journals
De Gruyter academic journals
English-language journals
Jacobus Henricus van 't Hoff | Zeitschrift für Physikalische Chemie | [
"Chemistry"
] | 193 | [
"Physical chemistry journals",
"Physical chemistry stubs"
] |
656,965 | https://en.wikipedia.org/wiki/Polymer%20chemistry | Polymer chemistry is a sub-discipline of chemistry that focuses on the structures, chemical synthesis, and chemical and physical properties of polymers and macromolecules. The principles and methods used within polymer chemistry are also applicable through a wide range of other chemistry sub-disciplines like organic chemistry, analytical chemistry, and physical chemistry. Many materials have polymeric structures, from fully inorganic metals and ceramics to DNA and other biological molecules. However, polymer chemistry is typically related to synthetic and organic compositions. Synthetic polymers are ubiquitous in commercial materials and products in everyday use, such as plastics, and rubbers, and are major components of composite materials. Polymer chemistry can also be included in the broader fields of polymer science or even nanotechnology, both of which can be described as encompassing polymer physics and polymer engineering.
History
The work of Henri Braconnot in 1832 and the work of Christian Schönbein in 1846 led to the discovery of nitrocellulose, which, when treated with camphor, produced celluloid. Dissolved in ether or acetone, it becomes collodion, which has been used as a wound dressing since the U.S. Civil War. Cellulose acetate was first prepared in 1865. In the years 1834–1844 the properties of rubber (polyisoprene) were found to be greatly improved by heating with sulfur, thus founding the vulcanization process.
In 1884 Hilaire de Chardonnet started the first artificial fiber plant based on regenerated cellulose, or viscose rayon, as a substitute for silk, but it was very flammable. In 1907 Leo Baekeland invented the first polymer made independent of the products of organisms, a thermosetting phenol-formaldehyde resin called Bakelite. Around the same time, Hermann Leuchs reported the synthesis of amino acid N-carboxyanhydrides and their high molecular weight products upon reaction with nucleophiles, but stopped short of referring to these as polymers, possibly due to the strong views espoused by Emil Fischer, his direct supervisor, denying the possibility of any covalent molecule exceeding 6,000 daltons. Cellophane was invented in 1908 by Jacques Brandenberger who treated sheets of viscose rayon with acid.
The chemist Hermann Staudinger first proposed that polymers consisted of long chains of atoms held together by covalent bonds, which he called macromolecules. His work expanded the chemical understanding of polymers and was followed by an expansion of the field of polymer chemistry during which such polymeric materials as neoprene, nylon and polyester were invented. Before Staudinger, polymers were thought to be clusters of small molecules (colloids), without definite molecular weights, held together by an unknown force. Staudinger received the Nobel Prize in Chemistry in 1953. Wallace Carothers invented the first synthetic rubber called neoprene in 1931, the first polyester, and went on to invent nylon, a true silk replacement, in 1935. Paul Flory was awarded the Nobel Prize in Chemistry in 1974 for his work on polymer random coil configurations in solution in the 1950s. Stephanie Kwolek developed an aramid, or aromatic nylon named Kevlar, patented in 1966. Karl Ziegler and Giulio Natta received a Nobel Prize for their discovery of catalysts for the polymerization of alkenes. Alan J. Heeger, Alan MacDiarmid, and Hideki Shirakawa were awarded the 2000 Nobel Prize in Chemistry for the development of polyacetylene and related conductive polymers. Polyacetylene itself did not find practical applications, but organic light-emitting diodes (OLEDs) emerged as one application of conducting polymers.
Teaching and research programs in polymer chemistry were introduced in the 1940s. An Institute for Macromolecular Chemistry was founded in 1940 in Freiburg, Germany under the direction of Staudinger. In America, a Polymer Research Institute (PRI) was established in 1941 by Herman Mark at the Polytechnic Institute of Brooklyn (now Polytechnic Institute of NYU).
Polymers and their properties
Polymers are high molecular mass compounds formed by polymerization of monomers. They are synthesized by polymerization and can be modified by the addition of comonomers or additives, which change the polymer's mechanical properties, processability, durability, and so on. The simple reactive molecule from which the repeating structural units of a polymer are derived is called a monomer. A polymer can be described in many ways: its degree of polymerisation, molar mass distribution, tacticity, copolymer distribution, degree of branching, end-groups, crosslinks, crystallinity and thermal properties such as its glass transition temperature and melting temperature. Polymers in solution have special characteristics with respect to solubility, viscosity, and gelation. Illustrative of the quantitative aspects of polymer chemistry, particular attention is paid to the number-average and weight-average molecular weights, M_n and M_w, respectively.
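For reference (standard definitions rather than anything specific to this article), if N_i denotes the number of chains with molar mass M_i, these averages are
M_n = Σ N_i M_i / Σ N_i
M_w = Σ N_i M_i² / Σ N_i M_i,
and the ratio M_w / M_n (the dispersity) measures the breadth of the molar mass distribution, equalling 1 only for perfectly uniform chains.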
The formation and properties of polymers have been rationalized by many theories including Scheutjens–Fleer theory, Flory–Huggins solution theory, Cossee–Arlman mechanism, Polymer field theory, Hoffman Nucleation Theory, Flory–Stockmayer theory, and many others.
The study of polymer thermodynamics helps improve the material properties of various polymer-based materials such as polystyrene (styrofoam) and polycarbonate. Common improvements include toughening, improving impact resistance, improving biodegradability, and altering a material's solubility.
Viscosity
As polymers get longer and their molecular weight increases, their viscosity tends to increase. Thus, the measured viscosity of polymers can provide valuable information about the average length of the polymer, the progress of reactions, and in what ways the polymer branches.
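A commonly used empirical link between solution viscosity and chain length (a standard relation, not stated explicitly in this article) is the Mark–Houwink equation
[η] = K M^a,
where [η] is the intrinsic viscosity, M the molar mass, and K and a are constants for a given polymer–solvent–temperature combination, with a typically between about 0.5 and 0.8 for flexible chains in solution.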
Classification
Polymers can be classified in many ways. Polymers, strictly speaking, comprise most solid matter: minerals (i.e. most of the Earth's crust) are largely polymers, metals are 3-d polymers, organisms, living and dead, are composed largely of polymers and water. Often polymers are classified according to their origin:
biopolymers
synthetic polymers
inorganic polymers
Biopolymers are the structural and functional materials that comprise most of the organic matter in organisms. One major class of biopolymers are proteins, which are derived from amino acids. Polysaccharides, such as cellulose, chitin, and starch, are biopolymers derived from sugars. The polynucleic acids DNA and RNA are derived from phosphorylated sugars with pendant nucleotides that carry genetic information.
Synthetic polymers are the structural materials manifested in plastics, synthetic fibers, paints, building materials, furniture, mechanical parts, and adhesives. Synthetic polymers may be divided into thermoplastic polymers and thermoset plastics. Thermoplastic polymers include polyethylene, teflon, polystyrene, polypropylene, polyester, polyurethane, Poly(methyl methacrylate), polyvinyl chloride, nylons, and rayon. Thermoset plastics include vulcanized rubber, bakelite, Kevlar, and polyepoxide. Almost all synthetic polymers are derived from petrochemicals.
See also
Polymer
Polymer science
Polymer physics
Chemistry
Polymer topology
References | Polymer chemistry | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,520 | [
"Materials science",
"Polymer chemistry"
] |
657,430 | https://en.wikipedia.org/wiki/Amenable%20group | In mathematics, an amenable group is a locally compact topological group G carrying a kind of averaging operation on bounded functions that is invariant under translation by group elements. The original definition, in terms of a finitely additive measure (or mean) on subsets of G, was introduced by John von Neumann in 1929 under the German name "messbar" ("measurable" in English) in response to the Banach–Tarski paradox. In 1949 Mahlon M. Day introduced the English translation "amenable", apparently as a pun on "mean".
The critical step in the Banach–Tarski paradox construction is to find inside the rotation group SO(3) a free subgroup on two generators. Amenable groups cannot contain such groups, and do not allow this kind of paradoxical construction.
Amenability has many equivalent definitions. In the field of analysis, the definition is in terms of linear functionals. An intuitive way to understand this version is that the support of the regular representation is the whole space of irreducible representations.
In discrete group theory, where G has the discrete topology, a simpler definition is used. In this setting, a group is amenable if one can say what proportion of G any given subset takes up. For example, any subgroup of the group of integers is generated by some integer k ≥ 0. If k = 0 then the subgroup takes up 0 proportion. Otherwise, it takes up 1/k of the whole group. Even though both the group and the subgroup have infinitely many elements, there is a well-defined sense of proportion.
If a group has a Følner sequence then it is automatically amenable.
Definition for locally compact groups
Let G be a locally compact Hausdorff group. Then it is well known that it possesses a unique, up-to-scale left- (or right-) translation invariant nontrivial ring measure, the Haar measure. (This is a Borel regular measure when G is second-countable; the left and right Haar measures coincide when G is compact.) Consider the Banach space L∞(G) of essentially bounded measurable functions within this measure space (which is clearly independent of the scale of the Haar measure).
Definition 1. A linear functional Λ in Hom(L∞(G), R) is said to be a mean if Λ has norm 1 and is non-negative, i.e. f ≥ 0 a.e. implies Λ(f) ≥ 0.
Definition 2. A mean Λ in Hom(L∞(G), R) is said to be left-invariant (respectively right-invariant) if Λ(g·f) = Λ(f) for all g in G, and f in L∞(G) with respect to the left (respectively right) shift action of g·f(x) = f(g−1x) (respectively f·g(x) = f(xg−1)).
Definition 3. A locally compact Hausdorff group is called amenable if it admits a left- (or right-)invariant mean.
By identifying Hom(L∞(G), R) with the space of finitely-additive Borel measures which are absolutely continuous with respect to the Haar measure on G (a ba space), the terminology becomes more natural: a mean in Hom(L∞(G), R) induces a left-invariant, finitely additive Borel measure on G which gives the whole group weight 1.
Example
As an example for compact groups, consider the circle group. The graph of a typical function f ≥ 0 looks like a jagged curve above a circle, which can be made by tearing off the end of a paper tube. The linear functional would then average the curve by snipping off some paper from one place and gluing it to another place, creating a flat top again. This is the invariant mean, i.e. the average value Λ(f) = (1/2π) ∫ f dλ, where λ is Lebesgue measure on the circle.
Left-invariance would mean that rotating the tube does not change the height of the flat top at the end. That is, only the shape of the tube matters. Combined with linearity, positivity, and norm-1, this is sufficient to prove that the invariant mean we have constructed is unique.
As an example for locally compact groups, consider the group of integers. A bounded function f is simply a bounded function of type f : Z → R, and its mean is the running average (1/(2N+1)) Σ_{n=−N}^{N} f(n) as N → ∞ (made precise using a generalized limit, such as a Banach limit, since the ordinary limit need not exist for every bounded f).
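As a small numerical illustration (not taken from the source; the finite window below only approximates the mean, which in general must be combined with a generalized limit), one can check that running averages over the integers are asymptotically unchanged by translation:

def running_average(f, N):
    """Average of f over the symmetric window {-N, ..., N}."""
    return sum(f(n) for n in range(-N, N + 1)) / (2 * N + 1)

f = lambda n: 1 if n % 3 == 0 else 0   # indicator of the subgroup 3Z, proportion 1/3
g = lambda n: f(n - 7)                 # a translate of f

for N in (10, 100, 1000):
    print(N, running_average(f, N), running_average(g, N))
# Both columns approach 1/3, illustrating the translation invariance of the mean on Z.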
Equivalent conditions for amenability
The following is a comprehensive account of the conditions on a second countable locally compact group G that are equivalent to amenability:
Existence of a left (or right) invariant mean on L∞(G). The original definition, which depends on the axiom of choice.
Existence of left-invariant states. There is a left-invariant state on any separable left-invariant unital C*-subalgebra of the bounded continuous functions on G.
Fixed-point property. Any action of the group by continuous affine transformations on a compact convex subset of a (separable) locally convex topological vector space has a fixed point. For locally compact abelian groups, this property is satisfied as a result of the Markov–Kakutani fixed-point theorem.
Irreducible dual. All irreducible representations are weakly contained in the left regular representation λ on L2(G).
Trivial representation. The trivial representation of G is weakly contained in the left regular representation.
Godement condition. Every bounded positive-definite measure μ on G satisfies μ(1) ≥ 0. Valette improved this criterion by showing that it is sufficient to ask that, for every continuous positive-definite compactly supported function f on G, the function Δ–f has non-negative integral with respect to Haar measure, where Δ denotes the modular function.
Day's asymptotic invariance condition. There is a sequence of integrable non-negative functions φn with integral 1 on G such that λ(g)φn − φn tends to 0 in the weak topology on L1(G).
Reiter's condition. For every finite (or compact) subset F of G there is an integrable non-negative function φ with integral 1 such that λ(g)φ − φ is arbitrarily small in L1(G) for g in F.
Dixmier's condition. For every finite (or compact) subset F of G there is unit vector f in L2(G) such that λ(g)f − f is arbitrarily small in L2(G) for g in F.
Glicksberg−Reiter condition. For any f in L1(G), the distance between 0 and the closed convex hull in L1(G) of the left translates λ(g)f equals |∫f|.
Følner condition. For every finite (or compact) subset F of G there is a measurable subset U of G with finite positive Haar measure such that m(U Δ gU)/m(U) is arbitrarily small for g in F.
Leptin's condition. For every finite (or compact) subset F of G there is a measurable subset U of G with finite positive Haar measure such that m(FU Δ U)/m(U) is arbitrarily small.
Kesten's condition. Left convolution on L2(G) by a symmetric probability measure on G gives an operator of operator norm 1.
Johnson's cohomological condition. The Banach algebra A = L1(G) is amenable as a Banach algebra, i.e. any bounded derivation of A into the dual of a Banach A-bimodule is inner.
Case of discrete groups
The definition of amenability is simpler in the case of a discrete group, i.e. a group equipped with the discrete topology.
Definition. A discrete group G is amenable if there is a finitely additive measure (also called a mean)—a function that assigns to each subset of G a number from 0 to 1—such that
The measure is a probability measure: the measure of the whole group G is 1.
The measure is finitely additive: given finitely many disjoint subsets of G, the measure of the union of the sets is the sum of the measures.
The measure is left-invariant: given a subset A and an element g of G, the measure of A equals the measure of gA. (gA denotes the set of elements ga for each element a in A. That is, each element of A is translated on the left by g.)
This definition can be summarized thus: G is amenable if it has a finitely-additive left-invariant probability measure. Given a subset A of G, the measure can be thought of as answering the question: what is the probability that a random element of G is in A?
It is a fact that this definition is equivalent to the definition in terms of L∞(G).
Having a measure μ on G allows us to define integration of bounded functions on G. Given a bounded function f : G → R, the integral
∫_G f dμ
is defined as in Lebesgue integration. (Note that some of the properties of the Lebesgue integral fail here, since our measure is only finitely additive.)
If a group has a left-invariant measure, it automatically has a bi-invariant one. Given a left-invariant measure μ, the function μ−(A) = μ(A−1) is a right-invariant measure. Combining these two gives a bi-invariant measure:
ν(A) = ∫_G μ(Ag−1) dμ−(g).
The equivalent conditions for amenability also become simpler in the case of a countable discrete group Γ. For such a group the following conditions are equivalent:
Γ is amenable.
If Γ acts by isometries on a (separable) Banach space E, leaving a weakly closed convex subset C of the closed unit ball of E* invariant, then Γ has a fixed point in C.
There is a left invariant norm-continuous functional μ on ℓ∞(Γ) with μ(1) = 1 (this requires the axiom of choice).
There is a left invariant state μ on any left invariant separable unital C*-subalgebra of ℓ∞(Γ).
There is a set of probability measures μn on Γ such that ||g · μn − μn||1 tends to 0 for each g in Γ (M.M. Day).
There are unit vectors xn in ℓ2(Γ) such that ||g · xn − xn||2 tends to 0 for each g in Γ (J. Dixmier).
There are finite subsets Sn of Γ such that |g · Sn Δ Sn| / |Sn| tends to 0 for each g in Γ (Følner).
If μ is a symmetric probability measure on Γ with support generating Γ, then convolution by μ defines an operator of norm 1 on ℓ2(Γ) (Kesten).
If Γ acts by isometries on a (separable) Banach space E and f in ℓ∞(Γ, E*) is a bounded 1-cocycle, i.e. f(gh) = f(g) + g·f(h), then f is a 1-coboundary, i.e. f(g) = g·φ − φ for some φ in E* (B.E. Johnson).
The reduced group C*-algebra (see the reduced group C*-algebra Cr*(G)) is nuclear.
The reduced group C*-algebra is quasidiagonal (J. Rosenberg, A. Tikuisis, S. White, W. Winter).
The von Neumann group algebra (see von Neumann algebras associated to groups) of Γ is hyperfinite (A. Connes).
Note that A. Connes also proved that the von Neumann group algebra of any connected locally compact group is hyperfinite, so the last condition no longer applies in the case of connected groups.
Amenability is related to spectral theory of certain operators. For instance, the fundamental group of a closed Riemannian manifold is amenable if and only if the bottom of the spectrum of the Laplacian on the L2-space of the universal cover of the manifold is 0.
Properties
Every (closed) subgroup of an amenable group is amenable.
Every quotient of an amenable group is amenable.
A group extension of an amenable group by an amenable group is again amenable. In particular, finite direct product of amenable groups are amenable, although infinite products need not be.
Direct limits of amenable groups are amenable. In particular, if a group can be written as a directed union of amenable subgroups, then it is amenable.
Amenable groups are unitarizable; the converse is an open problem.
Countable discrete amenable groups obey the Ornstein isomorphism theorem.
Examples
Finite groups are amenable. Use the counting measure with the discrete definition. More generally, compact groups are amenable. The Haar measure is an invariant mean (unique taking total measure 1).
The group of integers is amenable (a sequence of intervals of length tending to infinity is a Følner sequence). The existence of a shift-invariant, finitely additive probability measure on the group Z also follows easily from the Hahn–Banach theorem this way. Let S be the shift operator on the sequence space ℓ∞(Z), which is defined by (Sx)i = xi+1 for all x ∈ ℓ∞(Z), and let u ∈ ℓ∞(Z) be the constant sequence ui = 1 for all i ∈ Z. Any element y ∈ Y:=range(S − I) has a distance larger than or equal to 1 from u (otherwise yi = xi+1 - xi would be positive and bounded away from zero, whence xi could not be bounded). This implies that there is a well-defined norm-one linear form on the subspace Ru + Y taking tu + y to t. By the Hahn–Banach theorem the latter admits a norm-one linear extension on ℓ∞(Z), which is by construction a shift-invariant finitely additive probability measure on Z.
If every conjugacy class in a locally compact group has compact closure, then the group is amenable. Examples of groups with this property include compact groups, locally compact abelian groups, and discrete groups with finite conjugacy classes.
By the direct limit property above, a group is amenable if all its finitely generated subgroups are. That is, locally amenable groups are amenable.
By the fundamental theorem of finitely generated abelian groups, it follows that abelian groups are amenable.
It follows from the extension property above that a group is amenable if it has a finite index amenable subgroup. That is, virtually amenable groups are amenable.
Furthermore, it follows that all solvable groups are amenable.
All examples above are elementary amenable. The first class of examples below can be used to exhibit non-elementary amenable examples thanks to the existence of groups of intermediate growth.
Finitely generated groups of subexponential growth are amenable. A suitable subsequence of balls will provide a Følner sequence.
Finitely generated infinite simple groups cannot be obtained by bootstrap constructions as used to construct elementary amenable groups. Since there exist such simple groups that are amenable, due to Juschenko and Monod, this provides again non-elementary amenable examples.
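As a numerical illustration of the Følner criterion for the group of integers mentioned above, the following sketch (illustrative code, not part of the article; the translation g = 3 and the interval sizes are arbitrary choices) computes |g·Sn Δ Sn| / |Sn| for the intervals Sn = {−n, ..., n} and shows it tending to 0.

```python
# Illustrative sketch: the Folner condition for the intervals S_n = {-n, ..., n} in Z.
# For a translation by g, |g.S_n symmetric-difference S_n| / |S_n| = 2|g| / (2n + 1) -> 0.

def folner_ratio(n: int, g: int) -> float:
    """Return |g.S_n triangle S_n| / |S_n| for S_n = {-n, ..., n} in the group Z."""
    s = set(range(-n, n + 1))
    shifted = {g + x for x in s}
    return len(shifted.symmetric_difference(s)) / len(s)

if __name__ == "__main__":
    g = 3  # an arbitrary group element (translation by 3)
    for n in (10, 100, 1000, 10000):
        print(f"n = {n:6d}   ratio = {folner_ratio(n, g):.6f}")
```

For a non-amenable group such as the free group on two generators, no choice of finite sets makes these ratios tend to 0 for every group element, which is the Følner-style restatement of the Nonexamples below.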
Nonexamples
If a countable discrete group contains a (non-abelian) free subgroup on two generators, then it is not amenable. The converse to this statement is the so-called von Neumann conjecture, which was disproved by Olshanskii in 1980 using his Tarski monsters. Adyan subsequently showed that free Burnside groups are non-amenable: since they are periodic, they cannot contain the free group on two generators. These groups are finitely generated, but not finitely presented. However, in 2002 Sapir and Olshanskii found finitely presented counterexamples: non-amenable finitely presented groups that have a periodic normal subgroup with quotient the integers.
For finitely generated linear groups, however, the von Neumann conjecture is true by the Tits alternative: every subgroup of GL(n,k) with k a field either has a normal solvable subgroup of finite index (and therefore is amenable) or contains the free group on two generators. Although Tits' proof used algebraic geometry, Guivarc'h later found an analytic proof based on V. Oseledets' multiplicative ergodic theorem. Analogues of the Tits alternative have been proved for many other classes of groups, such as fundamental groups of 2-dimensional simplicial complexes of non-positive curvature.
See also
Uniformly bounded representation
Kazhdan's property (T)
Von Neumann conjecture
Notes
Citations
Sources
External links
Some notes on amenability by Terry Tao
Garrido, Alejandra. An introduction to amenable groups
Geometric group theory
Topological groups | Amenable group | [
"Physics",
"Mathematics"
] | 3,624 | [
"Geometric group theory",
"Group actions",
"Space (mathematics)",
"Topological spaces",
"Topological groups",
"Symmetry"
] |
1,670,763 | https://en.wikipedia.org/wiki/Inrush%20current | Inrush current, input surge current, or switch-on surge is the maximal instantaneous input current drawn by an electrical device when first turned on. Alternating-current electric motors and transformers may draw several times their normal full-load current when first energized, for a few cycles of the input waveform. Power converters also often have inrush currents much higher than their steady-state currents, due to the charging current of the input capacitance. The selection of over-current-protection devices such as fuses and circuit breakers is made more complicated when high inrush currents must be tolerated. The over-current protection must react quickly to overload or short-circuit faults but must not interrupt the circuit when the (usually harmless) inrush current flows.
Capacitors
A discharged or partially charged capacitor appears as a short circuit to the source when the source voltage is higher than the potential of the capacitor. A fully discharged capacitor will take approximately five RC time constants to fully charge; during the charging period, instantaneous current can exceed steady-state current by a substantial multiple. Instantaneous current declines to steady-state current as the capacitor reaches full charge. In the case of open circuit, the capacitor will be charged to the peak AC voltage (one cannot actually charge a capacitor with AC line power, so this refers to a varying but unidirectional voltage; e.g., the voltage output from a rectifier).
In the case of charging a capacitor from a linear DC voltage, like that from a battery, the capacitor will still appear as a short circuit; it will draw current from the source limited only by the internal resistance of the source and ESR of the capacitor. In this case, charging current will be continuous and decline exponentially to the load current. For open circuit, the capacitor will be charged to the DC voltage.
Protecting against the inrush current that flows while the filter capacitor initially charges is crucial for the performance of the device. Temporarily introducing a high resistance between the input power and the rectifier raises the resistance seen at power-up, reducing the inrush current. An inrush current limiter is well suited to this purpose, as it provides the initial resistance needed.
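As a rough illustration of the charging behaviour described above, the sketch below (illustrative code; the supply voltage, capacitance, and resistances are made-up values, not from the article) evaluates the charging current i(t) = (V/R)·exp(−t/RC) of an initially discharged capacitor and shows how an added series resistance lowers the peak inrush while lengthening the charge time.

```python
# Illustrative sketch (made-up component values): inrush current of a discharged
# capacitor charged through a series resistance, i(t) = (V/R) * exp(-t / (R*C)).
import math

def inrush_current(v_supply: float, r_series: float, c: float, t: float) -> float:
    """Charging current at time t for an initially discharged capacitor."""
    return (v_supply / r_series) * math.exp(-t / (r_series * c))

if __name__ == "__main__":
    V, C = 325.0, 1000e-6              # e.g. a rectified-mains peak and a bulk filter capacitor
    for R in (0.5, 10.0):              # source impedance alone vs. an added limiting resistance
        tau = R * C
        print(f"R = {R:5.1f} ohm: peak inrush = {inrush_current(V, R, C, 0.0):7.1f} A, "
              f"roughly fully charged after 5*tau = {5 * tau * 1e3:.1f} ms")
```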
Transformers
When a transformer is first energized, a transient current up to 10 to 15 times larger than the rated transformer current can flow for several cycles. Toroidal transformers, using less copper for the same power handling, can have up to 60 times inrush to running current.
Worst-case inrush happens when the primary winding is connected at an instant around the zero crossing of the primary voltage (which for a pure inductance would be the current maximum in the AC cycle) and when the polarity of that voltage half-cycle matches the polarity of the remanence left in the iron core (the magnetic remanence having been left high by a preceding half-cycle). Unless the windings and core are sized so that the flux normally never exceeds 50% of saturation (and in an efficient transformer they never are, since such a construction would be overly heavy and inefficient), the core will saturate during such a start-up. This can also be expressed as the remnant magnetism in normal operation being nearly as high as the saturation magnetism at the "knee" of the hysteresis loop. Once the core saturates, the winding inductance appears greatly reduced, and only the resistance of the primary-side windings and the impedance of the power line limit the current. As saturation occurs for part half-cycles only, harmonic-rich waveforms can be generated and can cause problems to other equipment.
For large transformers with low winding resistance and high inductance, these inrush currents can last for several seconds until the transient has died away (decay time proportional to XL/R) and the regular AC equilibrium is established. Magnetic inrush can be avoided by connecting the inductive load synchronously near a supply voltage peak, but only for transformers with an air gap in the core; this is in contrast with zero-voltage switching, which is desirable to minimize sharp-edged current transients with resistive loads such as high-power heaters. For toroidal transformers, however, only a premagnetising procedure before switching on allows them to be started without any inrush-current peak.
Inrush current can be divided into three categories:
Energization inrush current results from energizing (or re-energizing) a transformer; the residual flux in this case may be zero or non-zero depending on the switching instant.
Recovery inrush current flows when the transformer voltage is restored after having been reduced by a system disturbance.
Sympathetic inrush current flows when multiple transformers are connected to the same line and one of them is energized.
Motors
When an electric motor, AC or DC, is first energized, the rotor is not moving, and a current equivalent to the stalled current will flow, reducing as the motor picks up speed and develops a back EMF to oppose the supply. AC induction motors behave as transformers with a shorted secondary until the rotor begins to move, while brushed motors present essentially the winding resistance. The duration of the starting transient is less if the mechanical load on the motor is relieved until it has picked up speed.
For high-power motors, the winding configuration may be changed (wye at start and then delta) during start-up to reduce the current drawn.
Heaters and filament lamps
Metals have a positive temperature coefficient of resistance; they have lower resistance when cold. Any electrical load that contains a substantial component of metallic resistive heating elements, such as an electric kiln or a bank of tungsten-filament incandescent bulbs, will draw a high current until the metallic element reaches operating temperature. For example, wall switches intended to control incandescent lamps will have a "T" rating, indicating that they can safely control circuits with the large inrush currents of incandescent lamps. The inrush may be as much as 14 times the steady-state current and may persist for a few milliseconds for smaller lamps up to several seconds for lamps of 500 watts or more. (Non-graphitized) carbon-filament lamps, rarely used now, have a negative temperature coefficient and draw more current as they warm up; an "inrush" current is not found with these types.
Protection
A resistor in series with the line can be used to limit the current charging input capacitors. However, this approach is not very efficient, especially in high-power devices, since the resistor will have a voltage drop and dissipate some power.
Inrush current can also be reduced by inrush current limiters. Negative-temperature-coefficient (NTC) thermistors are commonly used in switching power supplies, motor drives and audio equipment to prevent damage caused by inrush current. A thermistor is a thermally-sensitive resistor with a resistance that changes significantly and predictably as a result of temperature changes. The resistance of an NTC thermistor decreases as its temperature increases.
When the device is switched on, the cold inrush current limiter presents a comparatively high resistance, so the current charging the input capacitors is held to a relatively small value; this current warms the limiter and its resistance begins to drop. After the capacitors in the power supply become charged, the self-heated inrush current limiter offers little resistance in the circuit, with a low voltage drop with respect to the total voltage drop of the circuit. A disadvantage is that immediately after the device is switched off, the NTC resistor is still hot and has a low resistance. It cannot limit the inrush current unless it cools for more than 1 minute to regain a higher resistance. Another disadvantage is that the NTC thermistor is not short-circuit-proof.
Another way to avoid the transformer inrush current is a "transformer switching relay". This does not need time for cool down. It can also deal with power-line half-wave voltage dips and is short-circuit-proof. This technique is important for IEC 61000-4-11 tests.
Another option, particularly for high-voltage circuits, is to use a pre-charge circuit. The circuit would support a current-limited precharge mode during the charging of capacitors and then switch to an unlimited mode for normal operation when the voltage on the load is 90% of full charge.
Switch-off spike
When a transformer, electric motor, electromagnet, or other inductive load is switched off, the collapsing magnetic field drives a high voltage across the switch or breaker and can cause extended arcing. When a transformer is switched off on its primary side, the inductive kick produces a voltage spike on the secondary that can damage insulation and connected loads.
See also
Ripple (electrical)
References
External links
IEC 61000–4–30, Electromagnetic Compatibility (EMC) – Testing and measurement techniques – Power quality measurement methods, Published by The International Electrotechnical Commission, 2003.
Electrical parameters | Inrush current | [
"Engineering"
] | 1,915 | [
"Electrical engineering",
"Electrical parameters"
] |
1,671,343 | https://en.wikipedia.org/wiki/Wolfe%20conditions | In the unconstrained minimization problem, the Wolfe conditions are a set of inequalities for performing inexact line search, especially in quasi-Newton methods, first published by Philip Wolfe in 1969.
In these methods the idea is to find
minx f(x)
for some smooth f : Rn → R. Each step often involves approximately solving the subproblem
minα f(xk + αpk),
where xk is the current best guess, pk ∈ Rn is a search direction, and α ∈ R is the step length.
The inexact line searches provide an efficient way of computing an acceptable step length α that reduces the objective function 'sufficiently', rather than minimizing the objective function over α exactly. A line search algorithm can use Wolfe conditions as a requirement for any guessed α, before finding a new search direction pk.
Armijo rule and curvature
A step length αk is said to satisfy the Wolfe conditions, restricted to the direction pk, if the following two inequalities hold:
i) f(xk + αkpk) ≤ f(xk) + c1αk pk·∇f(xk),
ii) −pk·∇f(xk + αkpk) ≤ −c2 pk·∇f(xk),
with 0 < c1 < c2 < 1. (In examining condition (ii), recall that to ensure that pk is a descent direction, we have pk·∇f(xk) < 0, as in the case of gradient descent, where pk = −∇f(xk), or Newton–Raphson, where pk = −H−1∇f(xk) with H positive definite.)
c1 is usually chosen to be quite small while c2 is much larger; Nocedal and Wright give example values of c1 = 10−4 and c2 = 0.9 for Newton or quasi-Newton methods and c2 = 0.1 for the nonlinear conjugate gradient method. Inequality i) is known as the Armijo rule and ii) as the curvature condition; i) ensures that the step length decreases f 'sufficiently', and ii) ensures that the slope has been reduced sufficiently. Conditions i) and ii) can be interpreted as respectively providing an upper and lower bound on the admissible step length values.
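As a concrete illustration, the sketch below (illustrative code, not from the article; the quadratic objective, starting point, and trial step lengths are arbitrary choices) evaluates conditions i) and ii) for a steepest-descent direction, using the example values c1 = 10−4 and c2 = 0.9 quoted above.

```python
# Illustrative sketch: checking the (weak) Wolfe conditions for a trial step length
# along a descent direction, using c1 = 1e-4 and c2 = 0.9 as example values.
import numpy as np

def wolfe_conditions(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Return (armijo_holds, curvature_holds) for step length alpha along direction p."""
    g0 = grad(x)
    g_new = grad(x + alpha * p)
    armijo = f(x + alpha * p) <= f(x) + c1 * alpha * np.dot(g0, p)   # condition i)
    curvature = -np.dot(p, g_new) <= -c2 * np.dot(p, g0)             # condition ii)
    return armijo, curvature

if __name__ == "__main__":
    f = lambda x: 0.5 * np.dot(x, x)     # simple convex quadratic, gradient(x) = x
    grad = lambda x: x
    x = np.array([4.0, -2.0])
    p = -grad(x)                         # steepest-descent direction
    for alpha in (0.01, 0.5, 1.0, 1.9):
        print(f"alpha = {alpha:4.2f}  (armijo, curvature) = {wolfe_conditions(f, grad, x, p, alpha)}")
```

For this quadratic and starting point, the very small trial step satisfies i) but fails ii), which is exactly the role of the curvature condition described above: ruling out steps that make too little progress.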
Strong Wolfe condition on curvature
Denote the univariate function obtained by restricting f to the direction pk as φ(α) = f(xk + αpk). The Wolfe conditions can result in a value for the step length that is not close to a minimizer of φ. If we modify the curvature condition to the following,
iii) |pk·∇f(xk + αkpk)| ≤ c2 |pk·∇f(xk)|,
then i) and iii) together form the so-called strong Wolfe conditions, and force αk to lie close to a critical point of φ.
Rationale
The principal reason for imposing the Wolfe conditions in an optimization algorithm where xk+1 = xk + αkpk is to ensure convergence of the gradient to zero. In particular, if the cosine of the angle between pk and the gradient,
cos θk = −∇f(xk)·pk / (||∇f(xk)|| ||pk||),
is bounded away from zero and the i) and ii) conditions hold, then ∇f(xk) → 0.
An additional motivation, in the case of a quasi-Newton method, is that if pk = −Bk−1∇f(xk), where the matrix Bk is updated by the BFGS or DFP formula, then if Bk is positive definite, ii) implies that Bk+1 is also positive definite.
Comments
Wolfe's conditions are more complicated than Armijo's condition, and a gradient descent algorithm based on Armijo's condition has a better theoretical guarantee than one based on Wolfe conditions (see the sections on "Upper bound for learning rates" and "Theoretical guarantee"
in the Backtracking line search article).
See also
Backtracking line search
References
Further reading
Mathematical optimization | Wolfe conditions | [
"Mathematics"
] | 589 | [
"Mathematical optimization",
"Mathematical analysis"
] |
1,671,610 | https://en.wikipedia.org/wiki/The%20Tower%20King | "The Tower King" is a British comic science fiction strip, appearing in titles published by IPC Magazines. The story was published in the anthology Eagle from 27 March to 4 September 1982, written by Alan Hebden, with art by José Ortiz. The story was set in a dystopian London, where society has broken down.
Creation
While the relaunched Eagle included a mix of photo strips and conventional picture strips, "The Tower King" was one of the latter. It was written by IPC stalwart Alan Hebden, who had experience writing for Battle Picture Weekly (including creating Major Eazy) and 2000 AD. José Ortiz provided the art; while the strip was in black-and-white, the web offset printing method used for Eagle meant he was able to give the art a grey wash, enhancing the atmosphere and detail. The strip's creators made use of the opportunity by juxtaposing jarring visual elements, such as historic London landmarks strewn with the rubble of modern buildings, or soldiers in patchwork armour complete with pocket watches and police helmets, armed with both halberds and grenades.
Publication history
The story debuted in the launch issue of the new Eagle, dated 27 March 1982 and continued until the 4 September 1982 edition - when it was effectively replaced by another Ortiz-drawn strip, "The House of Daemon".
In 1998, the rights to the strips created for Eagle – including "House of Daemon" – were purchased from Egmont Publishing by the Dan Dare Corporation. In 2014, Hibernia Books licensed "The Tower King" and produced a collected edition with a foreword by 2000 AD artist Leigh Gallagher, initially in a print run of 200 copies. A second limited run followed in 2017. In 2020, Hibernia produced another short run, along with another run of their collection of "The House of Daemon".
Plot
The story is set in a post-apocalyptic modern-day London. Following a nuclear war, a malfunctioning solar-powered satellite somehow bathes the Earth in radiation that makes the production of electricity in any form impossible. Without heating, transport, food, or communication, and in the middle of a heavy winter, London swiftly falls into mass panic, resulting in pseudo-medieval anarchy; the state of the rest of Britain or other countries is not explored during the strip.
The main character is Mick Tempest, a natural-born soldier and leader, with a down-to-earth demeanour. He organises his neighbourhood into a group, sharing protection and food in the face of rioters and looters, and then negotiates entry into the Tower of London for protection, Britain's monarchy apparently having been killed in the chaos. He is captured in an assault on the Tower by Lord Spencer, a self-styled feudal warlord, who tries to publicly behead Tempest on Tower Green when he refuses to swear fealty. They are attacked by Tube Rats, a vicious group of cannibalistic survivors who had taken over the defunct London Underground system; in the battle, Tempest proves his worth to Spencer as an equal rather than a lieutenant.
Tempest, Spencer, and their army slowly regain control over parts of central London, encountering a hospital containing medical staff who had become insane due to events and have started a cult around worship of human organs; a group called the Wreckers who drive hand-cranked diesel-powered tanks and trains; and an 'electric temple' inside a power station, within which another group of deranged people worship electricity as if it still exists.
Unknown to everyone on Earth, the electricity-interfering satellite is destroyed by meteor impact. It's Tempest who discovers the station's generators work when he turns them on to prove electricity is never coming back. The strip ends with Tempest vowing to re-create the world without the many social problems wrought by modern technology.
Reception
Reviewing the 2014 collected edition, Lew Stringer highly praised both the art and story, describing it as "one of the best British adventure strips of its time", despite noting a hurried conclusion.
References
1982 comics endings
Comics set in London
Dystopian comics
Eagle (1982) comic strips
Nuclear war and weapons in popular culture
Post-apocalyptic comics
British comic strips
Radiation
Apocalyptic comics | The Tower King | [
"Physics",
"Chemistry"
] | 942 | [
"Transport phenomena",
"Waves",
"Physical phenomena",
"Radiation"
] |
1,672,153 | https://en.wikipedia.org/wiki/Hannskarl%20Bandel | Hannskarl Bandel (May 3, 1925 Dessau, Germany – December 29, 1993 Aspen, Colorado, United States), was a German-American structural engineer.
Early life
Hannskarl Bandel's father was an architect who owned a construction firm, and his mother came from the Brechtel family, which owned the well-known German construction company of the same name, founded in 1883 by Johannes Brechtel. This may have been a contributing factor in his choice of profession and study: he took a doctorate in engineering at Technische Universität Berlin. After working in the German steel industry, he came to the United States after World War II with no money and two suitcases full of books, hoping to build suspension bridges. Three years after joining the New York firm of engineer Fred Severud, he was made a full partner.
Major works
With Severud, he made crucial, creative structural contributions to important mid-century architectural projects such as:
the cylindrical towers and theater roof at Marina City in Chicago, Illinois
the Toronto City Hall
Ford Foundation Headquarters in New York City (the jungle building)
the cable-suspension system for the roof of Madison Square Garden in New York City
the Kennedy Center for the Performing Arts in Washington DC
Philip Johnson's Crystal Cathedral in Garden Grove, California
the Sunshine Skyway Bridge in St. Petersburg, Florida (demolished)
Other work
It was Bandel who modified the catenary arch shape for Eero Saarinen's Gateway Arch project. When Saarinen tried to demonstrate his desired shape with a chain suspended in his hands, he couldn't achieve the slightly elongated, "soaring" effect he wanted; Bandel asked for the chain, came back in a few days, and delighted the architect by producing Saarinen's curve, as if by magic. Bandel had replaced some of the constant-sized links with variable links, thus changing the weight, the distribution of the weight, and the shape. While working on the design Bandel also factored in the loads upon the 630 foot arch caused by the wind and worked out that if he added more weight to the first 300 feet of the arch and placed 25,980 tons of concrete in the arch's foundation the center of gravity would be lowered to a stable location.
In 1978, he was elected a member of the National Academy of Engineering. After Fred Severud's retirement, the firm, despite Bandel's objections, was bought by a Hungarian engineer. Bandel left the firm and became the senior vice-president of DRC Consultants, working on cable-stayed bridges and various other structures. He was offered the chair of structural engineering at the University of Graz, Austria, in 1980, but turned down the offer, saying that his challenging assignments in America were more important to him than a prestigious professorship in Europe.
When the current Sunshine Skyway Bridge was constructed, Bandel was in charge of providing the construction engineering, planning, and management for the project. This bridge was constructed using pre-stressed concrete, which not only served as the roadway but as the structural support for the bridge, and featured a single span of supporting cables which incorporated his work in cable stayed bridge systems.
Bandel was also an expert on creative structural renovation and retrofitting. According to Benjamin Horace Weese, Bandel personally saved the deteriorating Guastavino tile dome at the Cathedral of St. John the Divine in New York City in 1972 by recommending that its supporting granite piers be insulated. In later years Bandel produced an innovative study for three-dimensional trusses to be assembled without tools in zero gravity, for the NASA Mars Pathfinder project.
Death
Bandel died of heart failure while skiing at Aspen Highlands in Aspen, Colorado.
In popular culture
James Franco's character in the 2008 movie Pineapple Express refers to Hannskarl when discussing his favorite civil engineers.
In Jonathan Franzen's The Twenty-Seventh City, protagonist Martin Probst clearly refers to Bandel as constructor of the Gateway Arch and puts him in the middle of a political conspiracy that finally unravels both his professional life and his family.
References
1925 births
1993 deaths
Emigrants from Allied-occupied Germany
Structural engineers
German civil engineers
American civil engineers
20th-century American engineers
Technische Universität Berlin alumni
People from Dessau-Roßlau | Hannskarl Bandel | [
"Engineering"
] | 888 | [
"Structural engineering",
"Structural engineers"
] |
1,672,232 | https://en.wikipedia.org/wiki/M%C3%B6bius%20resistor | A Möbius resistor is an electrical component made up of two conductive surfaces separated by a dielectric material, twisted 180° and connected to form a Möbius strip. As with the Möbius strip, once the Möbius resistor is connected up in this way it effectively has only one side and one continuous surface.
Its connectors are attached at the same point on the circumference but on opposite surfaces.
It provides a resistor with a reduced self-inductance, meaning that it can resist the flow of electricity in a more frequency-independent manner.
Transmission line
Due to its symmetrical construction, the voltage between the conductive surfaces of the Möbius resistor at the point equidistant from the feed point is exactly zero. This means that a short circuit placed at this point does not influence the characteristics of the device. Thus, the Möbius resistor can be thought of as two shorted lossy (resistive) transmission line segments, connected in parallel at the resistor's feed point.
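The transmission-line picture can be made quantitative with the standard formulas for a lossy line. The sketch below (illustrative code; all per-unit-length parameters and the segment length are made-up values, not taken from the article) computes the input impedance Zin = Z0·tanh(γl) of one shorted segment and then the parallel combination of the two segments at the feed point.

```python
# Illustrative sketch (made-up element values): the Moebius resistor modeled as two
# shorted lossy transmission-line segments connected in parallel at the feed point.
# Standard line formulas: Z0 = sqrt((R + jwL) / (G + jwC)), gamma = sqrt((R + jwL)(G + jwC)),
# and a shorted segment of length l has input impedance Zin = Z0 * tanh(gamma * l).
import numpy as np

def shorted_line_zin(freq_hz, R, L, G, C, length):
    """Input impedance of a shorted transmission-line segment (per-unit-length R, L, G, C)."""
    w = 2 * np.pi * freq_hz
    z_series = R + 1j * w * L
    y_shunt = G + 1j * w * C
    z0 = np.sqrt(z_series / y_shunt)
    gamma = np.sqrt(z_series * y_shunt)
    return z0 * np.tanh(gamma * length)

if __name__ == "__main__":
    R, L, G, C, length = 200.0, 1e-9, 0.0, 50e-12, 0.5   # illustrative numbers only
    for f in (1e3, 1e6, 1e8):
        z_pair = shorted_line_zin(f, R, L, G, C, length) / 2   # two segments in parallel
        print(f"f = {f:9.0f} Hz   |Zin| = {abs(z_pair):8.3f} ohm, "
              f"phase = {np.degrees(np.angle(z_pair)):6.2f} deg")
```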
Patents
, Möbius capacitor
, Non-inductive electrical resistor
, Apparatus and method for minimizing electromagnetic emissions of technical emitters Dietrich Reichwein
See also
Ayrton–Perry winding
References
External links
R.L. Davis, Noninductive Resistor, collection of articles at rexresearch.com
Resistive components | Möbius resistor | [
"Physics"
] | 284 | [
"Resistive components",
"Physical quantities",
"Electrical resistance and conductance"
] |
1,672,824 | https://en.wikipedia.org/wiki/Methylaluminoxane | Methylaluminoxane, commonly called MAO, is a mixture of organoaluminium compounds with the approximate formula (Al(CH3)O)n. It is usually encountered as a solution in (aromatic) solvents, commonly toluene but also xylene, cumene, or mesitylene, Used in large excess, it activates precatalysts for alkene polymerization.
Preparation and structure
MAO is prepared by the incomplete hydrolysis of trimethylaluminium, as indicated by this idealized equation:
n Al(CH3)3 + n H2O → (Al(CH3)O)n + 2n CH4
After many years of study, single crystals of an active MAO were analyzed by X-ray crystallography. The molecule adopts a ruffled sheet of tetrahedral Al centers linked by triply bridging oxides.
Uses
MAO is well known as catalyst activator for olefin polymerizations by homogeneous catalysis. In traditional Ziegler–Natta catalysis, supported titanium trichloride is activated by treatment with trimethylaluminium (TMA). TMA only weakly activates homogeneous precatalysts, such as zirconocene dichloride. In the mid-1970s Kaminsky discovered that metallocene dichlorides can be activated by MAO (see Kaminsky catalyst). The effect was discovered when a small amount of water was found to enhance the activity in the Ziegler–Natta system.
MAO serves multiple functions in the activation process. First it alkylates the metal-chloride pre-catalyst species giving Ti/Zr-methyl intermediates. Second, it abstracts a ligand from the methylated precatalysts, forming an electrophilic, coordinatively unsaturated catalyst that can undergo ethylene insertion. This activated catalyst is an ion pair between a cationic catalyst and a weakly basic MAO-derived anion. MAO also functions as a scavenger for protic impurities.
Previous studies
Diverse mechanisms have been proposed for the formation of MAO and many structures as well.
See also
Aluminoxane
References
Polymer chemistry
Aluminium compounds
Catalysts
Pyrophoric materials
Methyl compounds | Methylaluminoxane | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 464 | [
"Catalysis",
"Catalysts",
"Materials science",
"Polymer chemistry",
"Chemical kinetics"
] |
1,673,159 | https://en.wikipedia.org/wiki/Peroxyacyl%20nitrates | In organic chemistry, peroxyacyl nitrates (also known as Acyl peroxy nitrates, APN or PANs) are powerful respiratory and eye irritants present in photochemical smog. They are nitrates produced in the thermal equilibrium between organic peroxy radicals by the gas-phase oxidation of a variety of volatile organic compounds (VOCs), or by aldehydes and other oxygenated VOCs oxidizing in the presence of .
They are good markers for the source of VOCs as either biogenic or anthropogenic, which is useful in the study of global and local effects of pollutants.
Formation
PANs are secondary pollutants, which means they are not directly emitted as exhaust from power plants or internal combustion engines, but they are formed from other pollutants by chemical reactions in the atmosphere. Free radical reactions catalyzed by ultraviolet light from the sun oxidize unburned non-methane hydrocarbons to aldehydes, ketones, and dicarbonyls, whose secondary reactions create peroxyacyl radicals. The most common peroxyacyl radical is peroxyacetyl, which can be formed from the free radical oxidation of acetaldehyde, various ketones, or the photolysis of dicarbonyl compounds such as methylglyoxal or diacetyl.
These react reversibly with nitrogen dioxide (NO2) to form PANs:
Night-time reaction of aldehydes with nitrogen trioxide is another possible source.
Since they dissociate quite slowly in the atmosphere into radicals and NO2, PANs are able to transport these unstable compounds far away from the urban and industrial origin. This is important for tropospheric ozone production as PANs transport NOx to regions where it can more efficiently produce ozone.
Types
Peroxyacetyl nitrate is the most prevalent peroxyacyl nitrate (75–90% of total atmospheric emissions), followed by peroxypropionyl nitrate (PPN). Peroxybenzoyl nitrate (PBzN) and methacryloyl peroxynitrate (MPAN) have also been observed. The composition of PANs in a particular region depends heavily on which hydrocarbons are present in the atmosphere, with the exception of peroxyacetyl nitrate, which is able to be produced from a range of precursors.
Health effects
PANs are both toxic and irritating, as they dissolve more readily in water than ozone. They are lachrymators, causing eye irritation at concentrations of only a few parts per billion. At higher concentrations they cause extensive damage to vegetation. PANs are mutagenic, and are considered potential contributors to the development of skin cancer.
References
Organic peroxides
Pollutants
Smog | Peroxyacyl nitrates | [
"Physics",
"Chemistry"
] | 580 | [
"Visibility",
"Physical quantities",
"Smog",
"Organic compounds",
"Organic peroxides"
] |
1,673,288 | https://en.wikipedia.org/wiki/Electron%20magnetic%20moment | In atomic physics, the electron magnetic moment, or more specifically the electron magnetic dipole moment, is the magnetic moment of an electron resulting from its intrinsic properties of spin and electric charge. The value of the electron magnetic moment (symbol μe) is In units of the Bohr magneton (μB), it is , a value that was measured with a relative accuracy of .
Magnetic moment of an electron
The electron is a charged particle with charge −e, where e is the unit of elementary charge. Its angular momentum comes from two types of rotation: spin and orbital motion. From classical electrodynamics, a rotating distribution of electric charge produces a magnetic dipole, so that it behaves like a tiny bar magnet. One consequence is that an external magnetic field exerts a torque on the electron magnetic moment that depends on the orientation of this dipole with respect to the field.
If the electron is visualized as a classical rigid body in which the mass and charge have identical distribution and motion that is rotating about an axis with angular momentum , its magnetic dipole moment is given by:
μ = −(e/2me) L,
where me is the electron rest mass. The angular momentum L in this equation may be the spin angular momentum, the orbital angular momentum, or the total angular momentum. The ratio between the true spin magnetic moment and that predicted by this model is a dimensionless factor g, known as the electron g-factor:
μ = −g (e/2me) L.
It is usual to express the magnetic moment in terms of the reduced Planck constant ħ and the Bohr magneton μB = eħ/(2me):
μ = −g μB L/ħ.
Since the magnetic moment is quantized in units of μB, correspondingly the angular momentum is quantized in units of ħ.
Formal definition
Classical notions such as the center of charge and mass are, however, hard to make precise for a quantum elementary particle. In practice the definition used by experimentalists comes from the form factors
appearing in the matrix element
of the electromagnetic current operator between two on-shell states. Here and are 4-spinor solutions of the Dirac equation normalized so that , and is the momentum transfer from the current to the electron. The form factor is the electron's charge, is its static magnetic dipole moment, and provides the formal definition of the electron's electric dipole moment. The remaining form factor would, if nonzero, be the anapole moment.
Spin magnetic dipole moment
The spin magnetic moment is intrinsic for an electron. It is
μS = −gS μB S/ħ.
Here S is the electron spin angular momentum. The spin g-factor is approximately two: gS ≈ 2. The factor of two indicates that the electron appears to be twice as effective in producing a magnetic moment as a charged body for which the mass and charge distributions are identical.
The spin magnetic dipole moment is approximately one μB because gS ≈ 2 and the electron is a spin-1/2 particle with S = ħ/2:
|μS| ≈ gS μB (ħ/2)/ħ ≈ μB.
The z component of the electron magnetic moment is
μS,z = −gS μB ms,
where ms is the spin quantum number. Note that μ is a negative constant multiplied by the spin, so the magnetic moment is antiparallel to the spin angular momentum.
The spin g-factor comes from the Dirac equation, a fundamental equation connecting the electron's spin with its electromagnetic properties. Reduction of the Dirac equation for an electron in a magnetic field to its non-relativistic limit yields the Schrödinger equation with a correction term, which takes account of the interaction of the electron's intrinsic magnetic moment with the magnetic field giving the correct energy.
For the electron spin, the most accurate value for the spin g-factor has been experimentally determined to have the value
gS ≈ 2.00231930436.
Note that this differs only marginally from the value of exactly 2 predicted by the Dirac equation. The small correction is known as the anomalous magnetic dipole moment of the electron; it arises from the electron's interaction with virtual photons in quantum electrodynamics. A triumph of the quantum electrodynamics theory is the accurate prediction of the electron g-factor. The CODATA value for the electron magnetic moment is approximately −9.2847647×10⁻²⁴ J⋅T⁻¹.
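For orientation, the sketch below (illustrative code, not part of the article; it assumes SciPy is available and uses a truncated value of the g-factor) computes the Bohr magneton from fundamental constants and the corresponding electron magnetic moment.

```python
# Illustrative sketch: the Bohr magneton from fundamental constants and the electron
# magnetic moment from the (truncated) spin g-factor quoted above.
from scipy.constants import e, hbar, m_e   # elementary charge, reduced Planck constant, electron mass

mu_B = e * hbar / (2 * m_e)                # Bohr magneton, ~9.274e-24 J/T
g_s = 2.00231930436                        # electron spin g-factor (truncated)
mu_e = -(g_s / 2) * mu_B                   # electron magnetic moment

print(f"Bohr magneton      mu_B = {mu_B:.6e} J/T")
print(f"Electron moment    mu_e = {mu_e:.6e} J/T")
print(f"In units of mu_B:  mu_e / mu_B = {mu_e / mu_B:.11f}")
```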
Orbital magnetic dipole moment
The revolution of an electron around an axis through another object, such as the nucleus, gives rise to the orbital magnetic dipole moment. Suppose that the angular momentum for the orbital motion is L. Then the orbital magnetic dipole moment is
μL = −gL μB L/ħ.
Here gL is the electron orbital g-factor and μB is the Bohr magneton. The value of gL is exactly equal to one, by a quantum-mechanical argument analogous to the derivation of the classical gyromagnetic ratio.
Total magnetic dipole moment
The total magnetic dipole moment resulting from both spin and orbital angular momenta of an electron is related to the total angular momentum J by a similar equation:
μJ = −gJ μB J/ħ.
The g-factor gJ is known as the Landé g-factor, which can be related to gL and gS by quantum mechanics. See Landé g-factor for details.
Example: hydrogen atom
For a hydrogen atom, with an electron occupying the atomic orbital Ψn,ℓ,m, the magnitude of the orbital magnetic dipole moment is given by
μL = gL μB √(ℓ(ℓ + 1)).
Here L is the orbital angular momentum; n, ℓ, and m are the principal, azimuthal, and magnetic quantum numbers respectively.
The z component of the orbital magnetic dipole moment for an electron with a magnetic quantum number mℓ is given by
μz = −μB mℓ.
History
The electron magnetic moment is intrinsically connected to electron spin and was first hypothesized during the early models of the atom in the early twentieth century. The first to introduce the idea of electron spin was Arthur Compton in his 1921 paper on investigations of ferromagnetic substances with X-rays. In Compton's article, he wrote: "Perhaps the most natural, and certainly the most generally accepted view of the nature of the elementary magnet, is that the revolution of electrons in orbits within the atom give to the atom as a whole the properties of a tiny permanent magnet."
That same year Otto Stern proposed an experiment carried out later called the Stern–Gerlach experiment in which silver atoms in a magnetic field were deflected in opposite directions of distribution. This pre-1925 period marked the old quantum theory built upon the Bohr-Sommerfeld model of the atom with its classical elliptical electron orbits. During the period between 1916 and 1925, much progress was being made concerning the arrangement of electrons in the periodic table. In order to explain the Zeeman effect in the Bohr atom, Sommerfeld proposed that electrons would be based on three 'quantum numbers', n, k, and m, that described the size of the orbit, the shape of the orbit, and the direction in which the orbit was pointing. Irving Langmuir had explained in his 1919 paper regarding electrons in their shells, "Rydberg has pointed out that these numbers are obtained from the series . The factor two suggests a fundamental two-fold symmetry for all stable atoms." This configuration was adopted by Edmund Stoner, in October 1924 in his paper 'The Distribution of Electrons Among Atomic Levels' published in the Philosophical Magazine. Wolfgang Pauli hypothesized that this required a fourth quantum number with a two-valuedness.
Electron spin in the Pauli and Dirac theories
Starting from here the charge of the electron is −e. The necessity of introducing half-integral spin goes back experimentally to the results of the Stern–Gerlach experiment. A beam of atoms is run through a strong non-uniform magnetic field, which then splits into parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms, the beam was split in two—the ground state therefore could not be integral, because even if the intrinsic angular momentum of the atoms were as small as possible, 1, the beam would be split into 3 parts, corresponding to atoms with projections of −1, 0, and +1. The conclusion is that silver atoms have net intrinsic angular momentum of 1/2. Pauli set up a theory which explained this splitting by introducing a two-component wave function and a corresponding correction term in the Hamiltonian, representing a semi-classical coupling of this wave function to an applied magnetic field, as so:
Here A is the magnetic vector potential and ϕ the electric potential, both representing the electromagnetic field, and σ = (σx, σy, σz) are the Pauli matrices. On squaring out the first term, a residual interaction with the magnetic field is found, along with the usual classical Hamiltonian of a charged particle interacting with an applied field:
This Hamiltonian is now a 2 × 2 matrix, so the Schrödinger equation based on it must use a two-component wave function. Pauli had introduced the 2 × 2 sigma matrices as pure phenomenology — Dirac now had a theoretical argument that implied that spin was somehow the consequence of incorporating relativity into quantum mechanics. On introducing the external electromagnetic 4-potential into the Dirac equation in a similar way, known as minimal coupling, it takes the form (in natural units ħ = c = 1)
where γμ are the gamma matrices (known as Dirac matrices) and i is the imaginary unit. A second application of the Dirac operator will now reproduce the Pauli term exactly as before, because the spatial Dirac matrices, multiplied by the imaginary unit, have the same squaring and commutation properties as the Pauli matrices. What is more, the value of the gyromagnetic ratio of the electron, standing in front of Pauli's new term, is explained from first principles. This was a major achievement of the Dirac equation and gave physicists great faith in its overall correctness. The Pauli theory may be seen as the low energy limit of the Dirac theory in the following manner. First the equation is written in the form of coupled equations for 2-spinors with the units restored:
so
Assuming the field is weak and the motion of the electron non-relativistic, we have the total energy of the electron approximately equal to its rest energy, and the momentum reducing to the classical value,
and so the second equation may be written
which is of order v/c - thus at typical energies and velocities, the bottom components of the Dirac spinor in the standard representation are much suppressed in comparison to the top components. Substituting this expression into the first equation gives after some rearrangement
The operator on the left represents the particle energy reduced by its rest energy, which is just the classical energy, so we recover Pauli's theory if we identify his 2-spinor with the top components of the Dirac spinor in the non-relativistic approximation. A further approximation gives the Schrödinger equation as the limit of the Pauli theory. Thus the Schrödinger equation may be seen as the far non-relativistic approximation of the Dirac equation when one may neglect spin and work only at low energies and velocities. This also was a great triumph for the new equation, as it traced the mysterious i that appears in it, and the necessity of a complex wave function, back to the geometry of space-time through the Dirac algebra. It also highlights why the Schrödinger equation, although superficially in the form of a diffusion equation, actually represents the propagation of waves.
It should be strongly emphasized that this separation of the Dirac spinor into large and small components depends explicitly on a low-energy approximation. The entire Dirac spinor represents an irreducible whole, and the components we have just neglected to arrive at the Pauli theory will bring in new phenomena in the relativistic regime – antimatter and the idea of creation and annihilation of particles.
In a general case (if a certain linear function of electromagnetic field does not vanish identically), three out of four components of the spinor function in the Dirac equation can be algebraically eliminated, yielding an equivalent fourth-order partial differential equation for just one component. Furthermore, this remaining component can be made real by a gauge transform.
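The matrix identities invoked in this reduction can be checked numerically. The sketch below (illustrative code, not part of the article) verifies that the Pauli matrices square to the identity and that the Dirac gamma matrices in the standard (Dirac) representation satisfy the anticommutation relations {γμ, γν} = 2ημν with η = diag(1, −1, −1, −1).

```python
# Illustrative sketch: numerically verifying the Pauli-matrix and Dirac-matrix algebra
# used in the reduction above (standard Dirac representation of the gamma matrices).
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

# Pauli algebra: each sigma_i squares to the identity, and sigma_x sigma_y = i sigma_z.
for s in paulis:
    assert np.allclose(s @ s, I2)
assert np.allclose(sx @ sy, 1j * sz)

# Dirac matrices in the standard representation, built from 2x2 blocks.
Z2 = np.zeros((2, 2), dtype=complex)
gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in paulis]

# Anticommutation relations {gamma_mu, gamma_nu} = 2 eta_mu_nu, eta = diag(1, -1, -1, -1).
eta = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        anticom = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anticom, 2 * eta[mu, nu] * np.eye(4))

print("Pauli and Dirac matrix identities verified numerically.")
```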
Measurement
The existence of the anomalous magnetic moment of the electron has been detected experimentally by magnetic resonance method. This allows the determination of hyperfine splitting of electron shell energy levels in atoms of protium and deuterium using the measured resonance frequency for several transitions.
The magnetic moment of the electron has been measured using a one-electron quantum cyclotron and quantum nondemolition spectroscopy. The spin frequency of the electron is determined by the g-factor.
See also
Spin (physics)
Electron precipitation
Bohr magneton
Nuclear magnetic moment
Nucleon magnetic moment
Anomalous magnetic dipole moment
Electron electric dipole moment
Fine structure
Hyperfine structure
References
Bibliography
Atomic physics
Electric dipole moment
Magnetic moment
Particle physics
Physical constants | Electron magnetic moment | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,519 | [
"Electric dipole moment",
"Physical quantities",
"Quantity",
"Quantum mechanics",
"Magnetic moment",
"Physical constants",
"Particle physics",
" molecular",
" and optical physics",
"Atomic",
"Atomic physics",
"Moment (physics)"
] |
1,673,666 | https://en.wikipedia.org/wiki/Integrated%20Computer-Aided%20Manufacturing | Integrated Computer-Aided Manufacturing (ICAM) is a US Air Force program that develops tools, techniques, and processes to support manufacturing integration. It influenced the computer-integrated manufacturing (CIM) and computer-aided manufacturing (CAM) project efforts of many companies.
The ICAM program was founded in 1976 and managed by the US Air Force at Wright-Patterson Air Force Base as part of its technology modernization efforts. The program initiated the development of a series of standards for modeling and analysis in management and business improvement, called Integrated Definitions, or IDEFs for short.
Overview
The USAF ICAM program was founded in 1976 at the US Air Force Materials Laboratory, Wright-Patterson Air Force Base in Ohio by Dennis E. Wisnosky and Dan L. Shunk and others. In the mid-1970s Joseph Harrington had assisted Wisnosky and Shunk in designing the ICAM program and had broadened the concept of CIM to include the entire manufacturing company. Harrington considered manufacturing a "monolithic function".
The ICAM program was visionary in showing that a new approach was necessary to achieve integration in manufacturing firms. Wisnosky and Shunk developed a "wheel" to illustrate the architecture of their ICAM project and to show the various elements that had to work together. Wisnosky and Shunk were among the first to understand the web of interdependencies needed for integration. Their work represents the first major step in shifting the focus of manufacturing from a series of sequential operations to parallel processing.
The ICAM program has spent over $100 million to develop tools, techniques, and processes to support manufacturing integration. The Air Force's ICAM program recognizes the role of data as central to any integration effort. Data must be common and shareable across functions. The concept still remains ahead of its time, because most major companies did not seriously begin to attack the data architecture challenge until well into the 1990s. The ICAM program also recognizes the need for ways to analyze and document major activities within the manufacturing establishment. Thus, from ICAM came the IDEFs, the standard for modeling and analysis in management and business improvement efforts. IDEF means ICAM DEFinition.
The impact
Standard data models
To extract real meaning from the data, we must also have formulated, and agreed on, a model of the world the data describes. We now understand that this actually involves two different kinds of model:
Static associations between data and real-world physical and conceptual objects it describes—called the information model
Rules for use and modification of the data, which derive from the dynamic characteristics of the objects themselves—called the functional model
The significance of these models to data interchange for manufacturing and materials flow was recognized early in the Air Force Integrated Computer Aided Manufacturing (ICAM) Project and gave rise to the IDEF formal modeling project. IDEF produced a specification for a formal functional modeling approach (IDEF0) and an information modeling language (IDEF1). The more recent "Product Data Exchange Specification" (PDES) project in the U.S., the related ISO Standard for the exchange of product model data (STEP) and the Computer Integrated Manufacture Open Systems Architecture (CIMOSA) [ISO87] project in the European Economic Community have wholeheartedly accepted the notion that useful data sharing is not possible without formal semantic data models of the context the data describes.
Within their respective spectra of efforts, each of these projects has a panoply of information models for manufactured objects, materials and product characteristics, and for manufacturing and assembly processes. Each also has a commitment to detailed functional models of the various phases of product life cycle. The object of all of these recent efforts is to standardize the interchange of information in many aspects of product design, manufacture, delivery and support.
Further research with ICAM definitions
The research in extending and applying the ICAM definitions has proceeded. In the 1990s, for example, the Material Handling Research Center (MHRC) of the Georgia Institute of Technology and University of Arkansas included it in their Information Systems research area. That area focuses on the information that must accompany material movements and the application of artificial intelligence to material handling problems. MHRC's research involves expanding the integrated computer-aided manufacturing definition (IDEF) approach to include the information flow as well as the material flow needed to support a manufacturing enterprise, as well as models to handle unscheduled events such as machine breakdowns or material shortages. Past research resulted in software to automatically palletize random-size packages, a system to automatically load and unload truck trailers, and an integrated production control system to fabricate optical fibers.
See also
CIMOSA
IDEF
References
Further reading
Charles Savage, 1996, Fifth Generation Management, Dynamic Teaming, Virtual Enterprising and Knowledge Networking, page 184, , Butterworth-Heinemann.
Joseph Harrington (1984). Understanding the Manufacturing Process.
Computer-aided design
Wright-Patterson Air Force Base
Computer-aided manufacturing | Integrated Computer-Aided Manufacturing | [
"Engineering"
] | 1,001 | [
"Computer-aided design",
"Design engineering"
] |
1,674,141 | https://en.wikipedia.org/wiki/Vacancy%20defect | In crystallography, a vacancy is a type of point defect in a crystal where an atom is missing from one of the lattice sites. Crystals inherently possess imperfections, sometimes referred to as crystallographic defects.
Vacancies occur naturally in all crystalline materials. At any given temperature, up to the melting point of the material, there is an equilibrium concentration (ratio of vacant lattice sites to those containing atoms). At the melting point of some metals the ratio can be approximately 1:1000. This temperature dependence can be modelled by
Nv = N exp(−Qv / (kB T)),
where Nv is the vacancy concentration, Qv is the energy required for vacancy formation, kB is the Boltzmann constant, T is the absolute temperature, and N is the concentration of atomic sites, i.e.
N = ρ NA / M,
where ρ is the density, NA the Avogadro constant, and M the molar mass.
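For a sense of scale, the sketch below (illustrative code; the material parameters are assumed values roughly appropriate for copper and are not taken from the article) evaluates the two formulas above near the melting point, giving a vacancy fraction of the order quoted for metals.

```python
# Illustrative sketch: Nv = N * exp(-Qv / (kB * T)) evaluated with values roughly
# appropriate for copper near its melting point (all numbers are illustrative assumptions).
import math

k_B = 8.617e-5          # Boltzmann constant in eV/K
N_A = 6.022e23          # Avogadro constant, 1/mol

def vacancy_concentration(Qv_eV, T_kelvin, density_g_cm3, molar_mass_g_mol):
    """Equilibrium vacancy concentration per cm^3 and the fraction Nv/N."""
    N = density_g_cm3 * N_A / molar_mass_g_mol          # atomic sites per cm^3
    Nv = N * math.exp(-Qv_eV / (k_B * T_kelvin))
    return Nv, Nv / N

if __name__ == "__main__":
    # Assumed copper values: Qv ~ 0.9 eV, density ~ 8.9 g/cm^3, molar mass ~ 63.5 g/mol, Tm ~ 1358 K
    Nv, fraction = vacancy_concentration(0.9, 1358, 8.9, 63.5)
    print(f"Nv ~ {Nv:.2e} vacancies/cm^3, Nv/N ~ {fraction:.2e}")
```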
It is the simplest point defect. In this system, an atom is missing from its regular atomic site. Vacancies are formed during solidification due to vibration of atoms, local rearrangement of atoms, plastic deformation and ionic bombardments.
The creation of a vacancy can be simply modeled by considering the energy required to break the bonds between an atom inside the crystal and its nearest neighbor atoms. Once that atom is removed from the lattice site, it is put back on the surface of the crystal and some energy is retrieved because new bonds are established with other atoms on the surface. However, there is a net input of energy because there are fewer bonds between surface atoms than between atoms in the interior of the crystal.
Material physics
In most applications vacancy defects are irrelevant to the intended purpose of a material, as they are either too few or spaced throughout a multi-dimensional space in such a way that force or charge can move around the vacancy. In the case of more constrained structures like carbon nanotubes however, vacancies and other crystalline defects can significantly weaken the material.
See also
Crystallographic defect
Schottky defect
Frenkel defect
References
External links
Crystalline Defects in Silicon
Crystallographic defects | Vacancy defect | [
"Chemistry",
"Materials_science",
"Engineering"
] | 403 | [
"Crystallographic defects",
"Crystallography",
"Materials degradation",
"Materials science"
] |
1,674,308 | https://en.wikipedia.org/wiki/Hydrogen%20halide | In chemistry, hydrogen halides (hydrohalic acids when in the aqueous phase) are diatomic, inorganic compounds that function as Arrhenius acids. The formula is HX where X is one of the halogens: fluorine, chlorine, bromine, iodine, astatine, or tennessine. All known hydrogen halides are gases at standard temperature and pressure.
Comparison to hydrohalic acids
The hydrogen halides are diatomic molecules with no tendency to ionize in the gas phase (although liquified hydrogen fluoride is a polar solvent somewhat similar to water). Thus, chemists distinguish hydrogen chloride from hydrochloric acid. The former is a gas at room temperature that reacts with water to give the acid. Once the acid has formed, the diatomic molecule can be regenerated only with difficulty, but not by normal distillation. Commonly the names of the acid and the molecules are not clearly distinguished such that in lab jargon, "HCl" often means hydrochloric acid, not the gaseous hydrogen chloride.
Occurrence
Hydrogen chloride, in the form of hydrochloric acid, is a major component of gastric acid.
Hydrogen fluoride, chloride and bromide are also volcanic gases.
Synthesis
The direct reaction of hydrogen with fluorine and chlorine gives hydrogen fluoride and hydrogen chloride, respectively. Industrially these gases are, however, produced by treatment of halide salts with sulfuric acid. Hydrogen bromide arises when hydrogen and bromine are combined at high temperatures in the presence of a platinum catalyst. The least stable hydrogen halide, HI, is produced less directly, by the reaction of iodine with hydrogen sulfide or with hydrazine.
Physical properties
The hydrogen halides are colourless gases at standard conditions for temperature and pressure (STP) except for hydrogen fluoride, which boils at 19 °C. Alone of the hydrogen halides, hydrogen fluoride exhibits hydrogen bonding between molecules, and therefore has the highest melting and boiling points of the HX series. From HCl to HI the boiling point rises. This trend is attributed to the increasing strength of intermolecular van der Waals forces, which correlates with the number of electrons in the molecules. Concentrated hydrohalic acid solutions produce visible white fumes. This mist arises from the formation of tiny droplets of concentrated aqueous hydrohalic acid.
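The boiling-point trend just described can be tabulated directly. The sketch below is illustrative code, not part of the article; the values are approximate, rounded literature figures.

```python
# Illustrative sketch: approximate boiling points (degrees C) of the hydrogen halides,
# showing the HF anomaly and the rise from HCl to HI described above.
boiling_points_C = {
    "HF": 19.5,    # anomalously high: hydrogen bonding between molecules
    "HCl": -85,
    "HBr": -67,
    "HI": -35,
}

for compound, bp in boiling_points_C.items():
    print(f"{compound:4s} boils at about {bp:6.1f} C")
```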
Reactions
Upon dissolution in water, which is highly exothermic, the hydrogen halides give the corresponding acids. With the exception of hydrofluoric acid, these acids are very strong, reflecting their tendency to ionize in aqueous solution yielding hydronium ions (H3O+), with acid strength increasing down the group. Hydrofluoric acid is complicated because its strength depends on the concentration owing to the effects of homoconjugation. As solutions in non-aqueous solvents, such as acetonitrile, the hydrogen halides are only modestly acidic, however.
Similarly, the hydrogen halides react with ammonia (and other bases), forming ammonium halides:
HX + NH3 → NH4X
In organic chemistry, the hydrohalogenation reaction is used to prepare halocarbons. For example, chloroethane is produced by hydrochlorination of ethylene:
C2H4 + HCl → CH3CH2Cl
See also
Pseudohalogen
Hypohalous acid
group 13 hydrides
group 14 hydrides
group 15 hydrides
group 16 hydrides
References
Nonmetal halides
Hydrogen compounds
Diatomic molecules | Hydrogen halide | [
"Physics",
"Chemistry"
] | 763 | [
"Molecules",
"Diatomic molecules",
"Matter"
] |
1,674,411 | https://en.wikipedia.org/wiki/Convex%20optimization | Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
Definition
Abstract form
A convex optimization problem is defined by two ingredients:
The objective function, which is a real-valued convex function of n variables, f : D ⊆ Rn → R;
The feasible set, which is a convex subset C ⊆ Rn.
The goal of the problem is to find some x* ∈ C attaining
inf { f(x) : x ∈ C }.
In general, there are three options regarding the existence of a solution:
If such a point x* exists, it is referred to as an optimal point or solution; the set of all optimal points is called the optimal set; and the problem is called solvable.
If f is unbounded below over C, or the infimum is not attained, then the optimization problem is said to be unbounded.
Otherwise, if C is the empty set, then the problem is said to be infeasible.
Standard form
A convex optimization problem is in standard form if it is written as
minimize f(x)
subject to gi(x) ≤ 0, i = 1, ..., m,
hi(x) = 0, i = 1, ..., p,
where:
x ∈ Rn is the vector of optimization variables;
The objective function f : D ⊆ Rn → R is a convex function;
The inequality constraint functions gi(x), i = 1, ..., m, are convex functions;
The equality constraint functions hi(x), i = 1, ..., p, are affine transformations, that is, of the form hi(x) = ai·x − bi, where ai is a vector and bi is a scalar.
The feasible set of the optimization problem consists of all points satisfying the inequality and the equality constraints. This set is convex because is convex, the sublevel sets of convex functions are convex, affine sets are convex, and the intersection of convex sets is convex.
Many optimization problems can be equivalently formulated in this standard form. For example, the problem of maximizing a concave function can be re-formulated equivalently as the problem of minimizing the convex function . The problem of maximizing a concave function over a convex set is commonly called a convex optimization problem.
Epigraph form (standard form with linear objective)
In the standard form it is possible to assume, without loss of generality, that the objective function f is a linear function. This is because any program with a general objective can be transformed into a program with a linear objective by adding a single variable t and a single constraint, as follows:
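As a hedged illustration of this transformation (a toy example of my own, not taken from the article; it assumes the CVXPY modeling library discussed under Software below), the sketch minimizes a general convex objective directly and then re-solves the same problem in epigraph form, i.e. with a linear objective t and the extra constraint f(x) ≤ t:

```python
# Epigraph transformation sketch (hypothetical example): minimize f(x) = ||x - a||^2
# over the affine set sum(x) = 1, first directly, then with a linear objective.
import cvxpy as cp
import numpy as np

a = np.array([1.0, -2.0, 3.0])
x = cp.Variable(3)

# Direct form: general convex objective.
direct = cp.Problem(cp.Minimize(cp.sum_squares(x - a)), [cp.sum(x) == 1])
direct.solve()

# Epigraph form: linear objective t, plus the constraint f(x) <= t.
t = cp.Variable()
epi = cp.Problem(cp.Minimize(t), [cp.sum_squares(x - a) <= t, cp.sum(x) == 1])
epi.solve()

print(direct.value, epi.value)  # the two optimal values coincide
```

Both formulations report the same optimal value, which is the point of the epigraph construction.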
Conic form
Every convex program can be presented in a conic form, which means minimizing a linear objective over the intersection of an affine plane and a convex cone:
where K is a closed pointed convex cone, L is a linear subspace of Rn, and b is a vector in Rn. A linear program in standard form is the special case in which K is the nonnegative orthant of Rn.
Eliminating linear equality constraints
It is possible to convert a convex program in standard form, to a convex program with no equality constraints. Denote the equality constraints hi(x)=0 as Ax=b, where A has n columns. If Ax=b is infeasible, then of course the original problem is infeasible. Otherwise, it has some solution x0 , and the set of all solutions can be presented as: Fz+x0, where z is in Rk, k=n-rank(A), and F is an n-by-k matrix. Substituting x = Fz+x0 in the original problem gives: where the variables are z. Note that there are rank(A) fewer variables. This means that, in principle, one can restrict attention to convex optimization problems without equality constraints. In practice, however, it is often preferred to retain the equality constraints, since they might make some algorithms more efficient, and also make the problem easier to understand and analyze.
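A minimal sketch of this substitution on a toy problem of my own (minimizing ||x||² subject to Ax = b), assuming NumPy and SciPy for the linear algebra; F is an orthonormal basis of the null space of A and x0 is any particular solution:

```python
# Eliminating Ax = b by parameterizing the feasible set as x = F z + x0 (illustrative only).
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))        # 2 equality constraints, 5 variables
b = rng.standard_normal(2)

x0 = np.zeros(5)
x0[:2] = np.linalg.solve(A[:, :2], b)  # one particular solution of Ax = b
F = null_space(A)                      # n-by-k matrix, k = n - rank(A)

# For the objective f(x) = ||x||^2, the reduced problem min_z ||F z + x0||^2
# is an ordinary least-squares problem in z.
z, *_ = np.linalg.lstsq(F, -x0, rcond=None)
x_star = F @ z + x0

print(np.allclose(A @ x_star, b))                   # still feasible
print(np.allclose(x_star, np.linalg.pinv(A) @ b))   # matches the known minimum-norm solution
```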
Special cases
The following problem classes are all convex optimization problems, or can be reduced to convex optimization problems via simple transformations:
Linear programming problems are the simplest convex programs. In LP, the objective and constraint functions are all linear.
Quadratic programming problems are the next-simplest. In QP, the constraints are all linear, but the objective may be a convex quadratic function.
Second-order cone programming problems are more general.
Semidefinite programming problems are more general still.
Conic optimization problems are even more general.
Other special cases include:
Least squares
Quadratic minimization with convex quadratic constraints
Geometric programming
Entropy maximization with appropriate constraints.
Properties
The following are useful properties of convex optimization problems:
every local minimum is a global minimum;
the optimal set is convex;
if the objective function is strictly convex, then the problem has at most one optimal point.
These results are used by the theory of convex minimization along with geometric notions from functional analysis (in Hilbert spaces) such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.
Algorithms
Unconstrained and equality-constrained problems
The convex programs easiest to solve are the unconstrained problems, or the problems with only equality constraints. As the equality constraints are all linear, they can be eliminated with linear algebra and integrated into the objective, thus converting an equality-constrained problem into an unconstrained one.
In the class of unconstrained (or equality-constrained) problems, the simplest ones are those in which the objective is quadratic. For these problems, the KKT conditions (which are necessary for optimality) are all linear, so they can be solved analytically.
For unconstrained (or equality-constrained) problems with a general convex objective that is twice-differentiable, Newton's method can be used. It can be seen as reducing a general unconstrained convex problem to a sequence of quadratic problems. Newton's method can be combined with line search for an appropriate step size, and it can be mathematically proven to converge quickly.
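A rough sketch of damped Newton's method with backtracking (Armijo) line search on a smooth convex objective; the objective f(x) = Σ log(1 + exp(aᵢ·x)) and all names here are my own illustrative choices, not from the article:

```python
# Damped Newton's method with backtracking line search (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))

def f(x):
    return np.sum(np.logaddexp(0.0, A @ x))       # sum_i log(1 + exp(a_i . x))

def grad(x):
    s = 1.0 / (1.0 + np.exp(-(A @ x)))            # sigmoid(a_i . x)
    return A.T @ s

def hess(x):
    s = 1.0 / (1.0 + np.exp(-(A @ x)))
    return (A * (s * (1 - s))[:, None]).T @ A

x = np.zeros(3)
for _ in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:                 # stop once the gradient is tiny
        break
    step = np.linalg.solve(hess(x), -g)           # Newton direction
    t, alpha, beta = 1.0, 0.25, 0.5
    while f(x + t * step) > f(x) + alpha * t * (g @ step):  # backtracking (Armijo)
        t *= beta
    x = x + t * step

print(x, f(x))
```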
Another efficient algorithm for unconstrained minimization is gradient descent (a special case of steepest descent).
General problems
The more challenging problems are those with inequality constraints. A common way to solve them is to reduce them to unconstrained problems by adding a barrier function, enforcing the inequality constraints, to the objective function. Such methods are called interior point methods. They have to be initialized by finding a feasible interior point using so-called phase I methods, which either find a feasible point or show that none exists. Phase I methods generally consist of reducing the search in question to a simpler convex optimization problem.
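As a deliberately tiny, hedged illustration of the barrier idea (an example of my own, not from the article): for the one-dimensional problem "minimize x subject to x ≥ 1", the log-barrier subproblem minimize t·x − log(x − 1) has the closed-form minimizer x(t) = 1 + 1/t, which approaches the constrained optimum x = 1 as the barrier weight t grows:

```python
# Central path of a 1-D log-barrier (toy illustration).
import numpy as np
from scipy.optimize import minimize_scalar

for t in [1.0, 10.0, 100.0, 1000.0]:
    res = minimize_scalar(lambda x: t * x - np.log(x - 1.0),
                          bounds=(1.0 + 1e-9, 10.0), method='bounded')
    print(t, res.x, 1.0 + 1.0 / t)   # numerical minimizer vs. closed form 1 + 1/t
```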
Convex optimization problems can also be solved by the following contemporary methods:
Bundle methods (Wolfe, Lemaréchal, Kiwiel), and
Subgradient projection methods (Polyak),
Interior-point methods, which make use of self-concordant barrier functions and self-regular barrier functions.
Cutting-plane methods
Ellipsoid method
Subgradient method
Dual subgradients and the drift-plus-penalty method
Subgradient methods can be implemented simply and so are widely used. Dual subgradient methods are subgradient methods applied to a dual problem. The drift-plus-penalty method is similar to the dual subgradient method, but takes a time average of the primal variables.
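A small sketch of a plain (primal) subgradient method with a diminishing step size, applied to a nonsmooth convex minimax problem of my own choosing; this is a generic illustration rather than the dual or drift-plus-penalty variants described above:

```python
# Subgradient method on f(x) = max_i |a_i . x - b_i| (Chebyshev fit, illustrative only).
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)

def f(x):
    return np.max(np.abs(A @ x - b))

def subgrad(x):
    r = A @ x - b
    j = np.argmax(np.abs(r))         # any maximizing index yields a valid subgradient
    return np.sign(r[j]) * A[j]

x = np.zeros(4)
best = f(x)
for k in range(1, 5001):
    x = x - (1.0 / k) * subgrad(x)   # diminishing step size 1/k
    best = min(best, f(x))           # keep the best value seen (iterates need not be monotone)

print(best)
```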
Lagrange multipliers
Consider a convex minimization problem given in standard form by a cost function and inequality constraints for . Then the domain is:
The Lagrangian function for the problem is
For each point in that minimizes over , there exist real numbers called Lagrange multipliers, that satisfy these conditions simultaneously:
minimizes over all
with at least one
(complementary slackness).
If there exists a "strictly feasible point", that is, a point satisfying
then the statement above can be strengthened to require that .
Conversely, if some in satisfies (1)–(3) for scalars with then is certain to minimize over .
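A hedged numerical illustration (a toy problem of mine, with the constraint written as g(x) ≤ 0 as in the standard form above): for "minimize (x − 2)² subject to x − 1 ≤ 0" the optimum is x* = 1 with multiplier λ = 2, and the conditions listed above can be checked directly:

```python
# Checking stationarity, multiplier sign, and complementary slackness (toy example).
x_star, lam = 1.0, 2.0

df = 2 * (x_star - 2)               # gradient of the objective at x*
dg = 1.0                            # gradient of the constraint g(x) = x - 1
g = x_star - 1.0

print(abs(df + lam * dg) < 1e-12)   # stationarity: x* minimizes the Lagrangian
print(lam >= 0)                     # the multiplier is nonnegative
print(abs(lam * g) < 1e-12)         # complementary slackness
```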
Software
There is a large software ecosystem for convex optimization. This ecosystem has two main categories: solvers on the one hand and modeling tools (or interfaces) on the other hand.
Solvers implement the algorithms themselves and are usually written in C. They require users to specify optimization problems in very specific formats which may not be natural from a modeling perspective. Modeling tools are separate pieces of software that let the user specify an optimization in higher-level syntax. They manage all transformations to and from the user's high-level model and the solver's input/output format.
The table below shows a mix of modeling tools (such as CVXPY and Convex.jl) and solvers (such as CVXOPT and MOSEK). This table is by no means exhaustive.
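To illustrate the division of labour between modeling tools and solvers, here is a short sketch using CVXPY (one of the modeling tools named above); the specific problem, a nonnegative least-squares fit, is my own toy example:

```python
# High-level modeling with CVXPY; the library transforms the problem and calls a solver.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

x = cp.Variable(10, nonneg=True)                     # nonnegative least squares
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)))
prob.solve()                                         # CVXPY selects and invokes an installed solver

print(prob.status, prob.value)
```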
Applications
Convex optimization can be used to model problems in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, finance, statistics (optimal experimental design), and structural optimization, where the approximation concept has proven to be efficient. Convex optimization can be used to model problems in the following fields:
Portfolio optimization.
Worst-case risk analysis.
Optimal advertising.
Variations of statistical regression (including regularization and quantile regression).
Model fitting (particularly multiclass classification).
Electricity generation optimization.
Combinatorial optimization.
Non-probabilistic modelling of uncertainty.
Localization using wireless signals
Extensions
Extensions of convex optimization include the optimization of biconvex, pseudo-convex, and quasiconvex functions. Extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of generalized convexity, also known as abstract convex analysis.
See also
Duality
Karush–Kuhn–Tucker conditions
Optimization problem
Proximal gradient method
Algorithmic problems on convex sets
Notes
References
Hiriart-Urruty, Jean-Baptiste, and Lemaréchal, Claude. (2004). Fundamentals of Convex analysis. Berlin: Springer.
Nesterov, Yurii. (2004). Introductory Lectures on Convex Optimization, Kluwer Academic Publishers
Schmit, L.A.; Fleury, C. 1980: Structural synthesis by combining approximation concepts and dual methods. J. Amer. Inst. Aeronaut. Astronaut 18, 1252-1260
External links
EE364a: Convex Optimization I and EE364b: Convex Optimization II, Stanford course homepages
6.253: Convex Analysis and Optimization, an MIT OCW course homepage
Brian Borchers, An overview of software for convex optimization
Convex Optimization Book by Lieven Vandenberghe and Stephen P. Boyd
Convex analysis
Mathematical optimization | Convex optimization | [
"Mathematics"
] | 2,158 | [
"Mathematical optimization",
"Mathematical analysis"
] |
1,674,555 | https://en.wikipedia.org/wiki/Vibration%20theory%20of%20olfaction | The vibration theory of smell proposes that a molecule's smell character is due to its vibrational frequency in the infrared range. This controversial theory is an alternative to the more widely accepted docking theory of olfaction (formerly termed the shape theory of olfaction), which proposes that a molecule's smell character is due to a range of weak non-covalent interactions between its protein odorant receptor (found in the nasal epithelium), such as electrostatic and Van der Waals interactions as well as H-bonding, dipole attraction, pi-stacking, metal ion, Cation–pi interaction, and hydrophobic effects, in addition to the molecule's conformation.
Introduction
The current vibration theory has recently been called the "swipe card" model, in contrast with "lock and key" models based on shape theory. As proposed by Luca Turin, the odorant molecule must first fit in the receptor's binding site. Then it must have a vibrational energy mode compatible with the difference in energies between two energy levels on the receptor, so electrons can travel through the molecule via inelastic electron tunneling, triggering the signal transduction pathway. The vibration theory is discussed in a popular but controversial book by Chandler Burr.
The odor character is encoded in the ratio of activities of receptors tuned to different vibration frequencies, in the same way that color is encoded in the ratio of activities of cone cell receptors tuned to different frequencies of light. An important difference, though, is that the odorant has to be able to become resident in the receptor for a response to be generated. The time an odorant resides in a receptor depends on how strongly it binds, which in turn determines the strength of the response; the odor intensity is thus governed by a similar mechanism to the "lock and key" model. For a pure vibrational theory, the differing odors of enantiomers, which possess identical vibrations, cannot be explained. However, once the link between receptor response and duration of the residence of the odorant in the receptor is recognised, differences in odor between enantiomers can be understood: molecules with different handedness may spend different amounts of time in a given receptor, and so initiate responses of different intensities.
Because some aroma molecules of different shapes smell the same (e.g. benzaldehyde, whose almond-like scent resembles that of hydrogen cyanide despite their very different structures), the shape-based "lock and key" model is not by itself sufficient to explain what is going on. Experiments on olfaction that take quantum mechanics into consideration suggest that both theories might ultimately work in harmony: the scent molecule first needs to fit the receptor, as in the docking theory of olfaction, but the molecular vibrations of its chemical bonds then take over. In this respect the sense of smell would be much more like the sense of hearing, with the nose in effect 'listening' to the vibrational modes of aroma molecules.
Some studies support vibration theory while others challenge its findings.
Major proponents and history
The theory was first proposed by Malcolm Dyson in 1928 and expanded by Robert H. Wright in 1954, after which it was largely abandoned in favor of the competing shape theory. A 1996 paper by Luca Turin revived the theory by proposing a mechanism, speculating that the G-protein-coupled receptors discovered by Linda Buck and Richard Axel were actually measuring molecular vibrations using inelastic electron tunneling as Turin claimed, rather than responding to molecular keys fitting molecular locks, working by shape alone. In 2007 a Physical Review Letters paper by Marshall Stoneham and colleagues at University College London and Imperial College London showed that Turin's proposed mechanism was consistent with known physics and coined the expression "swipe card model" to describe it. A PNAS paper in 2011 by Turin, Efthimios Skoulakis, and colleagues at MIT and the Alexander Fleming Biomedical Sciences Research Center reported fly behavioral experiments consistent with a vibrational theory of smell. The theory remains controversial.
Support
Isotope effects
A major prediction of Turin's theory is the isotope effect: that the normal and deuterated versions of a compound should smell different, although they have the same shape. A 2001 study by Haffenden et al. showed humans able to distinguish benzaldehyde from its deuterated version. However, this study has been criticized for lacking double-blind controls to eliminate bias and because it used an anomalous version of the duo-trio test. In another study, tests with animals have shown fish and insects able to distinguish isotopes by smell.
Deuteration changes the heats of adsorption and the boiling and freezing points of molecules (boiling points: 100.0 °C for H2O vs. 101.42 °C for D2O; melting points: 0.0 °C for H2O, 3.82 °C for D2O), pKa (i.e., dissociation constant: 9.71×10⁻¹⁵ for H2O vs. 1.95×10⁻¹⁵ for D2O, cf. Heavy water) and the strength of hydrogen bonding. Such isotope effects are exceedingly common, and so it is well known that deuterium substitution will indeed change the binding constants of molecules to protein receptors. Any binding interaction of an odorant molecule with an olfactory receptor will therefore be likely to show some isotope effect upon deuteration, and the observation of an isotope effect in no way argues exclusively for a vibrational theory of olfaction.
A study published in 2011 by Franco, Turin, Mershin and Skoulakis shows both that flies can smell deuterium, and that to flies, a carbon-deuterium bond smells like a nitrile, which has a similar vibration. The study reports that Drosophila melanogaster (the fruit fly), which is ordinarily attracted to acetophenone, spontaneously dislikes deuterated acetophenone. This dislike increases with the number of deuterium atoms. (Flies genetically altered to lack smell receptors could not tell the difference.) Flies could also be trained by electric shocks either to avoid the deuterated molecule or to prefer it to the normal one. When these trained flies were then presented with a completely new and unrelated choice of normal vs. deuterated odorants, they avoided or preferred deuterium as with the previous pair. This suggested that flies were able to smell deuterium regardless of the rest of the molecule. To determine whether this deuterium smell was actually due to vibrations of the carbon-deuterium (C-D) bond or to some unforeseen effect of isotopes, the researchers looked to nitriles, which have a similar vibration to the C-D bond. Flies trained to avoid deuterium and asked to choose between a nitrile and its non-nitrile counterpart did avoid the nitrile, lending support to the idea that the flies are smelling vibrations. Further isotope smell studies are under way in fruit flies and dogs.
Explaining differences in stereoisomer scents
Carvone presented a perplexing situation to vibration theory. Carvone has two isomers, which have identical vibrations, yet one smells like mint and the other like caraway (for which the compound is named).
An experiment by Turin filmed by the 1995 BBC Horizon documentary "A Code in the Nose" consisted of mixing the mint isomer with butanone, on the theory that the shape of the G-protein-coupled receptor prevented the carbonyl group in the mint isomer from being detected by the "biological spectroscope". The experiment succeeded with the trained perfumers used as subjects, who perceived that a mixture of 60% butanone and 40% mint carvone smelled like caraway.
The sulfurous smell of boranes
According to Turin's original paper in the journal Chemical Senses, the well documented smell of borane compounds is sulfurous, though these molecules contain no sulfur. He proposes to explain this by the similarity in frequency between the vibration of the B-H bond and the S-H bond. However, it has been pointed out that for o-carborane, which has a very strong B−H stretch at 2575 cm−1, the "onion-like odor of crude commercial o-carborane is replaced by a pleasant camphoraceous odor on careful purification, reflecting the method for commercial preparation of o-carborane from reactions promoted by onion-smelling diethyl sulfide, which is removed on purification."
Consistency with physics
Biophysical simulations published in Physical Review Letters in 2006 suggest that Turin's proposal is viable from a physics standpoint. However, Block et al. in their 2015 paper in Proceedings of the National Academy of Sciences indicate that their theoretical analysis shows that "the proposed electron transfer mechanism of the vibrational frequencies of odorants could be easily suppressed by quantum effects of nonodorant molecular vibrational modes".
Correlating odor to vibration
A 2004 paper published in the journal Organic Biomolecular Chemistry by Takane and Mitchell shows that odor descriptions in the olfaction literature correlate with EVA descriptors, which loosely correspond to the vibrational spectrum, better than with descriptors based on the two dimensional connectivity of the molecule. The study did not consider molecular shape.
Lack of antagonists
Turin points out that traditional lock-and-key receptor interactions deal with agonists, which increase the receptor's time spent in the active state, and antagonists, which increase the time spent in the inactive state. In other words, some ligands tend to turn the receptor on and some tend to turn it off. As an argument against the traditional lock-and-key theory of smell, very few olfactory antagonists have been found.
In 2004, a Japanese research group published that an oxidation product of isoeugenol is able to antagonize, or prevent, mice olfactory receptor response to isoeugenol.
Additional challenges to the docking theory of olfaction
Similarly shaped molecules with different molecular vibrations have different smells (metallocene experiment and deuterium replacement of molecular hydrogen). However this challenge is contrary to the results obtained with silicon analogues of bourgeonal and lilial, which despite their differences in molecular vibrations have similar smells and similarly activate the most responsive human receptor, hOR17-4, and with studies showing that the human musk receptor OR5AN1 responds identically to deuterated and non-deuterated musks. In the metallocene experiment, Turin observes that while ferrocene and nickelocene have nearly the same molecular sandwich structures, they possess distinct odors. He suggests that "because of the change in size and mass, different metal atoms give different frequencies for those vibrations that involve the metal atoms," an observation which is compatible with the vibration theory. However it has been noted that, in contrast to ferrocene, nickelocene rapidly decomposes in air and the cycloalkene odor observed for nickelocene, but not for ferrocene, could simply reflect decomposition of nickelocene giving trace amounts of hydrocarbons such as cyclopentadiene.
Differently shaped molecules with similar molecular vibrations have similar smells (replacement of carbon double bonds by sulfur atoms and the disparate shaped amber odorants)
Hiding functional groups does not hide the group's characteristic odor. However this is not always the case, since ortho-substituted arylisonitriles and thiophenols have far less offensive odors than the parent compounds.
Challenges
Three predictions by Luca Turin on the nature of smell, using concepts of vibration theory, were addressed by experimental tests published in Nature Neuroscience in 2004 by Vosshall and Keller. The study failed to support the prediction that isotopes should smell different, with untrained human subjects unable to distinguish acetophenone from its deuterated counterpart. This study also pointed to experimental design flaws in the earlier study by Haffenden. In addition, Turin's description of the odor of long-chain aldehydes as alternately (1) dominantly waxy and faintly citrus and (2) dominantly citrus and faintly waxy was not supported by tests on untrained subjects, despite anecdotal support from fragrance industry professionals who work regularly with these materials. Vosshall and Keller also presented a mixture of guaiacol and benzaldehyde to subjects, to test Turin's theory that the mixture should smell of vanillin. Vosshall and Keller's data did not support Turin's prediction. However, Vosshall says these tests do not disprove the vibration theory.
In response to the 2011 PNAS study on flies, Vosshall acknowledged that flies could smell isotopes but called the conclusion that smell was based on vibrations an "overinterpretation" and expressed skepticism about using flies to test a mechanism originally ascribed to human receptors. For the theory to be confirmed, Vosshall stated there must be further studies on mammalian receptors. Bill Hansson, an insect olfaction specialist, raised the question of whether deuterium could affect hydrogen bonds between the odorant and receptor.
In 2013, Turin and coworkers confirmed Vosshall and Keller's experiments showing that even trained human subjects were unable to distinguish acetophenone from its deuterated counterpart. At the same time Turin and coworkers reported that human volunteers were able to distinguish cyclopentadecanone from its fully deuterated analog. To account for the different results seen with acetophenone and cyclopentadecanone, Turin and coworkers assert that "there must be many C-H bonds before they are detectable by smell. In contrast to acetophenone which contains only 8 hydrogens, cyclopentadecanone has 28. This results in more than 3 times the number of vibrational modes involving hydrogens than in acetophenone, and this is likely essential for detecting the difference between isotopomers." Turin and coworkers provide no quantum mechanical justification for this latter assertion. Note that the correct term for compounds differing in the number of isotopic substitutions is isotopologue; isotopomers differ only in the position of the substitutions.
Vosshall, in commenting on Turin's work, notes that "the olfactory membranes are loaded with enzymes that can metabolise odorants, changing their chemical identity and perceived odour. Deuterated molecules would be poor substrates for such enzymes, leading to a chemical difference in what the subjects are testing. Ultimately, any attempt to prove the vibrational theory of olfaction should concentrate on actual mechanisms at the level of the receptor, not on indirect psychophysical testing." Richard Axel co-recipient of the 2004 Nobel prize for physiology for his work on olfaction, expresses a similar sentiment, indicating that Turin's work "would not resolve the debate – only a microscopic look at the receptors in the nose would finally show what is at work. Until somebody really sits down and seriously addresses the mechanism and not inferences from the mechanism... it doesn't seem a useful endeavour to use behavioural responses as an argument".
In response to the 2013 paper on cyclopentadecanone, Block et al. report that the human musk-recognizing receptor, OR5AN1, identified using a heterologous olfactory receptor expression system and robustly responding to cyclopentadecanone and muscone (which has 30 hydrogens), fails to distinguish isotopologues of these compounds in vitro. Furthermore, the mouse (methylthio)methanethiol-recognizing receptor, MOR244-3, as well as other selected human and mouse olfactory receptors, responded similarly to normal, deuterated, and carbon-13 isotopologues of their respective ligands, paralleling results found with the musk receptor OR5AN1. Based on these findings, the authors conclude that the proposed vibration theory does not apply to the human musk receptor OR5AN1, mouse thiol receptor MOR244-3, or other olfactory receptors examined. Additionally, theoretical analysis by the authors shows that the proposed electron transfer mechanism of the vibrational frequencies of odorants could be easily suppressed by quantum effects of nonodorant molecular vibrational modes. The authors conclude: "These and other concerns about electron transfer at olfactory receptors, together with our extensive experimental data, argue against the plausibility of the vibration theory."
In commenting on this work, Vosshall writes "In PNAS, Block et al.... shift the "shape vs. vibration" debate from olfactory psychophysics to the biophysics of the ORs themselves. The authors mount a sophisticated multidisciplinary attack on the central tenets of the vibration theory using synthetic organic chemistry, heterologous expression of olfactory receptors, and theoretical considerations to find no evidence to support the vibration theory of smell." While Turin comments that Block used "cells in a dish rather than within whole organisms" and that "expressing an olfactory receptor in human embryonic kidney cells doesn't adequately reconstitute the complex nature of olfaction...", Vosshall responds "Embryonic kidney cells are not identical to the cells in the nose ... but if you are looking at receptors, it's the best system in the world." In a Letter to the Editor of PNAS, Turin et al. raise concerns about Block et al. and Block et al. respond.
Recently, Saberi and Allaei have suggested that a functional relationship exists between molecular volume and the olfactory neural response. The molecular volume is an important factor, but it is not the only factor that determines the olfactory neural response. The binding affinity of an odorant-receptor pair is affected by their relative sizes. The maximum affinity can be attained when the molecular volume of an odorant matches the volume of the binding pocket. A recent study describes the responses of primary olfactory neurons in tissue culture to isotopes and finds that a small fraction of the population (<1%) clearly discriminates between isotopes, some even giving an all-or-none response to H or D isotopologues of octanal. The authors attribute this to differences in hydrophobicity between normal and deuterated odorants.
See also
Odotope theory
Docking theory of olfaction
Quantum biology
References
Olfactory system
Quantum biology
Theories | Vibration theory of olfaction | [
"Physics",
"Biology"
] | 3,790 | [
"Quantum mechanics",
"nan",
"Quantum biology"
] |
1,674,621 | https://en.wikipedia.org/wiki/Process%20modeling | The term process model is used in various contexts. For example, in business process modeling the enterprise process model is often referred to as the business process model.
Overview
Process models are processes of the same nature that are classified together into a model. Thus, a process model is a description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus, has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done in contrast to the process itself which is really what happens. A process model is roughly an anticipation of what the process will look like. What the process shall be will be determined during actual system development.
The goals of a process model are to be:
Descriptive
Track what actually happens during a process
Take the point of view of an external observer who looks at the way a process has been performed and determines the improvements that must be made to make it perform more effectively or efficiently.
Prescriptive
Define the desired processes and how they should/could/might be performed.
Establish rules, guidelines, and behavior patterns which, if followed, would lead to the desired process performance. They can range from strict enforcement to flexible guidance.
Explanatory
Provide explanations about the rationale of processes.
Explore and evaluate the several possible courses of action based on rational arguments.
Establish an explicit link between processes and the requirements that the model needs to fulfill.
Pre-defines points at which data can be extracted for reporting purposes.
Purpose
From a theoretical point of view, the meta-process modeling explains the key concepts needed to describe what happens in the development process, on what, when it happens, and why. From an operational point of view, the meta-process modeling is aimed at providing guidance for method engineers and application developers.
The activity of modeling a business process usually predicates a need to change processes or identify issues to be corrected. This transformation may or may not require IT involvement, although that is a common driver for the need to model a business process. Change management programmes are desired to put the processes into practice. With advances in technology from larger platform vendors, the vision of business process models (BPM) becoming fully executable (and capable of round-trip engineering) is coming closer to reality every day. Supporting technologies include Unified Modeling Language (UML), model-driven architecture, and service-oriented architecture.
Process modeling addresses the process aspects of an enterprise business architecture, leading to an all-encompassing enterprise architecture. The relationships of business processes in the context of the rest of the enterprise systems, data, organizational structure, strategies, etc. create greater capabilities in analyzing and planning a change. One real-world example is corporate mergers and acquisitions: understanding the processes of both companies in detail allows management to identify redundancies, resulting in a smoother merger.
Process modeling has always been a key aspect of business process reengineering, and continuous improvement approaches seen in Six Sigma.
Classification of process models
By coverage
There are five types of coverage where the term process model has been defined differently:
Activity-oriented: related set of activities conducted for the specific purpose of product definition; a set of partially ordered steps intended to reach a goal.
Product-oriented: series of activities that cause sensitive product transformations to reach the desired product.
Decision-oriented: set of related decisions conducted for the specific purpose of product definition.
Context-oriented: sequence of contexts causing successive product transformations under the influence of a decision taken in a context.
Strategy-oriented: allow building models representing multi-approach processes and plan different possible ways to elaborate the product based on the notion of intention and strategy.
By alignment
Processes can be of different kinds. These definitions "correspond to the various ways in which a process can be modelled".
Strategic processes
investigate alternative ways of doing a thing and eventually produce a plan for doing it
are often creative and require human co-operation; thus, alternative generation and selection from an alternative are very critical activities
Tactical processes
help in the achievement of a plan
are more concerned with the tactics to be adopted for actual plan achievement than with the development of a plan of achievement
Implementation processes
are the lowest level processes
are directly concerned with the details of the what and how of plan implementation
By granularity
Granularity refers to the level of detail of a process model and affects the kind of guidance, explanation and trace that can be provided. Coarse granularity restricts these to a rather limited level of detail whereas fine granularity provides more detailed capability. The nature of granularity needed is dependent on the situation at hand.
Project managers, customer representatives, and general, top-level, or middle management require a rather coarse-grained process description as they want to gain an overview of time, budget, and resource planning for their decisions. In contrast, software engineers, users, testers, analysts, or software system architects will prefer a fine-grained process model where the details of the model can provide them with instructions and important execution dependencies such as the dependencies between people.
While notations for fine-grained models exist, most traditional process models are coarse-grained descriptions. Process models should, ideally, provide a wide range of granularity (e.g. Process Weaver).
By flexibility
It was found that while process models were prescriptive, in actual practice departures from the prescription can occur. Thus, frameworks for adopting methods evolved so that systems development methods match specific organizational situations and thereby improve their usefulness. The development of such frameworks is also called situational method engineering.
Method construction approaches can be organized in a flexibility spectrum ranging from 'low' to 'high'.
Lying at the 'low' end of this spectrum are rigid methods, whereas at the 'high' end lies modular method construction. Rigid methods are completely pre-defined and leave little scope for adapting them to the situation at hand. On the other hand, modular methods can be modified and augmented to fit a given situation. Selecting a rigid method allows each project to choose its method from a panel of rigid, pre-defined methods, whereas selecting a path within a method consists of choosing the appropriate path for the situation at hand. Finally, selecting and tuning a method allows each project to select methods from different approaches and tune them to the project's needs.
Quality of methods
Because the quality of process models is discussed here, the quality of modeling techniques also needs to be considered, as it is an important contributor to the quality of process models. In most existing frameworks created for understanding quality, the line between the quality of modeling techniques and the quality of the models produced by applying those techniques is not clearly drawn. The quality of process modeling techniques and the quality of process models are therefore treated separately here, to clearly differentiate the two.
Various frameworks have been developed to help in understanding the quality of process modeling techniques; one example is the quality-based modeling evaluation framework, known as the Q-ME framework, which is argued to provide a set of well-defined quality properties and procedures for making an objective assessment of these properties possible.
This framework also has the advantage of providing a uniform and formal description of the model elements within one or several model types using a single modeling technique.
In short, this makes it possible to assess both the product quality and the process quality of modeling techniques with regard to a previously defined set of properties.
Quality properties that relate to business process modeling techniques are:
Expressiveness: the degree to which a given modeling technique is able to denote the models of any number and kinds of application domains.
Arbitrariness: the degree of freedom one has when modeling one and the same domain
Suitability: the degree to which a given modeling technique is specifically tailored for a specific kind of application domain.
Comprehensibility: the ease with which the way of working and way of modeling are understood by participants.
Coherence: the degree to which the individual sub models of a way of modeling constitute a whole.
Completeness: the degree to which all necessary concepts of the application domain are represented in the way of modeling.
Efficiency: the degree to which the modeling process uses resources such as time and people.
Effectiveness: the degree to which the modeling process achieves its goal.
To assess the quality of the Q-ME framework, it has been used to illustrate the quality of the dynamic essentials modeling of the organisation (DEMO) business modeling technique.
It is stated that the evaluation of the Q-ME framework against the DEMO modeling technique has revealed shortcomings of Q-ME. One in particular is that it does not include a quantifiable metric to express the quality of a business modeling technique, which makes it hard to compare the quality of different techniques in an overall rating.
There is also a systematic approach to quality measurement of modeling techniques, known as complexity metrics, suggested by Rossi et al. (1996). Metamodel techniques are used as a basis for the computation of these complexity metrics. In comparison to the quality framework proposed by Krogstie, this quality measurement focuses more on the technical level than on the individual model level.
Cardoso, Mendling, Neuman and Reijers (2006) used complexity metrics to measure the simplicity and understandability of a design. This is supported by later research by Mendling et al., who argued that without quality metrics to help question the quality properties of a model, a simple process can be modeled in a complex and unsuitable way. This in turn can lead to lower understandability, higher maintenance costs and perhaps inefficient execution of the process in question.
The quality of modeling technique is important in creating models that are of quality and contribute to the correctness and usefulness of models.
Quality of models
The earliest process models reflected the dynamics of the process, with a practical process obtained by instantiation in terms of relevant concepts, available technologies, specific implementation environments, process constraints and so on.
An enormous amount of research has been done on the quality of models, but less attention has been paid to the quality of process models. Quality issues of process models cannot be evaluated exhaustively; however, there are four main guidelines and frameworks in practice for doing so. These are: top-down quality frameworks, bottom-up metrics related to quality aspects, empirical surveys related to modeling techniques, and pragmatic guidelines.
Hommes quotes Wang et al. (1994) in stating that all the main characteristics of model quality can be grouped under two headings, namely the correctness and the usefulness of a model. Correctness ranges from the model's correspondence to the phenomenon that is modeled to its correspondence to the syntactical rules of the modeling language, and it is independent of the purpose for which the model is used.
Usefulness, by contrast, can be seen as the model being helpful for the specific purpose for which it was constructed in the first place. Hommes also makes a further distinction between internal correctness (empirical, syntactical and semantic quality) and external correctness (validity).
A common starting point for defining the quality of a conceptual model is to look at the linguistic properties of the modeling language, of which syntax and semantics are most often applied.
A broader approach is based on semiotics rather than linguistics, as was done by Krogstie using the top-down quality framework known as SEQUAL. It defines several quality aspects based on relationships between a model, knowledge externalisation, the domain, a modeling language, and the activities of learning, taking action, and modeling.
The framework does not, however, provide ways to determine various degrees of quality, but it has been used extensively for business process modeling in empirical tests.
According to earlier research by Moody et al., who used the conceptual model quality framework proposed by Lindland et al. (1994) to evaluate the quality of process models, three levels of quality can be identified:
Syntactic quality: Assesses extent to which the model conforms to the grammar rules of modeling language being used.
Semantic quality: whether the model accurately represents user requirements
Pragmatic quality: whether the model can be understood sufficiently by all relevant stakeholders in the modeling process. That is the model should enable its interpreters to make use of it for fulfilling their need.
From this research it was noted that the quality framework was both easy to use and useful in evaluating the quality of process models; however, it had limitations regarding reliability and the identification of defects. These limitations led to refinement of the framework through subsequent research by Krogstie. The result is the SEQUAL framework of Krogstie et al. 1995 (refined further by Krogstie & Jørgensen, 2002), which includes three more quality aspects:
Physical quality: whether the externalized model is persistent and available for the audience to make sense of it.
Empirical quality: whether the model is modeled according to the established regulations regarding a given language.
Social quality: This regards the agreement between the stakeholders in the modeling domain.
Dimensions of Conceptual Quality framework
Modeling Domain is the set of all statements that are relevant and correct for describing a problem domain, Language Extension is the set of all statements that are possible given the grammar and vocabulary of the modeling languages used. Model Externalization is the conceptual representation of the problem domain.
It is defined as the set of statements about the problem domain that are actually made. Social Actor Interpretation and Technical Actor Interpretation are the sets of statements that the actors (human model users and the tools that interact with the model, respectively) 'think' the conceptual representation of the problem domain contains.
Finally, Participant Knowledge is the set of statements that human actors, who are involved in the modeling process, believe should be made to represent the problem domain. These quality dimensions were later divided into two groups that deal with physical and social aspects of the model.
In later work, Krogstie et al. stated that while the extension of the SEQUAL framework fixed some of the limitations of the initial framework, other limitations remain.
In particular, the framework is too static in its view upon semantic quality, mainly considering models, not modeling activities, and comparing these models to a static domain rather than seeing the model as a facilitator for changing the domain.
Also, the framework's definition of pragmatic quality is quite narrow, focusing on understanding, in line with the semiotics of Morris, while newer research in linguistics and semiotics has focused beyond mere understanding, on how the model is used and affects its interpreters.
The need for a more dynamic view in the semiotic quality framework is particularly evident when considering process models, which themselves often prescribe or even enact actions in the problem domain, hence a change to the model may also change the problem domain directly. Subsequent work discusses the quality framework in relation to active process models and suggests a revised framework on this basis.
Further work by Krogstie et al. (2006) revised the SEQUAL framework to be more appropriate for active process models by redefining physical quality with a narrower interpretation than in previous research.
The other framework in use is the Guidelines of Modeling (GoM), which is based on general accounting principles and includes six principles: correctness, clarity, relevance, comparability, economic efficiency, and systematic design. Clarity deals with the comprehensibility and explicitness (system description) of model systems.
Comprehensibility relates to the graphical arrangement of the information objects and therefore supports the understandability of a model.
Relevance relates to the model and the situation being presented. Comparability involves the ability to compare models, that is, semantic comparison between two models. Economic efficiency requires that the cost of the design process be at least covered by the proposed cost savings and revenue increases.
Since the purpose of organizations in most cases is the maximization of profit, this principle defines the borderline for the modeling process. The last principle, systematic design, requires an accepted differentiation between diverse views within modeling.
Correctness, relevance and economic efficiency are prerequisites for the quality of models and must be fulfilled, while the remaining guidelines are optional.
The two frameworks, SEQUAL and GoM, have the limitation that they cannot easily be used by people who are not competent in modeling. They provide major quality metrics but are not easily applicable by non-experts.
The use of bottom-up metrics related to quality aspects of process models attempts to bridge this gap for non-experts in modeling, but it is mostly theoretical and no empirical tests have been carried out to support its use.
Most experiments carried out relate to the relationship between metrics and quality aspects, and these works have been done individually by different authors: Canfora et al. study the connection mainly between count metrics (for example, the number of tasks or splits) and the maintainability of software process models; Cardoso validates the correlation between control flow complexity and perceived complexity; and Mendling et al. use metrics to predict control flow errors such as deadlocks in process models.
The results reveal that an increase in size of a model appears to reduce its quality and comprehensibility.
Further work by Mendling et al. investigates the connection between metrics and understanding. While some metrics are confirmed regarding their effect, personal factors of the modeler – such as competence – are also revealed as important for understanding the models.
The several empirical surveys carried out still do not give clear guidelines or ways of evaluating the quality of process models, but a clear set of guidelines is needed to guide modelers in this task. Pragmatic guidelines have been proposed by different practitioners, even though it is difficult to provide an exhaustive account of such guidelines from practice.
Most of the guidelines are not easily put into practice, but the "label activities verb–noun" rule has been suggested by practitioners before and analyzed empirically.
According to this research, the value of process models depends not only on the choice of graphical constructs but also on their annotation with textual labels, which need to be analyzed. It was found that this labelling rule results in models that are easier to understand than those using alternative labelling styles.
From the earlier research and the ways to evaluate process model quality, it has been seen that a process model's size, structure, modularity, and the expertise of the modeler affect its overall comprehensibility.
Based on these findings, a set of guidelines, the Seven Process Modeling Guidelines (7PMG), was presented. This guideline uses the verb-object style, as well as guidelines on the number of elements in a model, the application of structured modeling, and the decomposition of a process model. The guidelines are as follows:
G1 Minimize the number of elements in a model
G2 Minimize the routing paths per element
G3 Use one start and one end event
G4 Model as structured as possible
G5 Avoid OR routing elements
G6 Use verb-object activity labels
G7 Decompose a model with more than 50 elements
7PMG nevertheless has limitations in its use. The first is a validity problem: 7PMG does not relate to the content of a process model, but only to the way this content is organized and represented.
It does suggest ways of organizing the structure of the process model while keeping the content intact, but the pragmatic issue of what must be included in the model is still left out.
The second limitation relates to the prioritization of the guidelines: the derived ranking has a small empirical basis, as it relies on the involvement of only 21 process modelers.
This could be seen on the one hand as a need for wider involvement of process modelers' experience, but it also raises the question of what alternative approaches may be available to arrive at a prioritization of the guidelines.
See also
Model selection
Process (science)
Process architecture
Process calculus
Process flow diagram
Process ontology
Process Specification Language
References
External links
Modeling processes regarding workflow patterns; link appears to be broken
American Productivity and Quality Center (APQC), a worldwide organization for process and performance improvement
The Application of Petri Nets to Workflow Management, W.M.P. van der Aalst, 1998.
Business process management
Systems engineering
Process theory | Process modeling | [
"Engineering"
] | 4,070 | [
"Systems engineering"
] |
1,674,626 | https://en.wikipedia.org/wiki/Metamodeling | A metamodel is a model of a model, and metamodeling is the process of generating such metamodels. Thus metamodeling or meta-modeling is the analysis, construction, and development of the frames, rules, constraints, models, and theories applicable and useful for modeling a predefined class of problems. As its name implies, this concept applies the notions of meta- and modeling in software engineering and systems engineering. Metamodels are of many types and have diverse applications.
Overview
A metamodel or surrogate model is a model of a model, i.e. a simplified model of an actual model of a circuit, system, or software-like entity. A metamodel can be a mathematical relation or algorithm representing input and output relations. A model is an abstraction of phenomena in the real world; a metamodel is yet another abstraction, highlighting the properties of the model itself. A model conforms to its metamodel in the way that a computer program conforms to the grammar of the programming language in which it is written. Various types of metamodels include polynomial equations, neural networks, Kriging, etc. "Metamodeling" is the construction of a collection of "concepts" (things, terms, etc.) within a certain domain. Metamodeling typically involves studying the output and input relationships and then fitting the right metamodels to represent that behavior.
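As a hedged illustration of a metamodel in the surrogate-model sense (a toy example of my own, assuming NumPy): a cheap polynomial is fitted to input/output samples of a more expensive model and then used in its place:

```python
# Fitting a polynomial metamodel (surrogate) to samples of an "expensive" model.
import numpy as np

def expensive_model(x):
    # stand-in for a costly simulation of a circuit, system, or software-like entity
    return np.exp(-x) * np.sin(3 * x)

x_train = np.linspace(0.0, 2.0, 8)               # a few sampled design points
y_train = expensive_model(x_train)

coeffs = np.polyfit(x_train, y_train, deg=5)     # degree-5 polynomial metamodel
surrogate = np.poly1d(coeffs)

x_test = np.linspace(0.0, 2.0, 5)
print(np.max(np.abs(surrogate(x_test) - expensive_model(x_test))))  # approximation error
```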
Common uses for metamodels are:
As a schema for semantic data that needs to be exchanged or stored
As a language that supports a particular method or process
As a language to express additional semantics of existing information
As a mechanism to create tools that work with a broad class of models at run time
As a schema for modeling and automatically exploring sentences of a language with applications to automated test synthesis
As an approximation of a higher-fidelity model for use when reducing time, cost, or computational effort is necessary
Because of the "meta" character of metamodeling, both the praxis and theory of metamodels are of relevance to metascience, metaphilosophy, metatheories and systemics, and meta-consciousness. The concept can be useful in mathematics, and has practical applications in computer science and computer engineering/software engineering. The latter are the main focus of this article.
Topics
Definition
In software engineering, the use of models is an alternative to more common code-based development techniques. A model always conforms to a unique metamodel. One of the currently most active branches of Model Driven Engineering is the approach named model-driven architecture proposed by OMG. This approach is embodied in the Meta Object Facility (MOF) specification.
Typical metamodelling specifications proposed by OMG are UML, SysML, SPEM or CWM. ISO has also published the standard metamodel ISO/IEC 24744. All the languages presented below could be defined as MOF metamodels.
Metadata modeling
Metadata modeling is a type of metamodeling used in software engineering and systems engineering for the analysis and construction of models applicable and useful to some predefined class of problems. (see also: data modeling).
Model transformations
One important move in model-driven engineering is the systematic use of model transformation languages. The OMG has proposed a standard for this called QVT for Queries/Views/Transformations. QVT is based on the meta-object facility (MOF). Among many other model transformation languages (MTLs), some examples of implementations of this standard are AndroMDA, VIATRA, Tefkat, MT, ManyDesigns Portofino.
Relationship to ontologies
Meta-models are closely related to ontologies. Both are often used to describe and analyze the relations between concepts:
Ontologies: express something meaningful within a specified universe or domain of discourse by utilizing grammar for using vocabulary. The grammar specifies what it means to be a well-formed statement, assertion, query, etc. (formal constraints) on how terms in the ontology’s controlled vocabulary can be used together.
Meta-modeling: can be considered as an explicit description (constructs and rules) of how a domain-specific model is built. In particular, this comprises a formalized specification of the domain-specific notations. Typically, metamodels follow – and always should follow – a strict rule set. "A valid metamodel is an ontology, but not all ontologies are modeled explicitly as metamodels."
Types of metamodels
For software engineering, several types of models (and their corresponding modeling activities) can be distinguished:
Metadata modeling (MetaData model)
Meta-process modeling (MetaProcess model)
Executable meta-modeling (combining both of the above and much more, as in the general purpose tool Kermeta)
Model transformation language (see below)
Polynomial metamodels
Neural network metamodels
Kriging metamodels
Piecewise polynomial (spline) metamodels
Gradient-enhanced kriging (GEK)
Zoos of metamodels
A library of similar metamodels has been called a Zoo of metamodels.
There are several types of meta-model zoos. Some are expressed in ECore. Others are written in MOF 1.4 – XMI 1.2. The metamodels expressed in UML-XMI1.2 may be uploaded in Poseidon for UML, a UML CASE tool.
See also
Business reference model
Data governance
Model-driven engineering (MDE)
Model-driven architecture (MDA)
Domain-specific language (DSL)
Domain-specific modeling (DSM)
Generic Eclipse Modeling System (GEMS)
Kermeta (Kernel Meta-modeling)
Metadata
MetaCASE tool (tools for creating tools for computer-aided software engineering tools)
Method engineering
MODAF Meta-Model
MOF Queries/Views/Transformations (MOF QVT)
Object Process Methodology
Requirements analysis
Space mapping
Surrogate model
Transformation language
VIATRA (Viatra)
XML transformation language (XML TL)
References
Further reading
Booch, G., Rumbaugh, J., Jacobson, I. (1999), The Unified Modeling Language User Guide, Redwood City, CA: Addison Wesley Longman Publishing Co., Inc.
J. P. van Gigch, System Design Modeling and Metamodeling, Plenum Press, New York, 1991
Gopi Bulusu, hamara.in, 2004 Model Driven Transformation
P. C. Smolik, Mambo Metamodeling Environment, Doctoral Thesis, Brno University of Technology. 2006
Gonzalez-Perez, C. and B. Henderson-Sellers, 2008. Metamodelling for Software Engineering. Chichester (UK): Wiley. 210 p.
M.A. Jeusfeld, M. Jarke, and J. Mylopoulos, 2009. Metamodeling for Method Engineering. Cambridge (USA): The MIT Press. 424 p. , Open access via http://conceptbase.sourceforge.net/2021_Metamodeling_for_Method_Engineering.pdf
G. Caplat Modèles & Métamodèles, 2008 -
Fill, H.-G., Karagiannis, D., 2013. On the Conceptualisation of Modelling Methods Using the ADOxx Meta Modelling Platform, Enterprise Modelling and Information Systems Architectures, Vol. 8, Issue 1, 4-25.
Software design
Scientific modelling | Metamodeling | [
"Engineering"
] | 1,543 | [
"Design",
"Software design"
] |
1,674,907 | https://en.wikipedia.org/wiki/Cyclophosphamide | Cyclophosphamide (CP), also known as cytophosphane among other names, is a medication used as chemotherapy and to suppress the immune system. As chemotherapy it is used to treat lymphoma, multiple myeloma, leukemia, ovarian cancer, breast cancer, small cell lung cancer, neuroblastoma, and sarcoma. As an immune suppressor it is used in nephrotic syndrome, granulomatosis with polyangiitis, and following organ transplant, among other conditions. It is taken by mouth or injection into a vein.
Most people develop side effects. Common side effects include low white blood cell counts, loss of appetite, vomiting, hair loss, and bleeding from the bladder. Other severe side effects include an increased future risk of cancer, infertility, allergic reactions, and pulmonary fibrosis. Cyclophosphamide is in the alkylating agent and nitrogen mustard family of medications. It is believed to work by interfering with the duplication of DNA and the creation of RNA.
Cyclophosphamide was approved for medical use in the United States in 1959. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Cyclophosphamide is used to treat cancers and autoimmune diseases. It is used to quickly control the disease. Due to its toxicity, it is replaced as soon as possible by less toxic drugs. Regular and frequent laboratory evaluations are required to monitor kidney function, avoid drug-induced bladder complications and screen for bone marrow toxicity.
Cancer
The main use of cyclophosphamide is with other chemotherapy agents in the treatment of lymphomas, some forms of brain cancer, neuroblastoma, leukemia and some solid tumors.
Autoimmune diseases
Cyclophosphamide decreases the immune system's response, and although concerns about toxicity restrict its use to patients with severe disease, it remains an important treatment for life-threatening autoimmune diseases where disease-modifying antirheumatic drugs (DMARDs) have been ineffective. For example, systemic lupus erythematosus with severe lupus nephritis may respond to pulsed cyclophosphamide. Cyclophosphamide is also used to treat minimal change disease, severe rheumatoid arthritis, granulomatosis with polyangiitis, Goodpasture syndrome and multiple sclerosis.
Because of its potential side effects such as amenorrhea or ovarian failure, cyclophosphamide is used for early phases of treatment and later substituted by other medications, such as mycophenolic acid or azathioprine.
AL amyloidosis
Cyclophosphamide, used in combination with thalidomide or lenalidomide and dexamethasone, has documented efficacy as an off-label treatment of AL amyloidosis. It appears to be an alternative to the more traditional treatment with melphalan in people who are ill-suited for autologous stem cell transplant.
Graft-versus-host disease
Graft-versus-host disease (GVHD) is a major barrier to allogeneic stem cell transplantation because of the immune reactions of donor T cells against the person receiving them. GVHD can often be avoided by T-cell depletion of the graft. The use of high-dose cyclophosphamide post-transplant in half-matched (haploidentical) donor hematopoietic stem cell transplantation reduces GVHD, even after a reduced conditioning regimen.
Contraindications
Like other alkylating agents, cyclophosphamide is teratogenic and contraindicated in pregnant women (pregnancy category D) except for life-threatening circumstances in the mother. Additional relative contraindications to the use of cyclophosphamide include lactation, active infection, neutropenia or bladder toxicity.
Cyclophosphamide is a pregnancy category D drug and causes birth defects. First trimester exposure to cyclophosphamide for the treatment of cancer or lupus displays a pattern of anomalies labeled "cyclophosphamide embryopathy", including growth restriction, ear and facial abnormalities, absence of digits and hypoplastic limbs.
Side effects
Adverse drug reactions from cyclophosphamide are related to the cumulative medication dose and include chemotherapy-induced nausea and vomiting, bone marrow suppression, stomach ache, hemorrhagic cystitis, diarrhea, darkening of the skin/nails, alopecia (hair loss) or thinning of hair, changes in color and texture of the hair, lethargy, and profound gonadotoxicity. Other side effects may include easy bruising/bleeding, joint pain, mouth sores, slow-healing existing wounds, unusual decrease in the amount of urine or unusual tiredness or weakness. Potential side effects also include leukopenia, infection, bladder toxicity, and cancer.
Pulmonary injury appears rare, but can present with two clinical patterns: an early, acute pneumonitis and a chronic, progressive fibrosis. Cardiotoxicity is a major problem with people treated with higher dose regimens.
High-dose intravenous cyclophosphamide can cause the syndrome of inappropriate antidiuretic hormone secretion (SIADH) and a potentially fatal hyponatremia when compounded by intravenous fluids administered to prevent drug-induced cystitis. While SIADH has been described primarily with higher doses of cyclophosphamide, it can also occur with the lower doses used in the management of inflammatory disorders.
Bladder bleeding
Acrolein is toxic to the bladder epithelium and can lead to hemorrhagic cystitis, which is associated with microscopic or gross hematuria and occasionally dysuria. Risks of hemorrhagic cystitis can be minimized with adequate fluid intake, avoidance of nighttime dosage and mesna (sodium 2-mercaptoethane sulfonate), a sulfhydryl donor which binds and detoxifies acrolein. Intermittent dosing of cyclophosphamide decreases cumulative drug dose, reduces bladder exposure to acrolein and has equal efficacy to daily treatment in the management of lupus nephritis.
Infection
Neutropenia or lymphopenia arising secondary to cyclophosphamide use can predispose people to a variety of bacterial, fungal and opportunistic infections. No published guidelines cover prophylaxis against Pneumocystis pneumonia (PCP) for people with rheumatological diseases receiving immunosuppressive drugs, but some advocate its use when receiving high-dose medication.
Infertility
Cyclophosphamide has been found to significantly increase the risk of premature menopause in females and of infertility in males and females, the likelihood of which increases with cumulative drug dose and increasing patient age. Such infertility is usually temporary, but can be permanent. The use of leuprorelin in women of reproductive age before administration of intermittently dosed cyclophosphamide may diminish the risks of premature menopause and infertility.
Cancer
Cyclophosphamide is carcinogenic and may increase the risk of developing lymphomas, leukemia, skin cancer, transitional cell carcinoma of the bladder or other malignancies. Myeloproliferative neoplasms, including acute leukemia, non-Hodgkin lymphoma and multiple myeloma, occurred in 5 of 119 rheumatoid arthritis patients within the first decade after receiving cyclophosphamide, compared with one case of chronic lymphocytic leukemia in 119 rheumatoid arthritis patients with no history of cyclophosphamide treatment. Secondary acute myeloid leukemia (therapy-related AML, or "t-AML") is thought to occur either by cyclophosphamide inducing mutations or by selecting for a high-risk myeloid clone.
This risk may be dependent on dose and other factors, including the condition, other agents or treatment modalities (including radiotherapy), treatment length and intensity. For some regimens, it is rare. For instance, CMF-therapy for breast cancer (where the cumulative dose is typically less than 20 grams of cyclophosphamide) carries an AML risk of less than 1/2000, with some studies finding no increased risk compared to background. Other treatment regimens involving higher doses may carry risks of 1–2% or higher.
Cyclophosphamide-induced AML, when it happens, typically presents some years after treatment, with incidence peaking around 3–9 years. After nine years, the risk falls to background. When AML occurs, it is often preceded by a myelodysplastic syndrome phase, before developing into overt acute leukemia. Cyclophosphamide-induced leukemia will often involve complex cytogenetics, which carries a worse prognosis than de novo AML.
Pharmacology
Oral cyclophosphamide is rapidly absorbed and then converted by mixed-function oxidase enzymes (cytochrome P450 system) in the liver to active metabolites. The main active metabolite is 4-hydroxycyclophosphamide, which exists in equilibrium with its tautomer, aldophosphamide. Most of the aldophosphamide is then oxidised by the enzyme aldehyde dehydrogenase (ALDH) to make carboxycyclophosphamide. A small proportion of aldophosphamide freely diffuses into cells, where it is decomposed into two compounds, phosphoramide mustard and acrolein. The active metabolites of cyclophosphamide are highly protein bound and distributed to all tissues, are assumed to cross the placenta and are known to be present in breast milk.
It is specifically in the oxazaphosphorine group of medications.
Cyclophosphamide metabolites are primarily excreted in the urine unchanged, and drug dosing should be appropriately adjusted in the setting of renal dysfunction. Drugs altering hepatic microsomal enzyme activity (e.g., alcohol, barbiturates, rifampicin, or phenytoin) may result in accelerated metabolism of cyclophosphamide into its active metabolites, increasing both pharmacologic and toxic effects of the drug; alternatively, drugs that inhibit hepatic microsomal enzymes (e.g. corticosteroids, tricyclic antidepressants, or allopurinol) result in slower conversion of cyclophosphamide into its metabolites and consequently reduced therapeutic and toxic effects.
Cyclophosphamide reduces plasma pseudocholinesterase activity and may result in prolonged neuromuscular blockade when administered concurrently with succinylcholine. Tricyclic antidepressants and other anticholinergic agents can result in delayed bladder emptying and prolonged bladder exposure to acrolein.
Mechanism of action
The main effect of cyclophosphamide is due to its metabolite phosphoramide mustard. This metabolite is only formed in cells that have low levels of ALDH. Phosphoramide mustard forms DNA crosslinks both between and within DNA strands at guanine N-7 positions (known as interstrand and intrastrand crosslinkages, respectively). This is irreversible and leads to cell apoptosis.
Cyclophosphamide has relatively little typical chemotherapy toxicity as ALDHs are present in relatively large concentrations in bone marrow stem cells, liver and intestinal epithelium. ALDHs protect these actively proliferating tissues against toxic effects of phosphoramide mustard and acrolein by converting aldophosphamide to carboxycyclophosphamide that does not give rise to the toxic metabolites phosphoramide mustard and acrolein. This is because carboxycyclophosphamide cannot undergo β-elimination (the carboxylate acts as an electron-donating group, nullifying the potential for transformation), preventing nitrogen mustard activation and subsequent alkylation.
Cyclophosphamide induces beneficial immunomodulatory effects in adaptive immunotherapy. Suggested mechanisms include:
Elimination of T regulatory cells (CD4+CD25+ T cells) in naive and tumor-bearing hosts
Induction of T cell growth factors, such as type I IFNs, and/or
Enhanced grafting of adoptively transferred, tumor-reactive effector T cells by the creation of an immunologic space niche.
Thus, cyclophosphamide preconditioning of recipient hosts (for donor T cells) has been used to enhance immunity in naïve hosts, and to enhance adoptive T cell immunotherapy regimens, as well as active vaccination strategies, inducing objective antitumor immunity.
History
As reported by O. M. Colvin in his study of the development of cyclophosphamide and its clinical applications, cyclophosphamide and the related nitrogen mustard–derived alkylating agent ifosfamide were developed by Norbert Brock and ASTA (now Baxter Oncology). Brock and his team synthesised and screened more than 1,000 candidate oxazaphosphorine compounds. They converted the base nitrogen mustard into a nontoxic "transport form". This transport form was a prodrug, subsequently actively transported into cancer cells. Once in the cells, the prodrug was enzymatically converted into the active, toxic form. The first clinical trials were published at the end of the 1950s. In 1959 it became the eighth cytotoxic anticancer agent to be approved by the FDA.
Society and culture
The abbreviation CP is common, although abbreviating drug names is not best practice in medicine.
Research
Because of its impact on the immune system, cyclophosphamide is used in animal studies to induce immunosuppression. Rodents are injected intraperitoneally with either a single dose of 150 mg/kg or two doses (150 and 100 mg/kg) spread over two days. This can be used for applications such as:
The EPA may be concerned about the potential human pathogenicity of an engineered microbe when conducting an MCAN (Microbial Commercial Activity Notice) review. Particularly for bacteria with potential consumer exposure, the agency requires testing of the microbe on immunocompromised rats.
Cyclophosphamide provides a positive control when studying the immune response to a new drug.
References
External links
Novel cyclic phosphoric acid ester amides, and the production thereof. (patent for cyclophosphamide).
Hepatotoxins
IARC Group 1 carcinogens
Nitrogen mustards
Organochlorides
Oxazaphosphinans
Phosphorodiamidates
Prodrugs
Chloroethyl compounds
World Health Organization essential medicines | Cyclophosphamide | [
"Chemistry"
] | 3,141 | [
"Chemicals in medicine",
"Prodrugs"
] |
1,675,601 | https://en.wikipedia.org/wiki/Manifold%20vacuum | Manifold vacuum, or engine vacuum in a petrol engine, is the difference in air pressure between the engine's intake manifold and Earth's atmosphere.
Manifold vacuum is an effect of a piston's movement on the induction stroke and the airflow through a throttle in the intervening carburetor or throttle body leading to the intake manifold. It is a result of the amount of restriction of airflow through the engine. In some engines, the manifold vacuum is also used as an auxiliary power source to drive engine accessories and for the crankcase ventilation system.
Manifold vacuums should not be confused with venturi vacuums, which are an effect exploited in carburetors to establish a pressure difference roughly proportional to mass airflow and to maintain a somewhat constant air/fuel ratio.
It is also used in light airplanes to provide airflow for pneumatic gyroscopic instruments.
Overview
The rate of airflow through an internal combustion engine is an important factor determining the amount of power the engine generates. Most gasoline engines are controlled by limiting that flow with a throttle that restricts intake airflow, while a diesel engine is controlled by the amount of fuel supplied to the cylinder, and so has no "throttle" as such. Manifold vacuum is present in all naturally aspirated engines that use throttles (including carbureted and fuel injected gasoline engines using the Otto cycle or the two-stroke cycle; diesel engines do not have throttle plates).
The mass flow through the engine is the product of the rotation rate of the engine, the displacement of the engine, and the density of the intake stream in the intake manifold. In most applications the rotation rate is set by the application (engine speed in a vehicle or machinery speed in other applications). The displacement is dependent on the engine geometry, which is generally not adjustable while the engine is in use (although a handful of models do have this feature, see variable displacement). Restricting the input flow reduces the density (and hence pressure) in the intake manifold, reducing the amount of power produced. It is also a major source of engine drag (see engine braking), as the low-pressure air in the intake manifold provides less pressure on the piston during the induction stroke.
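As a sketch of that relationship for a four-stroke engine (introducing a volumetric-efficiency factor, not mentioned above, to absorb real-world losses), the intake air mass flow can be written approximately as:

```latex
\dot{m} \;\approx\; \eta_v \,\rho_{\mathrm{man}}\, V_d \,\frac{N}{2}
```

where \rho_{\mathrm{man}} is the air density in the intake manifold, V_d the engine displacement, N the crankshaft rotation rate and \eta_v the volumetric efficiency; the factor of one half reflects one intake stroke per cylinder every two revolutions. Closing the throttle lowers \rho_{\mathrm{man}} (deepens the manifold vacuum) and therefore reduces both the mass flow and the power produced.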
When the throttle is opened (in a car, the accelerator pedal is depressed), ambient air is free to fill the intake manifold, increasing the pressure (filling the vacuum). A carburetor or fuel injection system adds fuel to the airflow in the correct proportion, providing energy to the engine. When the throttle is opened all the way, the engine's air induction system is exposed to full atmospheric pressure, and maximum airflow through the engine is achieved. In a naturally aspirated engine, output power is limited by the ambient barometric pressure. Superchargers and turbochargers boost manifold pressure above atmospheric pressure.
Modern developments
Modern engines use a manifold absolute pressure (abbreviated as MAP) sensor to measure air pressure in the intake manifold. Manifold absolute pressure is one of a multitude of parameters used by the engine control unit (ECU) to optimize engine operation. It is important to differentiate between absolute and gauge pressure when dealing with certain applications, particularly those that experience changes in elevation during normal operation.
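As an illustration of how such a "speed-density" estimate can be assembled from the MAP reading (the volumetric-efficiency value and all numbers below are hypothetical, not taken from any particular ECU), a minimal Python sketch is:

```python
R_AIR = 287.05  # specific gas constant of dry air, J/(kg*K)

def air_mass_flow(map_kpa, iat_celsius, rpm, displacement_l, vol_eff=0.85):
    """Rough speed-density estimate of intake air mass flow (kg/s) for a four-stroke engine."""
    rho = (map_kpa * 1000.0) / (R_AIR * (iat_celsius + 273.15))  # ideal gas law: rho = P / (R T)
    vd_m3 = displacement_l / 1000.0           # engine displacement in cubic metres
    intake_events_per_s = rpm / 60.0 / 2.0    # one intake stroke per cylinder every two revolutions
    return vol_eff * rho * vd_m3 * intake_events_per_s

# Hypothetical example: 2.0 L engine at 3000 rpm, part throttle (50 kPa MAP), 25 degC intake air
print(air_mass_flow(map_kpa=50.0, iat_celsius=25.0, rpm=3000.0, displacement_l=2.0))
```

Real engine-management calculations add further corrections (charge temperature, exhaust gas recirculation, transients and so on), but the structure is broadly the same.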
Motivated by government regulations mandating reduction of fuel consumption (in the USA) or reduction of carbon dioxide emissions (in Europe), passenger cars and light trucks have been fitted with a variety of technologies (downsized engines; lockup, multi-ratio and overdrive transmissions; variable valve timing, forced induction, diesel engines, et al.) which render manifold vacuum inadequate or unavailable. Electric vacuum pumps are now commonly used for powering pneumatic accessories.
Manifold vacuum vs. venturi vacuum
Manifold vacuum is caused by a different phenomenon than venturi vacuum, which is present inside carburetors. Venturi vacuum is caused by the venturi effect which, for fixed ambient conditions (air density and temperature), depends on the total mass flow through the carburetor. In engines that use carburetors, the venturi vacuum is approximately proportional to the total mass flow through the engine (and hence the total power output). As ambient pressure (altitude, weather) or temperature change, the carburetor may need to be adjusted to maintain this relationship.
Manifold pressure may also be "ported". Porting is selecting a location for the pressure tap within the throttle plate's range of motion. Depending on throttle position, a ported pressure tap may be either upstream or downstream of the throttle. As the throttle position changes, a "ported" pressure tap is selectively connected to either manifold pressure or ambient pressure. Older (pre-OBD II) engines often used ported manifold pressure taps for ignition distributors and emission-control components.
Manifold vacuum in cars
Most automobiles use four-stroke Otto cycle engines with multiple cylinders attached to a single inlet manifold. During the intake stroke, the piston descends in the cylinder and the intake valve is open. As the piston descends it effectively increases the volume in the cylinder above it, setting up low pressure. Atmospheric pressure pushes air through the manifold and carburetor or fuel injection system, where it is mixed with fuel. Because multiple cylinders operate at different times in the engine cycle, there is almost constant pressure difference through the inlet manifold from carburetor to engine.
To control the amount of fuel/air mix entering the engine, a simple butterfly valve (throttle plate) is generally fitted at the start of the intake manifold (just below the carburetor in carbureted engines). The butterfly valve is simply a circular disc fitted on a spindle, fitting inside the pipe work. It is connected to the accelerator pedal of the car, and is set to be fully open when the pedal is fully pressed and fully closed when the pedal is released. The butterfly valve often contains a small "idle cutout", a hole that allows small amounts of fuel/air mixture into the engine even when the valve is fully closed, or the carburetor has a separate air bypass with its own idle jet.
If the engine is operating under light or no load and low or closed throttle, there is high manifold vacuum. As the throttle is opened, the engine speed increases rapidly. The engine speed is limited only by the amount of fuel/air mixture that is available in the manifold. Under full throttle and light load, other effects (such as valve float, turbulence in the cylinders, or ignition timing) limit engine speed so that the manifold pressure can increase—but in practice, parasitic drag on the internal walls of the manifold, plus the restrictive nature of the venturi at the heart of the carburetor, means that a partial vacuum will always develop, because the volume the engine tries to draw in exceeds the amount of air the intake system is capable of delivering.
If the engine is operating under heavy load at wide throttle openings (such as accelerating from a stop or pulling the car up a hill) then engine speed is limited by the load and minimal vacuum will be created. Engine speed is low but the butterfly valve is fully open. Since the pistons are descending more slowly than under no load, the pressure differences are less marked and parasitic drag in the induction system is negligible. The engine pulls air into the cylinders at the full ambient pressure.
More vacuum is created in some situations. On deceleration or when descending a hill, the throttle will be closed and a low gear selected to control speed. The engine will be rotating fast because the road wheels and transmission are moving quickly, but the butterfly valve will be fully closed. The flow of air through the engine is strongly restricted by the throttle, producing a strong vacuum on the engine side of the butterfly valve which will tend to limit the speed of the engine. This phenomenon, known as engine braking, is used to prevent acceleration or even to slow down with minimal or no brake usage (as when descending a long or steep hill). This vacuum braking should not be confused with compression braking (aka a "Jake brake"), or with exhaust braking, which are often used on large diesel trucks. Such devices are necessary for engine braking with a diesel as they lack a throttle to restrict the air flow enough to create sufficient vacuum to brake a vehicle.
Uses of manifold vacuum
This low (or negative) pressure can be put to use. A pressure gauge measuring the manifold pressure can be fitted to give the driver an indication of how hard the engine is working and it can be used to achieve maximum momentary fuel efficiency by adjusting driving habits: minimizing manifold vacuum increases momentary efficiency. A weak manifold vacuum under closed-throttle conditions shows that the butterfly valve or internal components of the engine (valves or piston rings) are worn, preventing good pumping action by the engine and reducing overall efficiency.
Vacuum used to be a common way to drive auxiliary systems on the vehicle. Vacuum systems tend to be unreliable with age as the vacuum tubing becomes brittle and susceptible to leaks.
Before 1960
Windshield wiper motors - Prior to the introduction of Federal Motor Vehicle Safety Standards in the USA by the National Traffic and Motor Vehicle Safety Act of 1966, it was common to use manifold vacuum to drive windscreen wipers with a pneumatic motor. This system is cheap and simple but resulted in wipers whose speed is inversely proportional to how hard the engine is working.
Power door lock motors.
"Autovac" fuel lifter, which uses vacuum to raise fuel from the main tank to a small auxiliary tank, from which it flows by gravity to the carburetor. This eliminated the fuel pump which, in early cars, was an unreliable item.
1960–1990
Automotive vacuum systems reached their height of use between the 1960s and 1980s. During this time a huge variety of vacuum switches, delay valves and accessory devices were created. As an example, a 1967 Ford Thunderbird used vacuum for:
Vacuum-assist brake servos (power brakes) use atmospheric pressure pressing against the engine manifold vacuum to increase pressure on the brakes. Since braking is nearly always accompanied by the closing of the throttle and associated high manifold vacuum, this system is simple and almost foolproof. Vacuum tanks were installed on trailers to control their integrated braking systems.
Transmission shift control
Doors for the hidden headlamps
Remote trunk latch release
Power door locks
HVAC air routing - Vehicle HVAC systems used manifold vacuum to drive actuators controlling airflow and temperature.
Control of the heater core valve
Rear cabin vent control
Tilt-away steering wheel release
Other items that can be powered by vacuum include:
Exhaust gas recirculation solenoid
Power steering pump
Fuel pressure regulator
Modern usage
Modern cars have few accessories that use vacuum. Many accessories previously driven by vacuum have been replaced by electronic equivalents. Some modern accessories that sometimes use vacuum include:
Vacuum-assist brake servos
Positive crankcase ventilation valve
Charcoal canister
Manifold vacuum in diesel engines
Many diesel engines do not have butterfly valve throttles. The manifold is connected directly to the air intake and the only suction created is that caused by the descending piston with no venturi to increase it, and the engine power is controlled by varying the amount of fuel that is injected into the cylinder by a fuel injection system. This assists in making diesels much more efficient than petrol engines.
If vacuum is required (vehicles that can be fitted with both petrol and diesel engines often have systems requiring it), a butterfly valve connected to the throttle can be fitted to the manifold. This reduces efficiency and is still not as effective, as it is not connected to a venturi. Since low pressure is only created on the overrun (such as when descending hills with a closed throttle), not over a wide range of situations as in a petrol engine, a vacuum tank is fitted.
Most diesel engines now have a separate vacuum pump ("exhauster") fitted to provide vacuum at all times, at all engine speeds.
Many new BMW petrol engines do not use a throttle in normal running, but instead use "Valvetronic" variable-lift intake valves to control the amount of air entering the engine. Like a diesel engine, manifold vacuum is practically non-existent in these engines and a different source must be utilised to power the brake servo.
See also
Vacuum delay valve
References
Internal combustion engine
Engine technology
Vacuum | Manifold vacuum | [
"Physics",
"Technology",
"Engineering"
] | 2,487 | [
"Internal combustion engine",
"Engines",
"Vacuum",
"Combustion engineering",
"Engine technology",
"Matter"
] |
1,675,938 | https://en.wikipedia.org/wiki/Bosque | A bosque is a type of gallery forest habitat found along the riparian flood plains of streams, river banks, and lakes. It derives its name from the Spanish word for 'forest'.
Setting
In the predominantly arid or semi-arid southwestern United States, a bosque is an oasis-like ribbon of green forest, often canopied, that only exists near rivers, streams, or other water courses. The most notable bosque is the forest ecosystem along the valley of the middle Rio Grande in New Mexico that extends from Santa Fe, through Albuquerque and south to El Paso, Texas. One of the most famous and ecologically intact sections of the bosque is included in the Bosque del Apache National Wildlife Refuge, which is located south of San Antonio, NM. Another bosque can be found in Costa Rica, in a wildlife refuge named Bosque Alegre.
Middle Rio Grande bosque
There are various refuges, parks, and trails for visitors, such as the Paseo Del Bosque trail in Albuquerque, New Mexico.
Flora and fauna
As a desert riparian forest, the middle Rio Grande bosque has a characteristic variety of flora and fauna. Common trees in the bosque habitat include mesquite, cottonwood, desert willow, and desert olive. Because there is often only a single canopy layer and because the tree species found in the bosque are generally deciduous, a wide variety of shrubs, grasses, and other understory vegetation is also supported. Desert hackberry, blue palo verde, graythorn (Condalia lycioides), Mexican elder (Sambucus mexicana), virgin's bower, and Indian root all flourish in the bosque. The habitat also supports a large variety of lichens. For a semi-arid region, there is extraordinary biodiversity at the interface of the bosque and surrounding desert ecosystems. Certain subsets of vegetative association are defined within the Kuchler scheme, including the Mesquite Bosque. In 2017, 150 different species of flora (trees, shrubs, forbs, and grasses) were documented in Albuquerque's Bosque (New Mexico, United States).
The bosque is an important stopover for a variety of migratory birds, such as ducks, geese, egrets, herons, and sandhill cranes. Year-round avian residents include Red-tailed hawks, Cooper's hawks, American kestrels, hummingbirds, owls, woodpeckers, and the southwestern willow flycatcher. Over 270 species of birds can be found in Albuquerque's Bosque (New Mexico, United States). Aquatic fauna of the bosque include the endangered Rio Grande silvery minnow. Mammalian residents include desert cottontail, white-footed mouse, North American porcupine, North American beaver, long-tailed weasel, common raccoon, coyote, mountain lions, and bobcats. Cottonwood trees serve as shelter to a variety of animals. However, a September 2020 report by the Bosque Ecosystem Monitoring Program (BEMP) predicted that cottonwood trees in the middle Rio Grande bosque will be disproportionately impacted as climate change affects groundwater depth and as air temperatures rise. The report separately concluded that invasive plant species were not sensitive to such changes in groundwater, suggesting that the plant structure and animal habitats of the middle Rio Grande bosque will change dramatically as climate changes.
Inhabitants
The earliest inhabitants began to settle around the bosque about 15,000 years ago, but they caused only minor ecosystem changes. It was not until rapid population growth, and the creation of water diversions for farming, that the bosque began to be significantly manipulated and change was noted in the ecosystem.
Restoration
Maintaining the ecosystem and habitat of the bosque is a difficult and ongoing concern for many. The creation of water diversions such as levees, ditches, and irrigation canals has caused irreparable damage, causing floodplains to dry and water levels to drop. This has had a ripple effect: many native plant species, wildlife, and amphibians have died off or relocated. The drying waters and loss of wetlands leave land that is susceptible to fires, which destroy further habitat.
There are ongoing efforts to undo damage to the bosque ecosystem caused by human development, fires, and invasive species in the 20th century. Where possible, levees and other flood control devices along the Rio Grande are being removed, to allow the river to undergo its natural cycle. However, in June 2023, the Army Corps of Engineers-Albuquerque District and the Middle Rio Grande Conservancy District signed a design agreement aiming for the reconstruction of multiple levees along the Rio Grande river between Albuquerque and Belen as part of the Middle Rio Grande, Bernalillo to Belen project, which aims to minimize flood damage along the river. To help with the regrowth and maintenance of the bosque, new trees are planted by The Open Space Division.
Since 1996, the Bosque Ecosystem Monitoring Program (BEMP) of the University of New Mexico has worked with local schools on habitat restoration and ecological monitoring within the bosque, as well as raising awareness of the ecological importance of this habitat through educational outreach initiatives. BEMP receives funding from a number of sources, including the federal government. As of 2016, the program maintained thirty permanent sites throughout the middle Rio Grande bosque.
See also
Flora of New Mexico
Riparian forest
Tugay, an analogous forest type in the deserts and steppes of Central Asia
References
External links
Save our Bosque Report (.pdf)
Bosque Management and Endangered Species (BMEP)
Fire commander: Bosque’s urban area presents challenge
Race to reduce bosque fires
Forests of the United States
Habitats
Natural history of New Mexico
Riparian zone | Bosque | [
"Environmental_science"
] | 1,183 | [
"Riparian zone",
"Hydrology"
] |
75,047 | https://en.wikipedia.org/wiki/Aeroelasticity | Aeroelasticity is the branch of physics and engineering studying the interactions between the inertial, elastic, and aerodynamic forces occurring while an elastic body is exposed to a fluid flow. The study of aeroelasticity may be broadly classified into two fields: static aeroelasticity dealing with the static or steady state response of an elastic body to a fluid flow, and dynamic aeroelasticity dealing with the body's dynamic (typically vibrational) response.
Aircraft are prone to aeroelastic effects because they need to be lightweight while enduring large aerodynamic loads. Aircraft are designed to avoid the following aeroelastic problems:
divergence where the aerodynamic forces increase the twist of a wing which further increases forces;
control reversal where control activation produces an opposite aerodynamic moment that reduces, or in extreme cases reverses, the control effectiveness; and
flutter which is uncontained vibration that can lead to the destruction of an aircraft.
Aeroelasticity problems can be prevented by adjusting the mass, stiffness or aerodynamics of structures which can be determined and verified through the use of calculations, ground vibration tests and flight flutter trials. Flutter of control surfaces is usually eliminated by the careful placement of mass balances.
The synthesis of aeroelasticity with thermodynamics is known as aerothermoelasticity, and its synthesis with control theory is known as aeroservoelasticity.
History
The second failure of Samuel Langley's prototype plane on the Potomac was attributed to aeroelastic effects (specifically, torsional divergence). An early scientific work on the subject was George Bryan's Theory of the Stability of a Rigid Aeroplane published in 1906. Problems with torsional divergence plagued aircraft in the First World War and were solved largely by trial-and-error and ad hoc stiffening of the wing. The first recorded and documented case of flutter in an aircraft was that which occurred to a Handley Page O/400 bomber during a flight in 1916, when it suffered a violent tail oscillation, which caused extreme distortion of the rear fuselage and the elevators to move asymmetrically. Although the aircraft landed safely, in the subsequent investigation F. W. Lanchester was consulted. One of his recommendations was that left and right elevators should be rigidly connected by a stiff shaft, which was to subsequently become a design requirement. In addition, the National Physical Laboratory (NPL) was asked to investigate the phenomenon theoretically, which was subsequently carried out by Leonard Bairstow and Arthur Fage.
In 1926, Hans Reissner published a theory of wing divergence, leading to much further theoretical research on the subject. The term aeroelasticity itself was coined by Harold Roxbee Cox and Alfred Pugsley at the Royal Aircraft Establishment (RAE), Farnborough in the early 1930s.
In the development of aeronautical engineering at Caltech, Theodore von Kármán started a course "Elasticity applied to Aeronautics". After teaching the course for one term, Kármán passed it over to Ernest Edwin Sechler, who developed aeroelasticity in that course and in publication of textbooks on the subject.
In 1947, Arthur Roderick Collar defined aeroelasticity as "the study of the mutual interaction that takes place within the triangle of the inertial, elastic, and aerodynamic forces acting on structural members exposed to an airstream, and the influence of this study on design".
Static aeroelasticity
In an aeroplane, two significant static aeroelastic effects may occur. Divergence is a phenomenon in which the elastic twist of the wing suddenly becomes theoretically infinite, typically causing the wing to fail. Control reversal is a phenomenon occurring only in wings with ailerons or other control surfaces, in which these control surfaces reverse their usual functionality (e.g., the rolling direction associated with a given aileron moment is reversed).
Divergence
Divergence occurs when a lifting surface deflects under aerodynamic load in a direction which further increases lift in a positive feedback loop. The increased lift deflects the structure further, which eventually brings the structure to the point of divergence. Unlike flutter, another aeroelastic problem which involves oscillations, divergence deflects the lifting surface progressively in one direction, and at the point of divergence the structure deforms and typically fails.
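For the textbook "typical section" idealisation (a rigid two-dimensional aerofoil restrained by a torsional spring, a simplification not described above), the divergence condition can be written explicitly. With torsional stiffness per unit span K_\alpha, chord c, lift-curve slope C_{L_\alpha} and distance e between the aerodynamic centre and the elastic axis, the elastic twist grows without bound when the dynamic pressure reaches approximately:

```latex
q_D \;=\; \frac{K_\alpha}{e\, c\, C_{L_\alpha}}
```

Stiffening the structure in torsion (increasing K_\alpha) or moving the elastic axis closer to the aerodynamic centre (reducing e) therefore raises the divergence speed.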
Control reversal
Control surface reversal is the loss (or reversal) of the expected response of a control surface, due to deformation of the main lifting surface. For simple models (e.g. single aileron on an Euler-Bernoulli beam), control reversal speeds can be derived analytically as for torsional divergence. Control reversal can be used to aerodynamic advantage, and forms part of the Kaman servo-flap rotor design.
Dynamic aeroelasticity
Dynamic aeroelasticity studies the interactions among aerodynamic, elastic, and inertial forces. Examples of dynamic aeroelastic phenomena are:
Flutter
Flutter is a dynamic instability of an elastic structure in a fluid flow, caused by positive feedback between the body's deflection and the force exerted by the fluid flow. In a linear system, "flutter point" is the point at which the structure is undergoing simple harmonic motion—zero net damping—and so any further decrease in net damping will result in a self-oscillation and eventual failure. "Net damping" can be understood as the sum of the structure's natural positive damping and the negative damping of the aerodynamic force. Flutter can be classified into two types: hard flutter, in which the net damping decreases very suddenly, very close to the flutter point; and soft flutter, in which the net damping decreases gradually.
In water the mass ratio of the pitch inertia of the foil to that of the circumscribing cylinder of fluid is generally too low for binary flutter to occur, as shown by explicit solution of the simplest pitch and heave flutter stability determinant.
Structures exposed to aerodynamic forces—including wings and aerofoils, but also chimneys and bridges—are generally designed carefully within known parameters to avoid flutter. Blunt shapes, such as chimneys, can give off a continuous stream of vortices known as a Kármán vortex street, which can induce structural oscillations. Strakes are typically wrapped around chimneys to stop the formation of these vortices.
In complex structures where both the aerodynamics and the mechanical properties of the structure are not fully understood, flutter can be discounted only through detailed testing. Even changing the mass distribution of an aircraft or the stiffness of one component can induce flutter in an apparently unrelated aerodynamic component. At its mildest, this can appear as a "buzz" in the aircraft structure, but at its most violent, it can develop uncontrollably with great speed and cause serious damage to the aircraft or lead to its destruction, as in Northwest Airlines Flight 2 in 1938, Braniff Flight 542 in 1959, or the prototypes for Finland's VL Myrsky fighter aircraft in the early 1940s. Famously, the original Tacoma Narrows Bridge was destroyed as a result of aeroelastic fluttering.
Aeroservoelasticity
In some cases, automatic control systems have been demonstrated to help prevent or limit flutter-related structural vibration.
Propeller whirl flutter
Propeller whirl flutter is a special case of flutter involving the aerodynamic and inertial effects of a rotating propeller and the stiffness of the supporting nacelle structure. Dynamic instability can occur involving pitch and yaw degrees of freedom of the propeller and the engine supports leading to an unstable precession of the propeller. Failure of the engine supports led to whirl flutter occurring on two Lockheed L-188 Electra aircraft, in 1959 on Braniff Flight 542 and again in 1960 on Northwest Orient Airlines Flight 710.
Transonic aeroelasticity
Flow is highly non-linear in the transonic regime, dominated by moving shock waves. Avoiding flutter is mission-critical for aircraft that fly through transonic Mach numbers. The role of shock waves was first analyzed by Holt Ashley. A phenomenon that impacts stability of aircraft known as "transonic dip", in which the flutter speed can get close to flight speed, was reported in May 1976 by Farmer and Hanson of the Langley Research Center.
Buffeting
Buffeting is a high-frequency instability caused by airflow separation or shock-wave oscillations from one object striking another. It arises from a sudden impulse of increased load and takes the form of a random forced vibration. Generally it affects the tail unit of the aircraft structure, due to the airflow downstream of the wing.
The methods for buffet detection are:
Pressure coefficient diagram
Pressure divergence at trailing edge
Computing separation from trailing edge based on Mach number
Normal force fluctuating divergence
Prediction and cure
In the period 1950–1970, AGARD developed the Manual on Aeroelasticity which details the processes used in solving and verifying aeroelastic problems along with standard examples that can be used to test numerical solutions.
Aeroelasticity involves not just the external aerodynamic loads and the way they change but also the structural, damping and mass characteristics of the aircraft. Prediction involves making a mathematical model of the aircraft as a series of masses connected by springs and dampers which are tuned to represent the dynamic characteristics of the aircraft structure. The model also includes details of applied aerodynamic forces and how they vary.
The model can be used to predict the flutter margin and, if necessary, test fixes to potential problems. Small carefully chosen changes to mass distribution and local structural stiffness can be very effective in solving aeroelastic problems.
Methods of predicting flutter in linear structures include the p-method, the k-method and the p-k method.
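All three methods are built around an aeroelastic stability eigenproblem of broadly the following schematic form (the notation here is generic rather than taken from any particular textbook):

```latex
\left[\, -\omega^{2} M \;+\; i\,\omega\, C \;+\; K \;-\; q_{\infty}\, Q(ik) \,\right]\,\bar{q} \;=\; 0
```

where M, C and K are the structural mass, damping and stiffness matrices, Q(ik) is the complex, reduced-frequency-dependent matrix of unsteady aerodynamic forces and q_{\infty} is the dynamic pressure. The p-, k- and p-k methods differ mainly in how frequency, damping and reduced frequency are assumed and iterated; in each case the flutter point is identified where the net damping of a mode passes through zero.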
For nonlinear systems, flutter is usually interpreted as a limit cycle oscillation (LCO), and methods from the study of dynamical systems can be used to determine the speed at which flutter will occur.
Media
These videos detail the Active Aeroelastic Wing two-phase NASA-Air Force flight research program to investigate the potential of aerodynamically twisting flexible wings to improve maneuverability of high-performance aircraft at transonic and supersonic speeds, with traditional control surfaces such as ailerons and leading-edge flaps used to induce the twist.
Notable aeroelastic failures
The original Tacoma Narrows Bridge was destroyed as a result of aeroelastic fluttering.
Propeller whirl flutter of the Lockheed L-188 Electra on Braniff Flight 542.
1931 Transcontinental & Western Air Fokker F-10 crash.
Body freedom flutter of the GAF Jindivik drone.
See also
Adaptive compliant wing
Aerospace engineering
Kármán vortex street
Mathematical modeling
Oscillation
Parker Variable Wing
Vortex shedding
Vortex-induced vibration
X-53 Active Aeroelastic Wing
References
Further reading
Bisplinghoff, R. L., Ashley, H. and Halfman, H., Aeroelasticity. Dover Science, 1996, 880 p.
Maurice Biot & L. Arnold (1948) "Low speed flutter and its physical interpretation", Journal of Aeronautical Sciences 15: 232–6
Dowell, E. H., A Modern Course on Aeroelasticity.
Fung, Y. C., An Introduction to the Theory of Aeroelasticity. Dover, 1994.
Hodges, D. H. and Pierce, A., Introduction to Structural Dynamics and Aeroelasticity, Cambridge, 2002.
Wright, J. R. and Cooper, J. E., Introduction to Aircraft Aeroelasticity and Loads, Wiley, 2007.
Hoque, M. E., "Active Flutter Control", LAP Lambert Academic Publishing, Germany, 2010.
Collar, A. R., "The first fifty years of aeroelasticity", Aerospace, vol. 5, no. 2, pp. 12–20, 1978.
Garrick, I. E. and Reed W. H., "Historical development of aircraft flutter", Journal of Aircraft, vol. 18, pp. 897–912, Nov. 1981.
External links
Aeroelasticity Branch – NASA Langley Research Center
DLR Institute of Aeroelasticity
National Aerospace Laboratory
The Aeroelasticity Group – Texas A&M University
NACA Technical Reports – NASA Langley Research Center
NASA Aeroelasticity Handbook
Aerodynamics
Aircraft wing design
Aerospace engineering
Solid mechanics
Elasticity (physics)
Articles containing video clips | Aeroelasticity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,525 | [
"Solid mechanics",
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"Aerodynamics",
"Mechanics",
"Aerospace engineering",
"Physical properties",
"Fluid dynamics"
] |
75,367 | https://en.wikipedia.org/wiki/Non-Newtonian%20fluid | In physics and chemistry, a non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity, that is, it has variable viscosity dependent on stress. In particular, the viscosity of non-Newtonian fluids can change when subjected to force. Ketchup, for example, becomes runnier when shaken and is thus a non-Newtonian fluid. Many salt solutions and molten polymers are non-Newtonian, as are many commonly found substances such as custard, toothpaste, starch suspensions, corn starch, paint, blood, melted butter and shampoo.
Most commonly, the viscosity (the measure of a fluid's resistance to gradual deformation by shear or tensile stresses) of non-Newtonian fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit normal stress differences or other non-Newtonian behavior. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent. Therefore, a constant coefficient of viscosity cannot be defined.
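A common empirical way to express such a nonlinear relation is the power-law (Ostwald–de Waele) model, which is not otherwise introduced in this article and replaces the single coefficient of viscosity with two parameters:

```latex
\tau \;=\; K\,\dot{\gamma}^{\,n},
\qquad
\eta_{\mathrm{app}} \;=\; \frac{\tau}{\dot{\gamma}} \;=\; K\,\dot{\gamma}^{\,n-1}
```

Here \tau is the shear stress, \dot{\gamma} the shear rate, K the flow consistency index and n the flow behaviour index: n = 1 recovers a Newtonian fluid, n < 1 describes shear-thinning (pseudoplastic) behaviour and n > 1 describes shear-thickening (dilatant) behaviour.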
Although the concept of viscosity is commonly used in fluid mechanics to characterize the shear properties of a fluid, it can be inadequate to describe non-Newtonian fluids. They are best studied through several other rheological properties that relate stress and strain rate tensors under many different flow conditions—such as oscillatory shear or extensional flow—which are measured using different devices or rheometers. The properties are better studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics.
With respect to viscosity, non-Newtonian fluids exhibit time-independent behaviors (pseudoplastic, plastic, and dilatant flows) and time-dependent behaviors (thixotropic and rheopectic flows). Three well-known time-dependent non-Newtonian fluid models, identified by their defining authors, are the Oldroyd-B model, Walters’ Liquid B, and Williamson fluids.
A time-dependent self-similar analysis of the Ladyzenskaya-type model with a non-linear, velocity-dependent stress tensor has been performed; unfortunately, no analytical solutions could be derived, although a rigorous mathematical existence theorem was given for the solution.
For time-independent non-Newtonian fluids, the range of known analytic solutions is much broader.
Types of non-Newtonian behavior
Summary
Shear thickening fluid
The viscosity of a shear thickening (i.e. dilatant) fluid appears to increase when the shear rate increases. Corn starch suspended in water ("oobleck", see below) is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid.
Shear thinning fluid
A familiar example of the opposite, a shear thinning fluid, or pseudoplastic fluid, is wall paint: The paint should flow readily off the brush when it is being applied to a surface but not drip excessively. Note that all thixotropic fluids are extremely shear thinning, but they are significantly time dependent, whereas the colloidal "shear thinning" fluids respond instantaneously to changes in shear rate. Thus, to avoid confusion, the latter classification is more clearly termed pseudoplastic.
Another example of a shear thinning fluid is blood. This application is highly favoured within the body, as it allows the viscosity of blood to decrease with increased shear strain rate.
Bingham plastic
Fluids that have a linear shear stress/shear strain relationship but require a finite yield stress before they begin to flow (the plot of shear stress against shear strain does not pass through the origin) are called Bingham plastics. Several examples are clay suspensions, drilling mud, toothpaste, mayonnaise, chocolate, and mustard. The surface of a Bingham plastic can hold peaks when it is still. By contrast Newtonian fluids have flat featureless surfaces when still.
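In its simplest textbook form, this behaviour is described by the Bingham model, which adds a yield stress \tau_0 to an otherwise linear ("plastic") viscosity term (the symbols here follow common textbook usage rather than this article):

```latex
\dot{\gamma} = 0 \;\;\text{for}\;\; |\tau| < \tau_0,
\qquad
\tau = \tau_0 + \mu_p\,\dot{\gamma} \;\;\text{for}\;\; |\tau| \ge \tau_0
```

Below the yield stress the material does not flow at all, which is why a Bingham plastic can hold peaks when at rest.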
Rheopectic or anti-thixotropic
There are also fluids whose strain rate is a function of time. Fluids that require a gradually increasing shear stress to maintain a constant strain rate are referred to as rheopectic. An opposite case of this is a fluid that thins out with time and requires a decreasing stress to maintain a constant strain rate (thixotropic).
Examples
Many common substances exhibit non-Newtonian flows. These include:
Soap solutions, cosmetics, and toothpaste
Food such as butter, cheese, jam, mayonnaise, soup, taffy, and yogurt
Natural substances such as magma, lava, gums, honey, and extracts such as vanilla extract
Biological fluids such as blood, saliva, semen, mucus, and synovial fluid
Slurries such as cement slurry and paper pulp, emulsions such as mayonnaise, and some kinds of dispersions
Oobleck
An inexpensive, non-toxic example of a non-Newtonian fluid is a suspension of starch (e.g., cornstarch/cornflour) in water, sometimes called "oobleck", "ooze", or "magic mud" (1 part of water to 1.5–2 parts of corn starch). The name "oobleck" is derived from the Dr. Seuss book Bartholomew and the Oobleck.
Because of its dilatant properties, oobleck is often used in demonstrations that exhibit its unusual behavior. A person may walk on a large tub of oobleck without sinking due to its shear thickening properties, as long as the individual moves quickly enough to provide enough force with each step to cause the thickening. Also, if oobleck is placed on a large subwoofer driven at a sufficiently high volume, it will thicken and form standing waves in response to low frequency sound waves from the speaker. If a person were to punch or hit oobleck, it would thicken and act like a solid. After the blow, the oobleck will go back to its thin liquid-like state.
Flubber (slime)
Flubber, also commonly known as slime, is a non-Newtonian fluid, easily made from polyvinyl alcohol–based glues (such as white "school" glue) and borax. It flows under low stresses but breaks under higher stresses and pressures. This combination of fluid-like and solid-like properties makes it a Maxwell fluid. Its behaviour can also be described as being viscoplastic or gelatinous.
Chilled caramel topping
Another example of non-Newtonian fluid flow is chilled caramel ice cream topping (so long as it incorporates hydrocolloids such as carrageenan and gellan gum). The sudden application of force—by stabbing the surface with a finger, for example, or rapidly inverting the container holding it—causes the fluid to behave like a solid rather than a liquid. This is the "shear thickening" property of this non-Newtonian fluid. More gentle treatment, such as slowly inserting a spoon, will leave it in its liquid state. Trying to jerk the spoon back out again, however, will trigger the return of the temporary solid state.
Silly Putty
Silly Putty is a silicone polymer based suspension that will flow, bounce, or break, depending on strain rate.
Plant resin
Plant resin is a viscoelastic solid polymer. When left in a container, it will flow slowly as a liquid to conform to the contours of its container. If struck with greater force, however, it will shatter as a solid.
Quicksand
Quicksand is a shear thinning non-Newtonian colloid that gains viscosity at rest. Quicksand's non-Newtonian properties can be observed when it experiences a slight shock (for example, when someone walks on it or agitates it with a stick), shifting between its gel and sol phase and seemingly liquefying, causing objects on the surface of the quicksand to sink.
Ketchup
Ketchup is a shear thinning fluid. Shear thinning means that the fluid viscosity decreases with increasing shear stress. In other words, fluid motion is initially difficult at slow rates of deformation, but will flow more freely at high rates. Shaking an inverted bottle of ketchup can cause it to transition to a lower viscosity through shear thinning, making it easier to pour from the bottle.
Dry granular flows
Under certain circumstances, flows of granular materials can be modelled as a continuum, for example using the μ(I) rheology. Such continuum models tend to be non-Newtonian, since the apparent viscosity of granular flows increases with pressure and decreases with shear rate. The main difference is the shearing stress and rate of shear.
See also
Complex fluid
Dilatant
Dissipative particle dynamics
Generalized Newtonian fluid
Herschel–Bulkley fluid
Liquefaction
Navier–Stokes equations
Newtonian fluid
Pseudoplastic
Quicksand
Quick clay
Rheology
Superfluids
Thixotropy
Weissenberg effect
References
External links
Continuum mechanics
Fluid dynamics
Viscosity
Polymers
Tribology | Non-Newtonian fluid | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,937 | [
"Tribology",
"Physical phenomena",
"Physical quantities",
"Continuum mechanics",
"Chemical engineering",
"Classical mechanics",
"Surface science",
"Materials science",
"Polymer chemistry",
"Mechanical engineering",
"Piping",
"Polymers",
"Wikipedia categories named after physical quantities",... |
75,485 | https://en.wikipedia.org/wiki/Electrical%20discharge%20machining | Electrical discharge machining (EDM), also known as spark machining, spark eroding, die sinking, wire burning or wire erosion, is a metal fabrication process whereby a desired shape is obtained by using electrical discharges (sparks). Material is removed from the work piece by a series of rapidly recurring current discharges between two electrodes, separated by a dielectric liquid and subject to an electric voltage. One of the electrodes is called the tool-electrode, or simply the tool or electrode, while the other is called the workpiece-electrode, or work piece. The process depends upon the tool and work piece not making physical contact. Extremely hard materials like carbides, ceramics, titanium alloys and heat treated tool steels that are very difficult to machine using conventional machining can be precisely machined by EDM.
When the voltage between the two electrodes is increased, the intensity of the electric field in the volume between the electrodes becomes greater, causing dielectric breakdown of the liquid, and produces an electric arc. As a result, material is removed from the electrodes. Once the current stops (or is stopped, depending on the type of generator), new liquid dielectric is conveyed into the inter-electrode volume, enabling the solid particles (debris) to be carried away and the insulating properties of the dielectric to be restored. Adding new liquid dielectric in the inter-electrode volume is commonly referred to as flushing. After a current flow, the voltage between the electrodes is restored to what it was before the breakdown, so that a new liquid dielectric breakdown can occur to repeat the cycle.
History
The erosive effect of electrical discharges was first noted in 1770 by English physicist Joseph Priestley.
Die-sink EDM
Two Soviet scientists, B. R. Lazarenko and N. I. Lazarenko, were tasked in 1943 to investigate ways of preventing the erosion of tungsten electrical contacts due to sparking. They failed in this task but found that the erosion was more precisely controlled if the electrodes were immersed in a dielectric fluid. This led them to invent an EDM machine used for working difficult-to-machine materials such as tungsten. The Lazarenkos' machine is known as an R-C-type machine, after the resistor–capacitor circuit (RC circuit) used to charge the electrodes.
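A rough sketch of the quantities involved in such an R-C ("relaxation") generator, using ordinary RC-circuit formulas with purely illustrative component values (none of these figures are taken from the Lazarenkos' machine):

```python
import math

def rc_spark_parameters(supply_v, breakdown_v, resistance_ohm, capacitance_f):
    """Energy per discharge and approximate spark rate of an idealized R-C relaxation generator."""
    # energy stored in the capacitor at the instant the gap breaks down: E = 1/2 * C * V^2
    energy_j = 0.5 * capacitance_f * breakdown_v ** 2
    # time to charge the capacitor from 0 V to the breakdown voltage through the resistor,
    # neglecting the (much shorter) discharge time itself
    charge_time_s = resistance_ohm * capacitance_f * math.log(supply_v / (supply_v - breakdown_v))
    return energy_j, 1.0 / charge_time_s  # (joules per spark, sparks per second)

# Illustrative values: 200 V supply, gap breaking down at 100 V, 50 ohm, 10 microfarads
energy, rate = rc_spark_parameters(200.0, 100.0, 50.0, 10e-6)
print(f"about {energy * 1000:.0f} mJ per spark, roughly {rate:.0f} sparks per second")
```

In a real machine the breakdown voltage depends on the gap width and the condition of the dielectric, so the discharge rate is far less regular than this idealized calculation suggests.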
Simultaneously but independently, an American team, Harold Stark, Victor Harding, and Jack Beaver, developed an EDM machine for removing broken drills and taps from aluminium castings. Initially constructing their machines from under-powered electric-etching tools, they were not very successful. But more powerful sparking units, combined with automatic spark repetition and fluid replacement with an electromagnetic interrupter arrangement produced practical machines. Stark, Harding, and Beaver's machines produced 60 sparks per second. Later machines based on their design used vacuum tube circuits that produced thousands of sparks per second, significantly increasing the speed of cutting.
Wire-cut EDM
The wire-cut type of machine arose in the 1960s for making tools (dies) from hardened steel. The tool electrode in wire EDM is simply a wire. To avoid the erosion of the wire causing it to break, the wire is wound between two spools so that the active part of the wire is constantly changing. The earliest numerical controlled (NC) machines were conversions of punched-tape vertical milling machines. The first commercially available NC machine built as a wire-cut EDM machine was manufactured in the USSR in 1967. Machines that could optically follow lines on a master drawing were developed by David H. Dulebohn's group in the 1960s at Andrew Engineering Company for milling and grinding machines. Master drawings were later produced by computer numerical controlled (CNC) plotters for greater accuracy. A wire-cut EDM machine using the CNC drawing plotter and optical line follower techniques was produced in 1974. Dulebohn later used the same plotter CNC program to directly control the EDM machine, and the first CNC EDM machine was produced in 1976.
Commercial wire EDM capability and use has advanced substantially during recent decades. Feed rates have increased and surface finish can be finely controlled.
Generalities
Electrical discharge machining is a machining method primarily used for hard metals or those that would be very difficult to machine with traditional techniques. EDM typically works with materials that are electrically conductive, although methods have also been proposed for using EDM to machine insulating ceramics. EDM can cut intricate contours or cavities in pre-hardened steel without the need for heat treatment to soften and re-harden them. This method can be used with any other metal or metal alloy such as titanium, hastelloy, kovar, and inconel. Also, applications of this process to shape polycrystalline diamond tools have been reported.
EDM is often included in the "non-traditional" or "non-conventional" group of machining methods together with processes such as electrochemical machining (ECM), water jet cutting (WJ, AWJ), laser cutting, and opposite to the "conventional" group (turning, milling, grinding, drilling, and any other process whose material removal mechanism is essentially based on mechanical forces).
Ideally, EDM can be seen as a series of breakdowns and restorations of the liquid dielectric between the electrodes. However, caution should be exercised in considering such a statement because it is an idealized model of the process, introduced to describe the fundamental ideas underlying the process. Any practical application involves many aspects that may also need to be considered. For instance, the removal of the debris from the inter-electrode volume is likely to be always partial. Thus the electrical properties of the dielectric in the inter-electrode volume can differ from their nominal values and can even vary with time. The inter-electrode distance, often also referred to as the spark-gap, is the result of the control algorithms of the specific machine used. The control of this distance is logically central to the process. Also, not every discharge across the dielectric is of the ideal type described above: the spark-gap can be short-circuited by the debris. The control system of the electrode may fail to react quickly enough to prevent the two electrodes (tool and workpiece) from coming into contact, with a consequent short circuit. This is unwanted because a short circuit contributes to material removal differently from the ideal case. The flushing action can be inadequate to restore the insulating properties of the dielectric, so that discharges repeatedly occur at the same point of the inter-electrode volume (this is referred to as arcing), with a consequent unwanted change of shape (damage) of the tool-electrode and workpiece. Ultimately, describing this process in a way suited to the specific purpose at hand is what makes EDM such a rich field for further investigation and research.
To obtain a specific geometry, the EDM tool is guided along the desired path very close to the work; ideally it should not touch the workpiece, although in reality this may happen due to the performance of the specific motion control in use. In this way, a large number of current discharges (colloquially also called sparks) happen, each contributing to the removal of material from both tool and workpiece, where small craters are formed. The size of the craters is a function of the technological parameters set for the specific job at hand. Typical crater dimensions range from the nanoscale (in micro-EDM operations) to some hundreds of micrometers in roughing conditions.
The presence of these small craters on the tool results in the gradual erosion of the electrode. This erosion of the tool-electrode is also referred to as wear. Strategies are needed to counteract the detrimental effect of the wear on the geometry of the workpiece. One possibility is that of continuously replacing the tool-electrode during a machining operation. This is what happens if a continuously replaced wire is used as electrode. In this case, the correspondent EDM process is also called wire EDM. The tool-electrode can also be used in such a way that only a small portion of it is actually engaged in the machining process and this portion is changed on a regular basis. This is, for instance, the case when using a rotating disk as a tool-electrode. The corresponding process is often also referred to as EDM grinding.
A further strategy consists in using a set of electrodes with different sizes and shapes during the same EDM operation. This is often referred to as multiple electrode strategy, and is most common when the tool electrode replicates in negative the wanted shape and is advanced towards the blank along a single direction, usually the vertical direction (i.e. z-axis). This resembles the sink of the tool into the dielectric liquid in which the workpiece is immersed, so, not surprisingly, it is often referred to as die-sinking EDM (also called conventional EDM and ram EDM). The corresponding machines are often called sinker EDM. Usually, the electrodes of this type have quite complex forms. If the final geometry is obtained using a usually simple-shaped electrode which is moved along several directions and is possibly also subject to rotations, often the term EDM milling is used.
In any case, the severity of the wear is strictly dependent on the technological parameters used in the operation (for instance: polarity, maximum current, open circuit voltage). For example, in micro-EDM, also known as μ-EDM, these parameters are usually set at values which generates severe wear. Therefore, wear is a major problem in that area.
The problem of wear to graphite electrodes is being addressed. In one approach, a digital generator, controllable within milliseconds, reverses polarity as electro-erosion takes place. That produces an effect similar to electroplating that continuously deposits the eroded graphite back on the electrode. In another method, a so-called "Zero Wear" circuit reduces how often the discharge starts and stops, keeping it on for as long a time as possible.
Definition of the technological parameters
Difficulties have been encountered in the definition of the technological parameters that drive the process.
Two broad categories of generators, also known as power supplies, are in use on EDM machines commercially available: the group based on RC circuits and the group based on transistor-controlled pulses.
In both categories, the primary parameters at setup are the current and frequency delivered. In RC circuits, however, little control is expected over the time duration of the discharge, which is likely to depend on the actual spark-gap conditions (size and pollution) at the moment of the discharge. Also, the open circuit voltage (i.e. the voltage between the electrodes when the dielectric is not yet broken) can be identified with the steady-state voltage of the RC circuit.
In generators based on transistor control, the user is usually able to deliver a train of pulses of voltage to the electrodes. Each pulse can be controlled in shape, for instance, quasi-rectangular. In particular, the time between two consecutive pulses and the duration of each pulse can be set. The amplitude of each pulse constitutes the open circuit voltage. Thus, the maximum duration of discharge is equal to the duration of a pulse of voltage in the train. Two current pulses are therefore expected to be separated by at least the time interval between two consecutive voltage pulses.
The maximum current during a discharge that the generator delivers can also be controlled. Because other sorts of generators may also be used by different machine builders, the parameters that may actually be set on a particular machine will depend on the generator manufacturer. The details of the generators and control systems on their machines are not always easily available to their user. This is a barrier to describing unequivocally the technological parameters of the EDM process. Moreover, the parameters affecting the phenomena occurring between tool and electrode are also related to the controller of the motion of the electrodes.
A framework to define and measure the electrical parameters during an EDM operation directly on inter-electrode volume with an oscilloscope external to the machine has been recently proposed by Ferri et al. These authors conducted their research in the field of μ-EDM, but the same approach can be used in any EDM operation. This would enable the user to estimate directly the electrical parameters that affect their operations without relying upon machine manufacturer's claims. When machining different materials in the same setup conditions, the actual electrical parameters of the process are significantly different.
Material removal mechanism
The first serious attempt at providing a physical explanation of the material removal during electric discharge machining is perhaps that of Van Dijck. Van Dijck presented a thermal model together with a computational simulation to explain the phenomena between the electrodes during electric discharge machining. However, as Van Dijck himself admitted in his study, the number of assumptions made to overcome the lack of experimental data at that time was quite significant.
Further models of what occurs during electric discharge machining in terms of heat transfer were developed in the late eighties and early nineties. It resulted in three scholarly papers: the first presenting a thermal model of material removal on the cathode, the second presenting a thermal model for the erosion occurring on the anode and the third introducing a model describing the plasma channel formed during the passage of the discharge current through the dielectric liquid. Validation of these models is supported by experimental data provided by AGIE.
These models give the most authoritative support for the claim that EDM is a thermal process, removing material from the two electrodes because of melting or vaporization, along with pressure dynamics established in the spark-gap by the collapsing of the plasma channel. However, for small discharge energies the models are inadequate to explain the experimental data. All these models hinge on a number of assumptions from such disparate research areas as submarine explosions, discharges in gases, and failure of transformers, so it is not surprising that alternative models have been proposed more recently in the literature trying to explain the EDM process.
Among these, the model from Singh and Ghosh reconnects the removal of material from the electrode to the presence of an electrical force on the surface of the electrode that could mechanically remove material and create the craters. This would be possible because the material on the surface has altered mechanical properties due to an increased temperature caused by the passage of electric current. The authors' simulations showed how they might explain EDM better than a thermal model (melting or evaporation), especially for small discharge energies, which are typically used in μ-EDM and in finishing operations.
Given the many available models, it appears that the material removal mechanism in EDM is not yet well understood and that further investigation is necessary to clarify it, especially considering the lack of experimental scientific evidence to build and validate the current EDM models. This explains an increased current research effort in related experimental techniques.
Types
Sinker EDM
Sinker EDM, also called ram EDM, cavity type EDM or volume EDM, consists of an electrode and workpiece submerged in an insulating liquid, most typically oil or, less frequently, another dielectric fluid. The electrode and workpiece are connected to a suitable power supply. The power supply generates an electrical potential between the two parts. As the electrode approaches the workpiece, dielectric breakdown occurs in the fluid, forming a plasma channel, and a small spark jumps.
These sparks usually strike one at a time, because it is very unlikely that different locations in the inter-electrode space have the identical local electrical characteristics which would enable a spark to occur simultaneously in all such locations. These sparks happen in huge numbers at seemingly random locations between the electrode and the workpiece. As the base metal is eroded, and the spark gap subsequently increased, the electrode is lowered automatically by the machine so that the process can continue uninterrupted. Several hundred thousand sparks occur per second, with the actual duty cycle carefully controlled by the setup parameters. These controlling cycles are sometimes known as "on time" and "off time", which are more formally defined in the literature.
The on time setting determines the length or duration of the spark. Hence, a longer on time produces a deeper cavity from each spark, creating a rougher finish on the workpiece. The reverse is true for a shorter on time. Off time is the period of time between sparks. Although not directly affecting the machining of the part, the off time allows the flushing of dielectric fluid through a nozzle to clean out the eroded debris. Insufficient debris removal can cause repeated strikes in the same location which can lead to a short circuit. Modern controllers monitor the characteristics of the arcs and can alter parameters in microseconds to compensate. The typical part geometry is a complex 3D shape, often with small or odd shaped angles. Vertical, orbital, vectorial, directional, helical, conical, rotational, spin, and indexing machining cycles are also used.
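To make the relation between these settings and the spark rate concrete, here is a minimal arithmetic sketch (the microsecond values are hypothetical illustrations, not recommended machine settings):

```python
def spark_timing(on_time_us: float, off_time_us: float):
    """Return (sparks per second, duty cycle) for given on/off times in microseconds."""
    period_us = on_time_us + off_time_us      # one complete on/off cycle
    sparks_per_second = 1e6 / period_us       # at most one discharge per cycle
    duty_cycle = on_time_us / period_us       # fraction of time the discharge is on
    return sparks_per_second, duty_cycle

# Hypothetical example: 3 us on, 7 us off -> 100,000 sparks/s at a 30% duty cycle.
freq, duty = spark_timing(3.0, 7.0)
print(f"{freq:,.0f} sparks/s, duty cycle {duty:.0%}")
```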
Wire EDM
In wire electrical discharge machining (WEDM), also known as wire-cut EDM and wire cutting, a thin single-strand metal wire, usually brass, is fed through the workpiece, submerged in a tank of dielectric fluid, typically deionized water. Wire-cut EDM is typically used to cut plates as thick as 300 mm and to make punches, tools, and dies from hard metals that are difficult to machine with other methods.
The wire, which is constantly fed from a spool, is held between upper and lower diamond guides which are centered in a water nozzle head. The guides, usually CNC-controlled, move in the x–y plane. On most machines, the upper guide can also move independently in the z–u–v axes, giving rise to the ability to cut tapered and transitioning shapes (circle on the bottom, square at the top for example). The upper guide can control axis movements in the GCode standard, x–y–u–v–i–j–k–l. This allows the wire-cut EDM to be programmed to cut very intricate and delicate shapes.
The upper and lower diamond guides are usually accurate to , and can have a cutting path or kerf as small as using Ø wire, though the average cutting kerf that achieves the best economic cost and machining time is using Ø brass wire. The reason that the cutting width is greater than the width of the wire is that sparking occurs from the sides of the wire to the work piece, causing erosion. This "overcut" is necessary; for many applications it is adequately predictable and can therefore be compensated for (in micro-EDM, for instance, this is often not the case). Spools of wire are long — an 8 kg spool of 0.25 mm wire is just over 19 kilometers in length. Wire diameter can be as small as and the geometry precision is not far from ± .
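The spool-length figure quoted above can be checked with a short back-of-the-envelope calculation; the brass density used here (about 8.5 g/cm³) is an assumed typical value:

```python
import math

mass_kg = 8.0                 # spool mass from the text
wire_diameter_m = 0.25e-3     # 0.25 mm wire
brass_density_kg_m3 = 8500.0  # assumed typical density of brass

cross_section_m2 = math.pi * (wire_diameter_m / 2) ** 2
wire_volume_m3 = mass_kg / brass_density_kg_m3
length_km = wire_volume_m3 / cross_section_m2 / 1000.0

print(f"approx. {length_km:.1f} km of wire")  # about 19 km, matching the figure above
```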
The wire-cut process uses water as its dielectric fluid, controlling its resistivity and other electrical properties with filters and PID controlled de-ionizer units. The water flushes the cut debris away from the cutting zone. Flushing is an important factor in determining the maximum feed rate for a given material thickness.
Along with tighter tolerances, multi axis EDM wire-cutting machining centers have added features such as multi heads for cutting two parts at the same time, controls for preventing wire breakage, automatic self-threading features in case of wire breakage, and programmable machining strategies to optimize the operation.
Wire-cutting EDM is commonly used when low residual stresses are desired, because it does not require high cutting forces for removal of material. If the energy per pulse is relatively low (as in finishing operations), little change in the mechanical properties of a material is expected due to these low residual stresses, although material that hasn't been stress-relieved can distort in the machining process.
The work piece may undergo a significant thermal cycle, its severity depending on the technological parameters used. Such thermal cycles may cause formation of a recast layer on the part and residual tensile stresses on the work piece. If machining takes place after heat treatment, dimensional accuracy will not be affected by heat treat distortion.
Fast hole drilling EDM
Fast hole drilling EDM was designed for producing fast, accurate, small, deep holes. It is conceptually akin to sinker EDM but the electrode is a rotating tube conveying a pressurized jet of dielectric fluid. It can make a hole an inch deep in about a minute and is a good way to machine holes in materials too hard for twist-drill machining. This EDM drilling type is used largely in the aerospace industry, producing cooling holes into aero blades and other components. It is also used to drill holes in industrial gas turbine blades, in molds and dies, and in bearings.
Applications
Prototype production
The EDM process is most widely used by the mold-making, tool, and die industries, but is becoming a common method of making prototype and production parts, especially in the aerospace, automobile and electronics industries in which production quantities are relatively low. In sinker EDM, a graphite, copper tungsten, or pure copper electrode is machined into the desired (negative) shape and fed into the workpiece on the end of a vertical ram.
Coinage die making
For the creation of dies for producing jewelry and badges, or blanking and piercing (through use of a pancake die) by the coinage (stamping) process, the positive master may be made from sterling silver, since (with appropriate machine settings) the master is significantly eroded and is used only once. The resultant negative die is then hardened and used in a drop hammer to produce stamped flats from cutout sheet blanks of bronze, silver, or low proof gold alloy. For badges these flats may be further shaped to a curved surface by another die. This type of EDM is usually performed submerged in an oil-based dielectric. The finished object may be further refined by hard (glass) or soft (paint) enameling, or electroplated with pure gold or nickel. Softer materials such as silver may be hand engraved as a refinement.
Small hole drilling
Small hole drilling EDM is used in a variety of applications.
On wire-cut EDM machines, small hole drilling EDM is used to make a through hole in a workpiece through which to thread the wire for the wire-cut EDM operation. A separate EDM head specifically for small hole drilling is mounted on a wire-cut machine and allows large hardened plates to have finished parts eroded from them as needed and without pre-drilling.
Small hole EDM is used to drill rows of holes into the leading and trailing edges of turbine blades used in jet engines. Gas flow through these small holes allows the engines to use higher temperatures than otherwise possible. The high-temperature, very hard, single crystal alloys employed in these blades makes conventional machining of these holes with high aspect ratio extremely difficult, if not impossible.
Small hole EDM is also used to create microscopic orifices for fuel system components, spinnerets for synthetic fibers such as rayon, and other applications.
There are also stand-alone small hole drilling EDM machines with an x–y axis also known as a super drill or hole popper that can machine blind or through holes. EDM drills bore holes with a long brass or copper tube electrode that rotates in a chuck with a constant flow of distilled or deionized water flowing through the electrode as a flushing agent and dielectric. The electrode tubes operate like the wire in wire-cut EDM machines, having a spark gap and wear rate. Some small-hole drilling EDMs are able to drill through 100 mm of soft or hardened steel in less than 10 seconds, averaging 50% to 80% wear rate. Holes of 0.3 mm to 6.1 mm can be achieved in this drilling operation. Brass electrodes are easier to machine but are not recommended for wire-cut operations due to eroded brass particles causing "brass on brass" wire breakage, therefore copper is recommended.
Metal disintegration machining
Several manufacturers produce EDM machines for the specific purpose of removing broken cutting tools and fasteners from work pieces. In this application, the process is termed "metal disintegration machining" or MDM. The metal disintegration process removes only the center of the broken tool or fastener, leaving the hole intact and allowing a ruined part to be reclaimed.
Closed-loop manufacturing
Closed-loop manufacturing can improve accuracy and reduce tool costs.
Advantages and disadvantages
EDM is often compared to electrochemical machining.
Advantages of EDM include:
Ability to machine complex shapes that would otherwise be difficult to produce with conventional cutting tools.
Machining of extremely hard material to very close tolerances.
Very small work pieces can be machined where conventional cutting tools may damage the part from excess cutting tool pressure.
There is no direct contact between tool and work piece. Therefore, delicate sections and weak materials can be machined without perceivable distortion.
A good surface finish can be obtained; a very good surface may be obtained by redundant finishing paths.
Very fine holes can be attained.
Tapered holes may be produced.
Pipe or container internal contours and internal corners down to R 0.001".
Disadvantages of EDM include:
Difficulty finding expert machinists.
The slow rate of material removal.
Potential fire hazard associated with use of combustible oil based dielectrics.
The additional time and cost used for creating electrodes for ram/sinker EDM.
Reproducing sharp corners on the workpiece is difficult due to electrode wear.
Power consumption is high; in particular, the specific power consumption (energy used per unit volume of material removed) is very high.
"Overcut" is formed.
Excessive tool wear occurs during machining.
Electrically non-conductive materials can be machined only with specific set-up of the process.
A recast layer is formed at the cut surface due to melting of the material by the arc.
See also
Electrochemical machining
References
Bibliography
External links
New Arc Detection Technology for Highly Efficient Electro-Discharge Machining
Engineering Design For Electrical Discharge Machining
by Steve Mould (Apr 2023)
Electric arcs
Hole making
Machining
Metallurgical processes
Articles containing video clips | Electrical discharge machining | [
"Physics",
"Chemistry",
"Materials_science"
] | 5,433 | [
"Electric arcs",
"Physical phenomena",
"Metallurgy",
"Plasma phenomena",
"Metallurgical processes"
] |
75,629 | https://en.wikipedia.org/wiki/Abstract%20syntax%20tree | An abstract syntax tree (AST) is a data structure used in computer science to represent the structure of a program or code snippet. It is a tree representation of the abstract syntactic structure of text (often source code) written in a formal language. Each node of the tree denotes a construct occurring in the text. It is sometimes called just a syntax tree.
The syntax is "abstract" in the sense that it does not represent every detail appearing in the real syntax, but rather just the structural or content-related details. For instance, grouping parentheses are implicit in the tree structure, so these do not have to be represented as separate nodes. Likewise, a syntactic construct like an if-condition-then statement may be denoted by means of a single node with three branches.
This distinguishes abstract syntax trees from concrete syntax trees, traditionally designated parse trees. Parse trees are typically built by a parser during the source code translation and compiling process. Once built, additional information is added to the AST by means of subsequent processing, e.g., contextual analysis.
Abstract syntax trees are also used in program analysis and program transformation systems.
Application in compilers
Abstract syntax trees are data structures widely used in compilers to represent the structure of program code. An AST is usually the result of the syntax analysis phase of a compiler. It often serves as an intermediate representation of the program through several stages that the compiler requires, and has a strong impact on the final output of the compiler.
Motivation
An AST has several properties that aid the further steps of the compilation process:
An AST can be edited and enhanced with information such as properties and annotations for every element it contains. Such editing and annotation is impossible with the source code of a program, since it would imply changing it.
Compared to the source code, an AST does not include inessential punctuation and delimiters (braces, semicolons, parentheses, etc.).
An AST usually contains extra information about the program, due to the consecutive stages of analysis by the compiler. For example, it may store the position of each element in the source code, allowing the compiler to print useful error messages.
Languages are often ambiguous by nature. In order to avoid this ambiguity, programming languages are often specified as a context-free grammar (CFG). However, there are often aspects of programming languages that a CFG can't express, but are part of the language and are documented in its specification. These are details that require a context to determine their validity and behaviour. For example, if a language allows new types to be declared, a CFG cannot predict the names of such types nor the way in which they should be used. Even if a language has a predefined set of types, enforcing proper usage usually requires some context. Another example is duck typing, where the type of an element can change depending on context. Operator overloading is yet another case where correct usage and final function are context-dependent.
Design
The design of an AST is often closely linked with the design of a compiler and its expected features.
Core requirements include the following:
Variable types must be preserved, as well as the location of each declaration in source code.
The order of executable statements must be explicitly represented and well defined.
Left and right components of binary operations must be stored and correctly identified.
Identifiers and their assigned values must be stored for assignment statements.
These requirements can be used to design the data structure for the AST.
Some operations will always require two elements, such as the two terms for addition. However, some language constructs require an arbitrarily large number of children, such as argument lists passed to programs from the command shell. As a result, an AST used to represent code written in such a language has to also be flexible enough to allow for quick addition of an unknown quantity of children.
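A minimal sketch of a node type that accommodates these requirements, with an arbitrary number of children and a recorded source position (the field names are hypothetical; real compilers use much richer representations):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    kind: str                        # e.g. "BinaryOp", "Call", "If", "Assign"
    value: Optional[str] = None      # operator symbol, identifier name, literal text, ...
    children: List["Node"] = field(default_factory=list)  # any number of children
    line: Optional[int] = None       # source position, kept for error messages

# A call such as f(a, b, c) simply stores one child per argument, so the same
# node type handles any argument count.
call = Node("Call", "f", [Node("Name", "a"), Node("Name", "b"), Node("Name", "c")], line=3)
print(len(call.children))  # 3
```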
To support compiler verification it should be possible to unparse an AST into source code form. The source code produced should be sufficiently similar to the original in appearance and identical in execution, upon recompilation.
The AST is used intensively during semantic analysis, where the compiler checks for correct usage of the elements of the program and the language. The compiler also generates symbol tables based on the AST during semantic analysis. A complete traversal of the tree allows verification of the correctness of the program.
After verifying correctness, the AST serves as the base for code generation. The AST is often used to generate an intermediate representation (IR), sometimes called an intermediate language, for the code generation.
Other usages
AST differencing
AST differencing, or for short tree differencing, consists of computing the list of differences between two ASTs. This list of differences is typically called an edit script. The edit script directly refers to the AST of the code. For instance, an edit action may result in the addition of a new AST node representing a function.
Clone detection
An AST is a powerful abstraction to perform code clone detection.
See also
Abstract semantic graph (ASG), also called term graph
Composite pattern
Control-flow graph
Directed acyclic graph (DAG)
Document Object Model (DOM)
Expression tree
Extended Backus–Naur form
Lisp, a family of languages written in trees, with macros to manipulate code trees
Parse tree, also known as concrete syntax tree
Semantic resolution tree (SRT)
Shunting-yard algorithm
Symbol table
TreeDL
Abstract Syntax Tree Interpreters
References
Further reading
(overview of AST implementation in various language families)
External links
AST View: an Eclipse plugin to visualize a Java abstract syntax tree
eli project: Abstract Syntax Tree Unparsing
(OMG standard).
JavaParser: The JavaParser library provides you with an Abstract Syntax Tree of your Java code. The AST structure then allows you to work with your Java code in an easy programmatic way.
Spoon: A library to analyze, transform, rewrite, and transpile Java source code. It parses source files to build a well-designed AST with powerful analysis and transformation API.
AST Explorer: A website to help visualize ASTs in several popular languages such as Go, Python, Java, and JavaScript.
Trees (data structures)
Formal languages | Abstract syntax tree | [
"Mathematics"
] | 1,299 | [
"Formal languages",
"Mathematical logic"
] |
75,654 | https://en.wikipedia.org/wiki/Hyperthermia | Hyperthermia, also known simply as overheating, is a condition in which an individual's body temperature is elevated beyond normal due to failed thermoregulation. The person's body produces or absorbs more heat than it dissipates. When extreme temperature elevation occurs, it becomes a medical emergency requiring immediate treatment to prevent disability or death. Almost half a million deaths are recorded every year from hyperthermia.
The most common causes include heat stroke and adverse reactions to drugs. Heat stroke is an acute temperature elevation caused by exposure to excessive heat, or combination of heat and humidity, that overwhelms the heat-regulating mechanisms of the body. The latter is a relatively rare side effect of many drugs, particularly those that affect the central nervous system. Malignant hyperthermia is a rare complication of some types of general anesthesia. Hyperthermia can also be caused by a traumatic brain injury.
Hyperthermia differs from fever in that the body's temperature set point remains unchanged. The opposite is hypothermia, which occurs when the temperature drops below that required to maintain normal metabolism. The term is from Greek ὑπέρ, hyper, meaning "above", and θέρμος, thermos, meaning "heat".
Classification
In humans, hyperthermia is defined as a temperature greater than , depending on the reference used, that occurs without a change in the body's temperature set point.
The normal human body temperature can be as high as in the late afternoon. Hyperthermia requires an elevation from the temperature that would otherwise be expected. Such elevations range from mild to extreme; body temperatures above can be life-threatening.
Signs and symptoms
An early stage of hyperthermia can be "heat exhaustion" (or "heat prostration" or "heat stress"), whose symptoms can include heavy sweating, rapid breathing and a fast, weak pulse. If the condition progresses to heat stroke, then hot, dry skin is typical as blood vessels dilate in an attempt to increase heat loss. An inability to cool the body through perspiration may cause dry skin. Hyperthermia from neurological disease may include little or no sweating, cardiovascular problems, and confusion or delirium.
Other signs and symptoms vary. Accompanying dehydration can produce nausea, vomiting, headaches, and low blood pressure and the latter can lead to fainting or dizziness, especially if the standing position is assumed quickly.
In severe heat stroke, confusion and aggressive behavior may be observed. Heart rate and respiration rate will increase (tachycardia and tachypnea) as blood pressure drops and the heart attempts to maintain adequate circulation. The decrease in blood pressure can then cause blood vessels to contract reflexively, resulting in a pale or bluish skin color in advanced cases. Young children, in particular, may have seizures. Eventually, organ failure, unconsciousness and death will result.
Causes
Heat stroke occurs when thermoregulation is overwhelmed by a combination of excessive metabolic production of heat (exertion), excessive environmental heat, and insufficient or impaired heat loss, resulting in an abnormally high body temperature. In severe cases, temperatures can exceed . Heat stroke may be non-exertional (classic) or exertional.
Exertional
Significant physical exertion in hot conditions can generate heat beyond the ability to cool, because, in addition to the heat, humidity of the environment may reduce the efficiency of the body's normal cooling mechanisms. Human heat-loss mechanisms are limited primarily to sweating (which dissipates heat by evaporation, assuming sufficiently low humidity) and vasodilation of skin vessels (which dissipates heat by convection proportional to the temperature difference between the body and its surroundings, according to Newton's law of cooling). Other factors, such as insufficient water intake, consuming alcohol, or lack of air conditioning, can worsen the problem.
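For reference, the convective loss mentioned above takes the usual Newton's-law-of-cooling form (standard textbook notation, not symbols defined in this article):

$$\frac{dQ}{dt} \;=\; h\,A\,\bigl(T_{\text{skin}} - T_{\text{ambient}}\bigr),$$

where $h$ is the convective heat-transfer coefficient and $A$ the exposed body surface area; the heat loss shrinks as the ambient temperature approaches skin temperature and reverses into heat gain once it exceeds it.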
The increase in body temperature that results from a breakdown in thermoregulation affects the body biochemically. Enzymes involved in metabolic pathways within the body such as cellular respiration fail to work effectively at higher temperatures, and further increases can lead them to denature, reducing their ability to catalyse essential chemical reactions. This loss of enzymatic control affects the functioning of major organs with high energy demands such as the heart and brain. Loss of fluid and electrolytes causes heat cramps – slow muscular contractions and severe muscular spasms lasting between one and three minutes. Almost all cases of heat cramps involve vigorous physical exertion. Body temperature may remain normal or a little higher than normal, and cramps are concentrated in heavily used muscles.
Situational
Situational heat stroke occurs in the absence of exertion. It mostly affects the young and elderly. In the elderly in particular, it can be precipitated by medications that reduce vasodilation and sweating, such as anticholinergic drugs, antihistamines, and diuretics. In this situation, the body's tolerance for high environmental temperature may be insufficient, even at rest.
Heat waves are often followed by a rise in the death rate, and these 'classical hyperthermia' deaths typically involve the elderly and infirm. This is partly because thermoregulation involves cardiovascular, respiratory and renal systems which may be inadequate for the additional stress because of the existing burden of aging and disease, further compromised by medications. During the July 1995 heatwave in Chicago, there were at least 700 heat-related deaths. The strongest risk factors were being confined to bed, and living alone, while the risk was reduced for those with working air conditioners and those with access to transportation. Even then, reported deaths may be underestimated as diagnosis can be mis-classified as stroke or heart attack.
Drugs
Some drugs cause excessive internal heat production. The rate of drug-induced hyperthermia is higher where use of these drugs is higher.
Many psychotropic medications, such as selective serotonin reuptake inhibitors (SSRIs), monoamine oxidase inhibitors (MAOIs), and tricyclic antidepressants, can cause hyperthermia. Serotonin syndrome is a rare adverse reaction to overdose of these medications or the use of several simultaneously. Similarly, neuroleptic malignant syndrome is an uncommon reaction to neuroleptic agents. These syndromes are differentiated by other associated symptoms, such as tremor in serotonin syndrome and "lead-pipe" muscle rigidity in neuroleptic malignant syndrome.
Recreational drugs such as amphetamines and cocaine, PCP, dextromethorphan, LSD, and MDMA may cause hyperthermia.
Malignant hyperthermia is a rare reaction to common anesthetic agents (such as halothane) or the paralytic agent succinylcholine. Those who have this reaction, which is potentially fatal, have a genetic predisposition.
The use of anticholinergics, more specifically muscarinic antagonists, is thought to cause mild hyperthermic episodes due to their parasympatholytic effects. With the parasympathetic "rest and digest" system blocked, the sympathetic nervous system, which drives the "fight-or-flight" response, dominates and catecholamine levels rise.
Drugs that decouple oxidative phosphorylation may also cause hyperthermia. From this group of drugs the most well-known is 2,4-dinitrophenol which was used as a weight loss drug until dangers from its use became apparent.
Personal protective equipment
Those working in industry, in the military, or as first responders may be required to wear personal protective equipment (PPE) against hazards such as chemical agents, gases, fire, small arms and improvised explosive devices (IEDs). PPE includes a range of hazmat suits, firefighting turnout gear, body armor and bomb suits, among others. Depending on design, the wearer may be encapsulated in a microclimate, due to an increase in thermal resistance and decrease in vapor permeability. As physical work is performed, the body's natural thermoregulation (i.e. sweating) becomes ineffective. This is compounded by increased work rates, high ambient temperature and humidity levels, and direct exposure to the sun. The net effect is that desired protection from some environmental threats inadvertently increases the threat of heat stress.
The effect of PPE on hyperthermia has been noted in fighting the 2014 Ebola virus epidemic in Western Africa. Doctors and healthcare workers were only able to work for 40 minutes at a time in their protective suits, fearing heat stroke.
Other
Other rare causes of hyperthermia include thyrotoxicosis and an adrenal gland tumor, called pheochromocytoma, both of which can cause increased heat production. Damage to the central nervous system from brain hemorrhage, traumatic brain injury, status epilepticus, and other kinds of injury to the hypothalamus can also cause hyperthermia.
Pathophysiology
A fever occurs when the core temperature is set higher, through the action of the pre-optic region of the anterior hypothalamus. For example, in response to a bacterial or viral infection, certain white blood cells within the blood will release pyrogens which have a direct effect on the anterior hypothalamus, causing body temperature to rise, much like raising the temperature setting on a thermostat.
In contrast, hyperthermia occurs when the body temperature rises without a change in the heat control centers.
Some of the gastrointestinal symptoms of acute exertional heatstroke, such as vomiting, diarrhea, and gastrointestinal bleeding, may be caused by barrier dysfunction and subsequent endotoxemia. Ultraendurance athletes have been found to have significantly increased plasma endotoxin levels. Endotoxin stimulates many inflammatory cytokines, which in turn may cause multiorgan dysfunction. Experimentally, monkeys treated with oral antibiotics prior to induction of heat stroke do not become endotoxemic.
There is scientific support for the concept of a temperature set point; that is, maintenance of an optimal temperature for the metabolic processes that life depends on. Nervous activity in the preoptic-anterior hypothalamus of the brain triggers heat losing (sweating, etc.) or heat generating (shivering and muscle contraction, etc.) activities through stimulation of the autonomic nervous system. The pre-optic anterior hypothalamus has been shown to contain warm sensitive, cool sensitive, and temperature insensitive neurons, to determine the body's temperature setpoint. As the temperature that these neurons are exposed to rises above , the rate of electrical discharge of the warm-sensitive neurons increases progressively. Cold-sensitive neurons increase their rate of electrical discharge progressively below .
Diagnosis
Hyperthermia is generally diagnosed by the combination of unexpectedly high body temperature and a history that supports hyperthermia instead of a fever. Most commonly this means that the elevated temperature has occurred in a hot, humid environment (heat stroke) or in someone taking a drug for which hyperthermia is a known side effect (drug-induced hyperthermia). The presence of signs and symptoms related to hyperthermia syndromes, such as extrapyramidal symptoms characteristic of neuroleptic malignant syndrome, and the absence of signs and symptoms more commonly related to infection-related fevers, are also considered in making the diagnosis.
If fever-reducing drugs lower the body temperature, even if the temperature does not return entirely to normal, then hyperthermia is excluded.
Prevention
When ambient temperature is excessive, humans and many other animals cool themselves below ambient by evaporative cooling of sweat (or other aqueous liquid; saliva in dogs, for example); this helps prevent potentially fatal hyperthermia. The effectiveness of evaporative cooling depends upon humidity. Wet-bulb temperature, which takes humidity into account, or more complex calculated quantities such as wet-bulb globe temperature (WBGT), which also takes solar radiation into account, give useful indications of the degree of heat stress and are used by several agencies as the basis for heat-stress prevention guidelines. (Wet-bulb temperature is essentially the lowest skin temperature attainable by evaporative cooling at a given ambient temperature and humidity.)
A sustained wet-bulb temperature exceeding is likely to be fatal even to fit and healthy people unclothed in the shade next to a fan; at this temperature, environmental heat gain instead of loss occurs. Wet-bulb temperatures have only very rarely exceeded this level anywhere, although significant global warming may change this.
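Wet-bulb temperature can be estimated from ordinary air temperature and relative humidity; the sketch below uses the empirical fit published by Stull (2011), which assumes roughly sea-level pressure and gives an approximation rather than a measurement:

```python
import math

def wet_bulb_stull(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%), using Stull's 2011 empirical formula."""
    T, RH = temp_c, rel_humidity_pct
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH)
            - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

print(round(wet_bulb_stull(20.0, 50.0), 1))  # roughly 13.7 deg C
print(round(wet_bulb_stull(40.0, 75.0), 1))  # mid-30s deg C: dangerous for sustained exposure
```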
In cases of heat stress caused by physical exertion, hot environments, or protective equipment, prevention or mitigation by frequent rest breaks, careful hydration, and monitoring body temperature should be attempted. However, in situations where one must be exposed to a hot environment for a prolonged period or must wear protective equipment, a personal cooling system is required as a matter of health and safety. There are a variety of active or passive personal cooling systems; these can be categorized by their power sources and whether they are person- or vehicle-mounted.
Because of the broad variety of operating conditions, these devices must meet specific requirements concerning their rate and duration of cooling, their power source, and their adherence to health and safety regulations. Among other criteria are the user's need for physical mobility and autonomy. For example, active-liquid systems operate by chilling water and circulating it through a garment; the skin surface area is thereby cooled through conduction. This type of system has proven successful in certain military, law enforcement, and industrial applications. Bomb-disposal technicians wearing special suits to protect against improvised explosive devices (IEDs) use a small, ice-based chiller unit that is strapped to one leg; a liquid-circulating garment, usually a vest, is worn over the torso to maintain a safe core body temperature. By contrast, soldiers traveling in combat vehicles can face microclimate temperatures in excess of and require a multiple-user, vehicle-powered cooling system with rapid connection capabilities. Requirements for hazmat teams, the medical community, and workers in heavy industry vary further.
Treatment
The underlying cause must be removed. Mild hyperthermia caused by exertion on a hot day may be adequately treated through self-care measures, such as increased water consumption and resting in a cool place. Hyperthermia that results from drug exposure requires prompt cessation of that drug, and occasionally the use of other drugs as countermeasures.
Antipyretics (e.g., acetaminophen, aspirin, other nonsteroidal anti-inflammatory drugs) have no role in the treatment of heatstroke because antipyretics interrupt the change in the hypothalamic set point caused by pyrogens; they are not expected to work on a healthy hypothalamus that has been overloaded, as in the case of heatstroke. In this situation, antipyretics actually may be harmful in patients who develop hepatic, hematologic, and renal complications because they may aggravate bleeding tendencies.
When body temperature is significantly elevated, mechanical cooling methods are used to remove heat and to restore the body's ability to regulate its own temperatures. Passive cooling techniques, such as resting in a cool, shady area and removing clothing can be applied immediately. Active cooling methods, such as sponging the head, neck, and trunk with cool water, remove heat from the body and thereby speed the body's return to normal temperatures. When methods such as immersion are impractical, misting the body with water and using a fan have also been shown to be effective.
Sitting in a bathtub of tepid or cool water (immersion method) can remove a significant amount of heat in a relatively short period of time. It was once thought that immersion in very cold water is counterproductive, as it causes vasoconstriction in the skin and thereby prevents heat from escaping the body core. However, a British analysis of various studies stated: "this has never been proven experimentally. Indeed, a recent study using normal volunteers has shown that cooling rates were fastest when the coldest water was used." The analysis concluded that iced water immersion is the most-effective cooling technique for exertional heat stroke. No superior cooling method has been found for non-exertional heat stroke. Thus, aggressive ice-water immersion remains the gold standard for life-threatening heat stroke.
When the body temperature reaches about , or if the affected person is unconscious or showing signs of confusion, hyperthermia is considered a medical emergency that requires treatment in a proper medical facility. Cardiopulmonary resuscitation (CPR) may be necessary if the person goes into cardiac arrest (the heart stops beating). Once in a hospital, more aggressive cooling measures are available, including intravenous hydration, gastric lavage with iced saline, and even hemodialysis to cool the blood.
Epidemiology
Hyperthermia affects those who are unable to regulate their body heat, mainly due to environmental conditions. The main risk factor for hyperthermia is the lack of ability to sweat. People who are dehydrated or who are older may not produce the sweat they need to regulate their body temperature. High heat conditions can put certain groups at risk for hyperthermia including: physically active individuals, soldiers, construction workers, landscapers and factory workers. Some people that do not have access to cooler living conditions, like people with lower socioeconomic status, may have a difficult time fighting the heat. People are at risk for hyperthermia during high heat and dry conditions, most commonly seen in the summer.
Various cases of different types of hyperthermia have been reported. A research study was published in March 2019 that looked into multiple case reports of drug induced hyperthermia. The study concluded that psychotropic drugs such as anti-psychotics, antidepressants, and anxiolytics were associated with an increased heat-related mortality as opposed to the other drugs researched (anticholinergics, diuretics, cardiovascular agents, etc.). A different study was published in June 2019 that examined the association between hyperthermia in older adults and the temperatures in the United States. Hospitalization records of elderly patients in the US between 1991 and 2006 were analyzed and concluded that cases of hyperthermia were observed to be highest in regions with arid climates. The study discussed finding a disproportionately high number of cases of hyperthermia in early seasonal heat waves indicating that people were not yet practicing proper techniques to stay cool and prevent overheating in the early presence of warm, dry weather.
In urban areas people are at an increased susceptibility to hyperthermia. This is due to a phenomenon called the urban heat island effect. Since the 20th century in the United States, the north-central region (Ohio, Indiana, Illinois, Missouri, Iowa, and Nebraska) was the region with the highest morbidity resulting from hyperthermia. Northeastern states had the next highest. Regions least affected by heat wave-related hyperthermia causing death were Southern and Pacific Coastal states. Northern cities in the United States are at greater risk of hyperthermia during heat waves due to the fact that people tend to have a lower minimum mortality temperature at higher latitudes. In contrast, cities residing in lower latitudes within the continental US typically have higher thresholds for ambient temperatures. In India, hundreds die every year from summer heat waves, including more than 2,500 in the year 2015. Later that same summer, the 2015 Pakistani heat wave killed about 2,000 people. An extreme 2003 European heat wave caused tens of thousands of deaths.
Causes of hyperthermia include dehydration, use of certain medications, using cocaine and amphetamines or excessive alcohol use. Bodily temperatures greater than can be diagnosed as a hyperthermic case. As body temperatures increase or excessive body temperatures persist, individuals are at a heightened risk of developing progressive conditions. Greater risk complications of hyperthermia include heat stroke, organ malfunction, organ failure, and death. There are two forms of heat stroke; classical heatstroke and exertional heatstroke. Classical heatstroke occurs from extreme environmental conditions, such as heat waves. Those who are most commonly affected by classical heatstroke are very young, elderly or chronically ill. Exertional heatstroke appears in individuals after vigorous physical activity. Exertional heatstroke is displayed most commonly in healthy 15-50 year old people. Sweating is often present in exertional heatstroke. The associated mortality rate of heatstroke is 40 to 64%.
Research
Hyperthermia can also be deliberately induced using drugs or medical devices, and is being studied and applied in clinical routine as a treatment of some kinds of cancer. Research has shown that medically controlled hyperthermia can shrink tumours. This occurs when a high body temperature damages cancerous cells by destroying proteins and structures within each cell. Hyperthermia has also been researched to investigate whether it causes cancerous tumours to be more prone to radiation as a form of treatment; which as a result has allowed hyperthermia to be used to complement other forms of cancer therapy. Various techniques of hyperthermia in the treatment of cancer include local or regional hyperthermia, as well as whole body techniques.
See also
Effects of climate change on human health
Space blanket
References
External links
Tips to Beat the Heat —American Red Cross
Extreme Heat—CDC Emergency Preparedness and Response
Workplace Safety and Health Topics: Heat Stress—CDC and NIOSH
Excessive Heat Events Guidebook—US EPA
Physiological Responses to Exercise in the Heat—US National Academies
Causes of death
Heat waves
Medical emergencies
Weather and health
Physiology
Thermoregulation | Hyperthermia | [
"Biology"
] | 4,496 | [
"Thermoregulation",
"Homeostasis",
"Physiology"
] |
12,234,815 | https://en.wikipedia.org/wiki/Pentadiene | In organic chemistry, pentadiene is any hydrocarbon with an open chain of five carbons, connected by two single bonds and two double bonds. All those compounds have the same molecular formula . The inventory of pentadienes include:
1,2-pentadiene, or ethyl allene, .
1,3-pentadiene, with two isomers:
cis-1,3-pentadiene.
trans-1,3-pentadiene, also known as piperylene.
1,4-pentadiene, .
2,3-pentadiene, , with two enantiomers (R and S).
Well known derivatives containing pentadiene groups include hexadienes, cyclopentadiene, and especially three fatty acids linoleic acid, α-linolenic acid, and arachidonic acid as well as their triglycerides (fats).
Preparation and basic reactions
1,4-Pentadiene can be prepared from 1,5-pentadiol via the diacetate.
1,3-Pentadiene, like 1,3-butadiene, undergoes a variety of cycloaddition reactions. For example, it forms a sulfolene upon treatment with sulfur dioxide.
Pentadienyl
Pentadienyl refers to the organic radical, anion, or cation with the formula , where z = 0, −1, +1, respectively.
Biochemistry
Methylene-interrupted polyenes are 1,4-pentadiene groups found in the polyunsaturated fatty acids linoleic acid, α-linolenic acid, and arachidonic acid. These pentadiene derivatives are susceptible to lipid peroxidation, far more so than monounsaturated or saturated fatty acids. The basis for this reactivity is the weakness of doubly allylic C-H bonds, leading to pentadienyl radicals. A range of reactions with oxygen occurs. Products include fatty acid hydroperoxides, epoxy-hydroxy polyunsaturated fatty acids, jasmonates, divinylether fatty acids, and leaf aldehydes. Some of these derivatives are signalling molecules, some are used in plant defense (antifeedants), and some are precursors to other metabolites that are used by the plant.
Cyclooxygenases ("COX") are enzymes that generate prostanoids, including thromboxane and prostaglandins such as prostacyclin. Aspirin and ibuprofen exert their effects through inhibition of COX.
Drying and rancidification
Fats containing 1,4-pentadiene groups are drying oils, i.e. film-forming liquids suitable as paints. One practical consequence is that polyunsaturated fatty acids have poor shelf life owing to their tendency toward autoxidation, leading, in the case of edibles, to rancidification. Metals accelerate the degradation.
Organometallic chemistry
In organometallic chemistry, the pentadienyl anion is a ligand, the acyclic analogue of the more common cyclopentadienyl anion. The pentadienyl anion is generated by deprotonation of pentadiene. A number of complexes are known, including bis(pentadienyl) iron, , the "open" analog of ferrocene. Only a few pentadienyl complexes feature simple ligands. More common is the dimethyl derivative 2,4-. Additionally, many pentadienyl ligands are cyclic, being derived from the addition of hydride to η6-arene complexes or hydride abstraction from cyclohexadiene complexes.
The first pentadienyl complex to be reported was derived from protonolysis of a complex of pentadienol:
Treatment of this cation with sodium borohydride gives the pentadiene complex:
Further reading
Juergen Herzler, Jeffrey A. Manion, and Wing Tsang (2001): "1,2-Pentadiene decomposition". International Journal of Chemical Kinetics, volume 33, issue 11, pages 755–767.
References
Anions
Free radicals
Alkadienes | Pentadiene | [
"Physics",
"Chemistry",
"Biology"
] | 903 | [
"Matter",
"Anions",
"Free radicals",
"Senescence",
"Biomolecules",
"Ions"
] |
12,234,905 | https://en.wikipedia.org/wiki/Ribosome-binding%20site | A ribosome binding site, or ribosomal binding site (RBS), is a sequence of nucleotides upstream of the start codon of an mRNA transcript that is responsible for the recruitment of a ribosome during the initiation of translation. Mostly, RBS refers to bacterial sequences, although internal ribosome entry sites (IRES) have been described in mRNAs of eukaryotic cells or viruses that infect eukaryotes. Ribosome recruitment in eukaryotes is generally mediated by the 5' cap present on eukaryotic mRNAs.
Prokaryotes
The RBS in prokaryotes is a region upstream of the start codon. This region of the mRNA has the consensus 5'-AGGAGG-3', also called the Shine-Dalgarno (SD) sequence. The complementary sequence (CCUCCU), called the anti-Shine-Dalgarno (ASD), is contained in the 3' end of the 16S rRNA of the smaller (30S) ribosomal subunit. Upon encountering the Shine-Dalgarno sequence, the ASD of the ribosome base pairs with it, after which translation is initiated.
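As a rough, purely illustrative check of the base pairing described above (the helper function and variable names below are not from any particular library), the Shine-Dalgarno consensus can be recovered as the reverse complement of the anti-Shine-Dalgarno sequence:

    # Minimal sketch: verify that the SD consensus base-pairs with the anti-SD
    # sequence carried by the 3' end of the 16S rRNA, using RNA complementarity.
    RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

    def reverse_complement(rna):
        """Return the reverse complement of an RNA sequence."""
        return "".join(RNA_COMPLEMENT[base] for base in reversed(rna))

    sd_consensus = "AGGAGG"   # 5'-AGGAGG-3' Shine-Dalgarno consensus
    anti_sd = "CCUCCU"        # anti-Shine-Dalgarno, 3' end of the 16S rRNA

    print(reverse_complement(anti_sd))                  # AGGAGG
    print(reverse_complement(anti_sd) == sd_consensus)  # True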
Variations of the 5'-AGGAGG-3' sequence have been found in Archaea as highly conserved 5'-GGTG-3' regions, 5 base pairs upstream of the start site. Additionally, some bacterial initiation regions, such as rpsA in E. coli, completely lack identifiable SD sequences.
Effect on translation initiation rate
Prokaryotic ribosomes begin translation of the mRNA transcript while DNA is still being transcribed. Thus translation and transcription are parallel processes. Bacterial mRNAs are usually polycistronic and contain multiple ribosome binding sites. Translation initiation is the most highly regulated step of protein synthesis in prokaryotes.
The rate of translation depends on two factors:
the rate at which a ribosome is recruited to the RBS
the rate at which a recruited ribosome is able to initiate translation (i.e. the translation initiation efficiency)
The RBS sequence affects both of these factors.
Factors affecting rate of ribosome recruitment
The ribosomal protein S1 binds to adenine sequences upstream of the RBS. Increasing the concentration of adenine upstream of the RBS will increase the rate of ribosome recruitment.
Factors affecting the efficiency of translation initiation
The level of complementarity of the mRNA SD sequence to the ribosomal ASD greatly affects the efficiency of translation initiation. Greater complementarity results in higher initiation efficiency, but only up to a certain point - excessive complementarity is known to paradoxically decrease the rate of translation, as the ribosome is then bound too tightly to proceed downstream.
The optimal distance between the RBS and the start codon is variable - it depends on the portion of the SD sequence encoded in the actual RBS and its distance to the start site of a consensus SD sequence. Optimal spacing increases the rate of translation initiation once a ribosome has been bound. The composition of nucleotides in the spacer region itself was also found to affect the rate of translation initiation in one study.
Heat shock proteins
Secondary structures formed by the RBS can affect the translational efficiency of mRNA, generally inhibiting translation. These secondary structures are formed by H-bonding of the mRNA base pairs and are sensitive to temperature. At a higher-than-usual temperature (~42 °C), the RBS secondary structure of heat shock proteins becomes undone thus allowing ribosomes to bind and initiate translation. This mechanism allows a cell to quickly respond to an increase in temperature.
Eukaryotes
5' cap
Ribosome recruitment in eukaryotes happens when the eukaryotic initiation factor eIF4F and poly(A)-binding protein (PABP) recognize the 5' capped mRNA and recruit the 43S ribosome complex at that location.
Translation initiation happens following recruitment of the ribosome, at the start codon (AUG) found within the Kozak consensus sequence ACCAUGG. Since the Kozak sequence itself is not involved in the recruitment of the ribosome, it is not considered a ribosome binding site.
Internal ribosome entry site (IRES)
Eukaryotic ribosomes are known to bind to transcripts in a mechanism unlike the one involving the 5' cap, at a sequence called the internal ribosome entry site. This process is not dependent on the full set of translation initiation factors (although this depends on the specific IRES) and is commonly found in the translation of viral mRNA.
Gene annotation
The identification of RBSs is used to determine the site of translation initiation in an unannotated sequence. This is referred to as N-terminal prediction. This is especially useful when multiple start codons are situated around the potential start site of the protein coding sequence.
Identification of RBSs is particularly difficult because they tend to be highly degenerate. One approach to identifying RBSs in E. coli uses neural networks; another uses the Gibbs sampling method.
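Neither the neural-network nor the Gibbs-sampling approaches cited above are reproduced here; as a much simpler, purely illustrative sketch, the following snippet ranks candidate windows of an invented upstream sequence by naive similarity to the 5'-AGGAGG-3' consensus:

    # Illustrative only: a naive sliding-window scorer for SD-like sites.
    SD_CONSENSUS = "AGGAGG"

    def sd_similarity(window, consensus=SD_CONSENSUS):
        """Count positions that match the SD consensus (a crude score)."""
        return sum(1 for a, b in zip(window, consensus) if a == b)

    def rank_rbs_candidates(upstream):
        """Score every window of consensus length in the upstream region."""
        k = len(SD_CONSENSUS)
        scored = [(pos, upstream[pos:pos + k], sd_similarity(upstream[pos:pos + k]))
                  for pos in range(len(upstream) - k + 1)]
        return sorted(scored, key=lambda item: item[2], reverse=True)

    # Hypothetical 5' UTR fragment, made up for demonstration:
    utr = "UUCACACAGGAGGUUUACA"
    for pos, window, score in rank_rbs_candidates(utr)[:3]:
        print(pos, window, score)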
History
The Shine-Dalgarno sequence, of the prokaryotic RBS, was discovered by John Shine and Lynn Dalgarno in 1975. The Kozak consensus sequence was first identified by Marilyn Kozak in 1984 while she was in the Department of Biological Sciences at the University of Pittsburgh.
See also
Alpha operon ribosome binding site
Eukaryotic translation
Bacterial translation
Archaeal translation
Gene prediction
References
Protein biosynthesis
Molecular biology
Cell biology | Ribosome-binding site | [
"Chemistry",
"Biology"
] | 1,146 | [
"Protein biosynthesis",
"Cell biology",
"Gene expression",
"Biosynthesis",
"Molecular biology",
"Biochemistry"
] |
12,235,089 | https://en.wikipedia.org/wiki/Small-angle%20X-ray%20scattering | Small-angle X-ray scattering (SAXS) is a small-angle scattering technique by which nanoscale density differences in a sample can be quantified. This means that it can determine nanoparticle size distributions, resolve the size and shape of (monodisperse) macromolecules, determine pore sizes and characteristic distances of partially ordered materials. This is achieved by analyzing the elastic scattering behaviour of X-rays when travelling through the material, recording their scattering at small angles (typically 0.1 – 10°, hence the "Small-angle" in its name). It belongs to the family of small-angle scattering (SAS) techniques along with small-angle neutron scattering, and is typically done using hard X-rays with a wavelength of 0.07 – 0.2 nm. Depending on the angular range in which a clear scattering signal can be recorded, SAXS is capable of delivering structural information of dimensions between 1 and 100 nm, and of repeat distances in partially ordered systems of up to 150 nm. USAXS (ultra-small angle X-ray scattering) can resolve even larger dimensions, as the smaller the recorded angle, the larger the object dimensions that are probed.
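As a back-of-the-envelope illustration of why this angular range corresponds to roughly 1–100 nm, the following sketch converts a scattering angle into the probed length scale via the momentum transfer q; the Cu K-alpha wavelength of about 0.154 nm is an assumed example value, not one quoted above:

    import math

    # Probed real-space dimension d = 2*pi/q, with q = (4*pi/wavelength)*sin(theta),
    # where 2*theta is the full scattering angle.
    def probed_length_nm(scattering_angle_deg, wavelength_nm=0.154):
        theta = math.radians(scattering_angle_deg) / 2.0      # half the scattering angle
        q = 4.0 * math.pi * math.sin(theta) / wavelength_nm   # momentum transfer, 1/nm
        return 2.0 * math.pi / q                              # probed dimension, nm

    for angle in (0.1, 1.0, 10.0):   # degrees, the range quoted in the text
        print(angle, "deg ->", round(probed_length_nm(angle), 2), "nm")
    # 0.1 deg -> ~88 nm, 1 deg -> ~8.8 nm, 10 deg -> ~0.88 nm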
SAXS and USAXS belong to a family of X-ray scattering techniques that are used in the characterization of materials. In the case of biological macromolecules such as proteins, the advantage of SAXS over crystallography is that a crystalline sample is not needed. Furthermore, the properties of SAXS allow investigation of conformational diversity in these molecules. Nuclear magnetic resonance spectroscopy methods encounter problems with macromolecules of higher molecular mass (> 30–40 kDa). However, owing to the random orientation of dissolved or partially ordered molecules, the spatial averaging leads to a loss of information in SAXS compared to crystallography.
Applications
SAXS is used for the determination of the microscale or nanoscale structure of particle systems in terms of such parameters as averaged particle sizes, shapes, distribution, and surface-to-volume ratio. The materials can be solid or liquid and they can contain solid, liquid or gaseous domains (so-called particles) of the same or another material in any combination. Not only particles, but also the structure of ordered systems like lamellae, and fractal-like materials can be studied. The method is accurate, non-destructive and usually requires only a minimum of sample preparation. Applications are very broad and include colloids of all types including interpolyelectrolyte complexes, micelles, microgels, liposomes, polymersomes, metals, cement, oil, polymers, plastics, proteins, foods and pharmaceuticals, and can be found in research as well as in quality control. The X-ray source can be a laboratory source or synchrotron light which provides a higher X-ray flux.
Resonant small-angle X-ray scattering
It is possible to enhance the X-ray scattering yield by matching the energy of the X-ray source to a resonant absorption edge, as is done for resonant inelastic X-ray scattering (RIXS). Different from standard RIXS measurements, the scattered photons are considered to have the same energy as the incident photons.
SAXS instruments
In a SAXS instrument, a monochromatic beam of X-rays is brought to a sample from which some of the X-rays scatter, while most simply go through the sample without interacting with it. The scattered X-rays form a scattering pattern which is then detected at a detector which is typically a 2-dimensional flat X-ray detector situated behind the sample perpendicular to the direction of the primary beam that initially hit the sample. The scattering pattern contains the information on the structure of the sample.
The major problem that must be overcome in SAXS instrumentation is the separation of the weak scattered intensity from the strong main beam. The smaller the desired angle, the more difficult this becomes. The problem is comparable to one encountered when trying to observe a weakly radiant object close to the Sun, like the Sun's corona. Only if the Moon blocks out the main light source does the corona become visible. Likewise, in SAXS the non-scattered beam that merely travels through the sample must be blocked, without blocking the closely adjacent scattered radiation. Most available X-ray sources produce divergent beams and this compounds the problem. In principle the problem could be overcome by focusing the beam, but this is not easy when dealing with X-rays and was previously not done except on synchrotrons where large bent mirrors can be used. This is why most laboratory small angle devices rely on collimation instead.
Laboratory SAXS instruments can be divided into two main groups: point-collimation and line-collimation instruments:
Point-collimation instruments
Point-collimation instruments have pinholes that shape the X-ray beam to a small circular or elliptical spot that illuminates the sample. Thus the scattering is centro-symmetrically distributed around the primary X-ray beam and the scattering pattern in the detection plane consists of circles around the primary beam. Owing to the small illuminated sample volume and the wastefulness of the collimation process—only those photons are allowed to pass that happen to fly in the right direction—the scattered intensity is small and therefore the measurement time is in the order of hours or days in case of very weak scatterers. If focusing optics like bent mirrors or bent monochromator crystals or collimating and monochromating optics like multilayers are used, measurement time can be greatly reduced. Point-collimation allows the orientation of non-isotropic systems (fibres, sheared liquids) to be determined.
Line-collimation instruments
Line-collimation instruments restrict the beam only in one dimension (rather than two as for point collimation) so that the beam cross-section is a long but narrow line. The illuminated sample volume is much larger compared to point-collimation and the scattered intensity at the same flux density is proportionally larger. Thus measuring times with line-collimation SAXS instruments are much shorter compared to point-collimation and are in the range of minutes. A disadvantage is that the recorded pattern is essentially an integrated superposition (a self-convolution) of many adjacent pinhole patterns. The resulting smearing can be easily removed using model-free algorithms or deconvolution methods based on Fourier transformation, but only if the system is isotropic. Line collimation is of great benefit for any isotropic nanostructured materials, e.g. proteins, surfactants, particle dispersion and emulsions.
SAXS instrument manufacturers
SAXS instrument manufacturers include Anton Paar, Austria; Bruker AXS, Germany; Hecus X-Ray Systems Graz, Austria; Malvern Panalytical, the Netherlands; Rigaku Corporation, Japan; Xenocs, France; and Xenocs, United States.
See also
Biological small-angle scattering
GISAS (Grazing-incidence small-angle scattering)
Fluctuation X-ray scattering (FXS)
Wide-angle X-ray scattering
References
External links
SAXS at a Synchrotron
A movie demonstrating small-angle scattering using laserlight on a hair
Small-angle scattering
X-ray scattering | Small-angle X-ray scattering | [
"Chemistry"
] | 1,528 | [
"X-ray scattering",
"Scattering"
] |
12,236,470 | https://en.wikipedia.org/wiki/C4F8 | The molecular formula C4F8 (molar mass: 200.03 g/mol, exact mass: 199.9872 u) may refer to:
Octafluorocyclobutane, or perfluorocyclobutane
Perfluoroisobutene (PFIB)
Molecular formulas | C4F8 | [
"Physics",
"Chemistry"
] | 71 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
12,241,684 | https://en.wikipedia.org/wiki/C4H6O | {{DISPLAYTITLE:C4H6O}}
The molecular formula C4H6O may refer to:
Crotonaldehyde
Cyclobutanone
Dihydrofurans
2,3-Dihydrofuran
2,5-Dihydrofuran
Divinyl ether
Methacrolein
Methyl vinyl ketone
Molecular formulas | C4H6O | [
"Physics",
"Chemistry"
] | 76 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
19,048 | https://en.wikipedia.org/wiki/Mass | Mass is an intrinsic property of a body. It was traditionally believed to be related to the quantity of matter in a body, until the discovery of the atom and particle physics. It was found that different atoms and different elementary particles, theoretically with the same amount of matter, have nonetheless different masses. Mass in modern physics has multiple definitions which are conceptually distinct, but physically equivalent. Mass can be experimentally defined as a measure of the body's inertia, meaning the resistance to acceleration (change of velocity) when a net force is applied. The object's mass also determines the strength of its gravitational attraction to other bodies.
The SI base unit of mass is the kilogram (kg). In physics, mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than balance scale comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity, but it would still have the same mass. This is because weight is a force, while mass is the property that (along with gravity) determines the strength of this force.
In the Standard Model of physics, the mass of elementary particles is believed to be a result of their coupling with the Higgs boson in what is known as the Brout–Englert–Higgs mechanism.
Phenomena
There are several distinct phenomena that can be used to measure mass. Although some theorists have speculated that some of these phenomena could be independent of each other, current experiments have found no difference in results regardless of how it is measured:
Inertial mass measures an object's resistance to being accelerated by a force (represented by the relationship F = ma).
Active gravitational mass determines the strength of the gravitational field generated by an object.
Passive gravitational mass measures the gravitational force exerted on an object in a known gravitational field.
The mass of an object determines its acceleration in the presence of an applied force. The inertia and the inertial mass describe this property of physical bodies at the qualitative and quantitative level respectively. According to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates and is affected by a gravitational field. If a first body of mass mA is placed at a distance r (center of mass to center of mass) from a second body of mass mB, each body is subject to an attractive force Fg = GmAmB/r2, where G is the "universal gravitational constant". This is sometimes referred to as gravitational mass. Repeated experiments since the 17th century have demonstrated that inertial and gravitational mass are identical; since 1915, this observation has been incorporated a priori in the equivalence principle of general relativity.
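A minimal numerical sketch of these two roles of mass, using approximate values for the Earth (the constants and helper names below are illustrative, not part of the article), is:

    # The same m sets the acceleration under a force (a = F/m) and the mutual
    # gravitational attraction Fg = G*mA*mB/r**2.
    G = 6.674e-11            # universal gravitational constant, m^3 kg^-1 s^-2

    def acceleration(force_n, mass_kg):
        """Newton's second law: a = F/m."""
        return force_n / mass_kg

    def gravitational_force(m_a_kg, m_b_kg, r_m):
        """Newton's law of gravitation: F = G*mA*mB/r^2."""
        return G * m_a_kg * m_b_kg / r_m**2

    earth_mass = 5.972e24    # kg
    earth_radius = 6.371e6   # m
    f = gravitational_force(earth_mass, 1.0, earth_radius)   # force on a 1 kg body
    print(f)                      # ~9.8 N
    print(acceleration(f, 1.0))   # ~9.8 m/s^2, the familiar g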
Units of mass
The International System of Units (SI) unit of mass is the kilogram (kg). The kilogram is 1000 grams (g), and was first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. However, because precise measurement of a cubic decimetre of water at the specified temperature and pressure was difficult, in 1889 the kilogram was redefined as the mass of a metal object, and thus became independent of the metre and the properties of water, this being a copper prototype of the grave in 1793, the platinum Kilogramme des Archives in 1799, and the platinum–iridium International Prototype of the Kilogram (IPK) in 1889.
However, the mass of the IPK and its national copies have been found to drift over time. The re-definition of the kilogram and several other units came into effect on 20 May 2019, following a final vote by the CGPM in November 2018. The new definition uses only invariant quantities of nature: the speed of light, the caesium hyperfine frequency, the Planck constant and the elementary charge.
Non-SI units accepted for use with SI units include:
the tonne (t) (or "metric ton"), equal to 1000 kg
the electronvolt (eV), a unit of energy, used to express mass in units of eV/c2 through mass–energy equivalence
the dalton (Da), equal to 1/12 of the mass of a free carbon-12 atom, approximately 1.66×10−27 kg.
Outside the SI system, other units of mass include:
the slug (sl), an Imperial unit of mass (about 14.6 kg)
the pound (lb), a unit of mass (about 0.45 kg), which is used alongside the similarly named pound (force) (about 4.5 N), a unit of force
the Planck mass (about 22 micrograms), a quantity derived from fundamental constants
the solar mass, defined as the mass of the Sun, primarily used in astronomy to compare large masses such as stars or galaxies
the mass of a particle, as identified with its inverse Compton wavelength
the mass of a star or black hole, as identified with its Schwarzschild radius (see the sketch below).
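The last two identifications can be illustrated with a short sketch; the constants are approximate, and the formulas assumed are the reduced Compton wavelength ħ/(mc) and the Schwarzschild radius 2GM/c2:

    HBAR = 1.055e-34   # reduced Planck constant, J s
    C = 2.998e8        # speed of light, m/s
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2

    def reduced_compton_wavelength(mass_kg):
        return HBAR / (mass_kg * C)

    def schwarzschild_radius(mass_kg):
        return 2.0 * G * mass_kg / C**2

    electron_mass = 9.109e-31   # kg
    solar_mass = 1.989e30       # kg
    print(reduced_compton_wavelength(electron_mass))  # ~3.9e-13 m
    print(schwarzschild_radius(solar_mass))           # ~2950 m, roughly 3 km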
Definitions
In physical science, one may distinguish conceptually between at least seven different aspects of mass, or seven physical notions that involve the concept of mass. Every experiment to date has shown these seven values to be proportional, and in some cases equal, and this proportionality gives rise to the abstract concept of mass. There are a number of ways mass can be measured or operationally defined:
Inertial mass is a measure of an object's resistance to acceleration when a force is applied. It is determined by applying a force to an object and measuring the acceleration that results from that force. An object with small inertial mass will accelerate more than an object with large inertial mass when acted upon by the same force. One says the body of greater mass has greater inertia.
Active gravitational mass is a measure of the strength of an object's gravitational flux (gravitational flux is equal to the surface integral of gravitational field over an enclosing surface). Gravitational field can be measured by allowing a small "test object" to fall freely and measuring its free-fall acceleration. For example, an object in free-fall near the Moon is subject to a smaller gravitational field, and hence accelerates more slowly, than the same object would if it were in free-fall near the Earth. The gravitational field near the Moon is weaker because the Moon has less active gravitational mass.
Passive gravitational mass is a measure of the strength of an object's interaction with a gravitational field. Passive gravitational mass is determined by dividing an object's weight by its free-fall acceleration. Two objects within the same gravitational field will experience the same acceleration; however, the object with a smaller passive gravitational mass will experience a smaller force (less weight) than the object with a larger passive gravitational mass.
According to relativity, mass is nothing else than the rest energy of a system of particles, meaning the energy of that system in a reference frame where it has zero momentum. Mass can be converted into other forms of energy according to the principle of mass–energy equivalence. This equivalence is exemplified in a large number of physical processes including pair production, beta decay and nuclear fusion. Pair production and nuclear fusion are processes in which measurable amounts of mass are converted to kinetic energy or vice versa.
Curvature of spacetime is a relativistic manifestation of the existence of mass. Such curvature is extremely weak and difficult to measure. For this reason, curvature was not discovered until after it was predicted by Einstein's theory of general relativity. Extremely precise atomic clocks on the surface of the Earth, for example, are found to measure less time (run slower) when compared to similar clocks in space. This difference in elapsed time is a form of curvature called gravitational time dilation. Other forms of curvature have been measured using the Gravity Probe B satellite.
Quantum mass manifests itself as a difference between an object's quantum frequency and its wave number. The quantum mass of a particle is proportional to the inverse Compton wavelength and can be determined through various forms of spectroscopy. In relativistic quantum mechanics, mass is one of the irreducible representation labels of the Poincaré group.
Weight vs. mass
In everyday usage, mass and "weight" are often used interchangeably. For instance, a person's weight may be stated as 75 kg. In a constant gravitational field, the weight of an object is proportional to its mass, and it is unproblematic to use the same unit for both concepts. But because of slight differences in the strength of the Earth's gravitational field at different places, the distinction becomes important for measurements with a precision better than a few percent, and for places far from the surface of the Earth, such as in space or on other planets. Conceptually, "mass" (measured in kilograms) refers to an intrinsic property of an object, whereas "weight" (measured in newtons) measures an object's resistance to deviating from its current course of free fall, which can be influenced by the nearby gravitational field. No matter how strong the gravitational field, objects in free fall are weightless, though they still have mass.
The force known as "weight" is proportional to mass and acceleration in all situations where the mass is accelerated away from free fall. For example, when a body is at rest in a gravitational field (rather than in free fall), it must be accelerated by a force from a scale or the surface of a planetary body such as the Earth or the Moon. This force keeps the object from going into free fall. Weight is the opposing force in such circumstances and is thus determined by the acceleration of free fall. On the surface of the Earth, for example, an object with a mass of 50 kilograms weighs 491 newtons, which means that 491 newtons is being applied to keep the object from going into free fall. By contrast, on the surface of the Moon, the same object still has a mass of 50 kilograms but weighs only 81.5 newtons, because only 81.5 newtons is required to keep this object from going into a free fall on the moon. Restated in mathematical terms, on the surface of the Earth, the weight W of an object is related to its mass m by W = mg, where g is the acceleration due to Earth's gravitational field (expressed as the acceleration experienced by a free-falling object).
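The figures quoted above can be reproduced with a few lines; the lunar surface gravity of about 1.63 m/s2 is an assumed value, not stated in the text:

    # Weight W = m*g for the same 50 kg mass on the Earth and on the Moon.
    def weight_newtons(mass_kg, g_m_per_s2):
        return mass_kg * g_m_per_s2

    mass = 50.0       # kg, the same everywhere
    g_earth = 9.81    # m/s^2
    g_moon = 1.63     # m/s^2, approximate lunar surface gravity

    print(weight_newtons(mass, g_earth))  # ~491 N, as quoted above
    print(weight_newtons(mass, g_moon))   # ~81.5 N, as quoted above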
For other situations, such as when objects are subjected to mechanical accelerations from forces other than the resistance of a planetary surface, the weight force is proportional to the mass of an object multiplied by the total acceleration away from free fall, which is called the proper acceleration. Through such mechanisms, objects in elevators, vehicles, centrifuges, and the like, may experience weight forces many times those caused by resistance to the effects of gravity on objects, resulting from planetary surfaces. In such cases, the generalized weight W of an object is related to its mass m by the equation W = ma, where a is the proper acceleration of the object caused by all influences other than gravity. (Again, if gravity is the only influence, such as occurs when an object falls freely, its weight will be zero).
Inertial vs. gravitational mass
Although inertial mass, passive gravitational mass and active gravitational mass are conceptually distinct, no experiment has ever unambiguously demonstrated any difference between them. In classical mechanics, Newton's third law implies that active and passive gravitational mass must always be identical (or at least proportional), but the classical theory offers no compelling reason why the gravitational mass has to equal the inertial mass. That it does is merely an empirical fact.
Albert Einstein developed his general theory of relativity starting with the assumption that the inertial and passive gravitational masses are the same. This is known as the equivalence principle.
The particular equivalence often referred to as the "Galilean equivalence principle" or the "weak equivalence principle" has the most important consequence for freely falling objects. Suppose an object has inertial and gravitational masses m and M, respectively. If the only force acting on the object comes from a gravitational field g, the force on the object is F = Mg.
Given this force, the acceleration of the object can be determined by Newton's second law: a = F/m.
Putting these together, the gravitational acceleration is given by a = (M/m)g.
This says that the ratio of gravitational to inertial mass of any object is equal to some constant K if and only if all objects fall at the same rate in a given gravitational field. This phenomenon is referred to as the "universality of free-fall". In addition, the constant K can be taken as 1 by defining our units appropriately.
The first experiments demonstrating the universality of free-fall were—according to scientific 'folklore'—conducted by Galileo, obtained by dropping objects from the Leaning Tower of Pisa. This is most likely apocryphal: he is more likely to have performed his experiments with balls rolling down nearly frictionless inclined planes to slow the motion and increase the timing accuracy. Increasingly precise experiments have been performed, such as those performed by Loránd Eötvös, using the torsion balance pendulum, in 1889. No deviation from universality, and thus from Galilean equivalence, has ever been found, at least to the precision 10−6. More precise experimental efforts are still being carried out.
The universality of free-fall only applies to systems in which gravity is the only acting force. All other forces, especially friction and air resistance, must be absent or at least negligible. For example, if a hammer and a feather are dropped from the same height through the air on Earth, the feather will take much longer to reach the ground; the feather is not really in free-fall because the force of air resistance upwards against the feather is comparable to the downward force of gravity. On the other hand, if the experiment is performed in a vacuum, in which there is no air resistance, the hammer and the feather should hit the ground at exactly the same time (assuming the acceleration of both objects towards each other, and of the ground towards both objects, for its own part, is negligible). This can easily be done in a high school laboratory by dropping the objects in transparent tubes that have the air removed with a vacuum pump. It is even more dramatic when done in an environment that naturally has a vacuum, as David Scott did on the surface of the Moon during Apollo 15.
A stronger version of the equivalence principle, known as the Einstein equivalence principle or the strong equivalence principle, lies at the heart of the general theory of relativity. Einstein's equivalence principle states that within sufficiently small regions of spacetime, it is impossible to distinguish between a uniform acceleration and a uniform gravitational field. Thus, the theory postulates that the force acting on a massive object caused by a gravitational field is a result of the object's tendency to move in a straight line (in other words its inertia) and should therefore be a function of its inertial mass and the strength of the gravitational field.
Origin
In theoretical physics, a mass generation mechanism is a theory which attempts to explain the origin of mass from the most fundamental laws of physics. To date, a number of different models have been proposed which advocate different views of the origin of mass. The problem is complicated by the fact that the notion of mass is strongly related to the gravitational interaction but a theory of the latter has not been yet reconciled with the currently popular model of particle physics, known as the Standard Model.
Pre-Newtonian concepts
Weight as an amount
The concept of amount is very old and predates recorded history. The concept of "weight" would incorporate "amount" and acquire a double meaning that was not clearly recognized as such.
Humans, at some early era, realized that the weight of a collection of similar objects was directly proportional to the number of objects in the collection: W ∝ n,
where W is the weight of the collection of similar objects and n is the number of objects in the collection. Proportionality, by definition, implies that two values have a constant ratio: W/n = constant,
or equivalently W1/W2 = n1/n2.
An early use of this relationship is a balance scale, which balances the force of one object's weight against the force of another object's weight. The two sides of a balance scale are close enough that the objects experience similar gravitational fields. Hence, if they have similar masses then their weights will also be similar. This allows the scale, by comparing weights, to also compare masses.
Consequently, historical weight standards were often defined in terms of amounts. The Romans, for example, used the carob seed (carat or siliqua) as a measurement standard. If an object's weight was equivalent to 1728 carob seeds, then the object was said to weigh one Roman pound. If, on the other hand, the object's weight was equivalent to 144 carob seeds then the object was said to weigh one Roman ounce (uncia). The Roman pound and ounce were both defined in terms of different sized collections of the same common mass standard, the carob seed. The ratio of a Roman ounce (144 carob seeds) to a Roman pound (1728 carob seeds) was therefore 144/1728 = 1/12.
Planetary motion
In 1600 AD, Johannes Kepler sought employment with Tycho Brahe, who had some of the most precise astronomical data available. Using Brahe's precise observations of the planet Mars, Kepler spent the next five years developing his own method for characterizing planetary motion. In 1609, Johannes Kepler published his three laws of planetary motion, explaining how the planets orbit the Sun. In Kepler's final planetary model, he described planetary orbits as following elliptical paths with the Sun at a focal point of the ellipse. Kepler discovered that the square of the orbital period of each planet is directly proportional to the cube of the semi-major axis of its orbit, or equivalently, that the ratio of these two values is constant for all planets in the Solar System.
On 25 August 1609, Galileo Galilei demonstrated his first telescope to a group of Venetian merchants, and in early January 1610, Galileo observed four dim objects near Jupiter, which he mistook for stars. However, after a few days of observation, Galileo realized that these "stars" were in fact orbiting Jupiter. These four objects (later named the Galilean moons in honor of their discoverer) were the first celestial bodies observed to orbit something other than the Earth or Sun. Galileo continued to observe these moons over the next eighteen months, and by the middle of 1611, he had obtained remarkably accurate estimates for their periods.
Galilean free fall
Sometime prior to 1638, Galileo turned his attention to the phenomenon of objects in free fall, attempting to characterize these motions. Galileo was not the first to investigate Earth's gravitational field, nor was he the first to accurately describe its fundamental characteristics. However, Galileo's reliance on scientific experimentation to establish physical principles would have a profound effect on future generations of scientists. It is unclear if these were just hypothetical experiments used to illustrate a concept, or if they were real experiments performed by Galileo, but the results obtained from these experiments were both realistic and compelling. A biography by Galileo's pupil Vincenzo Viviani stated that Galileo had dropped balls of the same material, but different masses, from the Leaning Tower of Pisa to demonstrate that their time of descent was independent of their mass. In support of this conclusion, Galileo had advanced the following theoretical argument: He asked if two bodies of different masses and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? The only convincing resolution to this question is that all bodies must fall at the same rate.
A later experiment was described in Galileo's Two New Sciences published in 1638. One of Galileo's fictional characters, Salviati, describes an experiment using a bronze ball and a wooden ramp. The wooden ramp was "12 cubits long, half a cubit wide and three finger-breadths thick" with a straight, smooth, polished groove. The groove was lined with "parchment, also smooth and polished as possible". And into this groove was placed "a hard, smooth and very round bronze ball". The ramp was inclined at various angles to slow the acceleration enough so that the elapsed time could be measured. The ball was allowed to roll a known distance down the ramp, and the time taken for the ball to move the known distance was measured. The time was measured using a water clock described as follows:
a large vessel of water placed in an elevated position; to the bottom of this vessel was soldered a pipe of small diameter giving a thin jet of water, which we collected in a small glass during the time of each descent, whether for the whole length of the channel or for a part of its length; the water thus collected was weighed, after each descent, on a very accurate balance; the differences and ratios of these weights gave us the differences and ratios of the times, and this with such accuracy that although the operation was repeated many, many times, there was no appreciable discrepancy in the results.
Galileo found that for an object in free fall, the distance that the object has fallen is always proportional to the square of the elapsed time: d ∝ t2.
Galileo had shown that objects in free fall under the influence of the Earth's gravitational field have a constant acceleration, and Galileo's contemporary, Johannes Kepler, had shown that the planets follow elliptical paths under the influence of the Sun's gravitational mass. However, Galileo's free fall motions and Kepler's planetary motions remained distinct during Galileo's lifetime.
Mass as distinct from weight
According to K. M. Browne: "Kepler formed a [distinct] concept of mass ('amount of matter' (copia materiae)), but called it 'weight' as did everyone at that time." Finally, in 1686, Newton gave this distinct concept its own name. In the first paragraph of Principia, Newton defined quantity of matter as “density and bulk conjunctly”, and mass as quantity of matter.
Newtonian mass
Robert Hooke had published his concept of gravitational forces in 1674, stating that all celestial bodies have an attraction or gravitating power towards their own centers, and also attract all the other celestial bodies that are within the sphere of their activity. He further stated that gravitational attraction increases by how much nearer the body wrought upon is to its own center. In correspondence with Isaac Newton from 1679 and 1680, Hooke conjectured that gravitational forces might decrease according to the double of the distance between the two bodies. Hooke urged Newton, who was a pioneer in the development of calculus, to work through the mathematical details of Keplerian orbits to determine if Hooke's hypothesis was correct. Newton's own investigations verified that Hooke was correct, but due to personal differences between the two men, Newton chose not to reveal this to Hooke. Isaac Newton kept quiet about his discoveries until 1684, at which time he told a friend, Edmond Halley, that he had solved the problem of gravitational orbits, but had misplaced the solution in his office. After being encouraged by Halley, Newton decided to develop his ideas about gravity and publish all of his findings. In November 1684, Isaac Newton sent a document to Edmund Halley, now lost but presumed to have been titled De motu corporum in gyrum (Latin for "On the motion of bodies in an orbit"). Halley presented Newton's findings to the Royal Society of London, with a promise that a fuller presentation would follow. Newton later recorded his ideas in a three-book set, entitled Philosophiæ Naturalis Principia Mathematica (English: Mathematical Principles of Natural Philosophy). The first was received by the Royal Society on 28 April 1685–86; the second on 2 March 1686–87; and the third on 6 April 1686–87. The Royal Society published Newton's entire collection at their own expense in May 1686–87.
Isaac Newton had bridged the gap between Kepler's gravitational mass and Galileo's gravitational acceleration, resulting in the discovery of the following relationship which governed both of these:
where g is the apparent acceleration of a body as it passes through a region of space where gravitational fields exist, μ is the gravitational mass (standard gravitational parameter) of the body causing gravitational fields, and R is the radial coordinate (the distance between the centers of the two bodies).
By finding the exact relationship between a body's gravitational mass and its gravitational field, Newton provided a second method for measuring gravitational mass. The mass of the Earth can be determined using Kepler's method (from the orbit of Earth's Moon), or it can be determined by measuring the gravitational acceleration on the Earth's surface, and multiplying that by the square of the Earth's radius. The mass of the Earth is approximately three-millionths of the mass of the Sun. To date, no other accurate method for measuring gravitational mass has been discovered.
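A sketch of this second method, assuming present-day values of G, the surface acceleration g, and the Earth's radius (none of which are given in the paragraph above), is:

    # mu = g*R^2 is the standard gravitational parameter; dividing by G gives
    # the Earth's mass in kilograms, which is only possible once G is known.
    G = 6.674e-11          # m^3 kg^-1 s^-2
    g_surface = 9.81       # m/s^2, measured at the surface
    earth_radius = 6.371e6 # m

    mu = g_surface * earth_radius**2   # gravitational parameter, m^3/s^2
    earth_mass = mu / G                # kg
    print(mu)          # ~3.98e14 m^3/s^2
    print(earth_mass)  # ~5.97e24 kg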
Newton's cannonball
Newton's cannonball was a thought experiment used to bridge the gap between Galileo's gravitational acceleration and Kepler's elliptical orbits. It appeared in Newton's 1728 book A Treatise of the System of the World. According to Galileo's concept of gravitation, a dropped stone falls with constant acceleration down towards the Earth. However, Newton explains that when a stone is thrown horizontally (meaning sideways or perpendicular to Earth's gravity) it follows a curved path. "For a stone projected is by the pressure of its own weight forced out of the rectilinear path, which by the projection alone it should have pursued, and made to describe a curve line in the air; and through that crooked way is at last brought down to the ground. And the greater the velocity is with which it is projected, the farther it goes before it falls to the Earth." Newton further reasons that if an object were "projected in an horizontal direction from the top of a high mountain" with sufficient velocity, "it would reach at last quite beyond the circumference of the Earth, and return to the mountain from which it was projected."
Universal gravitational mass
In contrast to earlier theories (e.g. celestial spheres) which stated that the heavens were made of entirely different material, Newton's theory of mass was groundbreaking partly because it introduced universal gravitational mass: every object has gravitational mass, and therefore, every object generates a gravitational field. Newton further assumed that the strength of each object's gravitational field would decrease according to the square of the distance to that object. If a large collection of small objects were formed into a giant spherical body such as the Earth or Sun, Newton calculated the collection would create a gravitational field proportional to the total mass of the body, and inversely proportional to the square of the distance to the body's center.
For example, according to Newton's theory of universal gravitation, each carob seed produces a gravitational field. Therefore, if one were to gather an immense number of carob seeds and form them into an enormous sphere, then the gravitational field of the sphere would be proportional to the number of carob seeds in the sphere. Hence, it should be theoretically possible to determine the exact number of carob seeds that would be required to produce a gravitational field similar to that of the Earth or Sun. In fact, by unit conversion it is a simple matter of abstraction to realize that any traditional mass unit can theoretically be used to measure gravitational mass.
Measuring gravitational mass in terms of traditional mass units is simple in principle, but extremely difficult in practice. According to Newton's theory, all objects produce gravitational fields and it is theoretically possible to collect an immense number of small objects and form them into an enormous gravitating sphere. However, from a practical standpoint, the gravitational fields of small objects are extremely weak and difficult to measure. Newton's books on universal gravitation were published in the 1680s, but the first successful measurement of the Earth's mass in terms of traditional mass units, the Cavendish experiment, did not occur until 1797, over a hundred years later. Henry Cavendish found that the Earth's density was 5.448 ± 0.033 times that of water. As of 2009, the Earth's mass in kilograms is only known to around five digits of accuracy, whereas its gravitational mass is known to over nine significant figures.
Given two objects A and B, of masses MA and MB, separated by a displacement RAB, Newton's law of gravitation states that each object exerts a gravitational force on the other, of magnitude
F = G MAMB / |RAB|2,
where G is the universal gravitational constant. The above statement may be reformulated in the following way: if g is the magnitude of the gravitational field at a given location, then the gravitational force on an object with gravitational mass M is
F = Mg.
This is the basis by which masses are determined by weighing. In simple spring scales, for example, the force F is proportional to the displacement of the spring beneath the weighing pan, as per Hooke's law, and the scales are calibrated to take g into account, allowing the mass M to be read off. Assuming the gravitational field is equivalent on both sides of the balance, a balance measures relative weight, giving the relative gravitation mass of each object.
Inertial mass
Mass was traditionally believed to be a measure of the quantity of matter in a physical body, equal to the "amount of matter" in an object. For example, Barré de Saint-Venant argued in 1851 that every object contains a number of "points" (basically, interchangeable elementary particles), and that mass is proportional to the number of points the object contains. (In practice, this "amount of matter" definition is adequate for most of classical mechanics, and sometimes remains in use in basic education, if the priority is to teach the difference between mass and weight.) This traditional "amount of matter" belief was contradicted by the fact that different atoms (and, later, different elementary particles) can have different masses, and was further contradicted by Einstein's theory of relativity (1905), which showed that the measurable mass of an object increases when energy is added to it (for example, by increasing its temperature or forcing it near an object that electrically repels it.) This motivates a search for a different definition of mass that is more accurate than the traditional definition of "the amount of matter in an object".
Inertial mass is the mass of an object measured by its resistance to acceleration. This definition has been championed by Ernst Mach and has since been developed into the notion of operationalism by Percy W. Bridgman. The simple classical mechanics definition of mass differs slightly from the definition in the theory of special relativity, but the essential meaning is the same.
In classical mechanics, according to Newton's second law, we say that a body has a mass m if, at any instant of time, it obeys the equation of motion F = ma,
where F is the resultant force acting on the body and a is the acceleration of the body's centre of mass. For the moment, we will put aside the question of what "force acting on the body" actually means.
This equation illustrates how mass relates to the inertia of a body. Consider two objects with different masses. If we apply an identical force to each, the object with a bigger mass will experience a smaller acceleration, and the object with a smaller mass will experience a bigger acceleration. We might say that the larger mass exerts a greater "resistance" to changing its state of motion in response to the force.
However, this notion of applying "identical" forces to different objects brings us back to the fact that we have not really defined what a force is. We can sidestep this difficulty with the help of Newton's third law, which states that if one object exerts a force on a second object, it will experience an equal and opposite force. To be precise, suppose we have two objects of constant inertial masses m1 and m2. We isolate the two objects from all other physical influences, so that the only forces present are the force exerted on m1 by m2, which we denote F12, and the force exerted on m2 by m1, which we denote F21. Newton's second law states that F12 = m1a1 and F21 = m2a2,
where a1 and a2 are the accelerations of m1 and m2, respectively. Suppose that these accelerations are non-zero, so that the forces between the two objects are non-zero. This occurs, for example, if the two objects are in the process of colliding with one another. Newton's third law then states that F12 = −F21,
and thus m1 = m2|a2|/|a1|.
If |a1| is non-zero, the fraction |a2|/|a1| is well-defined, which allows us to measure the inertial mass of m1. In this case, m2 is our "reference" object, and we can define its mass m as (say) 1 kilogram. Then we can measure the mass of any other object in the universe by colliding it with the reference object and measuring the accelerations.
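A minimal sketch of this operational procedure, with made-up acceleration measurements, might look like the following:

    # With the reference mass m2 set to 1 kg, the unknown mass follows from the
    # ratio of the accelerations measured while the isolated pair interacts:
    # m1 = m2 * |a2| / |a1|.
    def inertial_mass(reference_mass_kg, a_reference, a_unknown):
        return reference_mass_kg * abs(a_reference) / abs(a_unknown)

    m2 = 1.0    # kg, the chosen reference object
    a1 = 0.40   # m/s^2, measured acceleration of the unknown body (made up)
    a2 = 1.20   # m/s^2, measured acceleration of the reference body (made up)

    print(inertial_mass(m2, a2, a1))   # 3.0 kg: three times the reference mass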
Additionally, mass relates a body's momentum p to its linear velocity v: p = mv,
and the body's kinetic energy K to its velocity: K = (1/2)mv2.
The primary difficulty with Mach's definition of mass is that it fails to take into account the potential energy (or binding energy) needed to bring two masses sufficiently close to one another to perform the measurement of mass. This is most vividly demonstrated by comparing the mass of the proton in the nucleus of deuterium, to the mass of the proton in free space (which is greater by about 0.239%—this is due to the binding energy of deuterium). Thus, for example, if the reference weight m2 is taken to be the mass of the neutron in free space, and the relative accelerations for the proton and neutron in deuterium are computed, then the above formula over-estimates the mass m1 (by 0.239%) for the proton in deuterium. At best, Mach's formula can only be used to obtain ratios of masses, that is, as m1 / m2 = |a2| / |a1|. An additional difficulty was pointed out by Henri Poincaré, which is that the measurement of instantaneous acceleration is impossible: unlike the measurement of time or distance, there is no way to measure acceleration with a single measurement; one must make multiple measurements (of position, time, etc.) and perform a computation to obtain the acceleration. Poincaré termed this to be an "insurmountable flaw" in the Mach definition of mass.
Atomic masses
Typically, the mass of objects is measured in terms of the kilogram, which since 2019 is defined in terms of fundamental constants of nature. The mass of an atom or other particle can be compared more precisely and more conveniently to that of another atom, and thus scientists developed the dalton (also known as the unified atomic mass unit). By definition, 1 Da (one dalton) is exactly one-twelfth of the mass of a carbon-12 atom, and thus, a carbon-12 atom has a mass of exactly 12 Da.
In relativity
Special relativity
In some frameworks of special relativity, physicists have used different definitions of the term. In these frameworks, two kinds of mass are defined: rest mass (invariant mass), and relativistic mass (which increases with velocity). Rest mass is the Newtonian mass as measured by an observer moving along with the object. Relativistic mass is the total quantity of energy in a body or system divided by c2. The two are related by the following equation: mrelative = γ(mrest),
where γ is the Lorentz factor: γ = 1/√(1 − v2/c2).
The invariant mass of systems is the same for observers in all inertial frames, while the relativistic mass depends on the observer's frame of reference. In order to formulate the equations of physics such that mass values do not change between observers, it is convenient to use rest mass. The rest mass of a body is also related to its energy E and the magnitude of its momentum p by the relativistic energy-momentum equation: E2 = (mc2)2 + (pc)2.
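These relations can be checked numerically; in the sketch below, an electron moving at 0.6c is an arbitrary example, and the constants are approximate:

    import math

    C = 2.998e8   # speed of light, m/s

    def lorentz_factor(v):
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    def check_energy_momentum(m, v):
        gamma = lorentz_factor(v)
        energy = gamma * m * C**2     # total (relativistic) energy
        momentum = gamma * m * v      # relativistic momentum
        lhs = energy**2
        rhs = (momentum * C) ** 2 + (m * C**2) ** 2
        print(gamma, math.isclose(lhs, rhs, rel_tol=1e-9))

    check_energy_momentum(m=9.109e-31, v=0.6 * C)   # gamma = 1.25, relation holds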
So long as the system is closed with respect to mass and energy, both kinds of mass are conserved in any given frame of reference. The conservation of mass holds even as some types of particles are converted to others. Matter particles (such as atoms) may be converted to non-matter particles (such as photons of light), but this does not affect the total amount of mass or energy. Although things like heat may not be matter, all types of energy still continue to exhibit mass. Thus, mass and energy do not change into one another in relativity; rather, both are names for the same thing, and neither mass nor energy appear without the other.
Both rest and relativistic mass can be expressed as an energy by applying the well-known relationship E = mc2, yielding rest energy and "relativistic energy" (total system energy) respectively: Erest = mrestc2 and Etotal = mrelativec2.
The "relativistic" mass and energy concepts are related to their "rest" counterparts, but they do not have the same value as their rest counterparts in systems where there is a net momentum. Because the relativistic mass is proportional to the energy, it has gradually fallen into disuse among physicists. There is disagreement over whether the concept remains useful pedagogically.
In bound systems, the binding energy must often be subtracted from the mass of the unbound system, because binding energy commonly leaves the system at the time it is bound. The mass of the system changes in this process merely because the system was not closed during the binding process, so the energy escaped. For example, the binding energy of atomic nuclei is often lost in the form of gamma rays when the nuclei are formed, leaving nuclides which have less mass than the free particles (nucleons) of which they are composed.
Mass–energy equivalence also holds in macroscopic systems. For example, if one takes exactly one kilogram of ice, and applies heat, the mass of the resulting melt-water will be more than a kilogram: it will include the mass from the thermal energy (latent heat) used to melt the ice; this follows from the conservation of energy. This number is small but not negligible: about 3.7 nanograms. It is given by the latent heat of melting ice (334 kJ/kg) divided by the speed of light squared (c2).
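The quoted figure of about 3.7 nanograms can be reproduced directly:

    # Mass equivalent of the latent heat added to 1 kg of ice.
    C = 2.998e8           # speed of light, m/s
    LATENT_HEAT = 334e3   # J/kg, latent heat of fusion quoted in the text

    delta_m_kg = LATENT_HEAT / C**2
    print(delta_m_kg)           # ~3.7e-12 kg
    print(delta_m_kg * 1e12)    # ~3.7 nanograms, matching the text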
General relativity
In general relativity, the equivalence principle is the equivalence of gravitational and inertial mass. At the core of this assertion is Albert Einstein's idea that the gravitational force as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (i.e. accelerated) frame of reference.
However, it turns out that it is impossible to find an objective general definition for the concept of invariant mass in general relativity. At the core of the problem is the non-linearity of the Einstein field equations, making it impossible to write the gravitational field energy as part of the stress–energy tensor in a way that is invariant for all observers. For a given observer, this can be achieved by the stress–energy–momentum pseudotensor.
In quantum physics
In classical mechanics, the inert mass of a particle appears in the Euler–Lagrange equation as a parameter m: m(d2x/dt2) = ∂L/∂x.
After quantization, replacing the position vector x with a wave function, the parameter m appears in the kinetic energy operator: −(ħ2/2m)∇2.
In the ostensibly covariant (relativistically invariant) Dirac equation, and in natural units, this becomes: (iγμ∂μ − m)ψ = 0,
where the "mass" parameter m is now simply a constant associated with the quantum described by the wave function ψ.
In the Standard Model of particle physics as developed in the 1960s, this term arises from the coupling of the field ψ to an additional field Φ, the Higgs field. In the case of fermions, the Higgs mechanism results in the replacement of the term mψ in the Lagrangian with a coupling term between ψ and the Higgs field Φ, whose strength is the coupling constant Gψ. This shifts the explanandum of the value for the mass of each elementary particle to the value of the unknown coupling constant Gψ.
Tachyonic particles and imaginary (complex) mass
A tachyonic field, or simply tachyon, is a quantum field with an imaginary mass. Although tachyons (particles that move faster than light) are a purely hypothetical concept not generally believed to exist, fields with imaginary mass have come to play an important role in modern physics and are discussed in popular books on physics. Under no circumstances do any excitations ever propagate faster than light in such theories—the presence or absence of a tachyonic mass has no effect whatsoever on the maximum velocity of signals (there is no violation of causality). While the field may have imaginary mass, any physical particles do not; the "imaginary mass" shows that the system becomes unstable, and sheds the instability by undergoing a type of phase transition called tachyon condensation (closely related to second order phase transitions) that results in symmetry breaking in current models of particle physics.
The term "tachyon" was coined by Gerald Feinberg in a 1967 paper, but it was soon realized that Feinberg's model in fact did not allow for superluminal speeds. Instead, the imaginary mass creates an instability in the configuration:- any configuration in which one or more field excitations are tachyonic will spontaneously decay, and the resulting configuration contains no physical tachyons. This process is known as tachyon condensation. Well known examples include the condensation of the Higgs boson in particle physics, and ferromagnetism in condensed matter physics.
Although the notion of a tachyonic imaginary mass might seem troubling because there is no classical interpretation of an imaginary mass, the mass is not quantized. Rather, the scalar field is; even for tachyonic quantum fields, the field operators at spacelike separated points still commute (or anticommute), thus preserving causality. Therefore, information still does not propagate faster than light, and solutions grow exponentially, but not superluminally (there is no violation of causality). Tachyon condensation drives a physical system that has reached a local limit and might naively be expected to produce physical tachyons, to an alternate stable state where no physical tachyons exist. Once the tachyonic field reaches the minimum of the potential, its quanta are not tachyons any more but rather are ordinary particles with a positive mass-squared.
This is a special case of the general rule, where unstable massive particles are formally described as having a complex mass, with the real part being their mass in the usual sense, and the imaginary part being the decay rate in natural units. However, in quantum field theory, a particle (a "one-particle state") is roughly defined as a state which is constant over time; i.e., an eigenstate of the Hamiltonian. An unstable particle is a state which is only approximately constant over time; if it exists long enough to be measured, it can be formally described as having a complex mass, with the real part of the mass greater than its imaginary part. If both parts are of the same magnitude, this is interpreted as a resonance appearing in a scattering process rather than a particle, as it is considered not to exist long enough to be measured independently of the scattering process. In the case of a tachyon, the real part of the mass is zero, and hence no concept of a particle can be attributed to it.
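As a minimal sketch of how the imaginary part acts as a decay rate (standard quantum-mechanical time evolution in natural units, with the complex mass written here, purely for illustration, as m = m0 − iΓ/2):
$$\psi(t) \propto e^{-imt} = e^{-im_{0}t}\,e^{-\Gamma t/2} \quad\Rightarrow\quad |\psi(t)|^{2} \propto e^{-\Gamma t},$$
so the survival probability falls off exponentially with lifetime τ = 1/Γ.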
In a Lorentz invariant theory, the same formulas that apply to ordinary slower-than-light particles (sometimes called "bradyons" in discussions of tachyons) must also apply to tachyons. In particular the energy–momentum relation:
$$E^{2} = p^{2}c^{2} + m^{2}c^{4}$$
(where p is the relativistic momentum of the bradyon and m is its rest mass) should still apply, along with the formula for the total energy of a particle:
$$E = \frac{mc^{2}}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}.$$
This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the "rest mass–energy") and a contribution from its motion, the kinetic energy.
When v is larger than c, the denominator in the equation for the energy is "imaginary", as the value under the radical is negative. Because the total energy must be real, the numerator must also be imaginary: i.e. the rest mass m must be imaginary, as a pure imaginary number divided by another pure imaginary number is a real number.
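Written out explicitly, this is only the algebra of the energy formula above, with no additional assumptions:
$$v > c \;\Rightarrow\; \sqrt{1-\frac{v^{2}}{c^{2}}} = i\,\sqrt{\frac{v^{2}}{c^{2}}-1}, \qquad E = \frac{mc^{2}}{i\,\sqrt{\dfrac{v^{2}}{c^{2}}-1}},$$
so E is real and positive only if m is itself purely imaginary, m = i|m|, in which case
$$E = \frac{|m|\,c^{2}}{\sqrt{\dfrac{v^{2}}{c^{2}}-1}}.$$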
See also
Mass versus weight
Effective mass (spring–mass system)
Effective mass (solid-state physics)
Extension (metaphysics)
International System of Quantities
2019 revision of the SI base
Notes
References
External links
Jim Baggott (27 September 2017). The Concept of Mass (video) published by the Royal Institution on YouTube.
Physical quantities
SI base quantities
Moment (physics)
Extensive quantities | Mass | [
"Physics",
"Chemistry",
"Mathematics"
] | 9,260 | [
"Scalar physical quantities",
"Physical phenomena",
"Physical quantities",
"SI base quantities",
"Mass",
"Chemical quantities",
"Quantity",
"Size",
"Extensive quantities",
"Wikipedia categories named after physical quantities",
"Physical properties",
"Matter",
"Moment (physics)"
] |
19,051 | https://en.wikipedia.org/wiki/Manganese | Manganese is a chemical element; it has symbol Mn and atomic number 25. It is a hard, brittle, silvery metal, often found in minerals in combination with iron. Manganese was first isolated in the 1770s. It is a transition metal with a multifaceted array of industrial alloy uses, particularly in stainless steels. It improves strength, workability, and resistance to wear. Manganese oxide is used as an oxidising agent; as a rubber additive; and in glass making, fertilisers, and ceramics. Manganese sulfate can be used as a fungicide.
Manganese is also an essential human dietary element, important in macronutrient metabolism, bone formation, and free radical defense systems. It is a critical component in dozens of proteins and enzymes. It is found mostly in the bones, but also the liver, kidneys, and brain. In the human brain, the manganese is bound to manganese metalloproteins, most notably glutamine synthetase in astrocytes.
It is familiar in the laboratory in the form of the deep violet salt potassium permanganate. It occurs at the active sites in some enzymes. Of particular interest is the use of a Mn-O cluster, the oxygen-evolving complex, in the production of oxygen by plants.
Characteristics
Physical properties
Manganese is a silvery-gray metal that resembles iron. It is hard and very brittle, difficult to fuse, but easy to oxidize. Manganese and its common ions are paramagnetic. Manganese tarnishes slowly in air and oxidizes ("rusts") like iron in water containing dissolved oxygen.
Isotopes
Naturally occurring manganese is composed of one stable isotope, 55Mn. Several radioisotopes have been isolated and described, ranging in atomic weight from 46 u (46Mn) to 72 u (72Mn). The most stable are 53Mn with a half-life of 3.7 million years, 54Mn with a half-life of 312.2 days, and 52Mn with a half-life of 5.591 days. All of the remaining radioactive isotopes have half-lives of less than three hours, and the majority have half-lives of less than one minute. The primary decay mode in isotopes lighter than the most abundant stable isotope, 55Mn, is electron capture, and the primary mode in heavier isotopes is beta decay. Manganese also has three meta states.
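For illustration, the two decay modes can be written as nuclear equations (standard nuclear data, not taken from the paragraph above):
53Mn + e− → 53Cr + νe (electron capture, for isotopes lighter than 55Mn)
56Mn → 56Fe + e− + ν̄e (beta decay, for isotopes heavier than 55Mn)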
Manganese is part of the iron group of elements, which are thought to be synthesized in large stars shortly before the supernova explosion. 53Mn decays to 53Cr with a half-life of 3.7 million years. Because of its relatively short half-life, 53Mn is relatively rare, produced by cosmic-ray impacts on iron. Manganese isotopic contents are typically combined with chromium isotopic contents and have found application in isotope geology and radiometric dating. Mn–Cr isotopic ratios reinforce the evidence from 26Al and 107Pd for the early history of the Solar System. Variations in 53Cr/52Cr and Mn/Cr ratios from several meteorites suggest an initial 53Mn/55Mn ratio, which indicates that Mn–Cr isotopic composition must result from in situ decay of 53Mn in differentiated planetary bodies. Hence, 53Mn provides additional evidence for nucleosynthetic processes immediately before coalescence of the Solar System.
Allotropes
Four allotropes (structural forms) of solid manganese are known, labeled α, β, γ and δ, and occurring at successively higher temperatures. All are metallic, stable at standard pressure, and have a cubic crystal lattice, but they vary widely in their atomic structures.
Alpha manganese (α-Mn) is the equilibrium phase at room temperature. It has a body-centered cubic lattice and is unusual among elemental metals in having a very complex unit cell, with 58 atoms per cell (29 atoms per primitive unit cell) in four different types of site. It is paramagnetic at room temperature and antiferromagnetic at temperatures below .
Beta manganese (β-Mn) forms when heated above the transition temperature of . It has a primitive cubic structure with 20 atoms per unit cell at two types of sites, which is as complex as that of any other elemental metal. It is easily obtained as a metastable phase at room temperature by rapid quenching. It does not show magnetic ordering, remaining paramagnetic down to the lowest temperature measured (1.1 K).
Gamma manganese (γ-Mn) forms when heated above . It has a simple face-centered cubic structure (four atoms per unit cell). When quenched to room temperature it converts to β-Mn, but it can be stabilized at room temperature by alloying it with at least 5 percent of other elements (such as C, Fe, Ni, Cu, Pd or Au), and these solute-stabilized alloys distort into a face-centered tetragonal structure.
Delta manganese (δ-Mn) forms when heated above and is stable up to the manganese melting point of . It has a body-centered cubic structure (two atoms per cubic unit cell).
Chemical compounds
Common oxidation states of manganese are +2, +3, +4, +6, and +7, although all oxidation states from −3 to +7 have been observed. Manganese in oxidation state +7 is represented by salts of the intensely purple permanganate anion . Potassium permanganate is a commonly used laboratory reagent because of its oxidizing properties; it is used as a topical medicine (for example, in the treatment of fish diseases). Solutions of potassium permanganate were among the first stains and fixatives to be used in the preparation of biological cells and tissues for electron microscopy.
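As an illustration of the oxidizing power of permanganate, the standard half-reaction in acidic solution (textbook electrochemistry, not stated in the paragraph above) is:
MnO4− + 8 H+ + 5 e− → Mn2+ + 4 H2O (E° ≈ +1.51 V)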
Aside from various permanganate salts, Mn(VII) is represented by the unstable, volatile derivative Mn2O7. Oxyhalides (MnO3F and MnO3Cl) are powerful oxidizing agents. The most prominent example of Mn in the +6 oxidation state is the green anion manganate, [MnO4]2−. Manganate salts are intermediates in the extraction of manganese from its ores. Compounds with oxidation state +5 are somewhat elusive and are often found associated with an oxide (O2−) or nitride (N3−) ligand. One example is the blue anion hypomanganate [MnO4]3−.
Mn(IV) is somewhat enigmatic because it is common in nature but far rarer in synthetic chemistry. The most common Mn ore, pyrolusite, is MnO2. It is the dark brown pigment of many cave drawings but is also a common ingredient in dry cell batteries. Complexes of Mn(IV) are well known, but they require elaborate ligands. Mn(IV)-OH complexes are an intermediate in some enzymes, including the oxygen evolving center (OEC) in plants.
Simple derivatives of Mn3+ are rarely encountered but can be stabilized by suitably basic ligands. Manganese(III) acetate is an oxidant useful in organic synthesis. Solid compounds of manganese(III) are characterized by their strong purple-red color and a preference for distorted octahedral coordination resulting from the Jahn–Teller effect.
A particularly common oxidation state for manganese in aqueous solution is +2, which has a pale pink color. Many manganese(II) compounds are known, such as the aquo complexes derived from manganese(II) sulfate (MnSO4) and manganese(II) chloride (MnCl2). This oxidation state is also seen in the mineral rhodochrosite (manganese(II) carbonate). Manganese(II) commonly exists with a high spin, S = 5/2 ground state because of the high pairing energy for manganese(II). There are no spin-allowed d–d transitions in manganese(II), which explains its faint color.
Organomanganese compounds
Manganese forms a large variety of organometallic derivatives, i.e., compounds with Mn-C bonds. The organometallic derivatives include numerous examples of Mn in its lower oxidation states, i.e. Mn(−III) up through Mn(I). This area of organometallic chemistry is attractive because Mn is inexpensive and of relatively low toxicity.
Of greatest commercial interest is "MMT", methylcyclopentadienyl manganese tricarbonyl, which is used as an anti-knock compound added to gasoline (petrol) in some countries. It features Mn(I). Consistent with other aspects of Mn(II) chemistry, manganocene () is high-spin. In contrast, its neighboring metal iron forms an air-stable, low-spin derivative in the form of ferrocene (). When conducted under an atmosphere of carbon monoxide, reduction of Mn(II) salts gives dimanganese decacarbonyl , an orange and volatile solid. The air-stability of this Mn(0) compound (and its many derivatives) reflects the powerful electron-acceptor properties of carbon monoxide. Many alkene complexes and alkyne complexes are derived from .
In Mn(CH3)2(dmpe)2, Mn(II) is low spin, which contrasts with the high spin character of its precursor, MnBr2(dmpe)2 (dmpe = (CH3)2PCH2CH2P(CH3)2). Polyalkyl and polyaryl derivatives of manganese often exist in higher oxidation states, reflecting the electron-releasing properties of alkyl and aryl ligands. One example is [Mn(CH3)6]2−.
History
The origin of the name manganese is complex. In ancient times, two black minerals were identified from the regions of the Magnetes (either Magnesia, located within modern Greece, or Magnesia ad Sipylum, located within modern Turkey). They were both called magnes from their place of origin, but were considered to differ in sex. The male magnes attracted iron, and was the iron ore now known as lodestone or magnetite, and which probably gave us the term magnet. The female magnes ore did not attract iron, but was used to decolorize glass. This female magnes was later called magnesia, known now in modern times as pyrolusite or manganese dioxide. Neither this mineral nor elemental manganese is magnetic. In the 16th century, manganese dioxide was called manganesum (note the two Ns instead of one) by glassmakers, possibly as a corruption and concatenation of two words, since alchemists and glassmakers eventually had to differentiate a magnesia nigra (the black ore) from magnesia alba (a white ore, also from Magnesia, also useful in glassmaking). Michele Mercati called magnesia nigra manganesa, and finally the metal isolated from it became known as manganese (). The name magnesia eventually was then used to refer only to the white magnesia alba (magnesium oxide), which provided the name magnesium for the free element when it was isolated much later.
Manganese dioxide, which is abundant in nature, has long been used as a pigment. The cave paintings in Gargas that are 30,000 to 24,000 years old are made from the mineral form of MnO2 pigments.
Manganese compounds were used by Egyptian and Roman glassmakers, either to add to, or remove, color from glass. Use as "glassmakers soap" continued through the Middle Ages until modern times and is evident in 14th-century glass from Venice.
Because it was used in glassmaking, manganese dioxide was available for experiments by alchemists, the first chemists. Ignatius Gottfried Kaim (1770) and Johann Glauber (17th century) discovered that manganese dioxide could be converted to permanganate, a useful laboratory reagent. Kaim also may have reduced manganese dioxide to isolate the metal, but that is uncertain. By the mid-18th century, the Swedish chemist Carl Wilhelm Scheele used manganese dioxide to produce chlorine. First, hydrochloric acid, or a mixture of dilute sulfuric acid and sodium chloride was made to react with manganese dioxide, and later hydrochloric acid from the Leblanc process was used and the manganese dioxide was recycled by the Weldon process. The production of chlorine and hypochlorite bleaching agents was a large consumer of manganese ores.
Scheele and others were aware that pyrolusite (mineral form of manganese dioxide) contained a new element. Johan Gottlieb Gahn isolated an impure sample of manganese metal in 1774, which he did by reducing the dioxide with carbon.
The manganese content of some iron ores used in Greece led to speculations that steel produced from that ore contains additional manganese, making the Spartan steel exceptionally hard. Around the beginning of the 19th century, manganese was used in steelmaking and several patents were granted. In 1816, it was documented that iron alloyed with manganese was harder but not more brittle. In 1837, British academic James Couper noted an association between miners' heavy exposure to manganese and a form of Parkinson's disease. In 1912, United States patents were granted for protecting firearms against rust and corrosion with manganese phosphate electrochemical conversion coatings, and the process has seen widespread use ever since.
The invention of the Leclanché cell in 1866 and the subsequent improvement of batteries containing manganese dioxide as a cathodic depolarizer increased the demand for manganese dioxide. Until the development of nickel–cadmium and lithium batteries, most batteries contained manganese. The zinc–carbon battery and the alkaline battery normally use industrially produced manganese dioxide because naturally occurring manganese dioxide contains impurities. In the 20th century, manganese dioxide was widely used as the cathode material for commercial disposable dry batteries of both the standard (zinc–carbon) and alkaline types.
Manganese is essential to iron and steel production by virtue of its sulfur-fixing, deoxidizing, and alloying properties. This application was first recognized by the British metallurgist Robert Forester Mushet (1811–1891) who, in 1856, introduced the element, in the form of Spiegeleisen.
Occurrence
Manganese comprises about 1000 ppm (0.1%) of the Earth's crust and is the 12th most abundant element. Soil contains 7–9000 ppm of manganese with an average of 440 ppm. The atmosphere contains 0.01 μg/m3. Manganese occurs principally as pyrolusite (MnO2), braunite (Mn2+Mn3+6SiO12), psilomelane, and to a lesser extent as rhodochrosite (MnCO3).
The most important manganese ore is pyrolusite (MnO2). Other economically important manganese ores usually show a close spatial relation to the iron ores, such as sphalerite. Land-based resources are large but irregularly distributed. About 80% of the known world manganese resources are in South Africa; other important manganese deposits are in Ukraine, Australia, India, China, Gabon and Brazil. According to a 1978 estimate, the ocean floor has 500 billion tons of manganese nodules. Attempts to find economically viable methods of harvesting manganese nodules were abandoned in the 1970s.
In South Africa, most identified deposits are located near Hotazel in the Northern Cape Province, (Kalahari manganese fields), with a 2011 estimate of 15 billion tons. In 2011 South Africa produced 3.4 million tons, topping all other nations.
Manganese is mainly mined in South Africa, Australia, China, Gabon, Brazil, India, Kazakhstan, Ghana, Ukraine and Malaysia.
Production
For the production of ferromanganese, the manganese ore is mixed with iron ore and carbon, and then reduced either in a blast furnace or in an electric arc furnace. The resulting ferromanganese has a manganese content of 30–80%. Pure manganese used for the production of iron-free alloys is produced by leaching manganese ore with sulfuric acid and a subsequent electrowinning process.
A more progressive extraction process involves directly reducing (a low grade) manganese ore by heap leaching. This is done by percolating natural gas through the bottom of the heap; the natural gas provides the heat (needs to be at least 850 °C) and the reducing agent (carbon monoxide). This reduces all of the manganese ore to manganese oxide (MnO), which is a leachable form. The ore then travels through a grinding circuit to reduce the particle size of the ore to between 150 and 250 μm, increasing the surface area to aid leaching. The ore is then added to a leach tank of sulfuric acid and ferrous iron (Fe2+) in a 1.6:1 ratio. The iron reacts with the manganese dioxide (MnO2) to form iron hydroxide (FeO(OH)) and elemental manganese (Mn).
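A plausible balanced form of the leaching step described above, assuming the ferrous iron reduces Mn(IV) to soluble Mn2+ (which is what the subsequent electrowinning would recover) rather than to metallic manganese, is:
MnO2 + 2 Fe2+ + 2 H2O → Mn2+ + 2 FeO(OH) + 2 H+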
This process yields approximately 92% recovery of the manganese. For further purification, the manganese can then be sent to an electrowinning facility.
Oceanic environment
In 1972, the CIA's Project Azorian, through billionaire Howard Hughes, commissioned the ship Hughes Glomar Explorer with the cover story of harvesting manganese nodules from the sea floor. That triggered a rush of activity to collect manganese nodules, which was not actually practical until the 2020s. The real mission of Hughes Glomar Explorer was to raise a sunken Soviet submarine, the K-129, with the goal of retrieving Soviet code books.
An abundant resource of manganese exists in the form of manganese nodules found on the ocean floor. These nodules are composed of 29% manganese. The environmental impacts of nodule collection are of interest.
Dissolved manganese (dMn) is found throughout the world's oceans, 90% of which originates from hydrothermal vents. Particulate Mn develops in buoyant plumes over an active vent source, while the dMn behaves conservatively. Mn concentrations vary between the water columns of the ocean. At the surface, dMn is elevated due to input from external sources such as rivers, dust, and shelf sediments. Coastal sediments normally have lower Mn concentrations, but these concentrations can increase due to anthropogenic discharges from industries such as mining and steel manufacturing, which enter the ocean from river inputs. Surface dMn concentrations can also be elevated biologically through photosynthesis and physically from coastal upwelling and wind-driven surface currents. Internal cycling such as photo-reduction from UV radiation can also elevate levels by speeding up the dissolution of Mn-oxides and oxidative scavenging, preventing Mn from sinking to deeper waters. Elevated levels at mid-depths can occur near mid-ocean ridges and hydrothermal vents. The hydrothermal vents release dMn-enriched fluid into the water. The dMn can then travel up to 4,000 km due to the microbial capsules present, which prevent exchange with particles and lower the sinking rates. Dissolved Mn concentrations are even higher when oxygen levels are low. Overall, dMn concentrations are normally higher in coastal regions and decrease when moving offshore.
Soils
Manganese occurs in soils in three oxidation states: the divalent cation, Mn2+ and as brownish-black oxides and hydroxides containing Mn (III,IV), such as MnOOH and MnO2. Soil pH and oxidation-reduction conditions affect which of these three forms of Mn is dominant in a given soil. At pH values less than 6 or under anaerobic conditions, Mn(II) dominates, while under more alkaline and aerobic conditions, Mn(III,IV) oxides and hydroxides predominate. These effects of soil acidity and aeration state on the form of Mn can be modified or controlled by microbial activity. Microbial respiration can cause both the oxidation of Mn2+ to the oxides and the reduction of the oxides back to the divalent cation.
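As a sketch of the two microbially mediated directions mentioned above (generic redox equations, written as an overall oxidation and a reduction half-reaction; in soils the electrons for the reduction come from microbial oxidation of organic matter):
2 Mn2+ + O2 + 2 H2O → 2 MnO2 + 4 H+ (oxidation)
MnO2 + 4 H+ + 2 e− → Mn2+ + 2 H2O (reduction)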
The Mn(III,IV) oxides exist as brownish-black stains and small nodules on sand, silt, and clay particles. These surface coatings on other soil particles have high surface area and carry negative charge. The charged sites can adsorb and retain various cations, especially heavy metals (e.g., Cr3+, Cu2+, Zn2+, and Pb2+). In addition, the oxides can adsorb organic acids and other compounds. The adsorption of the metals and organic compounds can then cause them to be oxidized while the Mn(III,IV) oxides are reduced to Mn2+ (e.g., Cr3+ to Cr(VI) and colorless hydroquinone to tea-colored quinone polymers).
Applications
Steel
Manganese is essential to iron and steel production by virtue of its sulfur-fixing, deoxidizing, and alloying properties. Manganese has no satisfactory substitute in these applications in metallurgy. Steelmaking, including its ironmaking component, has accounted for most manganese demand, presently in the range of 85% to 90% of the total demand. Manganese is a key component of low-cost stainless steel. Often ferromanganese (usually about 80% manganese) is the intermediate in modern processes.
Small amounts of manganese improve the workability of steel at high temperatures by forming a high-melting sulfide and preventing the formation of a liquid iron sulfide at the grain boundaries. If the manganese content reaches 4%, the embrittlement of the steel becomes a dominant feature. The embrittlement decreases at higher manganese concentrations and reaches an acceptable level at 8%. Steel containing 8 to 15% of manganese has a high tensile strength of up to 863 MPa. Steel with 12% manganese was discovered in 1882 by Robert Hadfield and is still known as Hadfield steel (mangalloy). It was used for British military steel helmets and later by the U.S. military.
Aluminium alloys
Manganese is used in production of alloys with aluminium. Aluminium with roughly 1.5% manganese has increased resistance to corrosion through grains that absorb impurities which would lead to galvanic corrosion. The corrosion-resistant aluminium alloys 3004 and 3104 (0.8 to 1.5% manganese) are used for most beverage cans. Before 2000, more than 1.6 million tonnes of those alloys were used; at 1% manganese, this consumed 16,000 tonnes of manganese.
Batteries
Manganese(IV) oxide was used in the original type of dry cell battery as an electron acceptor from zinc, and is the blackish material in carbon–zinc type flashlight cells. The manganese dioxide is reduced to the manganese oxide-hydroxide MnO(OH) during discharging, preventing the formation of hydrogen at the anode of the battery.
MnO2 + H2O + e− → MnO(OH) + OH−
The same material also functions in newer alkaline batteries (usually battery cells), which use the same basic reaction, but a different electrolyte mixture. In 2002, more than 230,000 tons of manganese dioxide was used for this purpose.
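For comparison, the idealized overall cell reaction commonly quoted for the alkaline zinc–manganese dioxide cell (a simplified textbook form, not given in the text above) is:
Zn + 2 MnO2 → ZnO + Mn2O3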
Resistors
Copper alloys of manganese, such as Manganin, are commonly found in metal element shunt resistors used for measuring relatively large amounts of current. These alloys have a very low temperature coefficient of resistance and are resistant to sulfur. This makes the alloys particularly useful in harsh automotive and industrial environments.
Fertilizers and feed additive
Manganese oxide and sulfate are components of fertilizers. In the year 2000, an estimated 20,000 tons of these compounds were used in fertilizers in the US alone. A comparable amount of Mn compounds was also used in animal feeds.
Niche
Methylcyclopentadienyl manganese tricarbonyl is an additive in some unleaded gasoline to boost octane rating and reduce engine knocking.
Manganese(IV) oxide (manganese dioxide, MnO2) is used as a reagent in organic chemistry for the oxidation of benzylic alcohols (where the hydroxyl group is adjacent to an aromatic ring). Manganese dioxide has been used since antiquity to oxidize and neutralize the greenish tinge in glass from trace amounts of iron contamination. MnO2 is also used in the manufacture of oxygen and chlorine and in drying black paints. In some preparations, it is a brown pigment for paint and is a constituent of natural umber.
Tetravalent manganese is used as an activator in red-emitting phosphors. While many compounds are known which show luminescence, the majority are not used in commercial application due to low efficiency or deep red emission. However, several Mn4+ activated fluorides were reported as potential red-emitting phosphors for warm-white LEDs. But to this day, only K2SiF6:Mn4+ is commercially available for use in warm-white LEDs.
The metal is occasionally used in coins; until 2000, the only United States coin to use manganese was the "wartime" nickel from 1942 to 1945. An alloy of 75% copper and 25% nickel was traditionally used for the production of nickel coins. However, because of a shortage of nickel metal during the war, it was replaced by more available silver and manganese, thus resulting in an alloy of 56% copper, 35% silver and 9% manganese. Since 2000, dollar coins, for example the Sacagawea dollar and the Presidential $1 coins, are made from a brass containing 7% manganese with a pure copper core. In the case of both the nickel and the dollar, the use of manganese in the coin was to duplicate the electromagnetic properties of a previous identically sized and valued coin in the mechanisms of vending machines. In the case of the later U.S. dollar coins, the manganese alloy was intended to duplicate the properties of the copper/nickel alloy used in the previous Susan B. Anthony dollar.
Manganese compounds have been used as pigments and for the coloring of ceramics and glass. The brown color of ceramic is sometimes the result of manganese compounds. In the glass industry, manganese compounds are used for two effects. Manganese(III) reacts with iron(II) to reduce strong green color in glass by forming less-colored iron(III) and slightly pink manganese(II), compensating for the residual color of the iron(III). Larger quantities of manganese are used to produce pink colored glass. In 2009, Mas Subramanian and associates at Oregon State University discovered that manganese can be combined with yttrium and indium to form an intensely blue, non-toxic, inert, fade-resistant pigment, YInMn Blue, the first new blue pigment discovered in 200 years.
Biochemistry
Many classes of enzymes contain manganese cofactors including oxidoreductases, transferases, hydrolases, lyases, isomerases and ligases. Other enzymes containing manganese are arginase and a Mn-containing superoxide dismutase (Mn-SOD). Some reverse transcriptases of many retroviruses (although not lentiviruses such as HIV) contain manganese. Manganese-containing polypeptides are the diphtheria toxin, lectins, and integrins.
The oxygen-evolving complex (OEC), containing four atoms of manganese, is a part of photosystem II contained in the thylakoid membranes of chloroplasts. The OEC is responsible for the terminal photooxidation of water during the light reactions of photosynthesis, i.e., it is the catalyst that makes the O2 produced by plants.
Human health and nutrition
Manganese is an essential human dietary element and is present as a coenzyme in several biological processes, which include macronutrient metabolism, bone formation, and free radical defense systems. Manganese is a critical component in dozens of proteins and enzymes. The human body contains about 12 mg of manganese, mostly in the bones. The soft tissue remainder is concentrated in the liver and kidneys. In the human brain, the manganese is bound to manganese metalloproteins, most notably glutamine synthetase in astrocytes.
Regulation
The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for minerals in 2001. For manganese, there was not sufficient information to set EARs and RDAs, so needs are described as estimates for Adequate Intakes (AIs). As for safety, the IOM sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of manganese, the adult UL is set at 11 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). Manganese deficiency is rare.
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For people ages 15 and older, the AI is set at 3.0 mg/day. The AI for pregnancy and lactation is 3.0 mg/day. For children ages 1–14 years, the AIs increase with age from 0.5 to 2.0 mg/day. The adult AIs are higher than the U.S. RDAs. The EFSA reviewed the same safety question and decided that there was insufficient information to set a UL.
For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For manganese labeling purposes, 100% of the Daily Value was 2.0 mg, but as of 27 May 2016 it was revised to 2.3 mg to bring it into agreement with the RDA. A table of the old and new adult daily values is provided at Reference Daily Intake.
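As a worked example of the labeling arithmetic (the serving size here is hypothetical, chosen only for illustration): a serving providing 0.46 mg of manganese would be labeled 0.46 / 2.3 = 20% DV under the revised value, versus 0.46 / 2.0 = 23% DV under the old 2.0 mg value.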
Excessive exposure or intake may lead to a condition known as manganism, a neurodegenerative disorder that causes dopaminergic neuronal death and symptoms similar to Parkinson's disease.
Deficiency
Manganese deficiency in humans, which is rare, results in a number of medical problems. A deficiency of manganese causes skeletal deformation in animals and inhibits the production of collagen in wound healing.
Exposure
In water
Waterborne manganese has a greater bioavailability than dietary manganese. According to results from a 2010 study, higher levels of exposure to manganese in drinking water are associated with increased intellectual impairment and reduced intelligence quotients in school-age children. It is hypothesized that long-term exposure due to inhaling the naturally occurring manganese in shower water puts up to 8.7 million Americans at risk. However, data indicates that the human body can recover from certain adverse effects of overexposure to manganese if the exposure is stopped and the body can clear the excess.
Mn levels can increase in seawater when hypoxic periods occur. Since 1990 there have been reports of Mn accumulation in marine organisms including fish, crustaceans, mollusks, and echinoderms. Specific tissues are targets in different species, including the gills, brain, blood, kidney, and liver/hepatopancreas. Physiological effects have been reported in these species. Mn can affect the renewal of immunocytes and their functionality, such as phagocytosis and activation of pro-phenoloxidase, suppressing the organisms' immune systems. This causes the organisms to be more susceptible to infections. As climate change occurs, pathogen distributions increase, and in order for organisms to survive and defend themselves against these pathogens, they need a healthy, strong immune system. If their systems are compromised by high Mn levels, they will not be able to fight off these pathogens and will die.
Gasoline
Methylcyclopentadienyl manganese tricarbonyl (MMT) is an additive developed to replace lead compounds for gasolines to improve the octane rating. MMT is used only in a few countries. Fuels containing manganese tend to form manganese carbides, which damage exhaust valves.
Air
Compared to 1953, levels of manganese in air have dropped. Generally, exposure to ambient Mn air concentrations in excess of 5 μg Mn/m3 can lead to Mn-induced symptoms. Increased ferroportin protein expression in human embryonic kidney (HEK293) cells is associated with decreased intracellular Mn concentration and attenuated cytotoxicity, characterized by the reversal of Mn-reduced glutamate uptake and diminished lactate dehydrogenase leakage.
Regulation
Manganese exposure in United States is regulated by the Occupational Safety and Health Administration (OSHA). People can be exposed to manganese in the workplace by breathing it in or swallowing it. OSHA has set the legal limit (permissible exposure limit) for manganese exposure in the workplace as 5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 1 mg/m3 over an 8-hour workday and a short term limit of 3 mg/m3. At levels of 500 mg/m3, manganese is immediately dangerous to life and health.
Health and safety
Manganese is essential for human health, albeit in milligram amounts.
The current maximum safe concentration under U.S. EPA rules is 50 μg Mn/L.
Manganism
Manganese overexposure is most frequently associated with manganism, a rare neurological disorder associated with excessive manganese ingestion or inhalation. Historically, persons employed in the production or processing of manganese alloys have been at risk for developing manganism; however, health and safety regulations protect workers in developed nations. The disorder was first described in 1837 by British academic John Couper, who studied two patients who were manganese grinders.
Manganism is a biphasic disorder. In its early stages, an intoxicated person may experience depression, mood swings, compulsive behaviors, and psychosis. Early neurological symptoms give way to late-stage manganism, which resembles Parkinson's disease. Symptoms include weakness, monotone and slowed speech, an expressionless face, tremor, forward-leaning gait, inability to walk backwards without falling, rigidity, and general problems with dexterity, gait and balance. Unlike Parkinson's disease, manganism is not associated with loss of the sense of smell and patients are typically unresponsive to treatment with L-DOPA. Symptoms of late-stage manganism become more severe over time even if the source of exposure is removed and brain manganese levels return to normal.
Chronic manganese exposure has been shown to produce a parkinsonism-like illness characterized by movement abnormalities. This condition is not responsive to typical therapies used in the treatment of PD, suggesting an alternative pathway to the typical dopaminergic loss within the substantia nigra. Manganese may accumulate in the basal ganglia, leading to the abnormal movements. A mutation of the SLC30A10 gene, a manganese efflux transporter necessary for decreasing intracellular Mn, has been linked with the development of this Parkinsonism-like disease. The Lewy bodies typical to PD are not seen in Mn-induced parkinsonism.
Animal experiments have given the opportunity to examine the consequences of manganese overexposure under controlled conditions. In (non-aggressive) rats, manganese induces mouse-killing behavior.
Toxicity
Manganese compounds are less toxic than those of other widespread metals, such as nickel and copper. However, exposure to manganese dusts and fumes should not exceed the ceiling value of 5 mg/m3 even for short periods because of its toxicity level. Manganese poisoning has been linked to impaired motor skills and cognitive disorders.
Neurodegenerative diseases
A protein called DMT1 is the major transporter in manganese absorption from the intestine and may be the major transporter of manganese across the blood–brain barrier. DMT1 also transports inhaled manganese across the nasal epithelium. The proposed mechanism for manganese toxicity is that dysregulation leads to oxidative stress, mitochondrial dysfunction, glutamate-mediated excitotoxicity, and aggregation of proteins.
See also
Manganese exporter, membrane transport protein
List of countries by manganese production
Parkerizing
References
Sources
External links
National Pollutant Inventory – Manganese and compounds Fact Sheet
International Manganese Institute
NIOSH Manganese Topic Page
Manganese at The Periodic Table of Videos (University of Nottingham)
All about Manganese Dendrites
Electric Arc Furnace (EAF) Slag
Chemical elements
Transition metals
Deoxidizers
Chemical hazards
Dietary minerals
Reducing agents
Chemical elements with body-centered cubic structure
Native element minerals | Manganese | [
"Physics",
"Chemistry",
"Materials_science"
] | 7,802 | [
"Chemical elements",
"Redox",
"Deoxidizers",
"Metallurgy",
"Reducing agents",
"Chemical hazards",
"Atoms",
"Matter"
] |
19,053 | https://en.wikipedia.org/wiki/Mineral | In geology and mineralogy, a mineral or mineral species is, broadly speaking, a solid substance with a fairly well-defined chemical composition and a specific crystal structure that occurs naturally in pure form.
The geological definition of mineral normally excludes compounds that occur only in living organisms. However, some minerals are often biogenic (such as calcite) or organic compounds in the sense of chemistry (such as mellite). Moreover, living organisms often synthesize inorganic minerals (such as hydroxylapatite) that also occur in rocks.
The concept of mineral is distinct from rock, which is any bulk solid geologic material that is relatively homogeneous at a large enough scale. A rock may consist of one type of mineral or may be an aggregate of two or more different types of minerals, spatially segregated into distinct phases.
Some natural solid substances without a definite crystalline structure, such as opal or obsidian, are more properly called mineraloids. If a chemical compound occurs naturally with different crystal structures, each structure is considered a different mineral species. Thus, for example, quartz and stishovite are two different minerals consisting of the same compound, silicon dioxide.
The International Mineralogical Association (IMA) is the generally recognized standard body for the definition and nomenclature of mineral species. The IMA currently recognizes about 6,100 official mineral species.
The chemical composition of a named mineral species may vary somewhat due to the inclusion of small amounts of impurities. Specific varieties of a species sometimes have conventional or official names of their own. For example, amethyst is a purple variety of the mineral species quartz. Some mineral species can have variable proportions of two or more chemical elements that occupy equivalent positions in the mineral's structure; for example, the formula of mackinawite is given as , meaning , where x is a variable number between 0 and 9. Sometimes a mineral with variable composition is split into separate species, more or less arbitrarily, forming a mineral group; that is the case of the silicates , the olivine group.
Besides the essential chemical composition and crystal structure, the description of a mineral species usually includes its common physical properties such as habit, hardness, lustre, diaphaneity, colour, streak, tenacity, cleavage, fracture, system, zoning, parting, specific gravity, magnetism, fluorescence, radioactivity, as well as its taste or smell and its reaction to acid.
Minerals are classified by key chemical constituents; the two dominant systems are the Dana classification and the Strunz classification. Silicate minerals comprise approximately 90% of the Earth's crust. Other important mineral groups include the native elements (made up of a single pure element) and compounds (combinations of multiple elements) namely sulfides (e.g. Galena PbS), oxides (e.g. quartz SiO2), halides (e.g. rock salt NaCl), carbonates (e.g. calcite CaCO3), sulfates (e.g. gypsum CaSO4·2H2O), silicates (e.g. orthoclase KAlSi3O8), molybdates (e.g. wulfenite PbMoO4) and phosphates (e.g. pyromorphite Pb5(PO4)3Cl).
Definitions
International Mineralogical Association
The International Mineralogical Association has established the following requirements for a substance to be considered a distinct mineral:
It must be a naturally occurring substance formed by natural geological processes, on Earth or other extraterrestrial bodies. This excludes compounds directly and exclusively generated by human activities (anthropogenic) or in living beings (biogenic), such as tungsten carbide, urinary calculi, calcium oxalate crystals in plant tissues, and seashells. However, substances with such origins may qualify if geological processes were involved in their genesis (as is the case of evenkite, derived from plant material; or taranakite, from bat guano; or alpersite, from mine tailings). Hypothetical substances are also excluded, even if they are predicted to occur in inaccessible natural environments like the Earth's core or other planets.
It must be a solid substance in its natural occurrence. A major exception to this rule is native mercury: it is still classified as a mineral by the IMA, even though it crystallizes only below −39 °C, because it was included before the current rules were established. Water and carbon dioxide are not considered minerals, even though they are often found as inclusions in other minerals; but water ice is considered a mineral.
It must have a well-defined crystallographic structure; or, more generally, an ordered atomic arrangement. This property implies several macroscopic physical properties, such as crystal form, hardness, and cleavage. It excludes ozokerite, limonite, obsidian and many other amorphous (non-crystalline) materials that occur in geologic contexts.
It must have a fairly well defined chemical composition. However, certain crystalline substances with a fixed structure but variable composition may be considered single mineral species. A common class of examples are solid solutions such as mackinawite, (Fe, Ni)9S8, which is mostly a ferrous sulfide with a significant fraction of iron atoms replaced by nickel atoms. Other examples include layered crystals with variable layer stacking, or crystals that differ only in the regular arrangement of vacancies and substitutions. On the other hand, some substances that have a continuous series of compositions, may be arbitrarily split into several minerals. The typical example is the olivine group (Mg, Fe)2SiO4, whose magnesium-rich and iron-rich end-members are considered separate minerals (forsterite and fayalite).
The details of these rules are somewhat controversial. For instance, there have been several recent proposals to classify amorphous substances as minerals, but they have not been accepted by the IMA.
The IMA is also reluctant to accept minerals that occur naturally only in the form of nanoparticles a few hundred atoms across, but has not defined a minimum crystal size.
Some authors require the material to be a stable or metastable solid at room temperature (25 °C). However, the IMA only requires that the substance be stable enough for its structure and composition to be well-determined. For example, it recognizes meridianiite (a naturally occurring hydrate of magnesium sulfate) as a mineral, even though it is formed and stable only below 2 °C.
About 6,100 mineral species are approved by the IMA. They are most commonly named after a person, followed by discovery location; names based on chemical composition or physical properties are the two other major groups of mineral name etymologies. Most names end in "-ite"; the exceptions are usually names that were well-established before the organization of mineralogy as a discipline, for example galena and diamond.
Biogenic minerals
A topic of contention among geologists and mineralogists has been the IMA's decision to exclude biogenic crystalline substances. For example, Lowenstam (1981) stated that "organisms are capable of forming a diverse array of minerals, some of which cannot be formed inorganically in the biosphere."
Skinner (2005) views all solids as potential minerals and includes biominerals in the mineral kingdom, which are those that are created by the metabolic activities of organisms. Skinner expanded the previous definition of a mineral to classify "element or compound, amorphous or crystalline, formed through biogeochemical processes," as a mineral.
Recent advances in high-resolution genetics and X-ray absorption spectroscopy are providing revelations on the biogeochemical relations between microorganisms and minerals that may shed new light on this question. For example, the IMA-commissioned "Working Group on Environmental Mineralogy and Geochemistry " deals with minerals in the hydrosphere, atmosphere, and biosphere. The group's scope includes mineral-forming microorganisms, which exist on nearly every rock, soil, and particle surface spanning the globe to depths of at least 1600 metres below the sea floor and 70 kilometres into the stratosphere (possibly entering the mesosphere).
Biogeochemical cycles have contributed to the formation of minerals for billions of years. Microorganisms can precipitate metals from solution, contributing to the formation of ore deposits. They can also catalyze the dissolution of minerals.
Prior to the International Mineralogical Association's listing, over 60 biominerals had been discovered, named, and published. These minerals (a sub-set tabulated in Lowenstam (1981)) are considered minerals proper according to Skinner's (2005) definition. These biominerals are not listed in the International Mineral Association official list of mineral names; however, many of these biomineral representatives are distributed amongst the 78 mineral classes listed in the Dana classification scheme.
Skinner's (2005) definition of a mineral takes this matter into account by stating that a mineral can be crystalline or amorphous. Although biominerals are not the most common form of minerals, they help to define the limits of what constitutes a mineral proper. Nickel's (1995) formal definition explicitly mentioned crystallinity as a key to defining a substance as a mineral. A 2011 article defined icosahedrite, an aluminium-iron-copper alloy, as a mineral; named for its unique natural icosahedral symmetry, it is a quasicrystal. Unlike a true crystal, quasicrystals are ordered but not periodic.
Rocks, ores, and gems
A rock is an aggregate of one or more minerals or mineraloids. Some rocks, such as limestone or quartzite, are composed primarily of one mineral – calcite or aragonite in the case of limestone, and quartz in the latter case. Other rocks can be defined by relative abundances of key (essential) minerals; a granite is defined by proportions of quartz, alkali feldspar, and plagioclase feldspar. The other minerals in the rock are termed accessory minerals, and do not greatly affect the bulk composition of the rock. Rocks can also be composed entirely of non-mineral material; coal is a sedimentary rock composed primarily of organically derived carbon.
In rocks, some mineral species and groups are much more abundant than others; these are termed the rock-forming minerals. The major examples of these are quartz, the feldspars, the micas, the amphiboles, the pyroxenes, the olivines, and calcite; except for the last one, all of these minerals are silicates. Overall, around 150 minerals are considered particularly important, whether in terms of their abundance or aesthetic value in terms of collecting.
Commercially valuable minerals and rocks, other than gemstones, metal ores, or mineral fuels, are referred to as industrial minerals. For example, muscovite, a white mica, can be used for windows (sometimes referred to as isinglass), as a filler, or as an insulator.
Ores are minerals that have a high concentration of a certain element, typically a metal. Examples are cinnabar (HgS), an ore of mercury; sphalerite (ZnS), an ore of zinc; cassiterite (SnO2), an ore of tin; and colemanite, an ore of boron.
Gems are minerals with an ornamental value, and are distinguished from non-gems by their beauty, durability, and usually, rarity. There are about 20 mineral species that qualify as gem minerals, which constitute about 35 of the most common gemstones. Gem minerals are often present in several varieties, and so one mineral can account for several different gemstones; for example, ruby and sapphire are both corundum, Al2O3.
Etymology
The first known use of the word "mineral" in the English language (Middle English) was in the 15th century. The word came from Medieval Latin minerale, from minera, mine, ore.
The word "species" comes from the Latin species, "a particular sort, kind, or type with distinct look, or appearance".
Chemistry
The abundance and diversity of minerals is controlled directly by their chemistry, in turn dependent on elemental abundances in the Earth. The majority of minerals observed are derived from the Earth's crust. Eight elements account for most of the key components of minerals, due to their abundance in the crust. These eight elements, summing to over 98% of the crust by weight, are, in order of decreasing abundance: oxygen, silicon, aluminium, iron, magnesium, calcium, sodium and potassium. Oxygen and silicon are by far the two most important – oxygen composes 47% of the crust by weight, and silicon accounts for 28%.
The minerals that form are those that are most stable at the temperature and pressure of formation, within the limits imposed by the bulk chemistry of the parent body. For example, in most igneous rocks, the aluminium and alkali metals (sodium and potassium) that are present are primarily found in combination with oxygen, silicon, and calcium as feldspar minerals. However, if the rock is unusually rich in alkali metals, there will not be enough aluminium to combine with all the sodium as feldspar, and the excess sodium will form sodic amphiboles such as riebeckite. If the aluminium abundance is unusually high, the excess aluminium will form muscovite or other aluminium-rich minerals. If silicon is deficient, part of the feldspar will be replaced by feldspathoid minerals. Precise predictions of which minerals will be present in a rock of a particular composition formed at a particular temperature and pressure require complex thermodynamic calculations. However, approximate estimates may be made using relatively simple rules of thumb, such as the CIPW norm, which gives reasonable estimates for volcanic rock formed from dry magma.
The chemical composition may vary between end member species of a solid solution series. For example, the plagioclase feldspars comprise a continuous series from sodium-rich end member albite (NaAlSi3O8) to calcium-rich anorthite (CaAl2Si2O8) with four recognized intermediate varieties between them (given in order from sodium- to calcium-rich): oligoclase, andesine, labradorite, and bytownite. Other examples of series include the olivine series of magnesium-rich forsterite and iron-rich fayalite, and the wolframite series of manganese-rich hübnerite and iron-rich ferberite.
Chemical substitution and coordination polyhedra explain this common feature of minerals. In nature, minerals are not pure substances, and are contaminated by whatever other elements are present in the given chemical system. As a result, it is possible for one element to be substituted for another. Chemical substitution will occur between ions of a similar size and charge; for example, K+ will not substitute for Si4+ because of chemical and structural incompatibilities caused by a big difference in size and charge. A common example of chemical substitution is that of Si4+ by Al3+, which are close in charge, size, and abundance in the crust. In the example of plagioclase, there are three cases of substitution. Feldspars are all framework silicates, which have a silicon-to-oxygen ratio of 1:2, and the space for other elements is given by the substitution of Si4+ by Al3+ to give a base unit of [AlSi3O8]−; without the substitution, the formula would be charge-balanced as SiO2, giving quartz. The significance of this structural property will be explained further by coordination polyhedra. The second substitution occurs between Na+ and Ca2+; however, the difference in charge has to be accounted for by making a second substitution of Si4+ by Al3+.
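The coupled substitution can be written schematically (a standard shorthand, added here for illustration) to show that the total charge is conserved:
$$\mathrm{Na^{+} + Si^{4+} \;\rightleftharpoons\; Ca^{2+} + Al^{3+}}, \qquad (+1) + (+4) = (+2) + (+3) = +5.$$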
Coordination polyhedra are geometric representations of how a cation is surrounded by an anion. In mineralogy, coordination polyhedra are usually considered in terms of oxygen, due to its abundance in the crust. The base unit of silicate minerals is the silica tetrahedron – one Si4+ surrounded by four O2−. An alternate way of describing the coordination of the silicate is by a number: in the case of the silica tetrahedron, the silicon is said to have a coordination number of 4. Various cations have a specific range of possible coordination numbers; for silicon, it is almost always 4, except for very high-pressure minerals where the compound is compressed such that silicon is in six-fold (octahedral) coordination with oxygen. Bigger cations have bigger coordination numbers because of the increase in relative size as compared to oxygen (the last orbital subshell of heavier atoms is different too). Changes in coordination numbers lead to physical and mineralogical differences; for example, at high pressure, such as in the mantle, many minerals, especially silicates such as olivine and garnet, will change to a perovskite structure, where silicon is in octahedral coordination. Other examples are the aluminosilicates kyanite, andalusite, and sillimanite (polymorphs, since they share the formula Al2SiO5), which differ by the coordination number of the Al3+; these minerals transition from one another as a response to changes in pressure and temperature. In the case of silicate materials, the substitution of Si4+ by Al3+ allows for a variety of minerals because of the need to balance charges.
Because the eight most common elements make up over 98% of the Earth's crust, the small quantities of the other elements that are typically present are substituted into the common rock-forming minerals. The distinctive minerals of most elements are quite rare, being found only where these elements have been concentrated by geological processes, such as hydrothermal circulation, to the point where they can no longer be accommodated in common minerals.
Changes in temperature, pressure, and composition alter the mineralogy of a rock sample. Changes in composition can be caused by processes such as weathering or metasomatism (hydrothermal alteration). Changes in temperature and pressure occur when the host rock undergoes tectonic or magmatic movement into differing physical regimes. Changes in thermodynamic conditions make it favourable for mineral assemblages to react with each other to produce new minerals; as such, it is possible for two rocks to have an identical or a very similar bulk rock chemistry without having a similar mineralogy. This process of mineralogical alteration is related to the rock cycle. An example of a series of mineral reactions is illustrated as follows.
Orthoclase feldspar (KAlSi3O8) is a mineral commonly found in granite, a plutonic igneous rock. When exposed to weathering, it reacts to form kaolinite (Al2Si2O5(OH)4, a sedimentary mineral, and silicic acid):
2 KAlSi3O8 + 5 H2O + 2 H+ → Al2Si2O5(OH)4 + 4 H2SiO3 + 2 K+
Under low-grade metamorphic conditions, kaolinite reacts with quartz to form pyrophyllite (Al2Si4O10(OH)2):
Al2Si2O5(OH)4 + 2 SiO2 → Al2Si4O10(OH)2 + H2O
As metamorphic grade increases, the pyrophyllite reacts to form kyanite and quartz:
Al2Si4O10(OH)2 → Al2SiO5 + 3 SiO2 + H2O
Alternatively, a mineral may change its crystal structure as a consequence of changes in temperature and pressure without reacting. For example, quartz will change into a variety of its SiO2 polymorphs, such as tridymite and cristobalite at high temperatures, and coesite at high pressures.
Physical properties
Classifying minerals ranges from simple to difficult. A mineral can be identified by several physical properties, some of them being sufficient for full identification without equivocation. In other cases, minerals can only be classified by more complex optical, chemical or X-ray diffraction analysis; these methods, however, can be costly and time-consuming. Physical properties applied for classification include crystal structure and habit, hardness, lustre, diaphaneity, colour, streak, cleavage and fracture, and specific gravity. Other less general tests include fluorescence, phosphorescence, magnetism, radioactivity, tenacity (response to mechanical induced changes of shape or form), piezoelectricity and reactivity to dilute acids.
Crystal structure and habit
Crystal structure results from the orderly geometric spatial arrangement of atoms in the internal structure of a mineral. This crystal structure is based on regular internal atomic or ionic arrangement that is often expressed in the geometric form that the crystal takes. Even when the mineral grains are too small to see or are irregularly shaped, the underlying crystal structure is always periodic and can be determined by X-ray diffraction. Minerals are typically described by their symmetry content. Crystals are restricted to 32 point groups, which differ by their symmetry. These groups are classified in turn into broader categories, the most encompassing of these being the six crystal families.
These families can be described by the relative lengths of the three crystallographic axes, and the angles between them; these relationships correspond to the symmetry operations that define the narrower point groups. They are summarized below; a, b, and c represent the axes, and α, β, γ represent the angle opposite the respective crystallographic axis (e.g. α is the angle opposite the a-axis, viz. the angle between the b and c axes):
Isometric (cubic): a = b = c; α = β = γ = 90°
Tetragonal: a = b ≠ c; α = β = γ = 90°
Orthorhombic: a ≠ b ≠ c; α = β = γ = 90°
Hexagonal: a = b ≠ c; α = β = 90°, γ = 120°
Monoclinic: a ≠ b ≠ c; α = γ = 90°, β ≠ 90°
Triclinic: a ≠ b ≠ c; α ≠ β ≠ γ
The hexagonal crystal family is also split into two crystal systems – the trigonal, which has a three-fold axis of symmetry, and the hexagonal, which has a six-fold axis of symmetry.
Chemistry and crystal structure together define a mineral. With a restriction to 32 point groups, minerals of different chemistry may have identical crystal structure. For example, halite (NaCl), galena (PbS), and periclase (MgO) all belong to the hexaoctahedral point group (isometric family), as they have a similar stoichiometry between their different constituent elements. In contrast, polymorphs are groupings of minerals that share a chemical formula but have a different structure. For example, pyrite and marcasite, both iron sulfides, have the formula FeS2; however, the former is isometric while the latter is orthorhombic. This polymorphism extends to other sulfides with the generic AX2 formula; these two groups are collectively known as the pyrite and marcasite groups.
Polymorphism can extend beyond pure symmetry content. The aluminosilicates are a group of three minerals – kyanite, andalusite, and sillimanite – which share the chemical formula Al2SiO5. Kyanite is triclinic, while andalusite and sillimanite are both orthorhombic and belong to the dipyramidal point group. These differences arise from how aluminium is coordinated within the crystal structure. In all three minerals, one aluminium ion is always in six-fold coordination with oxygen. Silicon, as a general rule, is in four-fold coordination in all minerals; an exception is a case like stishovite (SiO2, an ultra-high pressure quartz polymorph with rutile structure). In kyanite, the second aluminium is in six-fold coordination; its chemical formula can be expressed as Al[6]Al[6]SiO5, to reflect its crystal structure. Andalusite has the second aluminium in five-fold coordination (Al[6]Al[5]SiO5) and sillimanite has it in four-fold coordination (Al[6]Al[4]SiO5).
Differences in crystal structure and chemistry greatly influence other physical properties of the mineral. The carbon allotropes diamond and graphite have vastly different properties; diamond is the hardest natural substance, has an adamantine lustre, and belongs to the isometric crystal family, whereas graphite is very soft, has a greasy lustre, and crystallises in the hexagonal family. This difference is accounted for by differences in bonding. In diamond, the carbons are in sp3 hybrid orbitals, which means they form a framework where each carbon is covalently bonded to four neighbours in a tetrahedral fashion; on the other hand, graphite is composed of sheets of carbons in sp2 hybrid orbitals, where each carbon is bonded covalently to only three others. These sheets are held together by much weaker van der Waals forces, and this discrepancy translates to large macroscopic differences.
Twinning is the intergrowth of two or more crystals of a single mineral species. The geometry of the twinning is controlled by the mineral's symmetry. As a result, there are several types of twins, including contact twins, reticulated twins, geniculated twins, penetration twins, cyclic twins, and polysynthetic twins. Contact, or simple twins, consist of two crystals joined at a plane; this type of twinning is common in spinel. Reticulated twins, common in rutile, are interlocking crystals resembling netting. Geniculated twins have a bend in the middle that is caused by the start of the twin. Penetration twins consist of two single crystals that have grown into each other; examples of this twinning include cross-shaped staurolite twins and Carlsbad twinning in orthoclase. Cyclic twins are caused by repeated twinning around a rotation axis. This type of twinning occurs around three, four, five, six, or eight-fold axes, and the corresponding patterns are called threelings, fourlings, fivelings, sixlings, and eightlings. Sixlings are common in aragonite. Polysynthetic twins are similar to cyclic twins through the presence of repetitive twinning; however, instead of occurring around a rotational axis, polysynthetic twinning occurs along parallel planes, usually on a microscopic scale.
Crystal habit refers to the overall shape of the aggregate crystal of any mineral. Several terms are used to describe this property. Common habits include acicular, which describes needle-like crystals as in natrolite; dendritic (tree-pattern) is common in native copper or native gold with a groundmass (matrix); equant, which is typical of garnet; prismatic (elongated in one direction) as seen in kunzite or stibnite; botryoidal (like a bunch of grapes) seen in chalcedony; fibrous, which has fibre-like crystals as seen in wollastonite; tabular, which differs from bladed habit in that the former is platy whereas the latter has a defined elongation as seen in muscovite; and massive, which has no definite shape as seen in carnallite. Related to crystal form, the quality of crystal faces is diagnostic of some minerals, especially with a petrographic microscope. Euhedral crystals have a defined external shape, while anhedral crystals do not; those intermediate forms are termed subhedral.
Hardness
The hardness of a mineral defines how much it can resist scratching or indentation. This physical property is controlled by the chemical composition and crystalline structure of a mineral.
The most commonly used scale of measurement is the ordinal Mohs hardness scale, which measures resistance to scratching. Defined by ten indicators, a mineral with a higher index scratches those below it. The scale ranges from talc, a phyllosilicate, to diamond, a carbon polymorph that is the hardest natural material. The ten reference minerals, in order of increasing hardness, are talc (1), gypsum (2), calcite (3), fluorite (4), apatite (5), orthoclase (6), quartz (7), topaz (8), corundum (9), and diamond (10).
A mineral's hardness is a function of its structure. Hardness is not necessarily constant for all crystallographic directions; crystallographic weakness renders some directions softer than others. An example of this hardness variability exists in kyanite, which has a Mohs hardness of 5 parallel to [001] but 7 parallel to [100].
Other scales include the following:
Shore's hardness test, which measures the endurance of a mineral based on the indentation of a spring-loaded device.
The Rockwell scale
The Vickers hardness test
The Brinell scale
Lustre and diaphaneity
Lustre indicates how light reflects from the mineral's surface, with regard to its quality and intensity. There are numerous qualitative terms used to describe this property, which are split into metallic and non-metallic categories. Metallic and sub-metallic minerals have high reflectivity like metal; examples of minerals with this lustre are galena and pyrite. Non-metallic lustres include: adamantine, such as in diamond; vitreous, which is a glassy lustre very common in silicate minerals; pearly, such as in talc and apophyllite; resinous, such as members of the garnet group; and silky, which is common in fibrous minerals such as asbestiform chrysotile.
The diaphaneity of a mineral describes the ability of light to pass through it. Transparent minerals do not diminish the intensity of light passing through them. An example of a transparent mineral is muscovite (potassium mica); some varieties are sufficiently clear to have been used for windows. Translucent minerals allow some light to pass, but less than those that are transparent. Jadeite and nephrite (mineral forms of jade) are examples of minerals with this property. Minerals that do not allow light to pass are called opaque.
The diaphaneity of a mineral depends on the thickness of the sample. When a mineral is sufficiently thin (e.g., in a thin section for petrography), it may become transparent even if that property is not seen in a hand sample. In contrast, some minerals, such as hematite or pyrite, are opaque even in thin-section.
Colour and streak
Colour is the most obvious property of a mineral, but it is often non-diagnostic. It is caused by electromagnetic radiation interacting with electrons (except in the case of incandescence, which does not apply to minerals). Two broad classes of elements (idiochromatic and allochromatic) are defined with regards to their contribution to a mineral's colour: Idiochromatic elements are essential to a mineral's composition; their contribution to a mineral's colour is diagnostic. Examples of such minerals are malachite (green) and azurite (blue). In contrast, allochromatic elements in minerals are present in trace amounts as impurities. An example of such a mineral would be the ruby and sapphire varieties of the mineral corundum.
The colours of pseudochromatic minerals are the result of interference of light waves. Examples include labradorite and bornite.
In addition to simple body colour, minerals can have various other distinctive optical properties, such as play of colours, asterism, chatoyancy, iridescence, tarnish, and pleochroism. Several of these properties involve variability in colour. Play of colour, such as in opal, results in the sample reflecting different colours as it is turned, while pleochroism describes the change in colour as light passes through a mineral in a different orientation. Iridescence is a variety of the play of colours where light scatters off a coating on the surface of a crystal, cleavage planes, or off layers having minor gradations in chemistry. In contrast, the play of colours in opal is caused by light refracting from ordered microscopic silica spheres within its physical structure. Chatoyancy ("cat's eye") is the wavy banding of colour that is observed as the sample is rotated; asterism, a variety of chatoyancy, gives the appearance of a star on the mineral grain. The latter property is particularly common in gem-quality corundum.
The streak of a mineral refers to the colour of a mineral in powdered form, which may or may not be identical to its body colour. The most common way of testing this property is done with a streak plate, which is made out of porcelain and coloured either white or black. The streak of a mineral is independent of trace elements or any weathering surface. A common example of this property is illustrated with hematite, which is coloured black, silver or red in hand sample, but has a cherry-red to reddish-brown streak; or with chalcopyrite, which is brassy golden in colour and leaves a black streak. Streak is more often distinctive for metallic minerals, in contrast to non-metallic minerals whose body colour is created by allochromatic elements. Streak testing is constrained by the hardness of the mineral, as those harder than 7 powder the streak plate instead.
Cleavage, parting, fracture, and tenacity
By definition, minerals have a characteristic atomic arrangement. Weakness in this crystalline structure causes planes of weakness, and the breakage of a mineral along such planes is termed cleavage. The quality of cleavage can be described based on how cleanly and easily the mineral breaks; common descriptors, in order of decreasing quality, are "perfect", "good", "distinct", and "poor". In particularly transparent minerals, or in thin-section, cleavage can be seen as a series of parallel lines marking the planar surfaces when viewed from the side. Cleavage is not a universal property among minerals; for example, quartz, consisting of extensively interconnected silica tetrahedra, does not have a crystallographic weakness which would allow it to cleave. In contrast, micas, which have perfect basal cleavage, consist of sheets of silica tetrahedra which are very weakly held together.
As cleavage is a function of crystallography, there are a variety of cleavage types. Cleavage occurs typically in either one, two, three, four, or six directions. Basal cleavage in one direction is a distinctive property of the micas. Two-directional cleavage is described as prismatic, and occurs in minerals such as the amphiboles and pyroxenes. Minerals such as galena or halite have cubic (or isometric) cleavage in three directions, at 90°; when three directions of cleavage are present, but not at 90°, such as in calcite or rhodochrosite, it is termed rhombohedral cleavage. Octahedral cleavage (four directions) is present in fluorite and diamond, and sphalerite has six-directional dodecahedral cleavage.
Minerals with many cleavages might not break equally well in all of the directions; for example, calcite has good cleavage in three directions, but gypsum has perfect cleavage in one direction, and poor cleavage in two other directions. Angles between cleavage planes vary between minerals. For example, as the amphiboles are double-chain silicates and the pyroxenes are single-chain silicates, the angle between their cleavage planes is different. The pyroxenes cleave in two directions at approximately 90°, whereas the amphiboles distinctively cleave in two directions separated by approximately 120° and 60°. The cleavage angles can be measured with a contact goniometer, which is similar to a protractor.
Parting, sometimes called "false cleavage", is similar in appearance to cleavage but is instead produced by structural defects in the mineral, as opposed to systematic weakness. Parting varies from crystal to crystal of a mineral, whereas all crystals of a given mineral will cleave if the atomic structure allows for that property. In general, parting is caused by some stress applied to a crystal. The sources of the stresses include deformation (e.g. an increase in pressure), exsolution, or twinning. Minerals that often display parting include the pyroxenes, hematite, magnetite, and corundum.
When a mineral is broken in a direction that does not correspond to a plane of cleavage, it is termed to have been fractured. There are several types of uneven fracture. The classic example is conchoidal fracture, like that of quartz; rounded surfaces are created, which are marked by smooth curved lines. This type of fracture occurs only in very homogeneous minerals. Other types of fracture are fibrous, splintery, and hackly. The latter describes a break along a rough, jagged surface; an example of this property is found in native copper.
Tenacity is related to both cleavage and fracture. Whereas fracture and cleavage describe the surfaces that are created when a mineral is broken, tenacity describes how resistant a mineral is to such breaking. Minerals can be described as brittle, ductile, malleable, sectile, flexible, or elastic.
Specific gravity
Specific gravity numerically describes the density of a mineral. The dimensions of density are mass divided by volume, with units of kg/m3 or g/cm3. Specific gravity is defined as the density of the mineral divided by the density of water at 4 °C and thus is a dimensionless quantity, identical in all unit systems. It can be measured as the quotient of the weight of the sample in air and the difference between its weight in air and its weight in water. Among most minerals, this property is not diagnostic. Rock-forming minerals – typically silicates or occasionally carbonates – have a specific gravity of 2.5–3.5.
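As a worked illustration of the measurement just described, the following minimal sketch computes specific gravity as the weight in air divided by the loss of weight in water; the weights used are hypothetical example values, not data from the text.

```python
def specific_gravity(weight_in_air: float, weight_in_water: float) -> float:
    """Specific gravity from an Archimedes-style weighing.

    The loss of weight in water equals the weight of the displaced water,
    so the quotient is the sample's density relative to water (dimensionless).
    """
    return weight_in_air / (weight_in_air - weight_in_water)

# Hypothetical weighing of a quartz pebble (illustrative values only):
# 26.5 g in air and 16.5 g suspended in water give SG = 26.5 / 10.0 = 2.65,
# within the 2.5-3.5 range typical of rock-forming silicates.
print(specific_gravity(26.5, 16.5))  # 2.65
```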
High specific gravity is a diagnostic property of a mineral. A variation in chemistry (and consequently, mineral class) correlates to a change in specific gravity. Among more common minerals, oxides and sulfides tend to have a higher specific gravity as they include elements with higher atomic mass. A generalization is that minerals with metallic or adamantine lustre tend to have higher specific gravities than those having a non-metallic to dull lustre. For example, hematite, Fe2O3, has a specific gravity of 5.26 while galena, PbS, has a specific gravity of 7.2–7.6, which is a result of their high iron and lead content, respectively. A very high specific gravity is characteristic of native metals; for example, kamacite, an iron-nickel alloy common in iron meteorites, has a specific gravity of 7.9, and gold has an observed specific gravity between 15 and 19.3.
Other properties
Other properties can be used to diagnose minerals. These are less general, and apply to specific minerals.
Dropping dilute acid (often 10% HCl) onto a mineral aids in distinguishing carbonates from other mineral classes. The acid reacts with the carbonate ([CO3]2−) group, which causes the affected area to effervesce, giving off carbon dioxide gas. This test can be further expanded to test the mineral in its original crystal form or powdered form. An example of this test is done when distinguishing calcite from dolomite, especially within the rocks (limestone and dolomite respectively). Calcite immediately effervesces in acid, whereas acid must be applied to powdered dolomite (often to a scratched surface in a rock), for it to effervesce. Zeolite minerals will not effervesce in acid; instead, they become frosted after 5–10 minutes, and if left in acid for a day, they dissolve or become a silica gel.
Magnetism is a very conspicuous property of a few minerals. Among common minerals, magnetite exhibits this property strongly, and magnetism is also present, albeit not as strongly, in pyrrhotite and ilmenite. Some minerals exhibit electrical properties – for example, quartz is piezoelectric – but electrical properties are rarely used as diagnostic criteria for minerals because of incomplete data and natural variation.
Minerals can also be tested for taste or smell. Halite, NaCl, is table salt; its potassium-bearing counterpart, sylvite, has a pronounced bitter taste. Sulfides have a characteristic smell, especially as samples are fractured, reacting, or powdered.
Radioactivity is a rare property found in minerals containing radioactive elements. The radioactive elements could be a defining constituent, such as uranium in uraninite, autunite, and carnotite, or present as trace impurities, as in zircon. The decay of a radioactive element damages the mineral crystal structure rendering it locally amorphous (metamict state); the optical result, termed a radioactive halo or pleochroic halo, is observable with various techniques, such as thin-section petrography.
Classification
Earliest classifications
In 315 BCE, Theophrastus presented his classification of minerals in his treatise On Stones. His classification was influenced by the ideas of his teachers Plato and Aristotle. Theophrastus classified minerals as stones, earths or metals.
Georgius Agricola's classification of minerals in his book De Natura Fossilium, published in 1546, divided minerals into three types of substance: simple (stones, earths, metals, and congealed juices), compound (intimately mixed) and composite (separable).
Linnaeus
An early classification of minerals was given by Carl Linnaeus in his seminal 1735 book Systema Naturae. He divided the natural world into three kingdoms – plants, animals, and minerals – and classified each with the same hierarchy. In descending order, these were Phylum, Class, Order, Family, Tribe, Genus, and Species. However, while his system was justified by Charles Darwin's theory of species formation and has been largely adopted and expanded by biologists in the following centuries (who still use his Greek- and Latin-based binomial naming scheme), it had little success among mineralogists (although each distinct mineral is still formally referred to as a mineral species).
Modern classification
Minerals are classified by variety, species, series and group, in order of increasing generality. The basic level of definition is that of mineral species, each of which is distinguished from the others by unique chemical and physical properties. For example, quartz is defined by its formula, SiO2, and a specific crystalline structure that distinguishes it from other minerals with the same chemical formula (termed polymorphs). When there exists a range of composition between two mineral species, a mineral series is defined. For example, the biotite series is represented by variable amounts of the endmembers phlogopite, siderophyllite, annite, and eastonite. In contrast, a mineral group is a grouping of mineral species with some common chemical properties that share a crystal structure. The pyroxene group has a common formula of XY(Si,Al)2O6, where X and Y are both cations, with X typically bigger than Y; the pyroxenes are single-chain silicates that crystallize in either the orthorhombic or monoclinic crystal systems. Finally, a mineral variety is a specific type of mineral species that differs by some physical characteristic, such as colour or crystal habit. An example is amethyst, which is a purple variety of quartz.
Two common classifications, Dana and Strunz, are used for minerals; both rely on composition, specifically with regards to important chemical groups, and structure. James Dwight Dana, a leading geologist of his time, first published his System of Mineralogy in 1837; it is now in its eighth edition. The Dana classification assigns a four-part number to a mineral species. Its class number is based on important compositional groups; the type gives the ratio of cations to anions in the mineral, and the last two numbers group minerals by structural similarity within a given type or class. The less commonly used Strunz classification, named for German mineralogist Karl Hugo Strunz, is based on the Dana system, but combines both chemical and structural criteria, the latter with regard to the distribution of chemical bonds.
As the composition of the Earth's crust is dominated by silicon and oxygen, silicates are by far the most important class of minerals in terms of rock formation and diversity. However, non-silicate minerals are of great economic importance, especially as ores. Non-silicate minerals are subdivided into several other classes by their dominant chemistry, which includes native elements, sulfides, halides, oxides and hydroxides, carbonates and nitrates, borates, sulfates, phosphates, and organic compounds. Most non-silicate mineral species are rare (constituting in total 8% of the Earth's crust), although some are relatively common, such as calcite, pyrite, magnetite, and hematite. There are two major structural styles observed in non-silicates: close-packing and silicate-like linked tetrahedra. Close-packed structures are a way to densely pack atoms while minimizing interstitial space. Hexagonal close-packing involves stacking layers where every other layer is the same ("ababab"), whereas cubic close-packing involves stacking groups of three layers ("abcabcabc"). Analogues to linked silica tetrahedra include [SO4]2− (sulfate), [PO4]3− (phosphate), [AsO4]3− (arsenate), and [VO4]3− (vanadate) structures. The non-silicates have great economic importance, as they concentrate elements more than the silicate minerals do.
The largest grouping of minerals by far are the silicates; most rocks are composed of greater than 95% silicate minerals, and over 90% of the Earth's crust is composed of these minerals. The two main constituents of silicates are silicon and oxygen, which are the two most abundant elements in the Earth's crust. Other common elements in silicate minerals correspond to other common elements in the Earth's crust, such as aluminium, magnesium, iron, calcium, sodium, and potassium. Some important rock-forming silicates include the feldspars, quartz, olivines, pyroxenes, amphiboles, garnets, and micas.
Silicates
The base unit of a silicate mineral is the [SiO4]4− tetrahedron. In the vast majority of cases, silicon is in four-fold or tetrahedral coordination with oxygen. In very high-pressure situations, silicon will be in six-fold or octahedral coordination, such as in the perovskite structure or the quartz polymorph stishovite (SiO2). In the latter case, the mineral no longer has a silicate structure, but that of rutile (TiO2), and its associated group, which are simple oxides. These silica tetrahedra are then polymerized to some degree to create various structures, such as one-dimensional chains, two-dimensional sheets, and three-dimensional frameworks. The basic silicate mineral where no polymerization of the tetrahedra has occurred requires other elements to balance out the base 4- charge. In other silicate structures, different combinations of elements are required to balance out the resultant negative charge. It is common for the Si4+ to be substituted by Al3+ because of similarity in ionic radius and charge; in those cases, the [AlO4]5− tetrahedra form the same structures as do the unsubstituted tetrahedra, but their charge-balancing requirements are different.
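As a rough numerical illustration of this charge bookkeeping, the short sketch below sums formal ionic charges; the example formulas (forsterite and albite) are standard textbook cases chosen here for illustration, not drawn from the passage above.

```python
# Formal ionic charges assumed for the ions discussed above.
CHARGE = {"Si": +4, "Al": +3, "O": -2, "Mg": +2, "Na": +1, "Ca": +2, "K": +1}

def net_charge(formula: dict) -> int:
    """Sum of formal ionic charges for a formula written as {ion: count}."""
    return sum(CHARGE[ion] * count for ion, count in formula.items())

# An isolated tetrahedron carries a 4- charge; Al-for-Si substitution deepens it to 5-.
print(net_charge({"Si": 1, "O": 4}))   # -4, i.e. [SiO4]4-
print(net_charge({"Al": 1, "O": 4}))   # -5, i.e. [AlO4]5-

# Charge-balanced minerals sum to zero, e.g. forsterite Mg2SiO4 and albite NaAlSi3O8.
print(net_charge({"Mg": 2, "Si": 1, "O": 4}))           # 0
print(net_charge({"Na": 1, "Al": 1, "Si": 3, "O": 8}))  # 0
```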
The degree of polymerization can be described by both the structure formed and how many tetrahedral corners (or coordinating oxygens) are shared (for aluminium and silicon in tetrahedral sites):
Orthosilicates (or nesosilicates) have no linking of polyhedra; the tetrahedra share no corners.
Disilicates (or sorosilicates) have two tetrahedra sharing one oxygen atom.
Inosilicates are chain silicates; single-chain silicates have two shared corners, whereas double-chain silicates have two or three shared corners.
Phyllosilicates have a sheet structure, which requires three shared oxygens; in the case of double-chain silicates, some tetrahedra must share two corners instead of three, as otherwise a sheet structure would result.
Framework silicates (or tectosilicates) have tetrahedra that share all four corners.
Ring silicates (or cyclosilicates) only need tetrahedra to share two corners to form the cyclical structure.
The silicate subclasses are described below in order of decreasing polymerization.
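The ratios quoted in the subclass descriptions that follow can be derived from simple corner-sharing arithmetic: each shared corner oxygen counts one half towards a given tetrahedron. The minimal sketch below restates the shared-corner counts from the list above purely for illustration.

```python
from fractions import Fraction

def oxygens_per_tetrahedron(shared_corners: Fraction) -> Fraction:
    """Oxygens per tetrahedral cation when `shared_corners` of the four corner
    oxygens are shared with neighbouring tetrahedra (a shared oxygen counts 1/2)."""
    return 4 - shared_corners / 2

cases = {
    "orthosilicates (0 shared)": Fraction(0),
    "sorosilicates (1 shared)": Fraction(1),
    "single chains and rings (2 shared)": Fraction(2),
    "double chains (2.5 shared on average)": Fraction(5, 2),
    "phyllosilicates (3 shared)": Fraction(3),
    "tectosilicates (4 shared)": Fraction(4),
}
for name, shared in cases.items():
    print(name, "-> Si:O = 1 :", oxygens_per_tetrahedron(shared))
# Prints 4, 7/2, 3, 11/4, 5/2 and 2, i.e. the 1:4, 2:7, 1:3, 4:11, 2:5 and 1:2
# ratios quoted for the subclasses below.
```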
Tectosilicates
Tectosilicates, also known as framework silicates, have the highest degree of polymerization. With all corners of a tetrahedron shared, the silicon:oxygen ratio becomes 1:2. Examples are quartz, the feldspars, feldspathoids, and the zeolites. Framework silicates tend to be particularly chemically stable as a result of strong covalent bonds.
Forming 12% of the Earth's crust, quartz (SiO2) is the most abundant mineral species. It is characterized by its high chemical and physical resistivity. Quartz has several polymorphs, including tridymite and cristobalite at high temperatures, high-pressure coesite, and ultra-high pressure stishovite. The latter mineral can only be formed on Earth by meteorite impacts, and its structure has been compressed so much that it has changed from a silicate structure to that of rutile (TiO2). The silica polymorph that is most stable at the Earth's surface is α-quartz. Its counterpart, β-quartz, is present only at high temperatures and pressures (changes to α-quartz below 573 °C at 1 bar). These two polymorphs differ by a "kinking" of bonds; this change in structure gives β-quartz greater symmetry than α-quartz, and they are thus also called high quartz (β) and low quartz (α).
Feldspars are the most abundant group in the Earth's crust, at about 50%. In the feldspars, Al3+ substitutes for Si4+, which creates a charge imbalance that must be accounted for by the addition of cations. The base structure becomes either [AlSi3O8]− or [Al2Si2O8]2−. There are 22 mineral species of feldspars, subdivided into two major subgroups – alkali and plagioclase – and two less common groups – celsian and banalsite. The alkali feldspars are most commonly in a series between potassium-rich orthoclase and sodium-rich albite; in the case of plagioclase, the most common series ranges from albite to calcium-rich anorthite. Crystal twinning is common in feldspars, especially polysynthetic twins in plagioclase and Carlsbad twins in alkali feldspars. If the latter subgroup cools slowly from a melt, it forms exsolution lamellae because the two components – orthoclase and albite – are unstable in solid solution. Exsolution can be on a scale from microscopic to readily observable in hand-sample; perthitic texture forms when Na-rich feldspar exsolves in a K-rich host. The opposite texture (antiperthitic), where K-rich feldspar exsolves in a Na-rich host, is very rare.
Feldspathoids are structurally similar to feldspar, but differ in that they form in Si-deficient conditions, which allows for further substitution by Al3+. As a result, feldspathoids are almost never found in association with quartz. A common example of a feldspathoid is nepheline ((Na, K)AlSiO4); compared to alkali feldspar, nepheline has an Al2O3:SiO2 ratio of 1:2, as opposed to 1:6 in alkali feldspar. Zeolites often have distinctive crystal habits, occurring in needles, plates, or blocky masses. They form in the presence of water at low temperatures and pressures, and have channels and voids in their structure. Zeolites have several industrial applications, especially in waste water treatment.
Phyllosilicates
Phyllosilicates consist of sheets of polymerized tetrahedra. They are bound at three oxygen sites, which gives a characteristic silicon:oxygen ratio of 2:5. Important examples include the mica, chlorite, and the kaolinite-serpentine groups. In addition to the tetrahedra, phyllosilicates have a sheet of octahedra (elements in six-fold coordination by oxygen) that balance out the basic tetrahedra, which have a negative charge (e.g. [Si4O10]4−). These tetrahedra (T) and octahedra (O) sheets are stacked in a variety of combinations to create phyllosilicate layers. Within an octahedral sheet, there are three octahedral sites in a unit structure; however, not all of the sites may be occupied. If only two of the three sites are occupied, the mineral is termed dioctahedral, whereas if all three are occupied it is termed trioctahedral. The layers are weakly bound by van der Waals forces, hydrogen bonds, or sparse ionic bonds, which causes a crystallographic weakness, in turn leading to a prominent basal cleavage among the phyllosilicates.
The kaolinite-serpentine group consists of T-O stacks (the 1:1 clay minerals); their hardness ranges from 2 to 4, as the sheets are held by hydrogen bonds. The 2:1 clay minerals (pyrophyllite-talc) consist of T-O-T stacks, but they are softer (hardness from 1 to 2), as they are instead held together by van der Waals forces. These two groups of minerals are subgrouped by octahedral occupation; specifically, kaolinite and pyrophyllite are dioctahedral whereas serpentine and talc are trioctahedral.
Micas are also T-O-T-stacked phyllosilicates, but differ from the other T-O-T and T-O-stacked subclass members in that they incorporate aluminium into the tetrahedral sheets (clay minerals have Al3+ in octahedral sites). Common examples of micas are muscovite and the biotite series. Mica T-O-T layers are bonded together by metal ions, giving them a greater hardness than other phyllosilicate minerals, though they retain perfect basal cleavage. The chlorite group is related to the mica group, but has a brucite-like (Mg(OH)2) layer between the T-O-T stacks.
Because of their chemical structure, phyllosilicates typically have flexible, elastic, transparent layers that are electrical insulators and can be split into very thin flakes. Micas can be used in electronics as insulators, in construction, as optical filler, or even cosmetics. Chrysotile, a species of serpentine, is the most common mineral species in industrial asbestos, as it is less dangerous in terms of health than the amphibole asbestos.
Inosilicates
Inosilicates consist of tetrahedra repeatedly bonded in chains. These chains can be single, where a tetrahedron is bound to two others to form a continuous chain; alternatively, two chains can be merged to create double-chain silicates. Single-chain silicates have a silicon:oxygen ratio of 1:3 (e.g. [Si2O6]4−), whereas the double-chain variety has a ratio of 4:11, e.g. [Si8O22]12−. Inosilicates contain two important rock-forming mineral groups; single-chain silicates are most commonly pyroxenes, while double-chain silicates are often amphiboles. Higher-order chains exist (e.g. three-member, four-member, five-member chains, etc.) but they are rare.
The pyroxene group consists of 21 mineral species. Pyroxenes have a general structure formula of XY(Si2O6), where X is an octahedral site, while Y can vary in coordination number from six to eight. Most varieties of pyroxene consist of permutations of Ca2+, Fe2+ and Mg2+ to balance the negative charge on the backbone. Pyroxenes are common in the Earth's crust (about 10%) and are a key constituent of mafic igneous rocks.
Amphiboles have great variability in chemistry, described variously as a "mineralogical garbage can" or a "mineralogical shark swimming in a sea of elements". The backbone of the amphiboles is the [Si8O22]12−; it is balanced by cations in three possible positions, although the third position is not always used, and one element can occupy both remaining ones. Finally, the amphiboles are usually hydrated, that is, they have a hydroxyl group ([OH]−), although it can be replaced by a fluoride, a chloride, or an oxide ion. Because of the variable chemistry, there are over 80 species of amphibole, although variations, as in the pyroxenes, most commonly involve mixtures of Ca2+, Fe2+ and Mg2+. Several amphibole mineral species can have an asbestiform crystal habit. These asbestos minerals form long, thin, flexible, and strong fibres, which are electrical insulators, chemically inert and heat-resistant; as such, they have several applications, especially in construction materials. However, the asbestos minerals are known carcinogens, and cause various other illnesses, such as asbestosis; amphibole asbestos (anthophyllite, tremolite, actinolite, grunerite, and riebeckite) are considered more dangerous than chrysotile serpentine asbestos.
Cyclosilicates
Cyclosilicates, or ring silicates, have a ratio of silicon to oxygen of 1:3. Six-member rings are most common, with a base structure of [Si6O18]12−; examples include the tourmaline group and beryl. Other ring structures exist, with 3-, 4-, 8-, 9-, and 12-membered rings having been described. Cyclosilicates tend to be strong, with elongated, striated crystals.
Tourmalines have a very complex chemistry that can be described by a general formula XY3Z6(BO3)3T6O18V3W. The T6O18 is the basic ring structure, where T is usually Si4+, but substitutable by Al3+ or B3+. Tourmalines can be subgrouped by the occupancy of the X site, and from there further subdivided by the chemistry of the W site. The Y and Z sites can accommodate a variety of cations, especially various transition metals; this variability in structural transition metal content gives the tourmaline group greater variability in colour. Other cyclosilicates include beryl, Al2Be3Si6O18, whose varieties include the gemstones emerald (green) and aquamarine (bluish). Cordierite is structurally similar to beryl, and is a common metamorphic mineral.
Sorosilicates
Sorosilicates, also termed disilicates, have tetrahedron-tetrahedron bonding at one oxygen, which results in a 2:7 ratio of silicon to oxygen. The resultant common structural element is the [Si2O7]6− group. The most common disilicates by far are members of the epidote group. Epidotes are found in a variety of geologic settings, ranging from mid-ocean ridges to granites to metapelites. Epidotes are built around the [(SiO4)(Si2O7)]10− structure; for example, the mineral species epidote has calcium, aluminium, and ferric iron to charge balance: Ca2Al2(Fe3+, Al)(SiO4)(Si2O7)O(OH). The presence of iron as Fe3+ and Fe2+ helps buffer oxygen fugacity, which in turn is a significant factor in petrogenesis.
Other examples of sorosilicates include lawsonite, a metamorphic mineral forming in the blueschist facies (a subduction zone setting with low temperature and high pressure), and vesuvianite, which takes up a significant amount of calcium in its chemical structure.
Orthosilicates
Orthosilicates consist of isolated tetrahedra that are charge-balanced by other cations. Also termed nesosilicates, this type of silicate has a silicon:oxygen ratio of 1:4 (e.g. SiO4). Typical orthosilicates tend to form blocky equant crystals, and are fairly hard. Several rock-forming minerals are part of this subclass, such as the aluminosilicates, the olivine group, and the garnet group.
The aluminosilicates – kyanite, andalusite, and sillimanite, all Al2SiO5 – are structurally composed of one [SiO4]4− tetrahedron and one Al3+ in octahedral coordination. The remaining Al3+ can be in six-fold coordination (kyanite), five-fold (andalusite) or four-fold (sillimanite); which mineral forms in a given environment depends on pressure and temperature conditions. In the olivine structure, the main olivine series of (Mg, Fe)2SiO4 consists of magnesium-rich forsterite and iron-rich fayalite. Both iron and magnesium are in octahedral coordination with oxygen. Other mineral species having this structure exist, such as tephroite, Mn2SiO4. The garnet group has a general formula of X3Y2(SiO4)3, where X is a large eight-fold coordinated cation, and Y is a smaller six-fold coordinated cation. There are six ideal endmembers of garnet, split into two groups. The pyralspite garnets have Al3+ in the Y position: pyrope (Mg3Al2(SiO4)3), almandine (Fe3Al2(SiO4)3), and spessartine (Mn3Al2(SiO4)3). The ugrandite garnets have Ca2+ in the X position: uvarovite (Ca3Cr2(SiO4)3), grossular (Ca3Al2(SiO4)3) and andradite (Ca3Fe2(SiO4)3). While there are two subgroups of garnet, solid solutions exist between all six end-members.
Other orthosilicates include zircon, staurolite, and topaz. Zircon (ZrSiO4) is useful in geochronology as U4+ can substitute for Zr4+; furthermore, because of its very resistant structure, it is difficult to reset it as a chronometer. Staurolite is a common metamorphic intermediate-grade index mineral. It has a particularly complicated crystal structure that was only fully described in 1986. Topaz (Al2SiO4(F, OH)2), often found in granitic pegmatites associated with tourmaline, is a common gemstone mineral.
Non-silicates
Native elements
Native elements are those that are not chemically bonded to other elements. This mineral group includes native metals, semi-metals, and non-metals, and various alloys and solid solutions. The metals are held together by metallic bonding, which confers distinctive physical properties such as their shiny metallic lustre, ductility and malleability, and electrical conductivity. Native elements are subdivided into groups by their structure or chemical attributes.
The gold group, with a cubic close-packed structure, includes metals such as gold, silver, and copper. The platinum group is similar in structure to the gold group. The iron-nickel group is characterized by several iron-nickel alloy species. Two examples are kamacite and taenite, which are found in iron meteorites; these species differ by the amount of Ni in the alloy; kamacite has less than 5–7% nickel and is a variety of native iron, whereas the nickel content of taenite ranges from 7–37%. Arsenic group minerals consist of semi-metals, which have only some metallic traits; for example, they lack the malleability of metals. Native carbon occurs in two allotropes, graphite and diamond; the latter forms at very high pressure in the mantle, which gives it a much stronger structure than graphite.
Sulfides
The sulfide minerals are chemical compounds of one or more metals or semimetals with a chalcogen or pnictogen, of which sulfur is most common. Tellurium, arsenic, or selenium can substitute for the sulfur. Sulfides tend to be soft, brittle minerals with a high specific gravity. Many sulfides, such as pyrite, have a sulfurous smell when powdered. Sulfides are susceptible to weathering, and many readily dissolve in water; these dissolved minerals can be later redeposited, which creates enriched secondary ore deposits. Sulfides are classified by the ratio of the metal or semimetal to the sulfur, such as M:S equal to 2:1, or 1:1. Many sulfide minerals are economically important as metal ores; examples include sphalerite (ZnS), an ore of zinc, galena (PbS), an ore of lead, cinnabar (HgS), an ore of mercury, and molybdenite (MoS2), an ore of molybdenum. Pyrite (FeS2) is the most commonly occurring sulfide and can be found in most geological environments. It is not, however, an ore of iron, but can instead be oxidized to produce sulfuric acid. Related to the sulfides are the rare sulfosalts, in which a metallic element is bonded to sulfur and a semimetal such as antimony, arsenic, or bismuth. Like the sulfides, sulfosalts are typically soft, heavy, and brittle minerals.
Oxides
Oxide minerals are divided into three categories: simple oxides, hydroxides, and multiple oxides. Simple oxides are characterized by O2− as the main anion and primarily ionic bonding. They can be further subdivided by the ratio of oxygen to the cations. The periclase group consists of minerals with a 1:1 ratio. Oxides with a 2:1 ratio include cuprite (Cu2O) and water ice. Corundum group minerals have a 2:3 ratio, and include minerals such as corundum (Al2O3) and hematite (Fe2O3). Rutile group minerals have a ratio of 1:2; the eponymous species, rutile (TiO2), is the chief ore of titanium; other examples include cassiterite (SnO2; ore of tin), and pyrolusite (MnO2; ore of manganese). In hydroxides, the dominant anion is the hydroxyl ion, OH−. Bauxites are the chief aluminium ore, and are a heterogeneous mixture of the hydroxide minerals diaspore, gibbsite, and boehmite; they form in areas with a very high rate of chemical weathering (mainly tropical conditions). Finally, multiple oxides are compounds of two metals with oxygen. A major group within this class are the spinels, with a general formula of X2+Y3+2O4. Examples of species include spinel (MgAl2O4), chromite (FeCr2O4), and magnetite (Fe3O4). The latter is readily distinguishable by its strong magnetism, which occurs as it has iron in two oxidation states (Fe2+Fe3+2O4), which makes it a multiple oxide instead of a single oxide.
Halides
The halide minerals are compounds in which a halogen (fluorine, chlorine, iodine, or bromine) is the main anion. These minerals tend to be soft, weak, brittle, and water-soluble. Common examples of halides include halite (NaCl, table salt), sylvite (KCl), and fluorite (CaF2). Halite and sylvite commonly form as evaporites, and can be dominant minerals in chemical sedimentary rocks. Cryolite, Na3AlF6, is a key mineral in the extraction of aluminium from bauxites; however, because its only significant occurrence, in a granitic pegmatite at Ivittuut, Greenland, has been depleted, synthetic cryolite is now made from fluorite.
Carbonates
The carbonate minerals are those in which the main anionic group is carbonate, [CO3]2−. Carbonates tend to be brittle, many have rhombohedral cleavage, and all react with acid. Due to the last characteristic, field geologists often carry dilute hydrochloric acid to distinguish carbonates from non-carbonates. The reaction of acid with carbonates, most commonly found as the polymorphs calcite and aragonite (CaCO3), relates to the dissolution and precipitation of the mineral, which is key in the formation of limestone caves, features within them such as stalactites and stalagmites, and karst landforms. Carbonates are most often formed as biogenic or chemical sediments in marine environments. The carbonate group is structurally a triangle, where a central C4+ cation is surrounded by three O2− anions; different groups of minerals form from different arrangements of these triangles. The most common carbonate mineral is calcite, which is the primary constituent of sedimentary limestone and metamorphic marble. Calcite, CaCO3, can have a significant percentage of magnesium substituting for calcium. Under high-Mg conditions, its polymorph aragonite will form instead; the marine geochemistry in this regard can be described as an aragonite or calcite sea, depending on which mineral preferentially forms. Dolomite is a double carbonate, with the formula CaMg(CO3)2. Secondary dolomitization of limestone is common, in which calcite or aragonite are converted to dolomite; this reaction increases pore space (the unit cell volume of dolomite is 88% that of calcite), which can create a reservoir for oil and gas. These two mineral species are members of eponymous mineral groups: the calcite group includes carbonates with the general formula XCO3, and the dolomite group constitutes minerals with the general formula XY(CO3)2.
Sulfates
The sulfate minerals all contain the sulfate anion, [SO4]2−. They tend to be transparent to translucent, soft, and many are fragile. Sulfate minerals commonly form as evaporites, where they precipitate out of evaporating saline waters. Sulfates can also be found in hydrothermal vein systems associated with sulfides, or as oxidation products of sulfides. Sulfates can be subdivided into anhydrous and hydrous minerals. The most common hydrous sulfate by far is gypsum, CaSO4⋅2H2O. It forms as an evaporite, and is associated with other evaporites such as calcite and halite; if it incorporates sand grains as it crystallizes, gypsum can form desert roses. Gypsum has very low thermal conductivity and maintains a low temperature when heated as it loses that heat by dehydrating; as such, gypsum is used as an insulator in materials such as plaster and drywall. The anhydrous equivalent of gypsum is anhydrite; it can form directly from seawater in highly arid conditions. The barite group has the general formula XSO4, where the X is a large 12-coordinated cation. Examples include barite (BaSO4), celestine (SrSO4), and anglesite (PbSO4); anhydrite is not part of the barite group, as the smaller Ca2+ is only in eight-fold coordination.
Phosphates
The phosphate minerals are characterized by the tetrahedral [PO4]3− unit, although the structure can be generalized, and phosphorus is replaced by antimony, arsenic, or vanadium. The most common phosphate is the apatite group; common species within this group are fluorapatite (Ca5(PO4)3F), chlorapatite (Ca5(PO4)3Cl) and hydroxylapatite (Ca5(PO4)3(OH)). Minerals in this group are the main crystalline constituents of teeth and bones in vertebrates. The relatively abundant monazite group has a general structure of ATO4, where T is phosphorus or arsenic, and A is often a rare-earth element (REE). Monazite is important in two ways: first, as a REE "sink", it can sufficiently concentrate these elements to become an ore; secondly, monazite group minerals can incorporate relatively large amounts of uranium and thorium, which can be used in monazite geochronology to date the rock based on the decay of the U and Th to lead.
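The dating mentioned here rests on the standard radioactive-decay age relation. The sketch below uses the commonly quoted 238U decay constant; the measured ratio is a hypothetical example rather than a value from the text.

```python
import math

LAMBDA_238U = 1.55125e-10  # 238U decay constant in 1/year (commonly used value)

def u_pb_age_years(radiogenic_pb206_per_u238: float) -> float:
    """Age from a measured radiogenic 206Pb*/238U atomic ratio,
    using the standard decay relation t = ln(1 + 206Pb*/238U) / lambda."""
    return math.log(1.0 + radiogenic_pb206_per_u238) / LAMBDA_238U

# Hypothetical measurement: a 206Pb*/238U ratio of 0.18 corresponds to
# an age of roughly 1.07 billion years.
print(u_pb_age_years(0.18) / 1e9)  # ~1.07
```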
Organic minerals
The Strunz classification includes a class for organic minerals. These rare compounds contain organic carbon, but can be formed by a geologic process. For example, whewellite, CaC2O4⋅H2O is an oxalate that can be deposited in hydrothermal ore veins. While hydrated calcium oxalate can be found in coal seams and other sedimentary deposits involving organic matter, the hydrothermal occurrence is not considered to be related to biological activity.
Recent advances
Mineral classification schemes and their definitions are evolving to match recent advances in mineral science. Recent changes have included the addition of an organic class, in both the new Dana and the Strunz classification schemes. The organic class includes a very rare group of minerals with hydrocarbons. The IMA Commission on New Minerals and Mineral Names adopted in 2009 a hierarchical scheme for the naming and classification of mineral groups and group names and established seven commissions and four working groups to review and classify minerals into an official listing of their published names. According to these new rules, "mineral species can be grouped in a number of different ways, on the basis of chemistry, crystal structure, occurrence, association, genetic history, or resource, for example, depending on the purpose to be served by the classification."
Astrobiology
It has been suggested that biominerals could be important indicators of extraterrestrial life and thus could play an important role in the search for past or present life on Mars. Furthermore, organic components (biosignatures) that are often associated with biominerals are believed to play crucial roles in both pre-biotic and biotic reactions.
In January 2014, NASA reported that studies by the Curiosity and Opportunity rovers on Mars would search for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars became a primary NASA objective.
See also
References
General references
Further reading
On the creation of new minerals by human activity.
External links
Mindat mineralogical database, largest mineral database on the Internet
"Mineralogy Database" by David Barthelmy (2009)
"Mineral Identification Key II" Mineralogical Society of America
"American Mineralogist Crystal Structure Database"
Minerals and the Origins of Life (Robert Hazen, NASA) (video, 60m, April 2014).
The private lives of minerals: Insights from big-data mineralogy (Robert Hazen, 15 February 2017)
Natural materials | Mineral | [
"Physics"
] | 16,076 | [
"Natural materials",
"Materials",
"Matter"
] |
19,200 | https://en.wikipedia.org/wiki/Molecular%20biology | Molecular biology is a branch of biology that seeks to understand the molecular basis of biological activity in and between cells, including biomolecular synthesis, modification, mechanisms, and interactions.
Though cells and other microscopic structures had been observed in living organisms as early as the 18th century, a detailed understanding of the mechanisms and interactions governing their behavior did not emerge until the 20th century, when technologies used in physics and chemistry had advanced sufficiently to permit their application in the biological sciences. The term 'molecular biology' was first used in 1945 by the English physicist William Astbury, who described it as an approach focused on discerning the underpinnings of biological phenomena—i.e. uncovering the physical and chemical structures and properties of biological molecules, as well as their interactions with other molecules and how these interactions explain observations of so-called classical biology, which instead studies biological processes at larger scales and higher levels of organization. In 1953, Francis Crick, James Watson, Rosalind Franklin, and their colleagues at the Medical Research Council Unit, Cavendish Laboratory, were the first to describe the double helix model for the chemical structure of deoxyribonucleic acid (DNA), which is often considered a landmark event for the nascent field because it provided a physico-chemical basis by which to understand the previously nebulous idea of nucleic acids as the primary substance of biological inheritance. They proposed this structure based on previous research done by Franklin, which was conveyed to them by Maurice Wilkins and Max Perutz. Their work led to the discovery of DNA in other microorganisms, plants, and animals.
The field of molecular biology includes techniques which enable scientists to learn about molecular processes. These techniques are used to efficiently target new drugs, diagnose disease, and better understand cell physiology. Some clinical research and medical therapies arising from molecular biology are covered under gene therapy, whereas the use of molecular biology or molecular cell biology in medicine is now referred to as molecular medicine.
History of molecular biology
Molecular biology sits at the intersection of biochemistry and genetics; as these scientific disciplines emerged and evolved in the 20th century, it became clear that they both sought to determine the molecular mechanisms which underlie vital cellular functions. Advances in molecular biology have been closely related to the development of new technologies and their optimization. Molecular biology has been elucidated by the work of many scientists, and thus the history of the field depends on an understanding of these scientists and their experiments.
The field of genetics arose from attempts to understand the set of rules underlying reproduction and heredity, and the nature of the hypothetical units of heredity known as genes. Gregor Mendel pioneered this work in 1866, when he first described the laws of inheritance he observed in his studies of mating crosses in pea plants. One such law of genetic inheritance is the law of segregation, which states that diploid individuals with two alleles for a particular gene will pass one of these alleles to their offspring. Because of his critical work, the study of genetic inheritance is commonly referred to as Mendelian genetics.
A major milestone in molecular biology was the discovery of the structure of DNA. This work began in 1869 with Friedrich Miescher, a Swiss biochemist who first proposed a structure called nuclein, which we now know to be deoxyribonucleic acid, or DNA. He discovered this unique substance by studying the components of pus-filled bandages, and noting the unique properties of the "phosphorus-containing substances". Another notable contributor to the DNA model was Phoebus Levene, who proposed the "polynucleotide model" of DNA in 1919 as a result of his biochemical experiments on yeast. In 1950, Erwin Chargaff expanded on the work of Levene and elucidated a few critical properties of nucleic acids: first, the sequence of nucleic acids varies across species. Second, the total concentration of purines (adenine and guanine) is always equal to the total concentration of pyrimidines (cytosine and thymine). This is now known as Chargaff's rule. In 1953, James Watson and Francis Crick published the double helical structure of DNA, based on the X-ray crystallography work done by Rosalind Franklin which was conveyed to them by Maurice Wilkins and Max Perutz. Watson and Crick described the structure of DNA and conjectured about the implications of this unique structure for possible mechanisms of DNA replication. Watson and Crick were awarded the Nobel Prize in Physiology or Medicine in 1962, along with Wilkins, for proposing a model of the structure of DNA.
In 1961, it was demonstrated that when a gene encodes a protein, three sequential bases of a gene's DNA specify each successive amino acid of the protein. Thus the genetic code is a triplet code, where each triplet (called a codon) specifies a particular amino acid. Furthermore, it was shown that the codons do not overlap with each other in the DNA sequence encoding a protein, and that each sequence is read from a fixed starting point.
During 1962–1964, through the use of conditional lethal mutants of a bacterial virus, fundamental advances were made in our understanding of the functions and interactions of the proteins employed in the machinery of DNA replication, DNA repair, DNA recombination, and in the assembly of molecular structures.
Griffith's experiment
In 1928, Frederick Griffith encountered a virulence property in pneumococcus bacteria that was killing laboratory mice. According to Mendelian genetics, prevalent at that time, gene transfer could occur only from parent to daughter cells. Griffith advanced another theory, stating that gene transfer could also occur between members of the same generation, a process known as horizontal gene transfer (HGT). This phenomenon is now referred to as genetic transformation.
Griffith's experiment addressed the pneumococcus bacteria, which had two different strains, one virulent and smooth and one avirulent and rough. The smooth strain had a glistening appearance owing to the presence of a capsule made of a specific polysaccharide – a polymer of glucose and glucuronic acid. Because of this polysaccharide capsule, a host's immune system cannot recognize the bacterium, and the infection kills the host. The other, avirulent, rough strain lacks this polysaccharide capsule and has a dull, rough appearance.
The presence or absence of the capsule in a strain is genetically determined. Smooth and rough strains occur in several different subtypes, such as S-I, S-II and S-III, and R-I, R-II and R-III, respectively. These subtypes of S and R bacteria differ from each other in the type of antigen they produce.
Avery–MacLeod–McCarty experiment
The Avery–MacLeod–McCarty experiment was a landmark study conducted in 1944 that demonstrated that DNA, not protein as previously thought, carries genetic information in bacteria. Oswald Avery, Colin Munro MacLeod, and Maclyn McCarty used an extract from a strain of pneumococcus that could cause pneumonia in mice. They showed that genetic transformation in the bacteria could be accomplished by exposing harmless bacteria to purified DNA from the extract. They also found that when the DNA in the extract was digested with DNase, the ability to transform harmless bacteria into virulent ones was lost. This provided strong evidence that DNA was the genetic material, challenging the prevailing belief that proteins were responsible, and it laid the basis for the subsequent discovery of DNA's structure by Watson and Crick.
Hershey–Chase experiment
Confirmation that DNA is the genetic material responsible for infection came from the Hershey–Chase experiment, which used E. coli and bacteriophage. The experiment is also known as the blender experiment, because a kitchen blender was used as a major piece of apparatus. Alfred Hershey and Martha Chase demonstrated that the DNA injected by a phage particle into a bacterium contains all the information required to synthesize progeny phage particles. They used radioactivity to label the bacteriophage's protein coat with radioactive sulphur and its DNA with radioactive phosphorus, in two separate preparations. After bacteriophage and E. coli were mixed, the phages were allowed to attach to the bacteria and inject their genetic material into the E. coli cells during an incubation period. The mixture was then blended or agitated, which detached the phage coats from the E. coli cells, and centrifuged; the pellet, containing the E. coli cells, was examined and the supernatant was discarded. The E. coli cells contained radioactive phosphorus, indicating that the material transferred into the cells was DNA, not the protein coat.
The injected phage DNA can become associated with the DNA of the E. coli host, and radioactivity is found only with the phage DNA. This DNA can be passed on to the next generation, which gave rise to the concept of transduction. Transduction is a process in which bacteriophages carry fragments of bacterial DNA and pass them on to the next cells they infect; it is another form of horizontal gene transfer.
Meselson–Stahl experiment
The Meselson-Stahl experiment was a landmark experiment in molecular biology that provided evidence for the semiconservative replication of DNA. Conducted in 1958 by Matthew Meselson and Franklin Stahl, the experiment involved growing E. coli bacteria in a medium containing a heavy isotope of nitrogen (15N) for several generations, which caused all of the newly synthesized bacterial DNA to incorporate the heavy isotope.
After allowing the bacteria to replicate in a medium containing normal nitrogen (14N), samples were taken at various time points. These samples were then subjected to centrifugation in a density gradient, which separated the DNA molecules based on their density.
The results showed that after one generation of replication in the 14N medium, the DNA formed a band of intermediate density between that of pure 15N DNA and pure 14N DNA. This supported the semiconservative model of DNA replication proposed by Watson and Crick, in which each strand of the parental DNA molecule serves as a template for the synthesis of a new complementary strand, resulting in two daughter DNA molecules, each consisting of one parental and one newly synthesized strand.
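The expected distribution of DNA densities under semiconservative replication can be illustrated with a short calculation. The sketch below is a minimal illustration, not drawn from the original paper; it tracks the fractions of heavy (15N/15N), hybrid (15N/14N) and light (14N/14N) duplexes over successive generations in 14N medium, assuming every molecule replicates once per generation.

# Minimal sketch: fractions of DNA species per generation under
# semiconservative replication, starting from fully 15N-labelled DNA.
def meselson_stahl(generations):
    heavy, hybrid, light = 1.0, 0.0, 0.0  # fractions of 15N/15N, 15N/14N, 14N/14N duplexes
    history = [(heavy, hybrid, light)]
    for _ in range(generations):
        total = 2 * (heavy + hybrid + light)       # every duplex yields two daughters
        new_hybrid = (2 * heavy + hybrid) / total  # each old heavy strand ends up in a hybrid duplex
        new_light = (hybrid + 2 * light) / total   # each old light strand pairs with a new light strand
        heavy, hybrid, light = 0.0, new_hybrid, new_light
        history.append((heavy, hybrid, light))
    return history

for gen, (h, m, l) in enumerate(meselson_stahl(3)):
    print(f"generation {gen}: heavy={h:.2f} hybrid={m:.2f} light={l:.2f}")
# Generation 1 is all hybrid; generation 2 is half hybrid, half light, matching the observed bands.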
The Meselson-Stahl experiment provided compelling evidence for the semiconservative replication of DNA, which is fundamental to the understanding of genetics and molecular biology.
Modern molecular biology
In the early 2020s, molecular biology entered a golden age defined by both vertical and horizontal technical development. Vertically, novel technologies are allowing for real-time monitoring of biological processes at the atomic level. Molecular biologists today have access to increasingly affordable sequencing data at increasingly higher depths, facilitating the development of novel genetic manipulation methods in new non-model organisms. Likewise, synthetic molecular biologists will drive the industrial production of small and macro molecules through the introduction of exogenous metabolic pathways in various prokaryotic and eukaryotic cell lines.
Horizontally, sequencing data is becoming more affordable and used in many different scientific fields. This will drive the development of industries in developing nations and increase accessibility to individual researchers. Likewise, CRISPR-Cas9 gene editing experiments can now be conceived and implemented by individuals for under $10,000 in novel organisms, which will drive the development of industrial and medical applications.
Relationship to other biological sciences
The following list describes a viewpoint on the interdisciplinary relationships between molecular biology and other related fields.
Molecular biology is the study of the molecular underpinnings of the biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions.
Biochemistry is the study of the chemical substances and vital processes occurring in living organisms. Biochemists focus heavily on the role, function, and structure of biomolecules such as proteins, lipids, carbohydrates and nucleic acids.
Genetics is the study of how genetic differences affect organisms. Genetics attempts to predict how mutations, individual genes and genetic interactions can affect the expression of a phenotype.
While researchers practice techniques specific to molecular biology, it is common to combine these with methods from genetics and biochemistry. Much of molecular biology is quantitative, and recently a significant amount of work has been done using computer science techniques such as bioinformatics and computational biology. Molecular genetics, the study of gene structure and function, has been among the most prominent sub-fields of molecular biology since the early 2000s. Other branches of biology are informed by molecular biology, by either directly studying the interactions of molecules in their own right such as in cell biology and developmental biology, or indirectly, where molecular techniques are used to infer historical attributes of populations or species, as in fields in evolutionary biology such as population genetics and phylogenetics. There is also a long tradition of studying biomolecules "from the ground up", or molecularly, in biophysics.
Techniques of molecular biology
Molecular cloning
Molecular cloning is used to isolate and then transfer a DNA sequence of interest into a plasmid vector. This recombinant DNA technology was first developed in the early 1970s. In this technique, a DNA sequence coding for a protein of interest is cloned using polymerase chain reaction (PCR), and/or restriction enzymes, into a plasmid (expression vector). The plasmid vector usually has at least three distinctive features: an origin of replication, a multiple cloning site (MCS), and a selective marker (usually conferring antibiotic resistance). Additionally, upstream of the MCS are the promoter region and the transcription start site, which regulate the expression of the cloned gene.
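As a simple illustration of the sequence-level work involved in designing such a cloning step, the sketch below scans a DNA sequence for an EcoRI recognition site (GAATTC); the insert sequence and function name are invented for the example and are not taken from any specific protocol.

# Minimal sketch: locate EcoRI recognition sites (GAATTC) in a DNA sequence.
def find_sites(sequence, recognition_site="GAATTC"):
    sequence = sequence.upper()
    positions = []
    start = 0
    while True:
        idx = sequence.find(recognition_site, start)
        if idx == -1:
            break
        positions.append(idx)   # 0-based position of the first base of the site
        start = idx + 1
    return positions

insert = "ATGGAATTCCGTTAGAATTCAA"   # hypothetical insert sequence
print(find_sites(insert))           # [3, 14]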
This plasmid can be inserted into either bacterial or animal cells. Introducing DNA into bacterial cells can be done by transformation via uptake of naked DNA, conjugation via cell-cell contact or by transduction via viral vector. Introducing DNA into eukaryotic cells, such as animal cells, by physical or chemical means is called transfection. Several different transfection techniques are available, such as calcium phosphate transfection, electroporation, microinjection and liposome transfection. The plasmid may be integrated into the genome, resulting in a stable transfection, or may remain independent of the genome and expressed temporarily, called a transient transfection.
DNA coding for a protein of interest is now inside a cell, and the protein can now be expressed. A variety of systems, such as inducible promoters and specific cell-signaling factors, are available to help express the protein of interest at high levels. Large quantities of a protein can then be extracted from the bacterial or eukaryotic cell. The protein can be tested for enzymatic activity under a variety of situations, the protein may be crystallized so its tertiary structure can be studied, or, in the pharmaceutical industry, the activity of new drugs against the protein can be studied.
Polymerase chain reaction
Polymerase chain reaction (PCR) is an extremely versatile technique for copying DNA. In brief, PCR allows a specific DNA sequence to be copied or modified in predetermined ways. The reaction is extremely powerful and under perfect conditions could amplify one DNA molecule to become 1.07 billion molecules in less than two hours. PCR has many applications, including the study of gene expression, the detection of pathogenic microorganisms, the detection of genetic mutations, and the introduction of mutations into DNA. The PCR technique can be used to introduce restriction enzyme sites to the ends of DNA molecules, or to mutate particular bases of DNA; the latter method is referred to as site-directed mutagenesis. PCR can also be used to determine whether a particular DNA fragment is found in a cDNA library. PCR has many variations, such as reverse transcription PCR (RT-PCR) for amplification of RNA, and, more recently, quantitative PCR, which allows for quantitative measurement of DNA or RNA molecules.
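The figure of roughly a billion copies quoted above follows directly from the doubling per cycle; a short calculation is shown below, assuming perfect doubling every cycle, which real reactions only approximate.

# Ideal PCR amplification: one template doubles each cycle.
starting_molecules = 1
cycles = 30                      # ~30 cycles fit comfortably in under two hours
copies = starting_molecules * 2 ** cycles
print(copies)                    # 1073741824, i.e. about 1.07 billion molecules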
Gel electrophoresis
Gel electrophoresis is a technique which separates molecules by their size using an agarose or polyacrylamide gel. This technique is one of the principal tools of molecular biology. The basic principle is that DNA fragments can be separated by applying an electric current across the gel: because the DNA backbone contains negatively charged phosphate groups, the DNA migrates through the agarose gel towards the positive electrode. Proteins can also be separated on the basis of size using an SDS-PAGE gel, or on the basis of size and electric charge by using what is known as 2D gel electrophoresis.
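Fragment sizes are commonly estimated from a gel by comparison with a ladder of known sizes; over a limited range, migration distance is approximately linear in the logarithm of fragment length. The sketch below fits such a standard curve to invented ladder data and interpolates the size of an unknown band; all numbers are hypothetical.

import math

# Hypothetical ladder: (fragment size in base pairs, migration distance in cm)
ladder = [(3000, 2.0), (2000, 2.6), (1000, 3.7), (500, 4.8), (250, 5.9)]

# Fit distance = a * log10(size) + b by simple least squares.
xs = [math.log10(size) for size, _ in ladder]
ys = [dist for _, dist in ladder]
n = len(ladder)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# Estimate the size of an unknown band that migrated 4.2 cm.
unknown_distance = 4.2
estimated_size = 10 ** ((unknown_distance - b) / a)
print(f"estimated fragment size: {estimated_size:.0f} bp")   # roughly 700-750 bp for these numbers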
The Bradford protein assay
The Bradford assay is a molecular biology technique which enables the fast, accurate quantitation of protein molecules utilizing the unique properties of a dye called Coomassie Brilliant Blue G-250. Coomassie Blue undergoes a visible color shift from reddish-brown to bright blue upon binding to protein. In its unbound, cationic state, Coomassie Blue has an absorbance maximum at 465 nm and appears reddish-brown. When Coomassie Blue binds to protein in an acidic solution, the absorbance maximum shifts to 595 nm and the dye turns bright blue. Proteins in the assay bind Coomassie Blue in about 2 minutes, and the protein-dye complex is stable for about an hour, although it is recommended that absorbance readings are taken within 5 to 20 minutes of reaction initiation. The protein concentration can then be determined by measuring the absorbance with a visible light spectrophotometer, so the assay does not require extensive equipment.
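In practice, the protein concentration of an unknown sample is read off a standard curve of absorbance at 595 nm against known concentrations of a standard protein such as bovine serum albumin. The sketch below fits a straight line to invented standard-curve data; the values are hypothetical, and the linear range of real assays is limited.

# Hypothetical Bradford standards: (protein concentration in ug/mL, A595)
standards = [(0, 0.00), (125, 0.12), (250, 0.24), (500, 0.46), (1000, 0.88)]

# Least-squares fit: A595 = slope * concentration + intercept.
n = len(standards)
mean_c = sum(c for c, _ in standards) / n
mean_a = sum(a for _, a in standards) / n
slope = sum((c - mean_c) * (a - mean_a) for c, a in standards) / \
        sum((c - mean_c) ** 2 for c, _ in standards)
intercept = mean_a - slope * mean_c

# Concentration of an unknown sample with A595 = 0.35.
unknown_a595 = 0.35
concentration = (unknown_a595 - intercept) / slope
print(f"estimated protein concentration: {concentration:.0f} ug/mL")   # ~390 ug/mL for these numbers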
This method was developed in 1975 by Marion M. Bradford, and has enabled significantly faster, more accurate protein quantitation compared to previous methods: the Lowry procedure and the biuret assay. Unlike those methods, the Bradford assay is not susceptible to interference by several non-protein molecules, including ethanol, sodium chloride, and magnesium chloride. However, it is susceptible to interference by detergents such as sodium dodecyl sulfate (SDS) and by strongly alkaline buffering agents.
Macromolecule blotting and probing
The terms northern, western and eastern blotting are derived from what initially was a molecular biology joke that played on the term Southern blotting, after the technique described by Edwin Southern for the hybridisation of blotted DNA. Patricia Thomas, developer of the RNA blot which then became known as the northern blot, actually did not use the term.
Southern blotting
Named after its inventor, biologist Edwin Southern, the Southern blot is a method for probing for the presence of a specific DNA sequence within a DNA sample. DNA samples, before or after restriction enzyme (restriction endonuclease) digestion, are separated by gel electrophoresis and then transferred to a membrane by blotting via capillary action. The membrane is then exposed to a labeled DNA probe that has a base sequence complementary to the sequence of interest. Southern blotting is less commonly used in laboratory science due to the capacity of other techniques, such as PCR, to detect specific DNA sequences from DNA samples. These blots are still used for some applications, however, such as measuring transgene copy number in transgenic mice or in the engineering of gene knockout embryonic stem cell lines.
Northern blotting
The northern blot is used to study the presence of specific RNA molecules as a relative comparison among a set of different RNA samples. It is essentially a combination of denaturing RNA gel electrophoresis and a blot. In this process RNA is separated based on size and is then transferred to a membrane that is then probed with a labeled complement of a sequence of interest. The results may be visualized in a variety of ways depending on the label used; however, most reveal bands representing the sizes of the RNA detected in the sample. The intensity of these bands is related to the amount of the target RNA in the samples analyzed. The procedure is commonly used to study when and how much gene expression is occurring by measuring how much of that RNA is present in different samples, assuming that no post-transcriptional regulation occurs and that the levels of mRNA reflect proportional levels of the corresponding protein being produced. It is one of the most basic tools for determining at what time, and under what conditions, certain genes are expressed in living tissues.
Western blotting
A western blot is a technique by which specific proteins can be detected from a mixture of proteins. Western blots can be used to determine the size of isolated proteins, as well as to quantify their expression. In western blotting, proteins are first separated by size, in a thin gel sandwiched between two glass plates in a technique known as SDS-PAGE. The proteins in the gel are then transferred to a polyvinylidene fluoride (PVDF), nitrocellulose, nylon, or other support membrane. This membrane can then be probed with solutions of antibodies. Antibodies that specifically bind to the protein of interest can then be visualized by a variety of techniques, including colored products, chemiluminescence, or autoradiography. Often, the antibodies are labeled with enzymes. When a chemiluminescent substrate is exposed to the enzyme it allows detection. Using western blotting techniques allows not only detection but also quantitative analysis. Analogous methods to western blotting can be used to directly stain specific proteins in live cells or tissue sections.
Eastern blotting
The eastern blotting technique is used to detect post-translational modification of proteins. Proteins blotted on to the PVDF or nitrocellulose membrane are probed for modifications using specific substrates.
Microarrays
A DNA microarray is a collection of spots attached to a solid support such as a microscope slide, where each spot contains one or more single-stranded DNA oligonucleotide fragments. Arrays make it possible to put down large quantities of very small (100 micrometre diameter) spots on a single slide. Each spot has a DNA fragment molecule that is complementary to a single DNA sequence. A variation of this technique allows the gene expression of an organism at a particular stage in development to be quantified (expression profiling). In this technique the RNA in a tissue is isolated and converted to labeled complementary DNA (cDNA). This cDNA is then hybridized to the fragments on the array, and the hybridization can be visualized. Since multiple arrays can be made with exactly the same position of fragments, they are particularly useful for comparing the gene expression of two different tissues, such as a healthy and a cancerous tissue. Also, one can measure which genes are expressed and how that expression changes with time or with other factors.
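A common first step in comparing two samples in this way is to compute, for each spot, the log ratio of the two fluorescence intensities. The sketch below computes log2 fold changes for a few invented spots; the gene names and intensity values are hypothetical.

import math

# Hypothetical spot intensities: gene -> (healthy tissue signal, tumour tissue signal)
spots = {
    "geneA": (1200.0, 9600.0),
    "geneB": (800.0, 820.0),
    "geneC": (4000.0, 500.0),
}

for gene, (healthy, tumour) in spots.items():
    log2_fold_change = math.log2(tumour / healthy)
    print(f"{gene}: log2 fold change = {log2_fold_change:+.2f}")
# Positive values indicate higher expression in the tumour sample, negative values lower.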
There are many different ways to fabricate microarrays; the most common are silicon chips, microscope slides with spots of ~100 micrometre diameter, custom arrays, and arrays with larger spots on porous membranes (macroarrays). There can be anywhere from 100 spots to more than 10,000 on a given array. Arrays can also be made with molecules other than DNA.
Allele-specific oligonucleotide
Allele-specific oligonucleotide (ASO) is a technique that allows detection of single-base mutations without the need for PCR or gel electrophoresis. Short (20–25 nucleotides in length), labeled probes are exposed to the non-fragmented target DNA; hybridization occurs with high specificity due to the short length of the probes, and even a single base change will hinder hybridization. The target DNA is then washed and the unhybridized probes are removed. The target DNA is then analyzed for the presence of the probe via radioactivity or fluorescence. In this experiment, as in most molecular biology techniques, a control must be used to ensure successful experimentation.
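The dependence of hybridization on a perfect match can be illustrated by simply counting mismatches between a short probe and the corresponding region of two target alleles; the sequences below are invented for the example.

# Minimal sketch: a single base change distinguishes two alleles of a target region.
probe         = "ACGTTAGCCTGATCGATGAC"   # 20-nt allele-specific oligonucleotide (hypothetical)
allele_normal = "ACGTTAGCCTGATCGATGAC"   # perfect match: probe hybridizes stably
allele_mutant = "ACGTTAGCCTCATCGATGAC"   # one mismatch: hybridization is destabilized

def mismatches(probe, target):
    return sum(1 for p, t in zip(probe, target) if p != t)

print(mismatches(probe, allele_normal))   # 0
print(mismatches(probe, allele_mutant))   # 1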
In molecular biology, procedures and technologies are continually being developed and older technologies abandoned. For example, before the advent of DNA gel electrophoresis (agarose or polyacrylamide), the size of DNA molecules was typically determined by rate sedimentation in sucrose gradients, a slow and labor-intensive technique requiring expensive instrumentation; prior to sucrose gradients, viscometry was used. Aside from their historical interest, older techniques are often worth knowing about, as they are occasionally useful for solving problems for which newer techniques are inappropriate.
See also
References
Further reading
External links
Applied geometry | Molecular biology | [
"Chemistry",
"Mathematics",
"Biology"
] | 5,023 | [
"Cell biology",
"Applied mathematics",
"Geometry",
"Molecular biology",
"Biochemistry",
"Applied geometry"
] |
19,446 | https://en.wikipedia.org/wiki/Magnetic%20resonance%20imaging | Magnetic resonance imaging (MRI) is a medical imaging technique used in radiology to generate pictures of the anatomy and the physiological processes inside the body. MRI scanners use strong magnetic fields, magnetic field gradients, and radio waves to form images of the organs in the body. MRI does not involve X-rays or the use of ionizing radiation, which distinguishes it from computed tomography (CT) and positron emission tomography (PET) scans. MRI is a medical application of nuclear magnetic resonance (NMR) which can also be used for imaging in other NMR applications, such as NMR spectroscopy.
MRI is widely used in hospitals and clinics for medical diagnosis, staging and follow-up of disease. Compared to CT, MRI provides better contrast in images of soft tissues, e.g. in the brain or abdomen. However, it may be perceived as less comfortable by patients, due to the usually longer and louder measurements with the subject in a long, confining tube, although "open" MRI designs mostly relieve this. Additionally, implants and other non-removable metal in the body can pose a risk and may exclude some patients from undergoing an MRI examination safely.
MRI was originally called NMRI (nuclear magnetic resonance imaging), but "nuclear" was dropped to avoid negative associations. Certain atomic nuclei are able to absorb radio frequency (RF) energy when placed in an external magnetic field; the resultant evolving spin polarization can induce an RF signal in a radio frequency coil and thereby be detected. In other words, the nuclear magnetic spins of protons in hydrogen nuclei resonate with the incident RF waves and emit coherent radiation with a well-defined direction, energy (frequency) and phase. This coherent amplified radiation is easily detected by RF antennas close to the subject being examined. It is a process similar to masers. In clinical and research MRI, hydrogen atoms are most often used to generate macroscopic polarized radiation that is detected by the antennas. Hydrogen atoms are naturally abundant in humans and other biological organisms, particularly in water and fat. For this reason, most MRI scans essentially map the location of water and fat in the body. Pulses of radio waves excite the nuclear spin energy transition, and magnetic field gradients localize the polarization in space. By varying the parameters of the pulse sequence, different contrasts may be generated between tissues based on the relaxation properties of the hydrogen atoms therein.
Since its development in the 1970s and 1980s, MRI has proven to be a versatile imaging technique. While MRI is most prominently used in diagnostic medicine and biomedical research, it also may be used to form images of non-living objects, such as mummies. Diffusion MRI and functional MRI extend the utility of MRI to capture neuronal tracts and blood flow respectively in the nervous system, in addition to detailed spatial images. The sustained increase in demand for MRI within health systems has led to concerns about cost effectiveness and overdiagnosis.
Mechanism
Construction and physics
In most medical applications, hydrogen nuclei, which consist solely of a proton, that are in tissues create a signal that is processed to form an image of the body in terms of the density of those nuclei in a specific region. Given that the protons are affected by fields from other atoms to which they are bonded, it is possible to separate responses from hydrogen in specific compounds. To perform a study, the person is positioned within an MRI scanner that forms a strong magnetic field around the area to be imaged. First, energy from an oscillating magnetic field is temporarily applied to the patient at the appropriate resonance frequency. Scanning with X and Y gradient coils causes a selected region of the patient to experience the exact magnetic field required for the energy to be absorbed. The atoms are excited by an RF pulse and the resultant signal is measured by a receiving coil. The RF signal may be processed to deduce position information by looking at the changes in RF level and phase caused by varying the local magnetic field using gradient coils. As these coils are rapidly switched during the excitation and response to perform a moving line scan, they create the characteristic repetitive noise of an MRI scan as the windings move slightly due to magnetostriction. The contrast between different tissues is determined by the rate at which excited atoms return to the equilibrium state. Exogenous contrast agents may be given to the person to make the image clearer.
The major components of an MRI scanner are the main magnet, which polarizes the sample, the shim coils for correcting shifts in the homogeneity of the main magnetic field, the gradient system which is used to localize the region to be scanned and the RF system, which excites the sample and detects the resulting NMR signal. The whole system is controlled by one or more computers.
MRI requires a magnetic field that is both strong and uniform to a few parts per million across the scan volume. The field strength of the magnet is measured in teslas – and while the majority of systems operate at 1.5 T, commercial systems are available between 0.2 and 7 T. 3 T MRI systems, also called 3 tesla MRIs, have stronger magnets than 1.5 T systems and are considered better for images of organs and soft tissue. Whole-body MRI systems for research applications operate at, for example, 9.4 T, 10.5 T or 11.7 T. Even higher-field whole-body MRI systems, e.g. 14 T and beyond, are in conceptual proposal or in engineering design. Most clinical magnets are superconducting magnets, which require liquid helium to keep them at low temperatures. Lower field strengths can be achieved with permanent magnets, which are often used in "open" MRI scanners for claustrophobic patients. Lower field strengths are also used in a portable MRI scanner approved by the FDA in 2020. Recently, MRI has been demonstrated also at ultra-low fields, i.e., in the microtesla-to-millitesla range, where sufficient signal quality is made possible by prepolarization (on the order of 10–100 mT) and by measuring the Larmor precession fields at about 100 microtesla with highly sensitive superconducting quantum interference devices (SQUIDs).
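The resonance (Larmor) frequency scales linearly with field strength, f = γ·B0, where γ for the hydrogen nucleus is about 42.58 MHz per tesla. The short calculation below shows the resulting frequencies at a few common clinical and research field strengths.

# Larmor frequency of the proton: f = gamma * B0, with gamma ≈ 42.58 MHz/T.
GAMMA_1H_MHZ_PER_T = 42.58

for b0 in (0.2, 1.5, 3.0, 7.0, 9.4):
    print(f"B0 = {b0:>4} T -> f = {GAMMA_1H_MHZ_PER_T * b0:6.1f} MHz")
# At 1.5 T the proton resonance is ~63.9 MHz; at 3 T it is ~127.7 MHz.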
T1 and T2
Each tissue returns to its equilibrium state after excitation by the independent relaxation processes of T1 (spin-lattice; that is, magnetization in the same direction as the static magnetic field) and T2 (spin-spin; transverse to the static magnetic field).
To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR). This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging.
To create a T2-weighted image, magnetization is allowed to decay before measuring the MR signal by changing the echo time (TE). This image weighting is useful for detecting edema and inflammation, revealing white matter lesions, and assessing zonal anatomy in the prostate and uterus.
The information from MRI scans comes in the form of image contrasts based on differences in the rate of relaxation of nuclear spins following their perturbation by an oscillating magnetic field (in the form of radiofrequency pulses through the sample). The relaxation rates are a measure of the time it takes for a signal to decay back to an equilibrium state from either the longitudinal or transverse plane.
Magnetization builds up along the z-axis in the presence of a magnetic field, B0, such that the magnetic dipoles in the sample will, on average, align with the z-axis summing to a total magnetization Mz. This magnetization along z is defined as the equilibrium magnetization; magnetization is defined as the sum of all magnetic dipoles in a sample. Following the equilibrium magnetization, a 90° radiofrequency (RF) pulse flips the magnetization vector into the xy-plane, and is then switched off. The initial magnetic field B0, however, is still applied. Thus, the spin magnetization vector will slowly return from the xy-plane back to the equilibrium state. The time it takes for the magnetization vector to return to its equilibrium value, Mz, is referred to as the longitudinal relaxation time, T1. Subsequently, the rate at which this happens is simply the reciprocal of the relaxation time: R1 = 1/T1. Similarly, the time it takes for Mxy to return to zero is T2, with the rate R2 = 1/T2. Magnetization as a function of time is defined by the Bloch equations.
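A minimal numerical sketch of these two relaxation processes is given below: after a 90° pulse, longitudinal magnetization recovers as Mz(t) = M0(1 - e^(-t/T1)) and transverse magnetization decays as Mxy(t) = M0·e^(-t/T2). The tissue values used are illustrative only, not measured constants.

import math

M0 = 1.0
T1, T2 = 0.9, 0.1   # illustrative relaxation times in seconds (order of magnitude for soft tissue)

def mz(t):
    # Longitudinal recovery after a 90 degree pulse.
    return M0 * (1 - math.exp(-t / T1))

def mxy(t):
    # Transverse decay after a 90 degree pulse.
    return M0 * math.exp(-t / T2)

for t in (0.0, 0.05, 0.1, 0.5, 1.0, 3.0):
    print(f"t = {t:4.2f} s: Mz = {mz(t):.3f}, Mxy = {mxy(t):.3f}")
# Mxy has largely decayed after a few times T2, while Mz approaches M0 over a few times T1.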
T1 and T2 values are dependent on the chemical environment of the sample; hence their utility in MRI. Soft tissue and muscle tissue relax at different rates, yielding the image contrast in a typical scan.
The standard display of MR images is to represent fluid characteristics in black-and-white images, where different tissues appear with characteristic intensities depending on the chosen weighting.
Diagnostics
Usage by organ or system
MRI has a wide range of applications in medical diagnosis and around 50,000 scanners are estimated to be in use worldwide. MRI affects diagnosis and treatment in many specialties although the effect on improved health outcomes is disputed in certain cases.
MRI is the investigation of choice in the preoperative staging of rectal and prostate cancer and has a role in the diagnosis, staging, and follow-up of other tumors, as well as for determining areas of tissue for sampling in biobanking.
Neuroimaging
MRI is the investigative tool of choice for neurological cancers over CT, as it offers better visualization of the posterior cranial fossa, containing the brainstem and the cerebellum. The contrast provided between grey and white matter makes MRI the best choice for many conditions of the central nervous system, including demyelinating diseases, dementia, cerebrovascular disease, infectious diseases, Alzheimer's disease and epilepsy. Since many images are taken milliseconds apart, it shows how the brain responds to different stimuli, enabling researchers to study both the functional and structural brain abnormalities in psychological disorders. MRI also is used in guided stereotactic surgery and radiosurgery for treatment of intracranial tumors, arteriovenous malformations, and other surgically treatable conditions using a device known as the N-localizer. New tools that implement artificial intelligence in healthcare have demonstrated higher image quality and morphometric analysis in neuroimaging with the application of a denoising system.
The record for the highest spatial resolution of a whole intact brain (postmortem) is 100 microns, from Massachusetts General Hospital. The data was published in Nature on 30 October 2019.
Though MRI is used widely in research on mental disabilities, a 2024 systematic literature review and meta-analysis commissioned by the Patient-Centered Outcomes Research Institute (PCORI) found that available research using MRI scans to diagnose ADHD showed great variability. The authors concluded that MRI cannot be reliably used to assist in making a clinical diagnosis of ADHD.
Cardiovascular
Cardiac MRI is complementary to other imaging techniques, such as echocardiography, cardiac CT, and nuclear medicine. It can be used to assess the structure and the function of the heart. Its applications include assessment of myocardial ischemia and viability, cardiomyopathies, myocarditis, iron overload, vascular diseases, and congenital heart disease.
Musculoskeletal
Applications in the musculoskeletal system include spinal imaging, assessment of joint disease, and soft tissue tumors. Also, MRI techniques can be used for diagnostic imaging of systemic muscle diseases, including genetic muscle diseases.
Swallowing movements of the throat and oesophagus can cause motion artifacts over the imaged spine. Therefore, a saturation pulse applied over this region (the throat and oesophagus) can help to avoid these artifacts. Motion artifacts arising from the pumping of the heart can be reduced by timing the MRI pulse according to the cardiac cycle. Flow artifacts from blood vessels can be reduced by applying saturation pulses above and below the region of interest.
Liver and gastrointestinal
Hepatobiliary MR is used to detect and characterize lesions of the liver, pancreas, and bile ducts. Focal or diffuse disorders of the liver may be evaluated using diffusion-weighted, opposed-phase imaging and dynamic contrast enhancement sequences. Extracellular contrast agents are used widely in liver MRI, and newer hepatobiliary contrast agents also provide the opportunity to perform functional biliary imaging. Anatomical imaging of the bile ducts is achieved by using a heavily T2-weighted sequence in magnetic resonance cholangiopancreatography (MRCP). Functional imaging of the pancreas is performed following administration of secretin. MR enterography provides non-invasive assessment of inflammatory bowel disease and small bowel tumors. MR-colonography may play a role in the detection of large polyps in patients at increased risk of colorectal cancer.
Angiography
Magnetic resonance angiography (MRA) generates pictures of the arteries to evaluate them for stenosis (abnormal narrowing) or aneurysms (vessel wall dilatations, at risk of rupture). MRA is often used to evaluate the arteries of the neck and brain, the thoracic and abdominal aorta, the renal arteries, and the legs (called a "run-off"). A variety of techniques can be used to generate the pictures, such as administration of a paramagnetic contrast agent (gadolinium) or using a technique known as "flow-related enhancement" (e.g., 2D and 3D time-of-flight sequences), where most of the signal on an image is due to blood that recently moved into that plane (see also FLASH MRI).
Techniques involving phase accumulation (known as phase contrast angiography) can also be used to generate flow velocity maps easily and accurately. Magnetic resonance venography (MRV) is a similar procedure that is used to image veins. In this method, the tissue is now excited inferiorly, while the signal is gathered in the plane immediately superior to the excitation plane—thus imaging the venous blood that recently moved from the excited plane.
Contrast agents
MRI for imaging anatomical structures or blood flow does not require contrast agents, since the varying properties of the tissues or blood provide natural contrast. However, for more specific types of imaging, exogenous contrast agents may be given intravenously, orally, or intra-articularly. Most contrast agents are either paramagnetic (e.g. gadolinium, manganese, europium), and are used to shorten T1 in the tissue they accumulate in, or superparamagnetic (e.g. SPIONs), and are used to shorten T2 and T2* in healthy tissue, reducing its signal intensity (negative contrast agents). The most commonly used intravenous contrast agents are based on chelates of gadolinium, which is highly paramagnetic. In general, these agents have proved safer than the iodinated contrast agents used in X-ray radiography or CT. Anaphylactoid reactions are rare, occurring in approx. 0.03–0.1%. Of particular interest is the lower incidence of nephrotoxicity, compared with iodinated agents, when given at usual doses; this has made contrast-enhanced MRI scanning an option for patients with renal impairment, who would otherwise not be able to undergo contrast-enhanced CT.
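The effect of a paramagnetic agent on T1 is commonly summarized by its relaxivity r1: the observed relaxation rate is 1/T1,obs = 1/T1,0 + r1·C, where C is the local agent concentration. The sketch below evaluates this relation with illustrative numbers, not values for any specific product.

# Effect of a T1-shortening contrast agent on tissue relaxation (illustrative values).
T1_native_s = 1.0        # assumed native tissue T1 in seconds
r1_per_mM_per_s = 4.0    # assumed relaxivity in s^-1 mM^-1
concentration_mM = 0.5   # assumed local agent concentration in mM

rate_observed = 1.0 / T1_native_s + r1_per_mM_per_s * concentration_mM
T1_observed_s = 1.0 / rate_observed
print(f"T1 shortened from {T1_native_s:.2f} s to {T1_observed_s:.2f} s")   # 1.00 s -> 0.33 s here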
Gadolinium-based contrast reagents are typically octadentate complexes of gadolinium(III). The complex is very stable (log K > 20) so that, in use, the concentration of the un-complexed Gd3+ ions should be below the toxicity limit. The 9th place in the metal ion's coordination sphere is occupied by a water molecule which exchanges rapidly with water molecules in the reagent molecule's immediate environment, affecting the magnetic resonance relaxation time.
In December 2017, the Food and Drug Administration (FDA) in the United States announced in a drug safety communication that new warnings were to be included on all gadolinium-based contrast agents (GBCAs). The FDA also called for increased patient education and requiring gadolinium contrast vendors to conduct additional animal and clinical studies to assess the safety of these agents.
Although gadolinium agents have proved useful for patients with kidney impairment, in patients with severe kidney failure requiring dialysis there is a risk of a rare but serious illness, nephrogenic systemic fibrosis, which may be linked to the use of certain gadolinium-containing agents. The most frequently linked is gadodiamide, but other agents have been linked too. Although a causal link has not been definitively established, current guidelines in the United States are that dialysis patients should only receive gadolinium agents where essential and that dialysis should be performed as soon as possible after the scan to remove the agent from the body promptly.
In Europe, where more gadolinium-containing agents are available, a classification of agents according to potential risks has been released. In 2008, a new contrast agent named gadoxetate, brand name Eovist (US) or Primovist (EU), was approved for diagnostic use: This has the theoretical benefit of a dual excretion path.
Sequences
An MRI sequence is a particular setting of radiofrequency pulses and gradients, resulting in a particular image appearance. The T1 and T2 weighting can also be described as MRI sequences.
Specialized configurations
Magnetic resonance spectroscopy
Magnetic resonance spectroscopy (MRS) is used to measure the levels of different metabolites in body tissues, which can be achieved through a variety of single voxel or imaging-based techniques. The MR signal produces a spectrum of resonances that corresponds to different molecular arrangements of the isotope being "excited". This signature is used to diagnose certain metabolic disorders, especially those affecting the brain, and to provide information on tumor metabolism.
Magnetic resonance spectroscopic imaging (MRSI) combines both spectroscopic and imaging methods to produce spatially localized spectra from within the sample or patient. The spatial resolution is much lower (limited by the available SNR), but the spectrum in each voxel contains information about many metabolites. Because the available signal is used to encode spatial and spectral information, MRSI requires high SNR, achievable only at higher field strengths (3 T and above). The high procurement and maintenance costs of MRI with extremely high field strengths inhibit their popularity. However, recent compressed sensing-based software algorithms (e.g., SAMV) have been proposed to achieve super-resolution without requiring such high field strengths.
Real-time
Interventional MRI
The lack of harmful effects on the patient and the operator make MRI well-suited for interventional radiology, where the images produced by an MRI scanner guide minimally invasive procedures. Such procedures use no ferromagnetic instruments.
A specialized growing subset of interventional MRI is intraoperative MRI, in which an MRI is used in surgery. Some specialized MRI systems allow imaging concurrent with the surgical procedure. More typically, the surgical procedure is temporarily interrupted so that MRI can assess the success of the procedure or guide subsequent surgical work.
Magnetic resonance guided focused ultrasound
In guided therapy, high-intensity focused ultrasound (HIFU) beams are focused on a tissue and are controlled using MR thermal imaging. Due to the high energy at the focus, the temperature rises to above 65 °C (150 °F), which completely destroys the tissue. This technology can achieve precise ablation of diseased tissue. MR imaging provides a three-dimensional view of the target tissue, allowing for the precise focusing of ultrasound energy. The MR imaging provides quantitative, real-time, thermal images of the treated area. This allows the physician to ensure that the temperature generated during each cycle of ultrasound energy is sufficient to cause thermal ablation within the desired tissue and, if not, to adapt the parameters to ensure effective treatment.
Multinuclear imaging
Hydrogen is the most frequently imaged nucleus in MRI because it is present in biological tissues in great abundance, and because its high gyromagnetic ratio gives a strong signal. However, any nucleus with a net nuclear spin could potentially be imaged with MRI. Such nuclei include helium-3, lithium-7, carbon-13, fluorine-19, oxygen-17, sodium-23, phosphorus-31 and xenon-129. 23Na and 31P are naturally abundant in the body, so they can be imaged directly. Gaseous isotopes such as 3He or 129Xe must be hyperpolarized and then inhaled as their nuclear density is too low to yield a useful signal under normal conditions. 17O and 19F can be administered in sufficient quantities in liquid form (e.g. 17O-water) that hyperpolarization is not a necessity. Using helium or xenon has the advantage of reduced background noise, and therefore increased contrast for the image itself, because these elements are not normally present in biological tissues.
Moreover, the nucleus of any atom that has a net nuclear spin and that is bonded to a hydrogen atom could potentially be imaged via heteronuclear magnetization transfer MRI that would image the high-gyromagnetic-ratio hydrogen nucleus instead of the low-gyromagnetic-ratio nucleus that is bonded to the hydrogen atom. In principle, heteronuclear magnetization transfer MRI could be used to detect the presence or absence of specific chemical bonds.
Multinuclear imaging is primarily a research technique at present. However, potential applications include functional imaging and imaging of organs poorly seen on 1H MRI (e.g., lungs and bones) or as alternative contrast agents. Inhaled hyperpolarized 3He can be used to image the distribution of air spaces within the lungs. Injectable solutions containing 13C or stabilized bubbles of hyperpolarized 129Xe have been studied as contrast agents for angiography and perfusion imaging. 31P can potentially provide information on bone density and structure, as well as functional imaging of the brain. Multinuclear imaging holds the potential to chart the distribution of lithium in the human brain, this element finding use as an important drug for those with conditions such as bipolar disorder.
Molecular imaging by MRI
MRI has the advantages of having very high spatial resolution and is very adept at morphological imaging and functional imaging. MRI does have several disadvantages though. First, MRI has a sensitivity of around 10−3 mol/L to 10−5 mol/L, which, compared to other types of imaging, can be very limiting. This problem stems from the fact that the population difference between the nuclear spin states is very small at room temperature. For example, at 1.5 teslas, a typical field strength for clinical MRI, the difference between high and low energy states is approximately 9 molecules per 2 million. Improvements to increase MR sensitivity include increasing magnetic field strength and hyperpolarization via optical pumping or dynamic nuclear polarization. There are also a variety of signal amplification schemes based on chemical exchange that increase sensitivity.
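The small population difference quoted above can be estimated from the Boltzmann distribution: for spin-1/2 nuclei the fractional spin excess is roughly ΔE/(2kT) with ΔE = γħB0. The short calculation below, a sketch using standard physical constants and body temperature, gives a value of a few protons per million at 1.5 T, the same order of magnitude as the figure above.

# Approximate spin excess for protons at thermal equilibrium: ratio ≈ gamma*hbar*B0 / (2*k*T)
gamma = 2.675e8        # proton gyromagnetic ratio, rad s^-1 T^-1
hbar = 1.0546e-34      # reduced Planck constant, J s
k_B = 1.381e-23        # Boltzmann constant, J K^-1
B0 = 1.5               # field strength in tesla
T = 310.0              # body temperature in kelvin

excess = gamma * hbar * B0 / (2 * k_B * T)
print(f"fractional spin excess ≈ {excess:.2e}")   # on the order of 5e-6, i.e. a few per million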
To achieve molecular imaging of disease biomarkers using MRI, targeted MRI contrast agents with high specificity and high relaxivity (sensitivity) are required. To date, many studies have been devoted to developing targeted-MRI contrast agents to achieve molecular imaging by MRI. Commonly, peptides, antibodies, or small ligands, and small protein domains, such as HER-2 affibodies, have been applied to achieve targeting. To enhance the sensitivity of the contrast agents, these targeting moieties are usually linked to high payload MRI contrast agents or MRI contrast agents with high relaxivities. A new class of gene targeting MR contrast agents has been introduced to show gene action of unique mRNA and gene transcription factor proteins. These new contrast agents can trace cells with unique mRNA, microRNA and virus; tissue response to inflammation in living brains. The MR reports change in gene expression with positive correlation to TaqMan analysis, optical and electron microscopy.
Parallel MRI
It takes time to gather MRI data using sequential applications of magnetic field gradients. Even for the most streamlined of MRI sequences, there are physical and physiologic limits to the rate of gradient switching. Parallel MRI circumvents these limits by gathering some portion of the data simultaneously, rather than in a traditional sequential fashion. This is accomplished using arrays of radiofrequency (RF) detector coils, each with a different 'view' of the body. A reduced set of gradient steps is applied, and the remaining spatial information is filled in by combining signals from various coils, based on their known spatial sensitivity patterns. The resulting acceleration is limited by the number of coils and by the signal to noise ratio (which decreases with increasing acceleration), but two- to four-fold accelerations may commonly be achieved with suitable coil array configurations, and substantially higher accelerations have been demonstrated with specialized coil arrays. Parallel MRI may be used with most MRI sequences.
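The trade-off between acceleration and signal-to-noise ratio is often expressed as SNR_accelerated = SNR_full / (g·√R), where R is the acceleration factor and g ≥ 1 is the coil-geometry factor. The sketch below evaluates this relation for a few assumed g-factors; the numbers are illustrative only.

import math

# SNR penalty of parallel imaging: SNR_acc = SNR_full / (g * sqrt(R))
snr_full = 100.0                        # arbitrary baseline SNR
cases = [(2, 1.1), (3, 1.3), (4, 1.8)]  # (acceleration factor R, assumed g-factor)

for R, g in cases:
    snr_acc = snr_full / (g * math.sqrt(R))
    print(f"R = {R}: SNR drops from {snr_full:.0f} to {snr_acc:.0f}")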
After a number of early suggestions for using arrays of detectors to accelerate imaging went largely unremarked in the MRI field, parallel imaging saw widespread development and application following the introduction of the SiMultaneous Acquisition of Spatial Harmonics (SMASH) technique in 1996–7. The SENSitivity Encoding (SENSE) and Generalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) techniques are the parallel imaging methods in most common use today. The advent of parallel MRI resulted in extensive research and development in image reconstruction and RF coil design, as well as in a rapid expansion of the number of receiver channels available on commercial MR systems. Parallel MRI is now used routinely for MRI examinations in a wide range of body areas and clinical or research applications.
Quantitative MRI
Most MRI focuses on qualitative interpretation of MR data by acquiring spatial maps of relative variations in signal strength which are "weighted" by certain parameters. Quantitative methods instead attempt to determine spatial maps of accurate tissue relaxometry parameter values or magnetic field, or to measure the size of certain spatial features.
Examples of quantitative MRI methods are:
T1-mapping (notably used in cardiac magnetic resonance imaging)
T2-mapping
Quantitative susceptibility mapping (QSM)
Quantitative fluid flow MRI (i.e. some cerebrospinal fluid flow MRI)
Magnetic resonance elastography (MRE)
Quantitative MRI aims to increase the reproducibility of MR images and interpretations, but has historically required longer scan times.
Quantitative MRI (or qMRI) sometimes more specifically refers to multi-parametric quantitative MRI, the mapping of multiple tissue relaxometry parameters in a single imaging session.
Efforts to make multi-parametric quantitative MRI faster have produced sequences which map multiple parameters simultaneously, either by building separate encoding methods for each parameter into the sequence, or by fitting MR signal evolution to a multi-parameter model.
Hyperpolarized gas MRI
Traditional MRI generates poor images of lung tissue because there are fewer water molecules with protons that can be excited by the magnetic field. Using hyperpolarized gas, an MRI scan can identify ventilation defects in the lungs. Before the scan, a patient is asked to inhale hyperpolarized xenon mixed with a buffer gas of helium or nitrogen. The resulting lung images are of much higher quality than with traditional MRI.
Safety
MRI is, in general, a safe technique, although injuries may occur as a result of failed safety procedures or human error. Contraindications to MRI include most cochlear implants and cardiac pacemakers, shrapnel, and metallic foreign bodies in the eyes. Magnetic resonance imaging in pregnancy appears to be safe, at least during the second and third trimesters if done without contrast agents. Since MRI does not use any ionizing radiation, its use is generally favored in preference to CT when either modality could yield the same information. Some patients experience claustrophobia and may require sedation or shorter MRI protocols. Amplitude and rapid switching of gradient coils during image acquisition may cause peripheral nerve stimulation.
MRI uses powerful magnets and can therefore cause magnetic materials to move at great speeds, posing a projectile risk, and may cause fatal accidents. However, as millions of MRIs are performed globally each year, fatalities are extremely rare.
MRI machines can produce loud noise, up to 120 dB(A). This can cause hearing loss, tinnitus and hyperacusis, so appropriate hearing protection is essential for anyone inside the MRI scanner room during the examination.
Overuse
Medical societies issue guidelines for when physicians should use MRI on patients and recommend against overuse. MRI can detect health problems or confirm a diagnosis, but medical societies often recommend that MRI not be the first procedure for creating a plan to diagnose or manage a patient's complaint. A common case is to use MRI to seek a cause of low back pain; the American College of Physicians, for example, recommends against imaging (including MRI) as unlikely to result in a positive outcome for the patient.
Artifacts
An MRI artifact is a visual artifact, that is, an anomaly during visual representation. Many different artifacts can occur during magnetic resonance imaging (MRI), some affecting the diagnostic quality, while others may be confused with pathology. Artifacts can be classified as patient-related, signal processing-dependent and hardware (machine)-related.
Non-medical use
MRI is used industrially mainly for routine analysis of chemicals. The nuclear magnetic resonance technique is also used, for example, to measure the ratio between water and fat in foods, monitoring of flow of corrosive fluids in pipes, or to study molecular structures such as catalysts.
Being non-invasive and non-damaging, MRI can be used to study the anatomy of plants, their water transportation processes and water balance. It is also applied to veterinary radiology for diagnostic purposes. Outside this, its use in zoology is limited due to the high cost; but it can be used on many species.
In palaeontology it is used to examine the structure of fossils.
Forensic imaging provides graphic documentation of an autopsy, which manual autopsy does not. CT scanning provides quick whole-body imaging of skeletal and parenchymal alterations, whereas MR imaging gives better representation of soft tissue pathology. However, MRI is more expensive and more time-consuming to use. Moreover, the quality of MR imaging deteriorates below 10 °C.
History
In 1971 at Stony Brook University, Paul Lauterbur applied magnetic field gradients in all three dimensions and a back-projection technique to create NMR images. He published the first images of two tubes of water in 1973 in the journal Nature, followed by the picture of a living animal, a clam, and in 1974 by the image of the thoracic cavity of a mouse. Lauterbur called his imaging method zeugmatography, a term which was replaced by (N)MR imaging. In the late 1970s, Peter Mansfield and Paul Lauterbur developed further MRI-related techniques, such as the echo-planar imaging (EPI) technique.
Raymond Damadian's work on nuclear magnetic resonance (NMR) was incorporated into MRI, and he built one of the first scanners.
Advances in semiconductor technology were crucial to the development of practical MRI, which requires a large amount of computational power. This was made possible by the rapidly increasing number of transistors on a single integrated circuit chip. Mansfield and Lauterbur were awarded the 2003 Nobel Prize in Physiology or Medicine for their "discoveries concerning magnetic resonance imaging".
See also
Amplified magnetic resonance imaging
Cerebrospinal fluid flow MRI
Electron paramagnetic resonance
High-definition fiber tracking
High-resolution computed tomography
History of neuroimaging
International Society for Magnetic Resonance in Medicine
Jemris
List of neuroimaging software
Magnetic immunoassay
Magnetic particle imaging
Magnetic resonance elastography
Magnetic Resonance Imaging (journal)
Magnetic resonance microscopy
Nobel Prize controversies – Physiology or medicine
Rabi cycle
Robinson oscillator
Sodium MRI
Virtopsy
References
Further reading
External links
A Guided Tour of MRI: An introduction for laypeople National High Magnetic Field Laboratory
The Basics of MRI. Underlying physics and technical aspects.
Video: What to Expect During Your MRI Exam from the Institute for Magnetic Resonance Safety, Education, and Research (IMRSER)
Royal Institution Lecture – MRI: A Window on the Human Body
A Short History of Magnetic Resonance Imaging from a European Point of View
How MRI works explained simply using diagrams
Real-time MRI videos: Biomedizinische NMR Forschungs GmbH.
Paul C. Lauterbur, Genesis of the MRI (Magnetic Resonance Imaging) notebook, September 1971 (all pages freely available for download in variety of formats from Science History Institute Digital Collections at digital.sciencehistory.org)
1973 introductions
20th-century inventions
American inventions
Articles containing video clips
Biomagnetics
Cryogenics
Discovery and invention controversies
Radiology | Magnetic resonance imaging | [
"Physics",
"Chemistry",
"Biology"
] | 6,765 | [
"Applied and interdisciplinary physics",
"Nuclear magnetic resonance",
"Magnetic resonance imaging",
"Biomagnetics",
"Cryogenics"
] |
19,528 | https://en.wikipedia.org/wiki/Mechanical%20engineering | Mechanical engineering is the study of physical machines that may involve force and movement. It is an engineering branch that combines engineering physics and mathematics principles with materials science, to design, analyze, manufacture, and maintain mechanical systems. It is one of the oldest and broadest of the engineering branches.
Mechanical engineering requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, design, structural analysis, and electricity. In addition to these core principles, mechanical engineers use tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided engineering (CAE), and product lifecycle management to design and analyze manufacturing plants, industrial equipment and machinery, heating and cooling systems, transport systems, motor vehicles, aircraft, watercraft, robotics, medical devices, weapons, and others.
Mechanical engineering emerged as a field during the Industrial Revolution in Europe in the 18th century; however, its development can be traced back several thousand years around the world. In the 19th century, developments in physics led to the development of mechanical engineering science. The field has continually evolved to incorporate advancements; today mechanical engineers are pursuing developments in such areas as composites, mechatronics, and nanotechnology. It also overlaps with aerospace engineering, metallurgical engineering, civil engineering, structural engineering, electrical engineering, manufacturing engineering, chemical engineering, industrial engineering, and other engineering disciplines to varying amounts. Mechanical engineers may also work in the field of biomedical engineering, specifically with biomechanics, transport phenomena, biomechatronics, bionanotechnology, and modelling of biological systems.
History
The application of mechanical engineering can be seen in the archives of various ancient and medieval societies. The six classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) were known since prehistoric times. Mesopotamian civilization is credited with the invention of the wheel by several, mainly older, sources. However, some recent sources either suggest that it was invented independently in both Mesopotamia and Eastern Europe or credit prehistoric Eastern Europeans with the invention of the wheel. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC.
The Saqiyah was developed in the Kingdom of Kush during the 4th century BC. It relied on animal power, reducing the need for human energy. Reservoirs in the form of Hafirs were developed in Kush to store water and boost irrigation. Bloomeries and blast furnaces were developed during the seventh century BC in Meroe. Kushite sundials applied mathematics in the form of advanced trigonometry.
The earliest practical water-powered machines, the water wheel and watermill, first appeared in the Persian Empire, in what are now Iraq and Iran, by the early 4th century BC. In ancient Greece, the works of Archimedes (287–212 BC) influenced mechanics in the Western tradition. The geared Antikythera mechanism was an analog computer invented around the 2nd century BC.
In Roman Egypt, Heron of Alexandria (c. 10–70 AD) created the first steam-powered device (Aeolipile). In China, Zhang Heng (78–139 AD) improved a water clock and invented a seismometer, and Ma Jun (200–265 AD) invented a chariot with differential gears. The medieval Chinese horologist and engineer Su Song (1020–1101 AD) incorporated an escapement mechanism into his astronomical clock tower two centuries before escapement devices were found in medieval European clocks. He also invented the world's first known endless power-transmitting chain drive.
The cotton gin was invented in India by the 6th century AD, and the spinning wheel was invented in the Islamic world by the early 11th century. Dual-roller gins appeared in India and China between the 12th and 14th centuries. The worm gear roller gin appeared in the Indian subcontinent during the early Delhi Sultanate era of the 13th to 14th centuries.
During the Islamic Golden Age (7th to 15th century), Muslim inventors made remarkable contributions in the field of mechanical technology. Al-Jazari, who was one of them, wrote his famous Book of Knowledge of Ingenious Mechanical Devices in 1206 and presented many mechanical designs.
In the 17th century, important breakthroughs in the foundations of mechanical engineering occurred in England and on the Continent. The Dutch mathematician and physicist Christiaan Huygens invented the pendulum clock in 1657, which remained the most reliable timekeeper for almost 300 years, and published a work dedicated to clock designs and the theory behind them. In England, Isaac Newton formulated Newton's Laws of Motion and developed the calculus, which would become the mathematical basis of physics. Newton was reluctant to publish his works for years, but he was finally persuaded to do so by his colleagues, such as Edmond Halley. Gottfried Wilhelm Leibniz, who earlier designed a mechanical calculator, is also credited with developing the calculus during the same time period.
During the early 19th century Industrial Revolution, machine tools were developed in England, Germany, and Scotland. This allowed mechanical engineering to develop as a separate field within engineering, bringing with it manufacturing machines and the engines to power them. The first British professional society of mechanical engineers, the Institution of Mechanical Engineers, was formed in 1847, thirty years after the civil engineers formed the first such professional society, the Institution of Civil Engineers. On the European continent, Johann von Zimmermann (1820–1901) founded the first factory for grinding machines in Chemnitz, Germany in 1848.
In the United States, the American Society of Mechanical Engineers (ASME) was formed in 1880, becoming the third such professional engineering society, after the American Society of Civil Engineers (1852) and the American Institute of Mining Engineers (1871). The first schools in the United States to offer an engineering education were the United States Military Academy in 1817, an institution now known as Norwich University in 1819, and Rensselaer Polytechnic Institute in 1825. Education in mechanical engineering has historically been based on a strong foundation in mathematics and science.
Education
Degrees in mechanical engineering are offered at various universities worldwide. Mechanical engineering programs typically take four to five years of study depending on the place and university and result in a Bachelor of Engineering (B.Eng. or B.E.), Bachelor of Science (B.Sc. or B.S.), Bachelor of Science Engineering (B.Sc.Eng.), Bachelor of Technology (B.Tech.), Bachelor of Mechanical Engineering (B.M.E.), or Bachelor of Applied Science (B.A.Sc.) degree, in or with emphasis in mechanical engineering. In Spain, Portugal and most of South America, where neither B.S. nor B.Tech. programs have been adopted, the formal name for the degree is "Mechanical Engineer", and the course work is based on five or six years of training. In Italy the course work is based on five years of education and training, but in order to qualify as an engineer one has to pass a state exam at the end of the course. In Greece, the coursework is based on a five-year curriculum.
In the United States, most undergraduate mechanical engineering programs are accredited by the Accreditation Board for Engineering and Technology (ABET) to ensure similar course requirements and standards among universities. The ABET web site lists 302 accredited mechanical engineering programs as of 11 March 2014. Mechanical engineering programs in Canada are accredited by the Canadian Engineering Accreditation Board (CEAB), and most other countries offering engineering degrees have similar accreditation societies.
In Australia, mechanical engineering degrees are awarded as Bachelor of Engineering (Mechanical) or similar nomenclature, although there are an increasing number of specialisations. The degree takes four years of full-time study to achieve. To ensure quality in engineering degrees, Engineers Australia accredits engineering degrees awarded by Australian universities in accordance with the global Washington Accord. Before the degree can be awarded, the student must complete at least 3 months of on-the-job work experience in an engineering firm. Similar systems are also present in South Africa and are overseen by the Engineering Council of South Africa (ECSA).
In India, to become an engineer, one needs to have an engineering degree like a B.Tech. or B.E., have a diploma in engineering, or complete a course in an engineering trade such as fitter from an Industrial Training Institute (ITI) to receive an "ITI Trade Certificate" and also pass the All India Trade Test (AITT) in an engineering trade conducted by the National Council of Vocational Training (NCVT), by which one is awarded a "National Trade Certificate". A similar system is used in Nepal.
Some mechanical engineers go on to pursue a postgraduate degree such as a Master of Engineering, Master of Technology, Master of Science, Master of Engineering Management (M.Eng.Mgt. or M.E.M.), a Doctor of Philosophy in engineering (Eng.D. or Ph.D.) or an engineer's degree. The master's and engineer's degrees may or may not include research. The Doctor of Philosophy includes a significant research component and is often viewed as the entry point to academia. The Engineer's degree exists at a few institutions at an intermediate level between the master's degree and the doctorate.
Coursework
Standards set by each country's accreditation society are intended to provide uniformity in fundamental subject material, promote competence among graduating engineers, and to maintain confidence in the engineering profession as a whole. Engineering programs in the U.S., for example, are required by ABET to show that their students can "work professionally in both thermal and mechanical systems areas." The specific courses required to graduate, however, may differ from program to program. Universities and institutes of technology will often combine multiple subjects into a single class or split a subject into multiple classes, depending on the faculty available and the university's major area(s) of research.
The fundamental subjects required for mechanical engineering usually include:
Mathematics (in particular, calculus, differential equations, and linear algebra)
Basic physical sciences (including physics and chemistry)
Statics and dynamics
Strength of materials and solid mechanics
Materials engineering, composites
Thermodynamics, heat transfer, energy conversion, and HVAC
Fuels, combustion, internal combustion engine
Fluid mechanics (including fluid statics and fluid dynamics)
Mechanism and Machine design (including kinematics and dynamics)
Instrumentation and measurement
Manufacturing engineering, technology, or processes
Vibration, control theory and control engineering
Hydraulics and Pneumatics
Mechatronics and robotics
Engineering design and product design
Drafting, computer-aided design (CAD) and computer-aided manufacturing (CAM)
Mechanical engineers are also expected to understand and be able to apply basic concepts from chemistry, physics, tribology, chemical engineering, civil engineering, and electrical engineering. All mechanical engineering programs include multiple semesters of mathematical classes including calculus, and advanced mathematical concepts including differential equations, partial differential equations, linear algebra, differential geometry, and statistics, among others.
In addition to the core mechanical engineering curriculum, many mechanical engineering programs offer more specialized programs and classes, such as control systems, robotics, transport and logistics, cryogenics, fuel technology, automotive engineering, biomechanics, vibration, optics and others, if a separate department does not exist for these subjects.
Most mechanical engineering programs also require varying amounts of research or community projects to gain practical problem-solving experience. In the United States it is common for mechanical engineering students to complete one or more internships while studying, though this is not typically mandated by the university. Cooperative education is another option. Research on future work skills places demands on study components that feed students' creativity and innovation.
Job duties
Mechanical engineers research, design, develop, build, and test mechanical and thermal devices, including tools, engines, and machines.
Mechanical engineers typically do the following:
Analyze problems to see how mechanical and thermal devices might help solve the problem.
Design or redesign mechanical and thermal devices using analysis and computer-aided design.
Develop and test prototypes of devices they design.
Analyze the test results and change the design as needed.
Oversee the manufacturing process for the device.
Manage a team of professionals in specialized fields like mechanical drafting and design, prototyping, 3D printing, and CNC machining.
Mechanical engineers design and oversee the manufacturing of many products ranging from medical devices to new batteries. They also design power-producing machines such as electric generators, internal combustion engines, and steam and gas turbines as well as power-using machines, such as refrigeration and air-conditioning systems.
Like other engineers, mechanical engineers use computers to help create and analyze designs, run simulations and test how a machine is likely to work.
License and regulation
Engineers may seek license by a state, provincial, or national government. The purpose of this process is to ensure that engineers possess the necessary technical knowledge, real-world experience, and knowledge of the local legal system to practice engineering at a professional level. Once certified, the engineer is given the title of Professional Engineer (in the United States, Canada, Japan, South Korea, Bangladesh and South Africa), Chartered Engineer (in the United Kingdom, Ireland, India and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union).
In the U.S., to become a licensed Professional Engineer (PE), an engineer must pass the comprehensive FE (Fundamentals of Engineering) exam, work a minimum of 4 years as an Engineering Intern (EI) or Engineer-in-Training (EIT), and pass the "Principles and Practice" or PE (Practicing Engineer or Professional Engineer) exams. The requirements and steps of this process are set forth by the National Council of Examiners for Engineering and Surveying (NCEES), composed of engineering and land surveying licensing boards representing all U.S. states and territories.
In Australia (Queensland and Victoria) an engineer must be registered as a Professional Engineer within the state in which they practice, for example as a Registered Professional Engineer of Queensland or Victoria (RPEQ or RPEV, respectively).
In the UK, current graduates require a BEng plus an appropriate master's degree or an integrated MEng degree, a minimum of four years of post-graduate on-the-job competency development, and a peer-reviewed project report to become a Chartered Mechanical Engineer (CEng, MIMechE) through the Institution of Mechanical Engineers. CEng MIMechE can also be obtained via an examination route administered by the City and Guilds of London Institute.
In most developed countries, certain engineering tasks, such as the design of bridges, electric power plants, and chemical plants, must be approved by a professional engineer or a chartered engineer. "Only a licensed engineer, for instance, may prepare, sign, seal and submit engineering plans and drawings to a public authority for approval, or to seal engineering work for public and private clients." This requirement can be written into state and provincial legislation, such as in the Canadian provinces, for example the Ontario or Quebec's Engineer Act.
In other countries, such as the UK, no such legislation exists; however, practically all certifying bodies maintain a code of ethics independent of legislation, that they expect all members to abide by or risk expulsion.
Salaries and workforce statistics
The total number of engineers employed in the U.S. in 2015 was roughly 1.6 million. Of these, 278,340 were mechanical engineers (17.28%), the largest discipline by size. In 2012, the median annual income of mechanical engineers in the U.S. workforce was $80,580. The median income was highest when working for the government ($92,030), and lowest in education ($57,090). In 2014, the total number of mechanical engineering jobs was projected to grow 5% over the next decade. As of 2009, the average starting salary was $58,800 with a bachelor's degree.
Subdisciplines
The field of mechanical engineering can be thought of as a collection of many mechanical engineering science disciplines. Several of these subdisciplines which are typically taught at the undergraduate level are listed below, with a brief explanation and the most common application of each. Some of these subdisciplines are unique to mechanical engineering, while others are a combination of mechanical engineering and one or more other disciplines. Most work that a mechanical engineer does uses skills and techniques from several of these subdisciplines, as well as specialized subdisciplines. Specialized subdisciplines, as used in this article, are more likely to be the subject of graduate studies or on-the-job training than undergraduate research. Several specialized subdisciplines are discussed in this section.
Mechanics
Mechanics is, in the most general sense, the study of forces and their effect upon matter. Typically, engineering mechanics is used to analyze and predict the acceleration and deformation (both elastic and plastic) of objects under known forces (also called loads) or stresses. Subdisciplines of mechanics include
Statics, the study of how forces affect non-moving (static) bodies under known loads
Dynamics, the study of how forces affect moving bodies. Dynamics includes kinematics (about movement, velocity, and acceleration) and kinetics (about forces and resulting accelerations).
Mechanics of materials, the study of how different materials deform under various types of stress
Fluid mechanics, the study of how fluids react to forces
Kinematics, the study of the motion of bodies (objects) and systems (groups of objects), while ignoring the forces that cause the motion. Kinematics is often used in the design and analysis of mechanisms.
Continuum mechanics, a method of applying mechanics that assumes that objects are continuous (rather than discrete)
Mechanical engineers typically use mechanics in the design or analysis phases of engineering. If the engineering project were the design of a vehicle, statics might be employed to design the frame of the vehicle, in order to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine, to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle (see HVAC), or to design the intake system for the engine.
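As a minimal illustration of the statics and mechanics-of-materials steps described above, the following Python sketch (all dimensions, loads, and material values are hypothetical) checks whether a single frame member carrying an assumed axial load stays below an allowable stress:

    # Minimal statics / mechanics-of-materials check for one frame member.
    # All values are hypothetical and chosen only for illustration.

    def axial_stress(force_n: float, area_m2: float) -> float:
        """Normal stress (Pa) in a member carrying an axial force."""
        return force_n / area_m2

    force = 12_000.0                 # axial load on the member, N (assumed)
    width, thickness = 0.04, 0.005   # rectangular cross-section, m (assumed)
    area = width * thickness

    yield_strength = 250e6           # typical structural steel, Pa
    safety_factor = 2.0

    stress = axial_stress(force, area)
    allowable = yield_strength / safety_factor
    print(f"stress = {stress/1e6:.1f} MPa, allowable = {allowable/1e6:.1f} MPa")
    print("OK" if stress <= allowable else "member must be resized")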
Mechatronics and robotics
Mechatronics is a combination of mechanics and electronics. It is an interdisciplinary branch of mechanical engineering, electrical engineering and software engineering that is concerned with integrating electrical and mechanical engineering to create hybrid automation systems. In this way, machines can be automated through the use of electric motors, servo-mechanisms, and other electrical systems in conjunction with special software. A common example of a mechatronics system is a CD-ROM drive. Mechanical systems open and close the drive, spin the CD and move the laser, while an optical system reads the data on the CD and converts it to bits. Integrated software controls the process and communicates the contents of the CD to the computer.
Robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot).
Robots are used extensively in industrial automation engineering. They allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and ensure better quality. Many companies employ assembly lines of robots, especially in the automotive industry, and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications, from recreation to domestic applications.
Structural analysis
Structural analysis is the branch of mechanical engineering (and also civil engineering) devoted to examining why and how objects fail and to improving objects and their performance so that failure is avoided. Structural failures occur in two general modes: static failure and fatigue failure. Static structural failure occurs when, upon being loaded (having a force applied), the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. Fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. Fatigue failure occurs because of imperfections in the object: a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle (propagation) until the crack is large enough to cause ultimate failure.
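The cycle-by-cycle crack growth just described is often modelled with Paris' law (a standard model not named in the text above). A rough Python sketch with purely illustrative constants integrates the crack length until it reaches an assumed critical size:

    import math

    # Hypothetical Paris-law sketch of fatigue crack growth: da/dN = C * (dK)**m,
    # with dK = Y * d_sigma * sqrt(pi * a).  All constants are illustrative only
    # (units assumed: a in m, d_sigma in MPa, C in m/cycle per (MPa*sqrt(m))**m).
    C, m = 1e-11, 3.0            # Paris-law constants (assumed)
    Y = 1.12                     # geometry factor for an edge crack (assumed)
    d_sigma = 100.0              # stress range per cycle, MPa (assumed)
    a = 1e-3                     # initial crack length, m (assumed)
    a_crit = 10e-3               # assumed critical crack length, m

    cycles = 0
    while a < a_crit:
        dK = Y * d_sigma * math.sqrt(math.pi * a)
        a += C * dK ** m         # crack advances slightly each loading cycle
        cycles += 1

    print(f"estimated life: {cycles} cycles")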
Failure is not simply defined as when a part breaks, however; it is defined as when a part does not operate as intended. Some systems, such as the perforated top sections of some plastic bags, are designed to break. If these systems do not break, failure analysis might be employed to determine the cause.
Structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure. Engineers often use online documents and books such as those published by ASM to aid them in determining the type of failure and possible causes.
Once theory is applied to a mechanical design, physical testing is often performed to verify calculated results. Structural analysis may be used in an office when designing parts, in the field to analyze failed parts, or in laboratories where parts might undergo controlled failure tests.
Thermodynamics and thermo-science
Thermodynamics is an applied science used in several branches of engineering, including mechanical and chemical engineering. At its simplest, thermodynamics is the study of energy, its use and transformation through a system. Typically, engineering thermodynamics is concerned with changing energy from one form to another. As an example, automotive engines convert chemical energy (enthalpy) from the fuel into heat, and then into mechanical work that eventually turns the wheels.
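As a small numerical illustration of this energy-conversion view, the Carnot relation bounds how much of the heat released by the fuel any engine operating between two temperatures can convert into work; the temperatures and heat input in the Python sketch below are hypothetical:

    # Ideal (Carnot) efficiency bound for a heat engine between two reservoirs.
    # Temperatures are hypothetical and must be absolute (kelvin).
    def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
        return 1.0 - t_cold_k / t_hot_k

    t_combustion = 1800.0   # assumed peak gas temperature, K
    t_ambient = 300.0       # assumed exhaust/ambient temperature, K

    eta = carnot_efficiency(t_combustion, t_ambient)
    heat_in = 50e3          # assumed heat released per second by the fuel, W
    print(f"Carnot limit: {eta:.2%}, max mechanical power: {eta*heat_in/1e3:.1f} kW")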
Thermodynamics principles are used by mechanical engineers in the fields of heat transfer, thermofluids, and energy conversion. Mechanical engineers use thermo-science to design engines and power plants, heating, ventilation, and air-conditioning (HVAC) systems, heat exchangers, heat sinks, radiators, refrigeration, insulation, and others.
Design and drafting
Drafting or technical drawing is the means by which mechanical engineers design products and create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A U.S. mechanical engineer or skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions.
Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also manually manufacture a part using the technical drawings. However, with the advent of computer numerically controlled (CNC) manufacturing, parts can now be fabricated without the need for constant technician input. Manual manufacturing steps generally consist of spray coating, surface finishing, and other processes that cannot economically or practically be done by a machine.
Drafting is used in nearly every subdiscipline of mechanical engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD).
Modern tools
Many mechanical engineering companies, especially those in industrialized nations, have incorporated computer-aided engineering (CAE) programs into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and the ease of use in designing mating interfaces and tolerances.
Other CAE programs commonly used by mechanical engineers include product lifecycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM).
Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. No physical prototype need be created until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of a relative few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows.
As mechanical engineering begins to merge with other disciplines, as seen in mechatronics, multidisciplinary design optimization (MDO) is being used with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes, allowing product evaluation to continue even after the analyst goes home for the day. They also use sophisticated optimization algorithms to more intelligently explore possible designs, often finding better, innovative solutions to difficult multidisciplinary design problems.
Areas of research
Mechanical engineers are constantly pushing the boundaries of what is physically possible in order to produce safer, cheaper, and more efficient machines and mechanical systems. Some technologies at the cutting edge of mechanical engineering are listed below (see also exploratory engineering).
Micro electro-mechanical systems (MEMS)
Micron-scale mechanical components such as springs, gears, fluidic and heat transfer devices are fabricated from a variety of substrate materials such as silicon, glass and polymers like SU8. Examples of MEMS components are the accelerometers used as car airbag sensors and in modern cell phones, gyroscopes for precise positioning, and microfluidic devices used in biomedical applications.
Friction stir welding (FSW)
Friction stir welding, a new type of welding, was invented in 1991 by The Welding Institute (TWI). This innovative steady-state (non-fusion) welding technique joins materials previously un-weldable, including several aluminum alloys. It may play an important role in the future construction of airplanes, potentially replacing rivets. Current uses of this technology include welding the seams of the aluminum main Space Shuttle external tank, the Orion Crew Vehicle, the Boeing Delta II and Delta IV Expendable Launch Vehicles and the SpaceX Falcon 1 rocket, armor plating for amphibious assault ships, and welding the wings and fuselage panels of the Eclipse 500 aircraft from Eclipse Aviation, among a growing pool of uses.
Composites
Composites or composite materials are a combination of materials which provide different physical characteristics than either material separately. Composite material research within mechanical engineering typically focuses on designing (and, subsequently, finding applications for) stronger or more rigid materials while attempting to reduce weight, susceptibility to corrosion, and other undesirable factors. Carbon fiber reinforced composites, for instance, have been used in such diverse applications as spacecraft and fishing rods.
Mechatronics
Mechatronics is the synergistic combination of mechanical engineering, electronic engineering, and software engineering. The discipline of mechatronics began as a way to combine mechanical principles with electrical engineering. Mechatronic concepts are used in the majority of electro-mechanical systems. Typical electro-mechanical sensors used in mechatronics are strain gauges, thermocouples, and pressure transducers.
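As one hedged example of how such sensor signals are turned into engineering quantities, the Python sketch below uses the standard small-strain quarter-bridge approximation for a strain gauge in a Wheatstone bridge, strain ≈ 4·Vout/(GF·Vex); the gauge factor, excitation voltage, and measured output are hypothetical:

    # Quarter-bridge strain-gauge reading -> strain -> stress (small-strain approximation).
    # All numbers are hypothetical.
    GAUGE_FACTOR = 2.1        # typical metallic foil gauge (assumed)
    V_EXCITATION = 5.0        # bridge excitation voltage, V (assumed)
    E_STEEL = 200e9           # Young's modulus of steel, Pa

    def strain_from_bridge(v_out: float) -> float:
        """Approximate strain for a quarter Wheatstone bridge (small-strain limit)."""
        return 4.0 * v_out / (GAUGE_FACTOR * V_EXCITATION)

    v_measured = 1.3e-3       # measured bridge output, V (assumed)
    strain = strain_from_bridge(v_measured)
    stress = E_STEEL * strain # Hooke's law, uniaxial
    print(f"strain = {strain*1e6:.0f} microstrain, stress = {stress/1e6:.1f} MPa")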
Nanotechnology
At the smallest scales, mechanical engineering becomes nanotechnology—one speculative goal of which is to create a molecular assembler to build molecules and materials via mechanosynthesis. For now that goal remains within exploratory engineering. Areas of current mechanical engineering research in nanotechnology include nanofilters, nanofilms, and nanostructures, among others.
Finite element analysis
Finite element analysis is a computational tool used to estimate stress, strain, and deflection of solid bodies. It uses a mesh with user-defined element sizes and computes physical quantities at each node; the more nodes there are, the higher the precision. This field is not new, as the basis of finite element analysis (FEA) or the finite element method (FEM) dates back to 1941, but the evolution of computers has made FEA/FEM a viable option for the analysis of structural problems. Many commercial software applications such as NASTRAN, ANSYS, and ABAQUS are widely used in industry for research and the design of components. Some 3D modeling and CAD software packages have added FEA modules. In recent times, cloud simulation platforms like SimScale are becoming more common.
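A minimal Python sketch of the idea, for a one-dimensional axially loaded bar with hypothetical properties: the bar is split into linear two-node elements, a global stiffness matrix is assembled, and the nodal displacements are solved for (refining the mesh adds nodes and, for less trivial problems, precision):

    import numpy as np

    # 1D finite element sketch: uniform bar, fixed at x=0, axial tip load.
    # Hypothetical properties; linear two-node elements.
    E, A, L = 200e9, 1e-4, 1.0     # Young's modulus (Pa), area (m^2), length (m)
    P = 1e4                        # tip load, N
    n_el = 8                       # number of elements (more elements -> more nodes)

    n_nodes = n_el + 1
    le = L / n_el
    k_el = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])

    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_el):          # assemble the global stiffness matrix
        K[e:e+2, e:e+2] += k_el

    F = np.zeros(n_nodes)
    F[-1] = P                      # point load at the free end

    # Apply the fixed boundary condition at node 0 and solve K u = F.
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

    print(f"tip displacement: {u[-1]:.3e} m (exact P*L/(E*A) = {P*L/(E*A):.3e} m)")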
Other techniques such as the finite difference method (FDM) and the finite-volume method (FVM) are employed to solve problems relating to heat and mass transfer, fluid flows, fluid–surface interaction, etc.
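For instance, one-dimensional transient heat conduction can be marched forward in time with an explicit finite difference scheme; the Python sketch below uses hypothetical material data and a time step chosen inside the usual stability limit:

    import numpy as np

    # Explicit finite-difference solution of 1D heat conduction, dT/dt = alpha * d2T/dx2.
    # Hypothetical rod: 0.1 m long, ends held at 100 C and 0 C, initially at 0 C.
    alpha = 1e-5                 # thermal diffusivity, m^2/s (assumed)
    length, n = 0.1, 51
    dx = length / (n - 1)
    dt = 0.4 * dx**2 / alpha     # below the stability limit dt <= dx^2 / (2 * alpha)

    T = np.zeros(n)
    T[0], T[-1] = 100.0, 0.0     # fixed boundary temperatures

    for _ in range(5000):        # march forward in time
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

    print(f"temperature at mid-rod after {5000*dt:.0f} s: {T[n//2]:.1f} C")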
Biomechanics
Biomechanics is the application of mechanical principles to biological systems, such as humans, animals, plants, organs, and cells. Biomechanics also aids in creating prosthetic limbs and artificial organs for humans. Biomechanics is closely related to engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems.
In the past decade, reverse engineering of materials found in nature such as bone matter has gained funding in academia. The structure of bone matter is optimized for its purpose of bearing a large amount of compressive stress per unit weight. The goal is to replace crude steel with bio-material for structural design.
Over the past decade the Finite element method (FEM) has also entered the Biomedical sector highlighting further engineering aspects of Biomechanics. FEM has since then established itself as an alternative to in vivo surgical assessment and gained the wide acceptance of academia. The main advantage of Computational Biomechanics lies in its ability to determine the endo-anatomical response of an anatomy, without being subject to ethical restrictions. This has led FE modelling to the point of becoming ubiquitous in several fields of Biomechanics while several projects have even adopted an open source philosophy (e.g. BioSpine).
Computational fluid dynamics
Computational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as turbulent flows. Initial validation of such software is performed using a wind tunnel with the final validation coming in full-scale testing, e.g. flight tests.
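A toy illustration of this numerical approach (not a production CFD code, which solves the far richer Navier–Stokes equations) is first-order upwind differencing of the one-dimensional linear advection equation, with arbitrary illustrative parameters:

    import numpy as np

    # First-order upwind scheme for 1D linear advection, du/dt + c * du/dx = 0.
    # Illustrative parameters only.
    c = 1.0                      # advection speed, m/s (assumed)
    n, length = 200, 1.0
    dx = length / n
    dt = 0.5 * dx / c            # CFL number 0.5 for stability

    x = np.linspace(0.0, length, n, endpoint=False)
    u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # initial square pulse

    for _ in range(100):         # advance 100 time steps
        u[1:] -= c * dt / dx * (u[1:] - u[:-1])     # upwind difference (c > 0)
        u[0] = 0.0               # inflow boundary

    print(f"pulse centre moved to about x = {x[np.argmax(u)]:.2f} m")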
Acoustical engineering
Acoustical engineering is one of many sub-disciplines of mechanical engineering and is the application of acoustics, the study of sound and vibration. Acoustical engineers work to reduce noise pollution in mechanical devices and in buildings by soundproofing or removing sources of unwanted noise. The study of acoustics can range from designing a more efficient hearing aid, microphone, headphone, or recording studio to enhancing the sound quality of an orchestra hall. Acoustical engineering also deals with the vibration of different mechanical systems.
Related fields
Manufacturing engineering, aerospace engineering, automotive engineering and marine engineering are grouped with mechanical engineering at times. A bachelor's degree in these areas will typically have a difference of a few specialized classes.
See also
Automobile engineering
Index of mechanical engineering articles
Lists
Glossary of mechanical engineering
List of historic mechanical engineering landmarks
List of inventors
List of mechanical engineering topics
List of mechanical engineers
List of related journals
List of mechanical, electrical and electronic equipment manufacturing companies by revenue
Associations
American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)
American Society of Mechanical Engineers (ASME)
Pi Tau Sigma (Mechanical Engineering honor society)
Society of Automotive Engineers (SAE)
Society of Women Engineers (SWE)
Institution of Mechanical Engineers (IMechE) (British)
Chartered Institution of Building Services Engineers (CIBSE) (British)
Verein Deutscher Ingenieure (VDI) (Germany)
Wikibooks
Engineering Mechanics
Engineering Thermodynamics
Engineering Acoustics
Fluid Mechanics
Heat Transfer
Microtechnology
Nanotechnology
Pro/Engineer (ProE CAD)
Strength of Materials/Solid Mechanics
References
Further reading
External links
Mechanical engineering at MTU.edu
Engineering disciplines
Mechanical designers | Mechanical engineering | ["Physics", "Engineering"] | 6,793 | ["Design engineering", "Applied and interdisciplinary physics", "Mechanical designers", "nan", "Mechanical engineering"] |
19,559 | https://en.wikipedia.org/wiki/Mechanics | Mechanics is the area of physics concerned with the relationships between force, matter, and motion among physical objects. Forces applied to objects may result in displacements, which are changes of an object's position relative to its environment.
Theoretical expositions of this branch of physics have their origins in Ancient Greece, for instance, in the writings of Aristotle and Archimedes (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo Galilei, Johannes Kepler, Christiaan Huygens, and Isaac Newton laid the foundation for what is now known as classical mechanics.
As a branch of classical physics, mechanics deals with bodies that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as the physical science that deals with the motion of and forces on bodies not in the quantum realm.
History
Antiquity
The ancient Greek philosophers were among the first to propose that abstract principles govern nature. The main theory of mechanics in antiquity was Aristotelian mechanics, though an alternative theory is exposed in the pseudo-Aristotelian Mechanical Problems, often attributed to one of his successors.
There is another tradition that goes back to the ancient Greeks where mathematics is used more extensively to analyze bodies statically or dynamically, an approach that may have been stimulated by prior work of the Pythagorean Archytas. Examples of this tradition include pseudo-Euclid (On the Balance), Archimedes (On the Equilibrium of Planes, On Floating Bodies), Hero (Mechanica), and Pappus (Collection, Book VIII).
Medieval age
In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, which was discussed by Hipparchus and Philoponus.
Persian Islamic polymath Ibn Sīnā published his theory of motion in The Book of Healing (1020). He said that an impetus is imparted to a projectile by the thrower, and viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sina made distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gained mayl when the object is in opposition to its natural motion. So he concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that object will be in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon, consistent with Newton's first law of motion.
On the question of a body subject to a constant (uniform) force, the 12th-century Jewish-Arab scholar Hibat Allah Abu'l-Barakat al-Baghdaadi (born Nathanel, Iraqi, of Baghdad) stated that constant force imparts constant acceleration. According to Shlomo Pines, al-Baghdaadi's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]."
Influenced by earlier writers such as Ibn Sina and al-Baghdaadi, the 14th-century French priest Jean Buridan developed the theory of impetus, which later developed into the modern theories of inertia, velocity, acceleration and momentum. This work and others were developed in 14th-century England by the Oxford Calculators such as Thomas Bradwardine, who studied and formulated various laws regarding falling bodies. The concept of uniformly accelerated motion as the main property of falling bodies was also worked out by the 14th-century Oxford Calculators.
Early modern age
Two central figures in the early modern age are Galileo Galilei and Isaac Newton. Galileo's final statement of his mechanics, particularly of falling bodies, is his Two New Sciences (1638). Newton's 1687 Philosophiæ Naturalis Principia Mathematica provided a detailed mathematical account of mechanics, using the newly developed mathematics of calculus and providing the basis of Newtonian mechanics.
There is some dispute over priority of various ideas: Newton's Principia is certainly the seminal work and has been tremendously influential, and many of the mathematics results therein could not have been stated earlier without the development of the calculus. However, many of the ideas, particularly as pertain to inertia and falling bodies, had been developed by prior scholars such as Christiaan Huygens and the less-known medieval predecessors. Precise credit is at times difficult or contentious because scientific language and standards of proof changed, so whether medieval statements are equivalent to modern statements or sufficient proof, or instead similar to modern statements and hypotheses is often debatable.
Modern age
Two main modern developments in mechanics are general relativity of Einstein, and quantum mechanics, both developed in the 20th century based in part on earlier 19th-century ideas. The development in the modern continuum mechanics, particularly in the areas of elasticity, plasticity, fluid dynamics, electrodynamics, and thermodynamics of deformable media, started in the second half of the 20th century.
Types of mechanical bodies
The often-used term body needs to stand for a wide assortment of objects, including particles, projectiles, spacecraft, stars, parts of machinery, parts of solids, parts of fluids (gases and liquids), etc.
Other distinctions between the various sub-disciplines of mechanics concern the nature of the bodies being described. Particles are bodies with little (known) internal structure, treated as mathematical points in classical mechanics. Rigid bodies have size and shape, but retain a simplicity close to that of the particle, adding just a few so-called degrees of freedom, such as orientation in space.
Otherwise, bodies may be semi-rigid, i.e. elastic, or non-rigid, i.e. fluid. These subjects have both classical and quantum divisions of study.
For instance, the motion of a spacecraft, regarding its orbit and attitude (rotation), is described by the relativistic theory of classical mechanics, while the analogous movements of an atomic nucleus are described by quantum mechanics.
Sub-disciplines
The following are the three main designations consisting of various subjects that are studied in mechanics.
Note that there is also the "theory of fields" which constitutes a separate discipline in physics, formally treated as distinct from mechanics, whether it be classical fields or quantum fields. But in actual practice, subjects belonging to mechanics and fields are closely interwoven. Thus, for instance, forces that act on particles are frequently derived from fields (electromagnetic or gravitational), and particles generate fields by acting as sources. In fact, in quantum mechanics, particles themselves are fields, as described theoretically by the wave function.
Classical
The following are described as forming classical mechanics:
Newtonian mechanics, the original theory of motion (kinematics) and forces (dynamics)
Analytical mechanics is a reformulation of Newtonian mechanics with an emphasis on system energy, rather than on forces. There are two main branches of analytical mechanics:
Hamiltonian mechanics, a theoretical formalism, based on the principle of conservation of energy
Lagrangian mechanics, another theoretical formalism, based on the principle of the least action
Classical statistical mechanics generalizes ordinary classical mechanics to consider systems in an unknown state; often used to derive thermodynamic properties.
Celestial mechanics, the motion of bodies in space: planets, comets, stars, galaxies, etc.
Astrodynamics, spacecraft navigation, etc.
Solid mechanics, elasticity, plasticity, or viscoelasticity exhibited by deformable solids
Fracture mechanics
Acoustics, sound (density, variation, propagation) in solids, fluids and gases
Statics, semi-rigid bodies in mechanical equilibrium
Fluid mechanics, the motion of fluids
Soil mechanics, mechanical behavior of soils
Continuum mechanics, mechanics of continua (both solid and fluid)
Hydraulics, mechanical properties of liquids
Fluid statics, liquids in equilibrium
Applied mechanics (also known as engineering mechanics)
Biomechanics, solids, fluids, etc. in biology
Biophysics, physical processes in living organisms
Relativistic or Einsteinian mechanics
Quantum
The following are categorized as being part of quantum mechanics:
Schrödinger wave mechanics, used to describe the movements of the wavefunction of a single particle.
Matrix mechanics is an alternative formulation that allows considering systems with a finite-dimensional state space.
Quantum statistical mechanics generalizes ordinary quantum mechanics to consider systems in an unknown state; often used to derive thermodynamic properties.
Particle physics, the motion, structure, and behavior of fundamental particles
Nuclear physics, the motion, structure, and reactions of nuclei
Condensed matter physics, quantum gases, solids, liquids, etc.
Historically, classical mechanics had been around for nearly a quarter millennium before quantum mechanics developed. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica, developed over the seventeenth century. Quantum mechanics developed later, in the early twentieth century, precipitated by Planck's postulate and Albert Einstein's explanation of the photoelectric effect. Both fields are commonly held to constitute the most certain knowledge that exists about physical nature.
Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the extensive use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them.
Quantum mechanics is broader in scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects, each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers, i.e. if quantum mechanics is applied to large systems (e.g. a baseball), the result would be almost the same as if classical mechanics had been applied. Quantum mechanics has superseded classical mechanics at the foundation level and is indispensable for the explanation and prediction of processes at the molecular, atomic, and sub-atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are unmanageably difficult (mainly due to computational limits) in quantum mechanics and hence remains useful and well used.
Modern descriptions of such behavior begin with a careful definition of such quantities as displacement (distance moved), time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. For example, following the ideas of Greek philosopher and scientist Aristotle, scientists reasoned that a cannonball falls down because its natural position is in the Earth; the Sun, the Moon, and the stars travel in circles around the Earth because it is the nature of heavenly objects to travel in perfect circles.
Often cited as father to modern science, Galileo brought together the ideas of other great thinkers of his time and began to calculate motion in terms of distance travelled from some starting position and the time that it took. He showed that the speed of falling objects increases steadily during the time of their fall. This acceleration is the same for heavy objects as for light ones, provided air friction (air resistance) is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating these to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity. [A sentence illustrating the computational complication of Einstein's theory of relativity.] For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion.
Relativistic
Akin to the distinction between quantum and classical mechanics, Albert Einstein's general and special theories of relativity have expanded the scope of Newton and Galileo's formulation of mechanics. The differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a body approaches the speed of light. For instance, in Newtonian mechanics, the kinetic energy of a free particle is E = ½mv², whereas in relativistic mechanics, it is E = (γ − 1)mc² (where γ is the Lorentz factor; this formula reduces to the Newtonian expression in the low energy limit).
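A quick numerical check of this low-energy limit, with arbitrarily chosen speeds and a 1 kg mass, shows the relativistic kinetic energy (γ − 1)mc² approaching the Newtonian ½mv² when v ≪ c:

    import math

    # Compare Newtonian and relativistic kinetic energy for an arbitrary 1 kg mass.
    c = 299_792_458.0            # speed of light, m/s
    m = 1.0                      # mass, kg (arbitrary)

    for v in (3e3, 3e7, 0.9 * c):            # slow, fast, relativistic speeds, m/s
        gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
        ke_newton = 0.5 * m * v ** 2
        ke_rel = (gamma - 1.0) * m * c ** 2
        print(f"v = {v:.3g} m/s: Newtonian {ke_newton:.4g} J, relativistic {ke_rel:.4g} J")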
For high-energy processes, quantum mechanics must be adjusted to account for special relativity; this has led to the development of quantum field theory.
Professional organizations
Applied Mechanics Division, American Society of Mechanical Engineers
Fluid Dynamics Division, American Physical Society
Society for Experimental Mechanics
Institution of Mechanical Engineers is the United Kingdom's qualifying body for mechanical engineers and has been the home of Mechanical Engineers for over 150 years.
International Union of Theoretical and Applied Mechanics
See also
Action principles
Applied mechanics
Dynamics
Engineering
Index of engineering science and mechanics articles
Kinematics
Kinetics
Non-autonomous mechanics
Statics
Wiesen Test of Mechanical Aptitude (WTMA)
References
Further reading
Robert Stawell Ball (1871) Experimental Mechanics from Google books.
Practical Mechanics for Boys (1914) by James Slough Zerbe.
External links
Physclips: Mechanics with animations and video clips from the University of New South Wales
The Archimedes Project
Articles containing video clips | Mechanics | ["Physics", "Engineering"] | 2,762 | ["Mechanics", "Mechanical engineering"] |
19,737 | https://en.wikipedia.org/wiki/Maxwell%27s%20equations | Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, electric and magnetic circuits.
The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside.
Maxwell's equations may be combined to demonstrate how fluctuations in electromagnetic fields (waves) propagate at a constant speed in vacuum, c (299,792,458 m/s). Known as electromagnetic radiation, these waves occur at various wavelengths to produce a spectrum of radiation from radio waves to gamma rays.
In partial differential equation form and a coherent system of units, Maxwell's microscopic equations can be written as

∇·E = ρ/ε₀
∇·B = 0
∇×E = −∂B/∂t
∇×B = μ₀(J + ε₀ ∂E/∂t)

with E the electric field, B the magnetic field, ρ the electric charge density and J the current density; ε₀ is the vacuum permittivity and μ₀ the vacuum permeability.
The equations have two major variants:
The microscopic equations have universal applicability but are unwieldy for common calculations. They relate the electric and magnetic fields to total charge and total current, including the complicated charges and currents in materials at the atomic scale.
The macroscopic equations define two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic-scale charges and quantum phenomena like spins. However, their use requires experimentally determined parameters for a phenomenological description of the electromagnetic response of materials.
The term "Maxwell's equations" is often also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic scalar potentials are preferred for explicitly solving the equations as a boundary value problem, analytical mechanics, or for use in quantum mechanics. The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest. Maxwell's equations in curved spacetime, commonly used in high-energy and gravitational physics, are compatible with general relativity. In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences.
The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation.
Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics.
History of the equations
Conceptual descriptions
Gauss's law
Gauss's law describes the relationship between an electric field and electric charges: an electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through a closed surface is proportional to the enclosed charge, including bound charge due to polarization of material. The coefficient of the proportion is the permittivity of free space.
Gauss's law for magnetism
Gauss's law for magnetism states that electric charges have no magnetic analogues, called magnetic monopoles; no north or south magnetic poles exist in isolation. Instead, the magnetic field of a material is attributed to a dipole, and the net outflow of the magnetic field through a closed surface is zero. Magnetic dipoles may be represented as loops of current or inseparable pairs of equal and opposite "magnetic charges". Precisely, the total magnetic flux through a Gaussian surface is zero, and the magnetic field is a solenoidal vector field.
Faraday's law
The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field corresponds to curl of an electric field. In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of change of the magnetic flux through the enclosed surface.
The electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field and generates an electric field in a nearby wire.
Ampère–Maxwell law
The original law of Ampère states that magnetic fields relate to electric current. Maxwell's addition states that magnetic fields also relate to changing electric fields, which Maxwell called displacement current. The integral form states that electric and displacement currents are associated with a proportional magnetic field along any enclosing curve.
Maxwell's modification of Ampère's circuital law is important because the laws of Ampère and Gauss must otherwise be adjusted for static fields. As a consequence, it predicts that a rotating magnetic field occurs with a changing electric field. A further consequence is the existence of self-sustaining electromagnetic waves which travel through empty space.
The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics.
Formulation in terms of electric and magnetic fields (microscopic or in vacuum version)
In the electric and magnetic field formulation there are four equations that determine the fields for given charge and current distribution. A separate law of nature, the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. By convention, a version of this law in the original equations by Maxwell is no longer included. The vector calculus formalism below, the work of Oliver Heaviside, has become standard. It is rotationally invariant, and therefore mathematically more transparent than Maxwell's original 20 equations in x, y and z components. The relativistic formulations are more symmetric and Lorentz invariant. The same equations can also be expressed using tensor calculus or differential forms (see the alternative formulations).
The differential and integral formulations are mathematically equivalent; both are useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis.
Key to the notation
Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated.
The equations introduce the electric field, E, a vector field, and the magnetic field, B, a pseudovector field, each generally having a time and location dependence.
The sources are
the total electric charge density (total charge per unit volume), ρ, and
the total electric current density (total current per unit area), J.
The universal constants appearing in the equations (the first two explicitly only in the SI formulation) are:
the permittivity of free space, ε₀, and
the permeability of free space, μ₀, and
the speed of light, c = 1/√(ε₀μ₀).
Differential equations
In the differential equations,
the nabla symbol, ∇, denotes the three-dimensional gradient operator, del,
the symbol ∇· (pronounced "del dot") denotes the divergence operator,
the symbol ∇× (pronounced "del cross") denotes the curl operator.
Integral equations
In the integral equations,
Ω is any volume with closed boundary surface ∂Ω, and
Σ is any surface with closed boundary curve ∂Σ.
The equations are a little easier to interpret with time-independent surfaces and volumes. Time-independent surfaces and volumes are "fixed" and do not change over a given time interval. For example, since the surface Σ is time-independent, we can bring the differentiation under the integral sign in Faraday's law:

d/dt ∬_Σ B · dS = ∬_Σ ∂B/∂t · dS.
Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by using the differential version and using Gauss' and Stokes' theorems as appropriate.
∯_∂Ω is a surface integral over the boundary surface ∂Ω, with the loop indicating the surface is closed,
∭_Ω is a volume integral over the volume Ω,
∮_∂Σ is a line integral around the boundary curve ∂Σ, with the loop indicating the curve is closed,
∬_Σ is a surface integral over the surface Σ.
The total electric charge Q enclosed in Ω is the volume integral over Ω of the charge density ρ (see the "macroscopic formulation" section below): Q = ∭_Ω ρ dV, where dV is the volume element.
The net magnetic flux Φ_B is the surface integral of the magnetic field B passing through a fixed surface, Σ: Φ_B = ∬_Σ B · dS.
The net electric flux Φ_E is the surface integral of the electric field E passing through Σ: Φ_E = ∬_Σ E · dS.
The net electric current I is the surface integral of the electric current density J passing through Σ: I = ∬_Σ J · dS, where dS denotes the differential vector element of surface area S, normal to surface Σ. (Vector area is sometimes denoted by A rather than S, but this conflicts with the notation for the magnetic vector potential).
Formulation in the SI
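In SI units, the four microscopic equations in differential form, together with their equivalent integral forms, read:

Gauss's law: ∇·E = ρ/ε₀;  ∯_∂Ω E · dS = (1/ε₀) ∭_Ω ρ dV
Gauss's law for magnetism: ∇·B = 0;  ∯_∂Ω B · dS = 0
Maxwell–Faraday equation: ∇×E = −∂B/∂t;  ∮_∂Σ E · dℓ = −d/dt ∬_Σ B · dS
Ampère–Maxwell law: ∇×B = μ₀(J + ε₀ ∂E/∂t);  ∮_∂Σ B · dℓ = μ₀ ∬_Σ J · dS + μ₀ε₀ d/dt ∬_Σ E · dS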
Formulation in the Gaussian system
The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of and into the units (and thus redefining these). With a corresponding change in the values of the quantities for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor: the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension. Such modified definitions are conventionally used with the Gaussian (CGS) units. Using these definitions, colloquially "in Gaussian units",
the Maxwell equations become:
The equations simplify slightly when a system of quantities is chosen in which the speed of light, c, is used for nondimensionalization, so that, for example, seconds and lightseconds are interchangeable, and c = 1.
Further changes are possible by absorbing factors of . This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units, used mainly in particle physics).
Relationship between differential and integral formulations
The equivalence of the differential and integral formulations is a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem.
Flux and divergence
According to the (purely mathematical) Gauss divergence theorem, the electric flux through the boundary surface ∂Ω can be rewritten as
The integral version of Gauss's equation can thus be rewritten as
Since Ω is arbitrary (e.g. an arbitrarily small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere. This is the differential-equation formulation of Gauss's law, up to a trivial rearrangement.
Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives
which is satisfied for all Ω if and only if ∇ ⋅ B = 0 everywhere.
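Written out for Gauss's law (with Ω an arbitrary volume and ∂Ω its boundary, in SI form), the chain of equalities and its consequence are:

```latex
\oint_{\partial\Omega} \mathbf{E}\cdot\mathrm{d}\mathbf{S}
  = \int_{\Omega} \nabla\cdot\mathbf{E}\,\mathrm{d}V
  = \frac{1}{\varepsilon_0}\int_{\Omega} \rho\,\mathrm{d}V
  \;\Longrightarrow\;
  \nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0},
\qquad
\oint_{\partial\Omega} \mathbf{B}\cdot\mathrm{d}\mathbf{S}
  = \int_{\Omega} \nabla\cdot\mathbf{B}\,\mathrm{d}V = 0
  \;\Longrightarrow\;
  \nabla\cdot\mathbf{B} = 0 .
```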
Circulation and curl
By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve ∂Σ as an integral of the "circulation of the fields" (i.e. their curls) over the surface Σ that it bounds, i.e.
Hence the Ampère–Maxwell law, the modified version of Ampère's circuital law, in integral form can be rewritten as
Since Σ can be chosen arbitrarily, e.g. as an arbitrarily small, arbitrarily oriented, and arbitrarily centered disk, we conclude that the integrand is zero if and only if the Ampère–Maxwell law in differential-equation form is satisfied.
The equivalence of Faraday's law in differential and integral form follows likewise.
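A sketch of the Ampère–Maxwell case (with Σ an arbitrary surface and ∂Σ its boundary, in SI form):

```latex
\oint_{\partial\Sigma} \mathbf{B}\cdot\mathrm{d}\boldsymbol{\ell}
  = \int_{\Sigma} (\nabla\times\mathbf{B})\cdot\mathrm{d}\mathbf{S}
  = \mu_0 \int_{\Sigma} \left(\mathbf{J} + \varepsilon_0\frac{\partial\mathbf{E}}{\partial t}\right)\cdot\mathrm{d}\mathbf{S}
  \;\Longrightarrow\;
  \nabla\times\mathbf{B} = \mu_0\left(\mathbf{J} + \varepsilon_0\frac{\partial\mathbf{E}}{\partial t}\right).
```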
The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field.
Charge conservation
The conservation of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the Ampère–Maxwell law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives:
i.e.,
By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing in through the boundary:
In particular, in an isolated system the total charge is conserved.
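In equations, the local (differential) and global (integral) statements of charge conservation read:

```latex
\frac{\partial\rho}{\partial t} + \nabla\cdot\mathbf{J} = 0,
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\rho\,\mathrm{d}V
  = -\oint_{\partial\Omega}\mathbf{J}\cdot\mathrm{d}\mathbf{S}.
```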
Vacuum equations, electromagnetic waves and speed of light
In a region with no charges (ρ = 0) and no currents (J = 0), such as in vacuum, Maxwell's equations reduce to:
Taking the curl of the curl equations, and using the curl of the curl identity we obtain
The quantity μ₀ε₀ has the dimension (T/L)². Defining c = 1/√(μ₀ε₀), the equations above have the form of the standard wave equations
Already during Maxwell's lifetime, it was found that the known values for ε₀ and μ₀ give a speed that was then already known to be the speed of light in free space. This led him to propose that light and radio waves were propagating electromagnetic waves, a proposal since amply confirmed. In the old SI system of units, the values of μ₀ and c are defined constants (which means that ε₀ is also fixed by definition) that define the ampere and the metre. In the new SI system, only c keeps its defined value, and the electron charge gets a defined value.
In materials with relative permittivity, , and relative permeability, , the phase velocity of light becomes
which is usually less than .
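As a quick numerical illustration of these relations, the following sketch computes the vacuum speed from μ₀ and ε₀ and the phase velocity in a linear medium; the relative permittivity used for the medium is a rough, assumed value for water at optical frequencies.

```python
import math

# Vacuum permeability and permittivity (SI values)
mu_0 = 4 * math.pi * 1e-7          # H/m (exact in the pre-2019 SI; still accurate to ~1 part per billion)
eps_0 = 8.8541878128e-12           # F/m

# Speed of electromagnetic waves in vacuum: c = 1/sqrt(mu_0 * eps_0)
c = 1.0 / math.sqrt(mu_0 * eps_0)
print(f"c = {c:.6e} m/s")          # ~2.998e8 m/s

# Phase velocity in a linear medium: v = c / sqrt(eps_r * mu_r)
eps_r, mu_r = 1.77, 1.0            # assumed values, roughly water at optical frequencies
v = c / math.sqrt(eps_r * mu_r)
print(f"v = {v:.6e} m/s (refractive index n = {c / v:.3f})")
```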
In addition, E and B are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's modification of Ampère's circuital law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at velocity c.
Macroscopic formulation
The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping.
The microscopic version is sometimes called "Maxwell's equations in vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents.
"Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself.
In the macroscopic equations, the influence of bound charge and bound current is incorporated into the displacement field and the magnetizing field , while the equations depend only on the free charges and free currents . This reflects a splitting of the total electric charge Q and current I (and their densities and J) into free and bound parts:
The cost of this splitting is that the additional fields D and H need to be determined through phenomenological constitutive equations relating these fields to the electric field E and the magnetic field B, together with the bound charge and current.
See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum;
and the macroscopic equations, dealing with free charge and current, practical to use within materials.
Bound charge and current
When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds the same, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization of the material, its dipole moment per unit volume. If is uniform, a macroscopic separation of charge is produced only at the surfaces where enters and leaves the material. For non-uniform , a charge is also produced in the bulk.
Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization .
The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of and , which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume.
Auxiliary fields, polarization and magnetization
The definitions of the auxiliary fields are:
where is the polarization field and is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density and bound current density in terms of polarization and magnetization are then defined as
If we define the total, bound, and free charge and current density by
and use the defining relations above to eliminate , and , the "macroscopic" Maxwell's equations reproduce the "microscopic" equations.
Constitutive relations
In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between displacement field and the electric field , as well as the magnetizing field and the magnetic field . Equivalently, we have to specify the dependence of the polarization (hence the bound charge) and the magnetization (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations. For real-world materials, the constitutive relations are rarely simple, except approximately, and usually determined by experiment. See the main article on constitutive relations for a fuller description.
For materials without polarization and magnetization, the constitutive relations are (by definition)
where is the permittivity of free space and the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal.
An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarization and magnetization.
More generally, for linear materials the constitutive relations are
where ε is the permittivity and μ the permeability of the material. For the displacement field D the linear approximation is usually excellent because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high-power pulsed lasers) the interatomic electric fields of materials, of the order of 10¹¹ V/m, are much higher than the external field. For the magnetizing field H, however, the linear approximation can break down in common materials like iron, leading to phenomena like hysteresis. Even the linear case can have various complications, however.
For homogeneous materials, and are constant throughout the material, while for inhomogeneous materials they depend on location within the material (and perhaps time).
For isotropic materials, and are scalars, while for anisotropic materials (e.g. due to crystal structure) they are tensors.
Materials are generally dispersive, so and depend on the frequency of any incident EM waves.
Even more generally, in the case of non-linear materials (see for example nonlinear optics), and are not necessarily proportional to , similarly or is not necessarily proportional to . In general and depend on both and , on location and time, and possibly other physical quantities.
In applications one also has to describe how the free currents and charge density behave in terms of E and B, possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. For example, the original equations given by Maxwell (see History of Maxwell's equations) included Ohm's law in the form J = σE.
Alternative formulations
Following are some of the several other mathematical formalisms of Maxwell's equations, with the columns separating the two homogeneous Maxwell equations from the two inhomogeneous ones. Each formulation has versions directly in terms of the electric and magnetic fields, and indirectly in terms of the electrical potential and the vector potential . Potentials were introduced as a convenient way to solve the homogeneous equations, but it was thought that all observable physics was contained in the electric and magnetic fields (or relativistically, the Faraday tensor). The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the electric and magnetic fields vanish (Aharonov–Bohm effect).
Each table describes one formalism. See the main article for details of each formulation.
The direct spacetime formulations make manifest that the Maxwell equations are relativistically invariant, where space and time are treated on equal footing. Because of this symmetry, the electric and magnetic fields are treated on equal footing and are recognized as components of the Faraday tensor. This reduces the four Maxwell equations to two, which simplifies the equations, although we can no longer use the familiar vector formulation. Maxwell equations in formulations that do not treat space and time manifestly on the same footing have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables. For this reason the relativistic invariant equations are usually called the Maxwell equations as well.
In the tensor calculus formulation, the electromagnetic tensor is an antisymmetric covariant order 2 tensor; the four-potential, , is a covariant vector; the current, , is a vector; the square brackets, , denote antisymmetrization of indices; is the partial derivative with respect to the coordinate, . In Minkowski space coordinates are chosen with respect to an inertial frame; , so that the metric tensor used to raise and lower indices is . The d'Alembert operator on Minkowski space is as in the vector formulation. In general spacetimes, the coordinate system is arbitrary, the covariant derivative , the Ricci tensor, and raising and lowering of indices are defined by the Lorentzian metric, and the d'Alembert operator is defined as . The topological restriction is that the second real cohomology group of the space vanishes (see the differential form formulation for an explanation). This is violated for Minkowski space with a line removed, which can model a (flat) spacetime with a point-like monopole on the complement of the line.
In the differential form formulation on arbitrary spacetimes, F is the electromagnetic tensor considered as a 2-form, A is the potential 1-form, J is the current 3-form, d is the exterior derivative, and the Hodge star on forms is defined (up to its orientation, i.e. its sign) by the Lorentzian metric of spacetime. In the special case of 2-forms such as F, the Hodge star depends on the metric tensor only for its local scale. This means that, as formulated, the differential form field equations are conformally invariant, but the Lorenz gauge condition breaks conformal invariance. The operator is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian spacetime. The topological condition is again that the second real cohomology group is 'trivial' (meaning that it vanishes). By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact.
Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. Historically, a quaternionic formulation was used.
Solutions
Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations. These all form a set of coupled partial differential equations which are often very difficult to solve: the solutions encompass all the diverse phenomena of classical electromagnetism. Some general remarks follow.
As for any differential equation, boundary conditions and initial conditions are necessary for a unique solution. For example, even with no charges and no currents anywhere in spacetime, there are the obvious solutions for which E and B are zero or constant, but there are also non-trivial solutions corresponding to electromagnetic waves. In some cases, Maxwell's equations are solved over the whole of space, and boundary conditions are given as asymptotic limits at infinity. In other cases, Maxwell's equations are solved in a finite region of space, with appropriate conditions on the boundary of that region, for example an artificial absorbing boundary representing the rest of the universe, or periodic boundary conditions, or walls that isolate a small region from the outside world (as with a waveguide or cavity resonator).
Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. They assume specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. However, Jefimenko's equations are unhelpful in situations where the charges and currents are themselves affected by the fields they create.
Numerical methods for differential equations can be used to compute approximate solutions of Maxwell's equations when exact solutions are impossible. These include the finite element method and finite-difference time-domain method. For more details, see Computational electromagnetics.
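As a rough illustration of the finite-difference time-domain idea, the sketch below leap-frogs the two curl equations on a one-dimensional grid in normalized units (c = 1), with a simple Gaussian source and fixed-field boundaries; the grid sizes and source parameters are arbitrary choices, and no absorbing boundaries or materials are included.

```python
import numpy as np

# Minimal 1D FDTD sketch (Yee scheme) in normalized units where c = 1.
nx, nt = 400, 800
dx = 1.0
dt = dx                       # Courant number c*dt/dx = 1 (the 1D stability limit)

ez = np.zeros(nx)             # E_z sampled at integer grid points
hy = np.zeros(nx - 1)         # H_y sampled at half-integer points

for n in range(nt):
    # Update H_y from the spatial difference of E_z (discretized Faraday's law)
    hy += (dt / dx) * (ez[1:] - ez[:-1])
    # Update interior E_z from the spatial difference of H_y (source-free Ampère–Maxwell law)
    ez[1:-1] += (dt / dx) * (hy[1:] - hy[:-1])
    # Soft Gaussian source injected at the center of the grid
    ez[nx // 2] += np.exp(-((n - 40) / 12.0) ** 2)

print("peak |E_z| on the grid:", np.abs(ez).max())
```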
Overdetermination of Maxwell's equations
Maxwell's equations seem overdetermined, in that they involve six unknowns (the three components each of E and B) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampère's circuital laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation.) This is related to a certain limited kind of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law and Ampère's circuital law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does, and assuming conservation of charge and the nonexistence of magnetic monopoles.
This explanation was first introduced by Julius Adams Stratton in 1941.
Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations become not overdetermined after all. The resulting formulation can lead to more accurate algorithms that take all four laws into account.
Both identities ∇ ⋅ (∇ × E) ≡ 0 and ∇ ⋅ (∇ × B) ≡ 0, which reduce eight equations to six independent ones, are the true reason for the overdetermination.
Equivalently, the overdetermination can be viewed as implying conservation of electric and magnetic charge, since these conservation laws are required in the derivation described above but are also implied by the two Gauss's laws.
For linear algebraic equations, one can make 'nice' rules to rewrite the equations and unknowns. The equations can be linearly dependent. But in differential equations, and especially partial differential equations (PDEs), one needs appropriate boundary conditions, which depend in not so obvious ways on the equations. Even more, if one rewrites them in terms of vector and scalar potential, then the equations are underdetermined because of gauge fixing.
Maxwell's equations as the classical limit of QED
Maxwell's equations and the Lorentz force law (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena. However they do not account for quantum effects and so their domain of applicability is limited. Maxwell's equations are thought of as the classical limit of quantum electrodynamics (QED).
Some observed electromagnetic phenomena are incompatible with Maxwell's equations. These include photon–photon scattering and many other phenomena related to photons or virtual photons, "nonclassical light" and quantum entanglement of electromagnetic fields (see Quantum optics). E.g. quantum cryptography cannot be described by Maxwell theory, not even approximately. The approximate nature of Maxwell's equations becomes more and more apparent when going into the extremely strong field regime (see Euler–Heisenberg Lagrangian) or to extremely small distances.
Finally, Maxwell's equations cannot explain any phenomenon involving individual photons interacting with quantum matter, such as the photoelectric effect, Planck's law, the Duane–Hunt law, and single-photon light detectors. However, many such phenomena may be approximated using a halfway theory of quantum matter coupled to a classical electromagnetic field, either as external field or with the expected value of the charge current and density on the right hand side of Maxwell's equations.
Variations
Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well.
Magnetic monopoles
Maxwell's equations posit that there is electric charge, but no magnetic charge (also called magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed, despite extensive searches, and may not exist. If magnetic monopoles did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields.
See also
Explanatory notes
References
Further reading
Historical publications
On Faraday's Lines of Force – 1855/56. Maxwell's first paper (Part 1 & 2) – Compiled by Blaze Labs Research (PDF).
On Physical Lines of Force – 1861. Maxwell's 1861 paper describing magnetic lines of force – Predecessor to 1873 Treatise.
James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459–512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.)
A Dynamical Theory Of The Electromagnetic Field – 1865. Maxwell's 1865 paper describing his 20 equations, link from Google Books.
J. Clerk Maxwell (1873), "A Treatise on Electricity and Magnetism":
Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 1 – 1873 – Posner Memorial Collection – Carnegie Mellon University.
Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 2 – 1873 – Posner Memorial Collection – Carnegie Mellon University.
Developments before the theory of relativity
Henri Poincaré (1900) "La théorie de Lorentz et le Principe de Réaction" , Archives Néerlandaises, V, 253–278.
Henri Poincaré (1902) "La Science et l'Hypothèse" .
Henri Poincaré (1905) "Sur la dynamique de l'électron" , Comptes Rendus de l'Académie des Sciences, 140, 1504–1508.
Catt, Walton and Davidson. "The History of Displacement Current" . Wireless World, March 1979.
External links
maxwells-equations.com — An intuitive tutorial of Maxwell's equations.
The Feynman Lectures on Physics Vol. II Ch. 18: The Maxwell Equations
Wikiversity Page on Maxwell's Equations
Modern treatments
Electromagnetism (ch. 11), B. Crowell, Fullerton College
Lecture series: Relativity and electromagnetism, R. Fitzpatrick, University of Texas at Austin
Electromagnetic waves from Maxwell's equations on Project PHYSNET.
MIT Video Lecture Series (36 × 50 minute lectures) (in .mp4 format) – Electricity and Magnetism Taught by Professor Walter Lewin.
Other
Nature Milestones: Photons – Milestone 2 (1861) Maxwell's equations
Electromagnetism
Equations of physics
Functions of space and time
James Clerk Maxwell
Partial differential equations
Scientific laws | Maxwell's equations | [
"Physics",
"Mathematics"
] | 6,907 | [
"Electromagnetism",
"Physical phenomena",
"Equations of physics",
"Functions of space and time",
"Eponymous equations of physics",
"Mathematical objects",
"Equations",
"Scientific laws",
"Fundamental interactions",
"Spacetime",
"Maxwell's equations"
] |
19,830 | https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann%20distribution | In physics (in particular in statistical mechanics), the Maxwell–Boltzmann distribution, or Maxwell(ian) distribution, is a particular probability distribution named after James Clerk Maxwell and Ludwig Boltzmann.
It was first defined and used for describing particle speeds in idealized gases, where the particles move freely inside a stationary container without interacting with one another, except for very brief collisions in which they exchange energy and momentum with each other or with their thermal environment. The term "particle" in this context refers to gaseous particles only (atoms or molecules), and the system of particles is assumed to have reached thermodynamic equilibrium. The energies of such particles follow what is known as Maxwell–Boltzmann statistics, and the statistical distribution of speeds is derived by equating particle energies with kinetic energy.
Mathematically, the Maxwell–Boltzmann distribution is the chi distribution with three degrees of freedom (the components of the velocity vector in Euclidean space), with a scale parameter measuring speeds in units proportional to the square root of (the ratio of temperature and particle mass).
The Maxwell–Boltzmann distribution is a result of the kinetic theory of gases, which provides a simplified explanation of many fundamental gaseous properties, including pressure and diffusion. The Maxwell–Boltzmann distribution applies fundamentally to particle velocities in three dimensions, but turns out to depend only on the speed (the magnitude of the velocity) of the particles. A particle speed probability distribution indicates which speeds are more likely: a randomly chosen particle will have a speed selected randomly from the distribution, and is more likely to be within one range of speeds than another. The kinetic theory of gases applies to the classical ideal gas, which is an idealization of real gases. In real gases, there are various effects (e.g., van der Waals interactions, vortical flow, relativistic speed limits, and quantum exchange interactions) that can make their speed distribution different from the Maxwell–Boltzmann form. However, rarefied gases at ordinary temperatures behave very nearly like an ideal gas and the Maxwell speed distribution is an excellent approximation for such gases. This is also true for ideal plasmas, which are ionized gases of sufficiently low density.
The distribution was first derived by Maxwell in 1860 on heuristic grounds. Boltzmann later, in the 1870s, carried out significant investigations into the physical origins of this distribution. The distribution can be derived on the grounds that it maximizes the entropy of the system. Possible derivations include:
Maximum entropy probability distribution in the phase space, with the constraint of conservation of average energy
Canonical ensemble.
Distribution function
For a system containing a large number of identical non-interacting, non-relativistic classical particles in thermodynamic equilibrium, the fraction of the particles within an infinitesimal element of the three-dimensional velocity space , centered on a velocity vector of magnitude , is given by
where:
is the particle mass;
is the Boltzmann constant;
is thermodynamic temperature;
is a probability distribution function, properly normalized so that its integral over all velocities is unity.
One can write the element of velocity space as , for velocities in a standard Cartesian coordinate system, or as in a standard spherical coordinate system, where is an element of solid angle and .
The Maxwellian distribution function for particles moving in only one direction, if this direction is , is
which can be obtained by integrating the three-dimensional form given above over the other two velocity components.
Recognizing the symmetry of , one can integrate over solid angle and write a probability distribution of speeds as the function
This probability density function gives the probability, per unit speed, of finding the particle with a speed near . This equation is simply the Maxwell–Boltzmann distribution (given in the infobox) with distribution parameter
The Maxwell–Boltzmann distribution is equivalent to the chi distribution with three degrees of freedom and scale parameter
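Written out, the speed probability density and the chi-distribution scale parameter take the standard forms

```latex
f(v) = \left(\frac{m}{2\pi k T}\right)^{3/2} 4\pi v^{2}
       \exp\!\left(-\frac{m v^{2}}{2 k T}\right)
     = \sqrt{\frac{2}{\pi}}\,\frac{v^{2}}{a^{3}}
       \exp\!\left(-\frac{v^{2}}{2 a^{2}}\right),
\qquad
a = \sqrt{\frac{k T}{m}},
```

where k is the Boltzmann constant, T the thermodynamic temperature, and m the particle mass.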
The simplest ordinary differential equation satisfied by the distribution is:
or in unitless presentation:
With the Darwin–Fowler method of mean values, the Maxwell–Boltzmann distribution is obtained as an exact result.
Relaxation to the 2D Maxwell–Boltzmann distribution
For particles confined to move in a plane, the speed distribution is given by
This distribution is used for describing systems in equilibrium. However, most systems do not start out in their equilibrium state. The evolution of a system towards its equilibrium state is governed by the Boltzmann equation. The equation predicts that for short range interactions, the equilibrium velocity distribution will follow a Maxwell–Boltzmann distribution. To the right is a molecular dynamics (MD) simulation in which 900 hard sphere particles are constrained to move in a rectangle. They interact via perfectly elastic collisions. The system is initialized out of equilibrium, but the velocity distribution (in blue) quickly converges to the 2D Maxwell–Boltzmann distribution (in orange).
Typical speeds
The mean speed , most probable speed (mode) , and root-mean-square speed can be obtained from properties of the Maxwell distribution.
This works well for nearly ideal, monatomic gases like helium, but also for molecular gases like diatomic oxygen. This is because despite the larger heat capacity (larger internal energy at the same temperature) due to their larger number of degrees of freedom, their translational kinetic energy (and thus their speed) is unchanged.
In summary, the typical speeds are related as follows:
The root mean square speed is directly related to the speed of sound in the gas, by
where γ is the adiabatic index and f is the number of degrees of freedom of the individual gas molecule. For the example above, diatomic nitrogen (approximating air) at , and
the true value for air can be approximated by using the average molar weight of air (), yielding at (corrections for variable humidity are of the order of 0.1% to 0.6%).
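A small numerical check of these typical speeds is sketched below; the molar mass of N2 is rounded and γ = 7/5 is the ideal diatomic value, so the numbers are approximate.

```python
import math

k_B = 1.380649e-23         # J/K, Boltzmann constant
N_A = 6.02214076e23        # 1/mol, Avogadro constant

T = 300.0                  # K
M = 28.0e-3                # kg/mol, molar mass of N2 (rounded)
m = M / N_A                # mass of one molecule

v_p    = math.sqrt(2 * k_B * T / m)              # most probable speed
v_mean = math.sqrt(8 * k_B * T / (math.pi * m))  # mean speed
v_rms  = math.sqrt(3 * k_B * T / m)              # root-mean-square speed

gamma = 7.0 / 5.0                                # adiabatic index of a diatomic ideal gas
v_sound = math.sqrt(gamma * k_B * T / m)         # ideal-gas speed of sound

print(f"v_p = {v_p:.0f} m/s, v_mean = {v_mean:.0f} m/s, "
      f"v_rms = {v_rms:.0f} m/s, v_sound = {v_sound:.0f} m/s")
# The fixed ratios v_p : v_mean : v_rms = sqrt(2) : sqrt(8/pi) : sqrt(3)
# hold at any temperature.
```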
The average relative velocity
where the three-dimensional velocity distribution is
The integral can easily be done by changing to coordinates and
Limitations
The Maxwell–Boltzmann distribution assumes that the velocities of individual particles are much less than the speed of light, i.e. that . For electrons, the temperature of electrons must be .
Derivation and related distributions
Maxwell–Boltzmann statistics
The original derivation in 1860 by James Clerk Maxwell was an argument based on molecular collisions of the kinetic theory of gases as well as certain symmetries in the speed distribution function; Maxwell also gave an early argument that these molecular collisions entail a tendency towards equilibrium. After Maxwell, Ludwig Boltzmann in 1872 also derived the distribution on mechanical grounds and argued that gases should over time tend toward this distribution, due to collisions (see H-theorem). He later (1877) derived the distribution again under the framework of statistical thermodynamics. The derivations in this section are along the lines of Boltzmann's 1877 derivation, starting with the result known as Maxwell–Boltzmann statistics (from statistical thermodynamics). Maxwell–Boltzmann statistics gives the average number of particles found in a given single-particle microstate. Under certain assumptions, the logarithm of the fraction of particles in a given microstate is linear in the ratio of the energy of that state to the temperature of the system: there are constants and such that, for all ,
The assumptions of this equation are that the particles do not interact, and that they are classical; this means that each particle's state can be considered independently from the other particles' states. Additionally, the particles are assumed to be in thermal equilibrium.
This relation can be written as an equation by introducing a normalizing factor:
where:
is the expected number of particles in the single-particle microstate ,
is the total number of particles in the system,
is the energy of microstate ,
the sum over index takes into account all microstates,
is the equilibrium temperature of the system,
is the Boltzmann constant.
The denominator in is a normalizing factor so that the ratios add up to unity — in other words it is a kind of partition function (for the single-particle system, not the usual partition function of the entire system).
Because velocity and speed are related to energy, Equation () can be used to derive relationships between temperature and the speeds of gas particles. All that is needed is to discover the density of microstates in energy, which is determined by dividing up momentum space into equal sized regions.
Distribution for the momentum vector
The potential energy is taken to be zero, so that all energy is in the form of kinetic energy.
The relationship between kinetic energy and momentum for massive non-relativistic particles is
where is the square of the momentum vector . We may therefore rewrite Equation () as:
where:
is the partition function, corresponding to the denominator in ;
is the molecular mass of the gas;
is the thermodynamic temperature;
is the Boltzmann constant.
This distribution of is proportional to the probability density function for finding a molecule with these values of momentum components, so:
The normalizing constant can be determined by recognizing that the probability of a molecule having some momentum must be 1.
Integrating the exponential in over all , , and yields a factor of
So that the normalized distribution function is:
The distribution is seen to be the product of three independent normally distributed variables , , and , with variance . Additionally, it can be seen that the magnitude of momentum will be distributed as a Maxwell–Boltzmann distribution, with . The Maxwell–Boltzmann distribution for the momentum (or equally for the velocities) can be obtained more fundamentally using the H-theorem at equilibrium within the Kinetic theory of gases framework.
Distribution for the energy
The energy distribution is found imposing
where is the infinitesimal phase-space volume of momenta corresponding to the energy interval .
Making use of the spherical symmetry of the energy-momentum dispersion relation this can be expressed in terms of as
Using then () in (), and expressing everything in terms of the energy , we get
and finally
Since the energy is proportional to the sum of the squares of the three normally distributed momentum components, this energy distribution can be written equivalently as a gamma distribution, using a shape parameter, and a scale parameter,
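With shape 3/2 and scale kT (the values that follow from the chi-squared correspondence), the energy probability density is

```latex
f(E) = \frac{2}{\sqrt{\pi}}\,\frac{\sqrt{E}}{(kT)^{3/2}}\,
       \exp\!\left(-\frac{E}{kT}\right),
\qquad
E \sim \mathrm{Gamma}\!\left(\alpha = \tfrac{3}{2},\; \theta = kT\right).
```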
Using the equipartition theorem, given that the energy is evenly distributed among all three degrees of freedom in equilibrium, we can also split into a set of chi-squared distributions, where the energy per degree of freedom, is distributed as a chi-squared distribution with one degree of freedom,
At equilibrium, this distribution will hold true for any number of degrees of freedom. For example, if the particles are rigid mass dipoles of fixed dipole moment, they will have three translational degrees of freedom and two additional rotational degrees of freedom. The energy in each degree of freedom will be described according to the above chi-squared distribution with one degree of freedom, and the total energy will be distributed according to a chi-squared distribution with five degrees of freedom. This has implications in the theory of the specific heat of a gas.
Distribution for the velocity vector
Recognizing that the velocity probability density is proportional to the momentum probability density function by
and using we get
which is the Maxwell–Boltzmann velocity distribution. The probability of finding a particle with velocity in the infinitesimal element about velocity is
Like the momentum, this distribution is seen to be the product of three independent normally distributed variables , , and , but with variance .
It can also be seen that the Maxwell–Boltzmann velocity distribution for the vector velocity
is the product of the distributions for each of the three directions:
where the distribution for a single direction is
Each component of the velocity vector has a normal distribution with mean 0 and standard deviation √(kT/m), so the vector has a 3-dimensional normal distribution, a particular kind of multivariate normal distribution, with mean 0 and covariance (kT/m)I, where I is the identity matrix.
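A brief Monte Carlo sketch of this statement: draw the three velocity components from independent normal distributions with standard deviation √(kT/m) and compare the resulting speeds with SciPy's maxwell distribution (the chi distribution with three degrees of freedom); the temperature and mass below are assumed, N2-like values.

```python
import numpy as np
from scipy import stats

k_B = 1.380649e-23           # J/K
T, m = 300.0, 4.65e-26       # assumed temperature (K) and N2-like molecular mass (kg)

a = np.sqrt(k_B * T / m)     # scale parameter = std. dev. of each velocity component

rng = np.random.default_rng(0)
v_xyz = rng.normal(0.0, a, size=(100_000, 3))   # vx, vy, vz for each particle
speeds = np.linalg.norm(v_xyz, axis=1)          # |v| should be Maxwell-distributed

# Compare empirical moments with the analytic Maxwell(scale=a) values
print("mean speed: sampled %.1f vs analytic %.1f m/s"
      % (speeds.mean(), stats.maxwell.mean(scale=a)))
print("rms speed:  sampled %.1f vs analytic %.1f m/s"
      % (np.sqrt((speeds**2).mean()), np.sqrt(stats.maxwell.moment(2, scale=a))))
```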
Distribution for the speed
The Maxwell–Boltzmann distribution for the speed follows immediately from the distribution of the velocity vector, above. Note that the speed is
and the volume element in spherical coordinates
where θ and φ are the spherical coordinate angles of the velocity vector. Integration of the probability density function of the velocity over the solid angles yields an additional factor of 4π.
The speed distribution then follows by substituting the speed for the sum of the squares of the vector components:
In n-dimensional space
In n-dimensional space, the Maxwell–Boltzmann distribution becomes:
The speed distribution becomes:
where is a normalizing constant.
The following integral result is useful:
where is the Gamma function. This result can be used to calculate the moments of the speed distribution function:
which is the mean speed itself
which gives the root-mean-square speed
The derivative of the speed distribution function:
This yields the most probable speed (mode)
See also
Quantum Boltzmann equation
Maxwell–Boltzmann statistics
Maxwell–Jüttner distribution
Boltzmann distribution
Rayleigh distribution
Kinetic theory of gases
Notes
References
Further reading
External links
"The Maxwell Speed Distribution" from The Wolfram Demonstrations Project at Mathworld
Continuous distributions
Gases
Ludwig Boltzmann
James Clerk Maxwell
Normal distribution
Particle distributions | Maxwell–Boltzmann distribution | [
"Physics",
"Chemistry"
] | 2,676 | [
"Statistical mechanics",
"Phases of matter",
"Gases",
"Matter"
] |
19,833 | https://en.wikipedia.org/wiki/Metastability | In chemistry and physics, metastability is an intermediate energetic state within a dynamical system other than the system's state of least energy.
A ball resting in a hollow on a slope is a simple example of metastability. If the ball is only slightly pushed, it will settle back into its hollow, but a stronger push may start the ball rolling down the slope. Bowling pins show similar metastability by either merely wobbling for a moment or tipping over completely. A common example of metastability in science is isomerisation. Higher energy isomers are long lived because they are prevented from rearranging to their preferred ground state by (possibly large) barriers in the potential energy.
During a metastable state of finite lifetime, all state-describing parameters reach and hold stationary values. In isolation:
the state of least energy is the only one the system will inhabit for an indefinite length of time, until more external energy is added to the system (unique "absolutely stable" state);
the system will spontaneously leave any other state (of higher energy) to eventually return (after a sequence of transitions) to the least energetic state.
The metastability concept originated in the physics of first-order phase transitions. It then acquired new meaning in the study of aggregated subatomic particles (in atomic nuclei or in atoms) or in molecules, macromolecules or clusters of atoms and molecules. Later, it was borrowed for the study of decision-making and information transmission systems.
Metastability is common in physics and chemistry – from an atom (many-body assembly) to statistical ensembles of molecules (viscous fluids, amorphous solids, liquid crystals, minerals, etc.) at molecular levels or as a whole (see Metastable states of matter and grain piles below). The abundance of states is more prevalent as the systems grow larger and/or if the forces of their mutual interaction are spatially less uniform or more diverse.
In dynamic systems (with feedback) like electronic circuits, signal trafficking, decisional, neural and immune systems, the time-invariance of the active or reactive patterns with respect to the external influences defines stability and metastability (see brain metastability below). In these systems, the equivalent of thermal fluctuations in molecular systems is the "white noise" that affects signal propagation and the decision-making.
Statistical physics and thermodynamics
Non-equilibrium thermodynamics is a branch of physics that studies the dynamics of statistical ensembles of molecules via unstable states. Being "stuck" in a thermodynamic trough without being at the lowest energy state is known as having kinetic stability or being kinetically persistent. The particular motion or kinetics of the atoms involved has resulted in getting stuck, despite there being preferable (lower-energy) alternatives.
States of matter
Metastable states of matter (also referred as metastates) range from melting solids (or freezing liquids), boiling liquids (or condensing gases) and sublimating solids to supercooled liquids or superheated liquid-gas mixtures. Extremely pure, supercooled water stays liquid below 0 °C and remains so until applied vibrations or condensing seed doping initiates crystallization centers. This is a common situation for the droplets of atmospheric clouds.
Condensed matter and macromolecules
Metastable phases are common in condensed matter and crystallography. This is the case for anatase, a metastable polymorph of titanium dioxide, which despite commonly being the first phase to form in many synthesis processes due to its lower surface energy, is always metastable, with rutile being the most stable phase at all temperatures and pressures.
As another example, diamond is a stable phase only at very high pressures, but is a metastable form of carbon at standard temperature and pressure. It can be converted to graphite (plus leftover kinetic energy), but only after overcoming an activation energy – an intervening hill. Martensite is a metastable phase used to control the hardness of most steel. Metastable polymorphs of silica are commonly observed. In some cases, such as in the allotropes of solid boron, acquiring a sample of the stable phase is difficult.
The bonds between the building blocks of polymers such as DNA, RNA, and proteins are also metastable. Adenosine triphosphate (ATP) is a highly metastable molecule, colloquially described as being "full of energy" that can be used in many ways in biology.
Generally speaking, emulsions/colloidal systems and glasses are metastable. The metastability of silica glass, for example, is characterised by lifetimes on the order of 10⁹⁸ years (as compared with the lifetime of the universe, which is thought to be around 1.4 × 10¹⁰ years).
Sandpiles are one system which can exhibit metastability if a steep slope or tunnel is present. Sand grains form a pile due to friction. It is possible for an entire large sand pile to reach a point where it is stable, but the addition of a single grain causes large parts of it to collapse.
The avalanche is a well-known problem with large piles of snow and ice crystals on steep slopes. In dry conditions, snow slopes act similarly to sandpiles. An entire mountainside of snow can suddenly slide due to the presence of a skier, or even a loud noise or vibration.
Quantum mechanics
Aggregated systems of subatomic particles described by quantum mechanics (quarks inside nucleons, nucleons inside atomic nuclei, electrons inside atoms, molecules, or atomic clusters) are found to have many distinguishable states. Of these, one (or a small degenerate set) is indefinitely stable: the ground state or global minimum.
All other states besides the ground state (or those degenerate with it) have higher energies. Of all these other states, the metastable states are the ones having lifetimes lasting at least 10² to 10³ times longer than the shortest-lived states of the set.
A metastable state is then long-lived (locally stable with respect to configurations of 'neighbouring' energies) but not eternal (as the global minimum is). Being excited – of an energy above the ground state – it will eventually decay to a more stable state, releasing energy. Indeed, above absolute zero, all states of a system have a non-zero probability to decay; that is, to spontaneously fall into another state (usually lower in energy). One mechanism for this to happen is through tunnelling.
Nuclear physics
Some energetic states of an atomic nucleus (having distinct spatial mass, charge, spin, isospin distributions) are much longer-lived than others (nuclear isomers of the same isotope), e.g. technetium-99m. The isotope tantalum-180m, although a metastable excited state, is long-lived enough that it has never been observed to decay, with a half-life calculated to be at least years, over 3 million times the current age of the universe.
Atomic and molecular physics
Some atomic energy levels are metastable. Rydberg atoms are an example of metastable excited atomic states. Transitions from metastable excited levels are typically those forbidden by electric dipole selection rules. This means that any transitions from this level are relatively unlikely to occur. In a sense, an electron that happens to find itself in a metastable configuration is trapped there. Since transitions from a metastable state are not impossible (merely less likely), the electron will eventually decay to a less energetic state, typically by an electric quadrupole transition, or often by non-radiative de-excitation (e.g., collisional de-excitation).
This slow-decay property of a metastable state is apparent in phosphorescence, the kind of photoluminescence seen in glow-in-the-dark toys that can be charged by first being exposed to bright light. Whereas spontaneous emission in atoms has a typical timescale on the order of 10⁻⁸ seconds, the decay of metastable states can typically take milliseconds to minutes, and so light emitted in phosphorescence is usually both weak and long-lasting.
Chemistry
In chemical systems, a system of atoms or molecules involving a change in chemical bond can be in a metastable state, which lasts for a relatively long period of time. Molecular vibrations and thermal motion make chemical species at the energetic equivalent of the top of a round hill very short-lived. Metastable states that persist for many seconds (or years) are found in energetic valleys which are not the lowest possible valley (point 1 in illustration). A common type of metastability is isomerism.
The stability or metastability of a given chemical system depends on its environment, particularly temperature and pressure. The difference between producing a stable vs. metastable entity can have important consequences. For instance, having the wrong crystal polymorph can result in failure of a drug while in storage between manufacture and administration. The map of which state is the most stable as a function of pressure, temperature and/or composition is known as a phase diagram. In regions where a particular state is not the most stable, it may still be metastable.
Reaction intermediates are relatively short-lived, and are usually thermodynamically unstable rather than metastable. The IUPAC recommends referring to these as transient rather than metastable.
Metastability is also used to refer to specific situations in mass spectrometry and spectrochemistry.
Electronic circuits
A digital circuit is supposed to be found in a small number of stable digital states within a certain amount of time after an input change. However, if an input changes at the wrong moment a digital circuit which employs feedback (even a simple circuit such as a flip-flop) can enter a metastable state and take an unbounded length of time to finally settle into a fully stable digital state.
Computational neuroscience
Metastability in the brain is a phenomenon studied in computational neuroscience to elucidate how the human brain recognizes patterns. Here, the term metastability is used rather loosely. There is no lower-energy state, but there are semi-transient signals in the brain that persist for a while and are different than the usual equilibrium state.
In philosophy
As a critique of cybernetic notions of homeostasis, Gilbert Simondon invokes a notion of metastability for his understanding of a system that, rather than resolving its tensions and potentials for transformation into a single final state, 'conserves the tensions in the equilibrium of metastability instead of nullifying them in the equilibrium of stability'.
See also
False vacuum
Hysteresis
Metastate
References
Chemical properties
Dynamical systems | Metastability | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,223 | [
"Mechanics",
"nan",
"Dynamical systems"
] |
19,836 | https://en.wikipedia.org/wiki/Molecular%20mass | The molecular mass () is the mass of a given molecule. Units of daltons (Da) are often used. Different molecules of the same compound may have different molecular masses because they contain different isotopes of an element. The derived quantity relative molecular mass is the unitless ratio of the mass of a molecule to the atomic mass constant (which is equal to one dalton).
The molecular mass and relative molecular mass are distinct from but related to the molar mass. The molar mass is defined as the mass of a given substance divided by the amount of the substance, and is expressed in grams per mol (g/mol). That makes the molar mass an average of many particles or molecules (potentially containing different isotopes), and the molecular mass the mass of one specific particle or molecule. The molar mass is usually the more appropriate quantity when dealing with macroscopic (weigh-able) quantities of a substance.
The definition of molecular weight is most authoritatively synonymous with relative molecular mass; however, in common practice, use of this terminology is highly variable. When the molecular weight is given with the unit Da, it is frequently as a weighted average similar to the molar mass but with different units. In molecular biology, the mass of macromolecules is referred to as their molecular weight and is expressed in kDa, although the numerical value is often approximate and representative of an average.
The terms "molecular mass", "molecular weight", and "molar mass" may be used interchangeably in less formal contexts where unit- and quantity-correctness is not needed. The molecular mass is more commonly used when referring to the mass of a single or specific well-defined molecule and less commonly than molecular weight when referring to a weighted average of a sample. Prior to the 2019 revision of the SI quantities expressed in daltons (Da) were by definition numerically equivalent to molar mass expressed in the units g/mol and were thus strictly numerically interchangeable. After the 2019 revision, this relationship is only nearly equivalent, although the difference is negligible for all practical purposes.
The molecular mass of small to medium size molecules, measured by mass spectrometry, can be used to determine the composition of elements in the molecule. The molecular masses of macromolecules, such as proteins, can also be determined by mass spectrometry; however, methods based on viscosity and light-scattering are also used to determine molecular mass when crystallographic or mass spectrometric data are not available.
Calculation
Molecular masses are calculated from the atomic masses of each nuclide present in the molecule, while molar masses and relative molecular masses (molecular weights) are calculated from the standard atomic weights of each element. The standard atomic weight takes into account the isotopic distribution of the element in a given sample (usually assumed to be "normal"). For example, water has a molar mass of 18.0153(3) g/mol, but individual water molecules have molecular masses which range between 18.010 564 6863(15) Da (¹H₂¹⁶O) and 22.027 7364(9) Da (²H₂¹⁸O).
Atomic and molecular masses are usually reported in daltons, a unit defined in terms of the mass of the isotope ¹²C (carbon-12). However, the name unified atomic mass unit (u) is still used in common practice. Relative atomic and molecular masses as defined are dimensionless. Molar masses, when expressed in g/mol, have almost identical numerical values to relative atomic and molecular masses. For example, the molar mass and molecular mass of methane, whose molecular formula is CH₄, are calculated respectively as follows:
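A minimal sketch of both calculations for methane, using rounded standard atomic weights for the average (molar) mass and principal-isotope masses for the monoisotopic mass:

```python
# Standard atomic weights (g/mol), used for the average molecular / molar mass
standard_weight = {"C": 12.011, "H": 1.008}

# Masses of the most abundant isotopes (Da), used for the monoisotopic mass
isotope_mass = {"C": 12.0, "H": 1.00782503}      # 12C is exactly 12 Da by definition

formula = {"C": 1, "H": 4}                        # CH4

molar_mass = sum(n * standard_weight[el] for el, n in formula.items())
monoisotopic = sum(n * isotope_mass[el] for el, n in formula.items())

print(f"average molecular mass of CH4: {molar_mass:.3f} g/mol")   # ~16.043
print(f"monoisotopic mass of 12C1H4:   {monoisotopic:.4f} Da")    # ~16.0313
```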
The uncertainty in molecular mass reflects variance (error) in measurement, not the natural variance in isotopic abundances across the globe. In high-resolution mass spectrometry the mass isotopomers ¹²C¹H₄ and ¹³C¹H₄ are observed as distinct molecules, with molecular masses of approximately 16.031 Da and 17.035 Da, respectively. The intensity of the mass-spectrometry peaks is proportional to the isotopic abundances in the molecular species. ¹²C²H¹H₃ can also be observed, with a molecular mass of 17 Da.
Determination
Mass spectrometry
In mass spectrometry, the molecular mass of a small molecule is usually reported as the monoisotopic mass: that is, the mass of the molecule containing only the most common isotope of each element. This also differs subtly from the molecular mass in that the choice of isotopes is defined and thus is a single specific molecular mass out of the (perhaps many) possibilities. The masses used to compute the monoisotopic molecular mass are found in a table of isotopic masses and are not found in a typical periodic table. The average molecular mass is often used for larger molecules, since molecules with many atoms are often unlikely to be composed exclusively of the most abundant isotope of each element. A theoretical average molecular mass can be calculated using the standard atomic weights found in a typical periodic table. The average molecular mass of a very small sample, however, might differ substantially from this since a single sample average is not the same as the average of many geographically distributed samples.
Mass photometry
Mass photometry (MP) is a rapid, in-solution, label-free method of obtaining the molecular mass of proteins, lipids, sugars and nucleic acids at the single-molecule level. The technique is based on interferometric scattered light microscopy. Contrast from scattered light by a single binding event at the interface between the protein solution and glass slide is detected and is linearly proportional to the mass of the molecule. This technique can also be used to measure sample homogeneity, to detect protein oligomerisation states, and to identify complex macromolecular assemblies (ribosomes, GroEL, AAV) and protein interactions such as protein-protein interactions. Mass photometry can accurately measure molecular mass over a wide range of molecular masses (40 kDa – 5 MDa).
Hydrodynamic methods
To a first approximation, the basis for determination of molecular mass according to Mark–Houwink relations is the fact that the intrinsic viscosity of solutions (or suspensions) of macromolecules depends on volumetric proportion of the dispersed particles in a particular solvent. Specifically, the hydrodynamic size as related to molecular mass depends on a conversion factor, describing the shape of a particular molecule. This allows the apparent molecular mass to be described from a range of techniques sensitive to hydrodynamic effects, including DLS, SEC (also known as GPC when the eluent is an organic solvent), viscometry, and diffusion ordered nuclear magnetic resonance spectroscopy (DOSY). The apparent hydrodynamic size can then be used to approximate molecular mass using a series of macromolecule-specific standards. As this requires calibration, it's frequently described as a "relative" molecular mass determination method.
Static light scattering
It is also possible to determine absolute molecular mass directly from light scattering, traditionally using the Zimm method. This can be accomplished either via classical static light scattering or via multi-angle light scattering detectors. Molecular masses determined by this method do not require calibration, hence the term "absolute". The only external measurement required is refractive index increment, which describes the change in refractive index with concentration.
See also
Cryoscopy and cryoscopic constant
Ebullioscopy and ebullioscopic constant
Dumas method of molecular weight determination
François-Marie Raoult
Standard atomic weight
Mass number
Absolute molar mass
Molar mass distribution
Dalton (unit)
SDS-PAGE
References
External links
A Free Android application for molecular and reciprocal weight calculation of any chemical formula
Stoichiometry Add-In for Microsoft Excel for calculation of molecular weights, reaction coefficients and stoichiometry.
Molar quantities
Mass | Molecular mass | [
"Physics",
"Mathematics"
] | 1,649 | [
"Scalar physical quantities",
"Chemical reaction engineering",
"Stoichiometry",
"Physical quantities",
"Quantity",
"Mass",
"Size",
"nan",
"Wikipedia categories named after physical quantities",
"Matter"
] |
19,838 | https://en.wikipedia.org/wiki/Metallic%20bonding | Metallic bonding is a type of chemical bonding that arises from the electrostatic attractive force between conduction electrons (in the form of an electron cloud of delocalized electrons) and positively charged metal ions. It may be described as the sharing of free electrons among a structure of positively charged ions (cations). Metallic bonding accounts for many physical properties of metals, such as strength, ductility, thermal and electrical resistivity and conductivity, opacity, and lustre.
Metallic bonding is not the only type of chemical bonding a metal can exhibit, even as a pure substance. For example, elemental gallium consists of covalently-bound pairs of atoms in both liquid and solid-state—these pairs form a crystal structure with metallic bonding between them. Another example of a metal–metal covalent bond is the mercurous ion (Hg₂²⁺).
History
As chemistry developed into a science, it became clear that metals formed the majority of the periodic table of the elements, and great progress was made in the description of the salts that can be formed in reactions with acids. With the advent of electrochemistry, it became clear that metals generally go into solution as positively charged ions, and the oxidation reactions of the metals became well understood in their electrochemical series. A picture emerged of metals as positive ions held together by an ocean of negative electrons.
With the advent of quantum mechanics, this picture was given a more formal interpretation in the form of the free electron model and its further extension, the nearly free electron model. In both models, the electrons are seen as a gas traveling through the structure of the solid with an energy that is essentially isotropic, in that it depends on the square of the magnitude, not the direction of the momentum vector k. In three-dimensional k-space, the set of points of the highest filled levels (the Fermi surface) should therefore be a sphere. In the nearly-free model, box-like Brillouin zones are added to k-space by the periodic potential experienced from the (ionic) structure, thus mildly breaking the isotropy.
The advent of X-ray diffraction and thermal analysis made it possible to study the structure of crystalline solids, including metals and their alloys; and phase diagrams were developed. Despite all this progress, the nature of intermetallic compounds and alloys largely remained a mystery and their study was often merely empirical. Chemists generally steered away from anything that did not seem to follow Dalton's laws of multiple proportions; and the problem was considered the domain of a different science, metallurgy.
The nearly-free electron model was eagerly taken up by some researchers in metallurgy, notably Hume-Rothery, in an attempt to explain why intermetallic alloys with certain compositions would form and others would not. Initially Hume-Rothery's attempts were quite successful. His idea was to add electrons to inflate the spherical Fermi-balloon inside the series of Brillouin-boxes and determine when a certain box would be full. This predicted a fairly large number of alloy compositions that were later observed. As soon as cyclotron resonance became available and the shape of the balloon could be determined, it was found that the balloon was not spherical as Hume-Rothery believed, except perhaps in the case of caesium. This revealed how a model can sometimes give a whole series of correct predictions, yet still be wrong in its basic assumptions.
The nearly-free electron debacle compelled researchers to modify the assumption that ions flowed in a sea of free electrons. A number of quantum mechanical models were developed, such as band structure calculations based on molecular orbitals, and the density functional theory. These models either depart from the atomic orbitals of neutral atoms that share their electrons, or (in the case of density functional theory) depart from the total electron density. The free-electron picture has, nevertheless, remained a dominant one in introductory courses on metallurgy.
The electronic band structure model became a major focus for the study of metals and even more of semiconductors. Together with the electronic states, the vibrational states were also shown to form bands. Rudolf Peierls showed that, in the case of a one-dimensional row of metallic atoms—say, hydrogen—an inevitable instability would break such a chain into individual molecules. This sparked an interest in the general question: when is collective metallic bonding stable, and when will a localized bonding take its place? Much research went into the study of clustering of metal atoms.
As powerful as the band structure model proved to be in describing metallic bonding, it remains a one-electron approximation of a many-body problem: the energy states of an individual electron are described as if all the other electrons form a homogeneous background. Researchers such as Mott and Hubbard realized that the one-electron treatment was perhaps appropriate for strongly delocalized s- and p-electrons; but for d-electrons, and even more for f-electrons, the interaction with nearby individual electrons (and atomic displacements) may become stronger than the delocalized interaction that leads to broad bands. This gave a better explanation for the transition from localized unpaired electrons to itinerant ones partaking in metallic bonding.
The nature of metallic bonding
The combination of two phenomena gives rise to metallic bonding: delocalization of electrons and the availability of a far larger number of delocalized energy states than of delocalized electrons. The latter could be called electron deficiency.
In 2D
Graphene is an example of two-dimensional metallic bonding. Its metallic bonds are similar to aromatic bonding in benzene, naphthalene, anthracene, ovalene, etc.
In 3D
Metal aromaticity in metal clusters is another example of delocalization, this time often in three-dimensional arrangements. Metals take the delocalization principle to its extreme, and one could say that a crystal of a metal represents a single molecule over which all conduction electrons are delocalized in all three dimensions. This means that inside the metal one can generally not distinguish molecules, so that the metallic bonding is neither intra- nor inter-molecular. 'Nonmolecular' would perhaps be a better term. Metallic bonding is mostly non-polar, because even in alloys there is little difference among the electronegativities of the atoms participating in the bonding interaction (and, in pure elemental metals, none at all). Thus, metallic bonding is an extremely delocalized communal form of covalent bonding. In a sense, metallic bonding is not a 'new' type of bonding at all. It describes the bonding only as present in a chunk of condensed matter: be it crystalline solid, liquid, or even glass. Metallic vapors, in contrast, are often atomic (Hg) or at times contain molecules, such as Na2, held together by a more conventional covalent bond. This is why it is not correct to speak of a single 'metallic bond'.
Delocalization is most pronounced for s- and p-electrons. Delocalization in caesium is so strong that the electrons are virtually freed from the caesium atoms to form a gas constrained only by the surface of the metal. For caesium, therefore, the picture of Cs+ ions held together by a negatively charged electron gas is very close to accurate (though not perfectly so). For other elements the electrons are less free, in that they still experience the potential of the metal atoms, sometimes quite strongly. They require a more intricate quantum mechanical treatment (e.g., tight binding) in which the atoms are viewed as neutral, much like the carbon atoms in benzene. For d- and especially f-electrons the delocalization is not strong at all and this explains why these electrons are able to continue behaving as unpaired electrons that retain their spin, adding interesting magnetic properties to these metals.
Electron deficiency and mobility
Metal atoms contain few electrons in their valence shells relative to their periods or energy levels. They are electron-deficient elements and the communal sharing does not change that. There remain far more available energy states than there are shared electrons. Both requirements for conductivity are therefore fulfilled: strong delocalization and partly filled energy bands. Such electrons can therefore easily change from one energy state to a slightly different one. Thus, not only do they become delocalized, forming a sea of electrons permeating the structure, but they are also able to migrate through the structure when an external electrical field is applied, leading to electrical conductivity. Without the field, there are electrons moving equally in all directions. Within such a field, some electrons will adjust their state slightly, adopting a different wave vector. Consequently, there will be more moving one way than another and a net current will result.
The freedom of electrons to migrate also gives metal atoms, or layers of them, the capacity to slide past each other. Locally, bonds can easily be broken and replaced by new ones after a deformation. This process does not affect the communal metallic bonding very much, which gives rise to metals' characteristic malleability and ductility. This is particularly true for pure elements. In the presence of dissolved impurities, the normally easily formed cleavages may be blocked and the material become harder. Gold, for example, is very soft in pure form (24-karat), which is why alloys are preferred in jewelry.
Metals are typically also good conductors of heat, but the conduction electrons only contribute partly to this phenomenon. Collective (i.e., delocalized) vibrations of the atoms, known as phonons that travel through the solid as a wave, are bigger contributors.
However, a substance such as diamond, which conducts heat quite well, is not an electrical conductor. This is not a consequence of delocalization being absent in diamond, but simply that carbon is not electron deficient.
Electron deficiency is important in distinguishing metallic from more conventional covalent bonding. Thus, we should amend the expression given above to: Metallic bonding is an extremely delocalized communal form of electron-deficient covalent bonding.
Metallic radius
The metallic radius is defined as one-half of the distance between the two adjacent metal ions in the metallic structure. This radius depends on the nature of the atom as well as its environment—specifically, on the coordination number (CN), which in turn depends on the temperature and applied pressure.
When comparing periodic trends in the size of atoms it is often desirable to apply the so-called Goldschmidt correction, which converts atomic radii to the values the atoms would have if they were 12-coordinated. Since metallic radii are largest for the highest coordination number, correction for less dense coordinations involves multiplying by a factor x, where 0 < x < 1. Specifically, for CN = 4, x = 0.88; for CN = 6, x = 0.96; and for CN = 8, x = 0.97. The correction is named after Victor Goldschmidt who obtained the numerical values quoted above.
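A minimal sketch applying the correction factors quoted above (the function name and the example radius are hypothetical):

```python
# Sketch: Goldschmidt correction of a metallic radius to the 12-coordinate
# reference, using the factors quoted in the text (CN 4: 0.88, CN 6: 0.96, CN 8: 0.97).
goldschmidt_factor = {12: 1.00, 8: 0.97, 6: 0.96, 4: 0.88}

def radius_cn12(radius, cn):
    """Convert a radius measured at coordination number `cn` to its CN = 12 equivalent."""
    return radius / goldschmidt_factor[cn]

# e.g. a hypothetical 1.24 Angstrom radius measured in a BCC (CN = 8) structure:
print(f"{radius_cn12(1.24, 8):.3f} Angstrom at CN = 12")
```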
The radii follow general periodic trends: they decrease across the period due to the increase in the effective nuclear charge, which is not offset by the increased number of valence electrons; but the radii increase down the group due to an increase in the principal quantum number. Between the 4d and 5d elements, the lanthanide contraction is observed—there is very little increase of the radius down the group due to the presence of poorly shielding f orbitals.
Strength of the bond
The atoms in metals have a strong attractive force between them. Much energy is required to overcome it. Therefore, metals often have high boiling points, with tungsten (5828 K) being extremely high. Remarkable exceptions are the elements of the zinc group: Zn, Cd, and Hg. Their electron configurations end in ...ns2, which resembles a noble gas configuration, like that of helium, more and more when going down the periodic table, because the energy differential to the empty np orbitals becomes larger. These metals are therefore relatively volatile, and are avoided in ultra-high vacuum systems.
Otherwise, metallic bonding can be very strong, even in molten metals, such as gallium. Even though gallium will melt from the heat of one's hand just above room temperature, its boiling point is not far from that of copper. Molten gallium is, therefore, a very nonvolatile liquid, thanks to its strong metallic bonding.
The strong bonding of metals in liquid form demonstrates that the energy of a metallic bond is not highly dependent on the direction of the bond; this lack of bond directionality is a direct consequence of electron delocalization, and is best understood in contrast to the directional bonding of covalent bonds. The energy of a metallic bond is thus mostly a function of the number of electrons which surround the metallic atom, as exemplified by the embedded atom model. This typically results in metals assuming relatively simple, close-packed crystal structures, such as FCC, BCC, and HCP.
Given high enough cooling rates and appropriate alloy composition, metallic bonding can occur even in glasses, which have amorphous structures.
Much biochemistry is mediated by the weak interaction of metal ions and biomolecules. Such interactions, and their associated conformational changes, have been measured using dual polarisation interferometry.
Solubility and compound formation
Metals are insoluble in water or organic solvents, unless they undergo a reaction with them. Typically, this is an oxidation reaction that robs the metal atoms of their itinerant electrons, destroying the metallic bonding. However metals are often readily soluble in each other while retaining the metallic character of their bonding. Gold, for example, dissolves easily in mercury, even at room temperature. Even in solid metals, the solubility can be extensive. If the structures of the two metals are the same, there can even be complete solid solubility, as in the case of electrum, an alloy of silver and gold. At times, however, two metals will form alloys with different structures than either of the two parents. One could call these materials metal compounds. But, because materials with metallic bonding are typically not molecular, Dalton's law of integral proportions is not valid; and often a range of stoichiometric ratios can be achieved. It is better to abandon such concepts as 'pure substance' or 'solute' in such cases and speak of phases instead. The study of such phases has traditionally been more the domain of metallurgy than of chemistry, although the two fields overlap considerably.
Localization and clustering: from bonding to bonds
The metallic bonding in complex compounds does not necessarily involve all constituent elements equally. It is quite possible to have one or more elements that do not partake at all. One could picture the conduction electrons flowing around them like a river around an island or a big rock. It is possible to observe which elements do partake: e.g., by looking at the core levels in an X-ray photoelectron spectroscopy (XPS) spectrum. If an element partakes, its peaks tend to be skewed.
Some intermetallic materials, e.g., do exhibit metal clusters reminiscent of molecules; and these compounds are more a topic of chemistry than of metallurgy. The formation of the clusters could be seen as a way to 'condense out' (localize) the electron-deficient bonding into bonds of a more localized nature. Hydrogen is an extreme example of this form of condensation. At high pressures it is a metal. The core of the planet Jupiter could be said to be held together by a combination of metallic bonding and high pressure induced by gravity. At lower pressures, however, the bonding becomes entirely localized into a regular covalent bond. The localization is so complete that the (more familiar) H2 gas results. A similar argument holds for an element such as boron. Though it is electron-deficient compared to carbon, it does not form a metal. Instead it has a number of complex structures in which icosahedral B12 clusters dominate. Charge density waves are a related phenomenon.
As these phenomena involve the movement of the atoms toward or away from each other, they can be interpreted as the coupling between the electronic and the vibrational states (i.e. the phonons) of the material. A different such electron-phonon interaction is thought to lead to a very different result at low temperatures, that of superconductivity. Rather than blocking the mobility of the charge carriers by forming electron pairs in localized bonds, Cooper pairs are formed that no longer experience any resistance to their mobility.
Optical properties
The presence of an ocean of mobile charge carriers has profound effects on the optical properties of metals, which can only be understood by considering the electrons as a collective, rather than considering the states of individual electrons involved in more conventional covalent bonds.
Light consists of a combination of an electrical and a magnetic field. The electrical field is usually able to excite an elastic response from the electrons involved in the metallic bonding. The result is that photons cannot penetrate very far into the metal and are typically reflected, although some may also be absorbed. This holds equally for all photons in the visible spectrum, which is why metals are often silvery white or grayish with the characteristic specular reflection of metallic lustre. The balance between reflection and absorption determines how white or how gray a metal is, although surface tarnish can obscure the lustre. Silver, a metal with high conductivity, is one of the whitest.
Notable exceptions are reddish copper and yellowish gold. The reason for their color is that there is an upper limit to the frequency of the light that metallic electrons can readily respond to: the plasmon frequency. At the plasmon frequency, the frequency-dependent dielectric function of the free electron gas goes from negative (reflecting) to positive (transmitting); higher frequency photons are not reflected at the surface, and do not contribute to the color of the metal. There are some materials, such as indium tin oxide (ITO), that are metallic conductors (actually degenerate semiconductors) for which this threshold is in the infrared, which is why they are transparent in the visible, but good reflectors in the infrared.
For silver the limiting frequency is in the far ultraviolet, but for copper and gold it is closer to the visible. This explains the colors of these two metals. At the surface of a metal, resonance effects known as surface plasmons can result. They are collective oscillations of the conduction electrons, like a ripple in the electronic ocean. However, even if photons have enough energy, they usually do not have enough momentum to set the ripple in motion. Therefore, plasmons are hard to excite on a bulk metal. This is why gold and copper look like lustrous metals albeit with a dash of color. However, in colloidal gold the metallic bonding is confined to a tiny metallic particle, which prevents the oscillation wave of the plasmon from 'running away'. The momentum selection rule is therefore broken, and the plasmon resonance causes an extremely intense absorption in the green, with a resulting purple-red color. Such colors are orders of magnitude more intense than ordinary absorptions seen in dyes and the like, which involve individual electrons and their energy states.
See also
Notes
References
Chemical bonding
Metals | Metallic bonding | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,964 | [
"nan",
"Chemical bonding",
"Condensed matter physics",
"Metals"
] |
19,870 | https://en.wikipedia.org/wiki/Meson | In particle physics, a meson () is a type of hadronic subatomic particle composed of an equal number of quarks and antiquarks, usually one of each, bound together by the strong interaction. Because mesons are composed of quark subparticles, they have a meaningful physical size, a diameter of roughly one femtometre (10 m), which is about 0.6 times the size of a proton or neutron. All mesons are unstable, with the longest-lived lasting for only a few tenths of a nanosecond. Heavier mesons decay to lighter mesons and ultimately to stable electrons, neutrinos and photons.
Outside the nucleus, mesons appear in nature only as short-lived products of very high-energy collisions between particles made of quarks, such as cosmic rays (high-energy protons and neutrons) and baryonic matter. Mesons are routinely produced artificially in cyclotrons or other particle accelerators in the collisions of protons, antiprotons, or other particles.
Higher-energy (more massive) mesons were created momentarily in the Big Bang, but are not thought to play a role in nature today. However, such heavy mesons are regularly created in particle accelerator experiments that explore the nature of the heavier quarks that compose the heavier mesons.
Mesons are part of the hadron particle family, which are defined simply as particles composed of two or more quarks. The other members of the hadron family are the baryons: subatomic particles composed of odd numbers of valence quarks (at least three), and some experiments show evidence of exotic mesons, which do not have the conventional valence quark content of two quarks (one quark and one antiquark), but four or more.
Because quarks have a spin , the difference in quark number between mesons and baryons results in conventional two-quark mesons being bosons, whereas baryons are fermions.
Each type of meson has a corresponding antiparticle (antimeson) in which quarks are replaced by their corresponding antiquarks and vice versa. For example, a positive pion (π⁺) is made of one up quark and one down antiquark; and its corresponding antiparticle, the negative pion (π⁻), is made of one up antiquark and one down quark.
Because mesons are composed of quarks, they participate in both the weak interaction and strong interaction. Mesons with net electric charge also participate in the electromagnetic interaction. Mesons are classified according to their quark content, total angular momentum, parity and various other properties, such as C-parity and G-parity. Although no meson is stable, those of lower mass are nonetheless more stable than the more massive, and hence are easier to observe and study in particle accelerators or in cosmic ray experiments. The lightest group of mesons is less massive than the lightest group of baryons, meaning that they are more easily produced in experiments, and thus exhibit certain higher-energy phenomena more readily than do baryons. But mesons can be quite massive: for example, the J/Psi meson (J/ψ) containing the charm quark, first seen in 1974, is about three times as massive as a proton, and the upsilon meson (ϒ) containing the bottom quark, first seen in 1977, is about ten times as massive as a proton.
History
From theoretical considerations, in 1934 Hideki Yukawa predicted the existence and the approximate mass of the "meson" as the carrier of the nuclear force that holds atomic nuclei together. If there were no nuclear force, all nuclei with two or more protons would fly apart due to electromagnetic repulsion. Yukawa called his carrier particle the meson, from μέσος mesos, the Greek word for "intermediate", because its predicted mass was between that of the electron and that of the proton, which has about 1,836 times the mass of the electron. Yukawa or Carl David Anderson, who discovered the muon, had originally named the particle the "mesotron", but he was corrected by the physicist Werner Heisenberg (whose father was a professor of Greek at the University of Munich). Heisenberg pointed out that there is no "tr" in the Greek word "mesos".
The first candidate for Yukawa's meson, in modern terminology known as the muon, was discovered in 1936 by Carl David Anderson and others in the decay products of cosmic ray interactions. The "mu meson" had about the right mass to be Yukawa's carrier of the strong nuclear force, but over the course of the next decade, it became evident that it was not the right particle. It was eventually found that the "mu meson" did not participate in the strong nuclear interaction at all, but rather behaved like a heavy version of the electron, and was eventually classed as a lepton like the electron, rather than a meson. Physicists in making this choice decided that properties other than particle mass should control their classification.
There were years of delays in the subatomic particle research during World War II (1939–1945), with most physicists working in applied projects for wartime necessities. When the war ended in August 1945, many physicists gradually returned to peacetime research. The first true meson to be discovered was what would later be called the "pi meson" (or pion). During 1939–1942, Debendra Mohan Bose and Bibha Chowdhuri exposed Ilford half-tone photographic plates in the high altitude mountainous regions of Darjeeling, and observed long curved ionizing tracks that appeared to be different from the tracks of alpha particles or protons. In a series of articles published in Nature, they identified a cosmic particle having an average mass close to 200 times the mass of electron. This discovery was made in 1947 with improved full-tone photographic emulsion plates, by Cecil Powell, Hugh Muirhead, César Lattes, and Giuseppe Occhialini, who were investigating cosmic ray products at the University of Bristol in England, based on photographic films placed in the Andes mountains. Some of those mesons had about the same mass as the already-known mu "meson", yet seemed to decay into it, leading physicist Robert Marshak to hypothesize in 1947 that it was actually a new and different meson. Over the next few years, more experiments showed that the pion was indeed involved in strong interactions. The pion (as a virtual particle) is also used as force carrier to model the nuclear force in atomic nuclei (between protons and neutrons). This is an approximation, as the actual carrier of the strong force is believed to be the gluon, which is explicitly used to model strong interaction between quarks. Other mesons, such as the virtual rho mesons are used to model this force as well, but to a lesser extent. Following the discovery of the pion, Yukawa was awarded the 1949 Nobel Prize in Physics for his predictions.
For a while in the past, the word meson was sometimes used to mean any force carrier, such as "the Z meson", which is involved in mediating the weak interaction. However, this use has fallen out of favor, and mesons are now defined as particles composed of pairs of quarks and antiquarks.
Overview
Spin, orbital angular momentum, and total angular momentum
Spin (quantum number S) is a vector quantity that represents the "intrinsic" angular momentum of a particle. It comes in increments of 1/2 ħ.
Quarks are fermions—specifically in this case, particles having spin 1/2. Because spin projections vary in increments of 1 (that is 1 ħ), a single quark has a spin vector of length 1/2, and has two spin projections, either +1/2 or −1/2. Two quarks can have their spins aligned, in which case the two spin vectors add to make a vector of length S = 1 with three possible spin projections (+1, 0, and −1), and their combination is called a vector meson or spin-1 triplet. If two quarks have oppositely aligned spins, the spin vectors add up to make a vector of length S = 0 with only one spin projection (0), called a scalar meson or spin-0 singlet. Because mesons are made of one quark and one antiquark, they are found in triplet and singlet spin states. The latter are called scalar mesons or pseudoscalar mesons, depending on their parity (see below).
There is another quantity of quantized angular momentum, called the orbital angular momentum (quantum number L), that is the angular momentum due to quarks orbiting each other, and also comes in increments of 1 ħ. The total angular momentum (quantum number J) of a particle is the combination of the two intrinsic angular momentums (spin) and the orbital angular momentum. It can take any value from J = |L − S| up to J = L + S, in increments of 1.
Particle physicists are most interested in mesons with no orbital angular momentum (L = 0); therefore the two groups of mesons most studied are the S = 1; L = 0 and S = 0; L = 0, which corresponds to J = 1 and J = 0, although they are not the only ones. It is also possible to obtain J = 1 particles from S = 0 and L = 1. How to distinguish between the S = 1, L = 0 and S = 0, L = 1 mesons is an active area of research in meson spectroscopy.
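As an aside (not part of the original article), a minimal numerical sketch of this counting, enumerating the four combinations of two spin-1/2 projections, shows the three triplet projections plus the singlet:

```python
# Sketch: combining two spin-1/2 projections. Adding the projections of a quark
# and an antiquark gives total projections {+1, 0, -1} (the three states of the
# spin-1 triplet) plus one extra 0 (the spin-0 singlet), four states in all.
from collections import Counter

projections = Counter(s1 + s2 for s1 in (0.5, -0.5) for s2 in (0.5, -0.5))
print(dict(projections))   # {1.0: 1, 0.0: 2, -1.0: 1} -> triplet (1, 0, -1) + singlet (0)
```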
P-parity
P-parity is left-right parity, or spatial parity, and was the first of several "parities" discovered, and so is often called just "parity". If the universe were reflected in a mirror, most laws of physics would be identical—things would behave the same way regardless of what we call "left" and what we call "right". This concept of mirror reflection is called parity (P). Gravity, the electromagnetic force, and the strong interaction all behave in the same way regardless of whether or not the universe is reflected in a mirror, and thus are said to conserve parity (P-symmetry). However, the weak interaction does distinguish "left" from "right", a phenomenon called parity violation (P-violation).
Based on this, one might think that, if the wavefunction for each particle (more precisely, the quantum field for each particle type) were simultaneously mirror-reversed, then the new set of wavefunctions would perfectly satisfy the laws of physics (apart from the weak interaction). It turns out that this is not quite true: In order for the equations to be satisfied, the wavefunctions of certain types of particles have to be multiplied by −1, in addition to being mirror-reversed. Such particle types are said to have negative or odd parity (P = −1, or alternatively P = −), whereas the other particles are said to have positive or even parity (P = +1, or alternatively P = +).
For mesons, parity is related to the orbital angular momentum by the relation:
P = (−1)^(L + 1)
where the (−1)^L is a result of the parity of the corresponding spherical harmonic of the wavefunction. The "+1" comes from the fact that, according to the Dirac equation, a quark and an antiquark have opposite intrinsic parities. Therefore, the intrinsic parity of a meson is the product of the intrinsic parities of the quark (+1) and antiquark (−1). As these are different, their product is −1, and so it contributes the "+1" that appears in the exponent.
As a consequence, all mesons with no orbital angular momentum (L = 0) have odd parity (P = −1).
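A minimal sketch (illustrative only) evaluating the relation P = (−1)^(L + 1) for the first few values of L:

```python
# Sketch: meson parity from orbital angular momentum, P = (-1)**(L + 1).
def meson_parity(L):
    return (-1) ** (L + 1)

for L in range(3):
    print(f"L = {L}: P = {meson_parity(L):+d}")
# L = 0 gives P = -1, consistent with pseudoscalar (0-) and vector (1-) mesons.
```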
C-parity
C-parity is only defined for mesons that are their own antiparticle (i.e. neutral mesons). It represents whether or not the wavefunction of the meson remains the same under the interchange of their quark with their antiquark. If the wavefunction is unchanged under the quark–antiquark interchange, the meson is "C even" (C = +1). On the other hand, if the wavefunction changes sign under the interchange, the meson is "C odd" (C = −1).
C-parity is rarely studied on its own, but more commonly in combination with P-parity into CP-parity. CP-parity was originally thought to be conserved, but was later found to be violated on rare occasions in weak interactions.
G-parity
G-parity is a generalization of the C-parity. Instead of simply comparing the wavefunction after exchanging quarks and antiquarks, it compares the wavefunction after exchanging the meson for the corresponding antimeson, regardless of quark content.
If the wavefunction is unchanged under this exchange, the meson is "G even" (G = +1). On the other hand, if the wavefunction changes sign under the exchange, the meson is "G odd" (G = −1).
Isospin and charge
Original isospin model
The concept of isospin was first proposed by Werner Heisenberg in 1932 to explain the similarities between protons and neutrons under the strong interaction. Although they had different electric charges, their masses were so similar that physicists believed that they were actually the same particle. The different electric charges were explained as being the result of some unknown excitation similar to spin. This unknown excitation was later dubbed isospin by Eugene Wigner in 1937.
When the first mesons were discovered, they too were seen through the eyes of isospin and so the three pions were believed to be the same particle, but in different isospin states.
The mathematics of isospin was modeled after the mathematics of spin. Isospin projections varied in increments of 1 just like those of spin, and to each projection was associated a "charged state". Because the "pion particle" had three "charged states", it was said to be of isospin I = 1. Its "charged states" π⁺, π⁰, and π⁻ corresponded to the isospin projections I₃ = +1, I₃ = 0, and I₃ = −1, respectively. Another example is the "rho particle", also with three charged states. Its "charged states" ρ⁺, ρ⁰, and ρ⁻ corresponded to the isospin projections I₃ = +1, I₃ = 0, and I₃ = −1, respectively.
Replacement by the quark model
This belief lasted until Murray Gell-Mann proposed the quark model in 1964 (containing originally only the u, d, and s quarks). The success of the isospin model is now understood to be an artifact of the similar masses of the u and d quarks. Because the u and d quarks have similar masses, particles made of the same number of them also have similar masses.
The exact u and d quark composition determines the charge, because u quarks carry charge +2/3 whereas d quarks carry charge −1/3. For example, the three pions all have different charges
π⁺ = u d̄
π⁰ = a quantum superposition of u ū and d d̄ states
π⁻ = d ū
but they all have similar masses (about 140 MeV/c²), as they are each composed of the same total number of up and down quarks and antiquarks. Under the isospin model, they were considered a single particle in different charged states.
After the quark model was adopted, physicists noted that the isospin projections were related to the up and down quark content of particles by the relation
I₃ = (1/2)[(n(u) − n(ū)) − (n(d) − n(d̄))]
where the n symbols are the count of up and down quarks and antiquarks.
In the "isospin picture", the three pions and three rhos were thought to be the different states of two particles. However, in the quark model, the rhos are excited states of pions. Isospin, although conveying an inaccurate picture of things, is still used to classify hadrons, leading to unnatural and often confusing nomenclature.
Because mesons are hadrons, the isospin classification is also used for them all, with the I₃ quantum number calculated by adding +1/2 for each positively charged up-or-down quark-or-antiquark (up quarks and down antiquarks), and −1/2 for each negatively charged up-or-down quark-or-antiquark (up antiquarks and down quarks).
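As an illustrative sketch (not from the original article; the function and quark-count arguments are hypothetical names), the relation above can be applied to the quark content of the three pions:

```python
# Sketch: isospin projection from up/down quark content,
# I3 = 1/2 * [(n_u - n_ubar) - (n_d - n_dbar)], applied to the three pions.
def isospin_projection(n_u=0, n_ubar=0, n_d=0, n_dbar=0):
    return 0.5 * ((n_u - n_ubar) - (n_d - n_dbar))

pions = {
    "pi+": dict(n_u=1, n_dbar=1),   # u dbar
    "pi0": dict(n_u=1, n_ubar=1),   # u ubar component of the superposition
    "pi-": dict(n_d=1, n_ubar=1),   # d ubar
}
for name, quarks in pions.items():
    print(name, isospin_projection(**quarks))   # +1.0, 0.0, -1.0
```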
Flavour quantum numbers
The strangeness quantum number S (not to be confused with spin) was noticed to go up and down along with particle mass. The higher the mass, the lower (more negative) the strangeness (the more s quarks). Particles could be described with isospin projections (related to charge) and strangeness (mass) (see the uds nonet figures). As other quarks were discovered, new quantum numbers were made to have similar description of udc and udb nonets. Because only the u and d mass are similar, this description of particle mass and charge in terms of isospin and flavour quantum numbers only works well for the nonets made of one u, one d and one other quark and breaks down for the other nonets (for example ucb nonet). If the quarks all had the same mass, their behaviour would be called symmetric, because they would all behave in exactly the same way with respect to the strong interaction. However, as quarks do not have the same mass, they do not interact in the same way (exactly like an electron placed in an electric field will accelerate more than a proton placed in the same field because of its lighter mass), and the symmetry is said to be broken.
It was noted that charge (Q) was related to the isospin projection (I3), the baryon number (B) and flavour quantum numbers (S, C, B′, T) by the Gell-Mann–Nishijima formula:
Q = I3 + (B + S + C + B′ + T)/2
where S, C, B′, and T represent the strangeness, charm, bottomness and topness flavour quantum numbers respectively. They are related to the number of strange, charm, bottom, and top quarks and antiquarks according to the relations:
S = −(n(s) − n(s̄)), C = +(n(c) − n(c̄)), B′ = −(n(b) − n(b̄)), T = +(n(t) − n(t̄)),
meaning that the Gell-Mann–Nishijima formula is equivalent to the expression of charge in terms of quark content:
Q = (2/3)[(n(u) − n(ū)) + (n(c) − n(c̄)) + (n(t) − n(t̄))] − (1/3)[(n(d) − n(d̄)) + (n(s) − n(s̄)) + (n(b) − n(b̄))]
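A short sketch (illustrative, with hypothetical function names) checking that the quark-content expression and the Gell-Mann–Nishijima formula give the same charge for the K⁺ meson (u s̄):

```python
# Sketch: verifying Q = I3 + (B + S + C + B' + T)/2 against the direct
# quark-charge sum for the K+ meson (u sbar). Quantum numbers follow the
# sign conventions stated in the text.
from fractions import Fraction

def charge_from_quarks(n_u=0, n_ubar=0, n_d=0, n_dbar=0, n_s=0, n_sbar=0):
    return (Fraction(2, 3) * (n_u - n_ubar)
            - Fraction(1, 3) * (n_d - n_dbar)
            - Fraction(1, 3) * (n_s - n_sbar))

def gell_mann_nishijima(I3, B, S, C=0, Bp=0, T=0):
    return I3 + Fraction(B + S + C + Bp + T, 2)

# K+ = u sbar: I3 = +1/2, baryon number B = 0, strangeness S = +1
print(charge_from_quarks(n_u=1, n_sbar=1))        # 1
print(gell_mann_nishijima(Fraction(1, 2), 0, 1))  # 1
```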
Classification
Mesons are classified into groups according to their isospin (I), total angular momentum (J), parity (P), G-parity (G) or C-parity (C) when applicable, and quark (q) content. The rules for classification are defined by the Particle Data Group, and are rather convoluted. The rules are presented below, in table form for simplicity.
Types of meson
Mesons are classified into types according to their spin configurations. Some specific configurations are given special names based on the mathematical properties of their spin configuration.
Nomenclature
Flavourless mesons
Flavourless mesons are mesons made of a pair of quark and antiquark of the same flavour (all their flavour quantum numbers are zero: S = 0, C = 0, B′ = 0, T = 0). The rules for flavourless mesons are:
In addition
When the spectroscopic state of the meson is known, it is added in parentheses.
When the spectroscopic state is unknown, mass (in MeV/c2) is added in parentheses.
When the meson is in its ground state, nothing is added in parentheses.
Flavoured mesons
Flavoured mesons are mesons made of a pair of quark and antiquark of different flavours. The rules are simpler in this case: The main symbol depends on the heavier quark, the superscript depends on the charge, and the subscript (if any) depends on the lighter quark. In table form, they are:
In addition
If J^P is in the "normal series" (i.e., J^P = 0+, 1−, 2+, 3−, ...), a superscript ∗ is added.
If the meson is not pseudoscalar (J^P = 0−) or vector (J^P = 1−), J is added as a subscript.
When the spectroscopic state of the meson is known, it is added in parentheses.
When the spectroscopic state is unknown, mass (in MeV/c2) is added in parentheses.
When the meson is in its ground state, nothing is added in parentheses.
Exotic mesons
There is experimental evidence for particles that are hadrons (i.e., are composed of quarks) and are color-neutral with zero baryon number, and thus by conventional definition are mesons. Yet, these particles do not consist of a single quark/antiquark pair, as all the other conventional mesons discussed above do. A tentative category for these particles is exotic mesons.
There are at least five exotic meson resonances that have been experimentally confirmed to exist by two or more independent experiments. The most statistically significant of these is the Z(4430), discovered by the Belle experiment in 2007 and confirmed by LHCb in 2014. It is a candidate for being a tetraquark: a particle composed of two quarks and two antiquarks. See the main article above for other particle resonances that are candidates for being exotic mesons.
List
Pseudoscalar mesons
[a] Makeup inexact due to non-zero quark masses.
[b] PDG reports the resonance width (Γ). Here the conversion τ = ħ/Γ is given instead.
[c] Strong eigenstate. No definite lifetime (see kaon notes below)
[d] The masses of the K0L and K0S are given as that of the K0. However, it is known that a small difference between the masses of the K0L and K0S exists.
[e] Weak eigenstate. Makeup is missing small CP–violating term (see notes on neutral kaons below).
Vector mesons
[f] PDG reports the resonance width (Γ). Here the conversion τ = ħ/Γ is given instead.
[g] The exact value depends on the method used. See the given reference for detail.
Notes on neutral kaons
There are two complications with neutral kaons:
Due to neutral kaon mixing, the K0S and K0L are not eigenstates of strangeness. However, they are eigenstates of the weak force, which determines how they decay, so these are the particles with definite lifetime.
The linear combinations given in the table for the K0S and K0L are not exactly correct, since there is a small correction due to CP violation. See CP violation in kaons.
Note that these issues also exist in principle for other neutral, flavored mesons; however, the weak eigenstates are considered separate particles only for kaons because of their dramatically different lifetimes.
See also
Mesonic molecule
Standard Model
Footnotes
References
External links
— Compiles authoritative information on particle properties
— An interactive visualisation allowing physical properties to be compared
Further reading
Pauli, Wolfgang (1948) Meson Theory of Nuclear Forces, Interscience Publishers, Inc. New York
Bosons
Hadrons
Force carriers | Meson | [
"Physics"
] | 4,728 | [
"Physical phenomena",
"Matter",
"Force carriers",
"Hadrons",
"Bosons",
"Fundamental interactions",
"Subatomic particles"
] |
19,904 | https://en.wikipedia.org/wiki/Meteorology | Meteorology is a branch of the atmospheric sciences (which include atmospheric chemistry and physics) with a major focus on weather forecasting. The study of meteorology dates back millennia, though significant progress in meteorology did not begin until the 18th century. The 19th century saw modest progress in the field after weather observation networks were formed across broad regions. Prior attempts at prediction of weather depended on historical data. It was not until after the elucidation of the laws of physics, and more particularly in the latter half of the 20th century, the development of the computer (allowing for the automated solution of a great many modelling equations) that significant breakthroughs in weather forecasting were achieved. An important branch of weather forecasting is marine weather forecasting as it relates to maritime and coastal safety, in which weather effects also include atmospheric interactions with large bodies of water.
Meteorological phenomena are observable weather events that are explained by the science of meteorology. Meteorological phenomena are described and quantified by the variables of Earth's atmosphere: temperature, air pressure, water vapour, mass flow, and the variations and interactions of these variables, and how they change over time. Different spatial scales are used to describe and predict weather on local, regional, and global levels.
Meteorology, climatology, atmospheric physics, and atmospheric chemistry are sub-disciplines of the atmospheric sciences. Meteorology and hydrology compose the interdisciplinary field of hydrometeorology. The interactions between Earth's atmosphere and its oceans are part of a coupled ocean-atmosphere system. Meteorology has application in many diverse fields such as the military, energy production, transport, agriculture, and construction.
The word meteorology is from the Ancient Greek μετέωρος metéōros (meteor) and -λογία -logia (-(o)logy), meaning "the study of things high in the air".
History
Ancient meteorology up to the time of Aristotle
Early attempts at predicting weather were often related to prophecy and divining, and were sometimes based on astrological ideas. Ancient religions believed meteorological phenomena to be under the control of the gods. The ability to predict rains and floods based on annual cycles was evidently used by humans at least from the time of agricultural settlement if not earlier. Early approaches to predicting weather were based on astrology and were practiced by priests. The Egyptians had rain-making rituals as early as 3500 BC.
Ancient Indian Upanishads contain mentions of clouds and seasons. The Samaveda mentions sacrifices to be performed when certain phenomena were noticed. Varāhamihira's classical work Brihatsamhita, written about 500 AD, provides evidence of weather observation.
Cuneiform inscriptions on Babylonian tablets included associations between thunder and rain. The Chaldeans differentiated the 22° and 46° halos.
The ancient Greeks were the first to make theories about the weather. Many natural philosophers studied the weather. However, as meteorological instruments did not exist, the inquiry was largely qualitative, and could only be judged by more general theoretical speculations. Herodotus states that Thales predicted the solar eclipse of 585 BC. He studied Babylonian equinox tables. According to Seneca, he gave the explanation that the cause of the Nile's annual floods was due to northerly winds hindering its descent by the sea. Anaximander and Anaximenes thought that thunder and lightning was caused by air smashing against the cloud, thus kindling the flame. Early meteorological theories generally considered that there was a fire-like substance in the atmosphere. Anaximander defined wind as a flowing of air, but this was not generally accepted for centuries. A theory to explain summer hail was first proposed by Anaxagoras. He observed that air temperature decreased with increasing height and that clouds contain moisture. He also noted that heat caused objects to rise, and therefore the heat on a summer day would drive clouds to an altitude where the moisture would freeze. Empedocles theorized on the change of the seasons. He believed that fire and water opposed each other in the atmosphere, and when fire gained the upper hand, the result was summer, and when water did, it was winter. Democritus also wrote about the flooding of the Nile. He said that during the summer solstice, snow in northern parts of the world melted. This would cause vapors to form clouds, which would cause storms when driven to the Nile by northerly winds, thus filling the lakes and the Nile. Hippocrates inquired into the effect of weather on health. Eudoxus claimed that bad weather followed four-year periods, according to Pliny.
Aristotelian meteorology
These early observations would form the basis for Aristotle's Meteorology, written in 350 BC. Aristotle is considered the founder of meteorology. One of the most impressive achievements described in the Meteorology is the description of what is now known as the hydrologic cycle. His work would remain an authority on meteorology for nearly 2,000 years.
The book De Mundo (composed before 250 BC or between 350 and 200 BC) noted:
If the flashing body is set on fire and rushes violently to the Earth it is called a thunderbolt; if it is only half of fire, but violent also and massive, it is called a meteor; if it is entirely free from fire, it is called a smoking bolt. They are all called 'swooping bolts' because they swoop down upon the Earth. Lightning is sometimes smoky and is then called 'smoldering lightning'; sometimes it darts quickly along and is then said to be vivid. At other times, it travels in crooked lines, and is called forked lightning. When it swoops down upon some object it is called 'swooping lightning'.
After Aristotle, progress in meteorology stalled for a long time. Theophrastus compiled a book on weather forecasting, called the Book of Signs, as well as On Winds. He gave hundreds of signs for weather phenomena for a period up to a year. His system was based on dividing the year by the setting and the rising of the Pleiad, halves into solstices and equinoxes, and the continuity of the weather for those periods. He also divided months into the new moon, fourth day, eighth day and full moon, in likelihood of a change in the weather occurring. The day was divided into sunrise, mid-morning, noon, mid-afternoon and sunset, with corresponding divisions of the night, with change being likely at one of these divisions. Applying the divisions and a principle of balance in the yearly weather, he came up with forecasts like that if a lot of rain falls in the winter, the spring is usually dry. Rules based on actions of animals are also present in his work, like that if a dog rolls on the ground, it is a sign of a storm. Shooting stars and the Moon were also considered significant. However, he made no attempt to explain these phenomena, referring only to the Aristotelian method. The work of Theophrastus remained a dominant influence in weather forecasting for nearly 2,000 years.
Meteorology after Aristotle
Meteorology continued to be studied and developed over the centuries, but it was not until the Renaissance in the 14th to 17th centuries that significant advancements were made in the field. Scientists such as Galileo and Descartes introduced new methods and ideas, leading to the scientific revolution in meteorology.
Speculation on the cause of the flooding of the Nile ended when Eratosthenes, according to Proclus, stated that it was known that man had gone to the sources of the Nile and observed the rains, although interest in its implications continued.
During the era of Roman Greece and Europe, scientific interest in meteorology waned. In the 1st century BC, most natural philosophers claimed that the clouds and winds extended up to 111 miles, but Posidonius thought that they reached up to five miles, after which the air is clear, liquid and luminous. He closely followed Aristotle's theories. By the end of the second century BC, the center of science shifted from Athens to Alexandria, home to the ancient Library of Alexandria. In the 2nd century AD, Ptolemy's Almagest dealt with meteorology, because it was considered a subset of astronomy. He gave several astrological weather predictions. He constructed a map of the world divided into climatic zones by their illumination, in which the length of the Summer solstice increased by half an hour per zone between the equator and the Arctic. Ptolemy wrote on the atmospheric refraction of light in the context of astronomical observations.
In 25 AD, Pomponius Mela, a Roman geographer, formalized the climatic zone system. In 63–64 AD, Seneca wrote Naturales quaestiones. It was a compilation and synthesis of ancient Greek theories. However, theology was of foremost importance to Seneca, and he believed that phenomena such as lightning were tied to fate. The second book (chapter) of Pliny's Natural History covers meteorology. He states that more than twenty ancient Greek authors studied meteorology. He did not make any personal contributions, and the value of his work is in preserving earlier speculation, much like Seneca's work.
From 400 to 1100, scientific learning in Europe was preserved by the clergy. Isidore of Seville devoted a considerable attention to meteorology in Etymologiae, De ordine creaturum and De natura rerum. Bede the Venerable was the first Englishman to write about the weather in De Natura Rerum in 703. The work was a summary of then extant classical sources. However, Aristotle's works were largely lost until the twelfth century, including Meteorologica. Isidore and Bede were scientifically minded, but they adhered to the letter of Scripture.
Islamic civilization translated many ancient works into Arabic which were transmitted and translated in western Europe to Latin.
In the 9th century, Al-Dinawari wrote the Kitab al-Nabat (Book of Plants), in which he deals with the application of meteorology to agriculture during the Arab Agricultural Revolution. He describes the meteorological character of the sky, the planets and constellations, the sun and moon, the lunar phases indicating seasons and rain, the anwa (heavenly bodies of rain), and atmospheric phenomena such as winds, thunder, lightning, snow, floods, valleys, rivers, lakes.
In 1021, Alhazen showed that atmospheric refraction is also responsible for twilight in Opticae thesaurus; he estimated that twilight begins when the sun is 19 degrees below the horizon, and also used a geometric determination based on this to estimate the maximum possible height of the Earth's atmosphere as 52,000 passuum (about 49 miles, or 79 km).
Adelard of Bath was one of the early translators of the classics. He also discussed meteorological topics in his Quaestiones naturales. He thought dense air produced propulsion in the form of wind. He explained thunder by saying that it was due to ice colliding in clouds, and in Summer it melted. In the thirteenth century, Aristotelian theories reestablished dominance in meteorology. For the next four centuries, meteorological work by and large was mostly commentary. It has been estimated over 156 commentaries on the Meteorologica were written before 1650.
Experimental evidence was less important than appeal to the classics and authority in medieval thought. In the thirteenth century, Roger Bacon advocated experimentation and the mathematical approach. In his Opus majus, he followed Aristotle's theory on the atmosphere being composed of water, air, and fire, supplemented by optics and geometric proofs. He noted that Ptolemy's climatic zones had to be adjusted for topography.
St. Albert the Great was the first to propose that each drop of falling rain had the form of a small sphere, and that this form meant that the rainbow was produced by light interacting with each raindrop. Roger Bacon was the first to calculate the angular size of the rainbow. He stated that a rainbow summit cannot appear higher than 42 degrees above the horizon.
In the late 13th century and early 14th century, Kamāl al-Dīn al-Fārisī and Theodoric of Freiberg were the first to give the correct explanations for the primary rainbow phenomenon. Theoderic went further and also explained the secondary rainbow.
By the middle of the sixteenth century, meteorology had developed along two lines: theoretical science based on Meteorologica, and astrological weather forecasting. The pseudoscientific prediction by natural signs became popular and enjoyed protection of the church and princes. This was supported by scientists like Johannes Muller, Leonard Digges, and Johannes Kepler. However, there were skeptics. In the 14th century, Nicole Oresme believed that weather forecasting was possible, but that the rules for it were unknown at the time. Astrological influence in meteorology persisted until the eighteenth century.
Gerolamo Cardano's De Subtilitate (1550) was the first work to challenge fundamental aspects of Aristotelian theory. Cardano maintained that there were only three basic elements- earth, air, and water. He discounted fire because it needed material to spread and produced nothing. Cardano thought there were two kinds of air: free air and enclosed air. The former destroyed inanimate things and preserved animate things, while the latter had the opposite effect.
Rene Descartes's Discourse on the Method (1637) typifies the beginning of the scientific revolution in meteorology. His scientific method had four principles: to never accept anything unless one clearly knew it to be true; to divide every difficult problem into small problems to tackle; to proceed from the simple to the complex, always seeking relationships; to be as complete and thorough as possible with no prejudice.
In the appendix Les Meteores, he applied these principles to meteorology. He discussed terrestrial bodies and vapors which arise from them, proceeding to explain the formation of clouds from drops of water, and winds, clouds then dissolving into rain, hail and snow. He also discussed the effects of light on the rainbow. Descartes hypothesized that all bodies were composed of small particles of different shapes and interwovenness. All of his theories were based on this hypothesis. He explained the rain as caused by clouds becoming too large for the air to hold, and that clouds became snow if the air was not warm enough to melt them, or hail if they met colder wind. Like his predecessors, Descartes's method was deductive, as meteorological instruments were not developed and extensively used yet. He introduced the Cartesian coordinate system to meteorology and stressed the importance of mathematics in natural science. His work established meteorology as a legitimate branch of physics.
In the 18th century, the invention of the thermometer and barometer allowed for more accurate measurements of temperature and pressure, leading to a better understanding of atmospheric processes. This century also saw the birth of the first meteorological society, the Societas Meteorologica Palatina in 1780.
In the 19th century, advances in technology such as the telegraph and photography led to the creation of weather observing networks and the ability to track storms. Additionally, scientists began to use mathematical models to make predictions about the weather. The 20th century saw the development of radar and satellite technology, which greatly improved the ability to observe and track weather systems. In addition, meteorologists and atmospheric scientists started to create the first weather forecasts and temperature predictions.
In the 20th and 21st centuries, with the advent of computer models and big data, meteorology has become increasingly dependent on numerical methods and computer simulations, which have greatly improved weather forecasting and climate prediction. Meteorology has also expanded to include other areas such as air quality, atmospheric chemistry, and climatology. Advances in observational, theoretical and computational technologies have enabled ever more accurate weather predictions and a better understanding of weather patterns and air pollution. Today, with modern forecasting and satellite technology, meteorology has become an integral part of everyday life and is used for many purposes, such as aviation, agriculture, and disaster management.
Instruments and classification scales
In 1441, King Sejong's son, Prince Munjong of Korea, invented the first standardized rain gauge. These were sent throughout the Joseon dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest. In 1450, Leone Battista Alberti developed a swinging-plate anemometer, known as the first anemometer. In 1607, Galileo Galilei constructed a thermoscope. In 1611, Johannes Kepler wrote the first scientific treatise on snow crystals: "Strena Seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow)." In 1643, Evangelista Torricelli invented the mercury barometer. In 1662, Sir Christopher Wren invented the mechanical, self-emptying, tipping bucket rain gauge. In 1714, Gabriel Fahrenheit created a reliable scale for measuring temperature with a mercury-type thermometer. In 1742, Anders Celsius, a Swedish astronomer, proposed the "centigrade" temperature scale, the predecessor of the current Celsius scale. In 1783, the first hair hygrometer was demonstrated by Horace-Bénédict de Saussure. In 1802–1803, Luke Howard wrote On the Modifications of Clouds, in which he assigned cloud types Latin names. In 1806, Francis Beaufort introduced his system for classifying wind speeds. Near the end of the 19th century the first cloud atlases were published, including the International Cloud Atlas, which has remained in print ever since. The April 1960 launch of the first successful weather satellite, TIROS-1, marked the beginning of the age where weather information became available globally.
Atmospheric composition research
In 1648, Blaise Pascal rediscovered that atmospheric pressure decreases with height, and deduced that there is a vacuum above the atmosphere. In 1738, Daniel Bernoulli published Hydrodynamics, initiating the kinetic theory of gases and establishing the basic laws for the theory of gases. In 1761, Joseph Black discovered that ice absorbs heat without changing its temperature when melting. In 1772, Black's student Daniel Rutherford discovered nitrogen, which he called phlogisticated air, and together they explained it in terms of the phlogiston theory. In 1777, Antoine Lavoisier discovered oxygen and developed an explanation for combustion. In his 1783 essay "Réflexions sur le phlogistique", Lavoisier deprecated the phlogiston theory and proposed a caloric theory. In 1804, John Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation. In 1808, John Dalton defended caloric theory in A New System of Chemistry and described how it combines with matter, especially gases; he proposed that the heat capacity of gases varies inversely with atomic weight. In 1824, Sadi Carnot analyzed the efficiency of steam engines using caloric theory; he developed the notion of a reversible process and, in postulating that no such thing exists in nature, laid the foundation for the second law of thermodynamics. Earlier, in 1716, Edmond Halley had suggested that aurorae are caused by "magnetic effluvia" moving along the Earth's magnetic field lines.
Research into cyclones and air flow
In 1494, Christopher Columbus experienced a tropical cyclone, which led to the first written European account of a hurricane. In 1686, Edmond Halley presented a systematic study of the trade winds and monsoons and identified solar heating as the cause of atmospheric motions. In 1735, an idealized explanation of global circulation through study of the trade winds was written by George Hadley. In 1743, when Benjamin Franklin was prevented from seeing a lunar eclipse by a hurricane, he decided that cyclones move in a contrary manner to the winds at their periphery. At first, understanding of how exactly the rotation of the Earth affects airflow was only partial. Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes, with the air within it deflected by the Coriolis force, resulting in the prevailing westerly winds. Late in the 19th century, the motion of air masses along isobars was understood to be the result of the large-scale interaction of the pressure gradient force and the deflecting force. By 1912, this deflecting force was named the Coriolis effect. Just after World War I, a group of meteorologists in Norway led by Vilhelm Bjerknes developed the Norwegian cyclone model that explains the generation, intensification and ultimate decay (the life cycle) of mid-latitude cyclones, and introduced the idea of fronts, that is, sharply defined boundaries between air masses. The group included Carl-Gustaf Rossby (who was the first to explain the large scale atmospheric flow in terms of fluid dynamics), Tor Bergeron (who first determined how rain forms) and Jacob Bjerknes.
Observation networks and weather forecasting
In the late 16th century and first half of the 17th century a range of meteorological instruments were invented – the thermometer, barometer, hygrometer, as well as wind and rain gauges. In the 1650s natural philosophers started using these instruments to systematically record weather observations. Scientific academies established weather diaries and organised observational networks. In 1654, Ferdinando II de' Medici established the first weather observing network, which consisted of meteorological stations in Florence, Cutigliano, Vallombrosa, Bologna, Parma, Milan, Innsbruck, Osnabrück, Paris and Warsaw. The collected data were sent to Florence at regular time intervals. In the 1660s Robert Hooke of the Royal Society of London sponsored networks of weather observers. Hippocrates' treatise Airs, Waters, and Places had linked weather to disease. Thus early meteorologists attempted to correlate weather patterns with epidemic outbreaks, and the climate with public health.
During the Age of Enlightenment meteorology tried to rationalise traditional weather lore, including astrological meteorology. But there were also attempts to establish a theoretical understanding of weather phenomena. Edmond Halley and George Hadley tried to explain trade winds. They reasoned that the rising mass of heated equator air is replaced by an inflow of cooler air from high latitudes. A flow of warm air at high altitude from equator to poles in turn established an early picture of circulation. Frustration with the lack of discipline among weather observers, and the poor quality of the instruments, led the early modern nation states to organise large observation networks. Thus, by the end of the 18th century, meteorologists had access to large quantities of reliable weather data. In 1832, an electromagnetic telegraph was created by Baron Schilling. The arrival of the electrical telegraph in 1837 afforded, for the first time, a practical method for quickly gathering surface weather observations from a wide area.
This data could be used to produce maps of the state of the atmosphere for a region near the Earth's surface and to study how these states evolved through time. To make frequent weather forecasts based on these data required a reliable network of observations, but it was not until 1849 that the Smithsonian Institution began to establish an observation network across the United States under the leadership of Joseph Henry. Similar observation networks were established in Europe at this time. The Reverend William Clement Ley was key to the early understanding of cirrus clouds and jet streams. Charles Kenneth Mackinnon Douglas, known as 'CKM' Douglas, read Ley's papers after his death and carried on the early study of weather systems.
Nineteenth century researchers in meteorology were drawn from military or medical backgrounds, rather than trained as dedicated scientists. In 1854, the United Kingdom government appointed Robert FitzRoy to the new office of Meteorological Statist to the Board of Trade with the task of gathering weather observations at sea. FitzRoy's office became the United Kingdom Meteorological Office in 1854, the second oldest national meteorological service in the world (the Central Institution for Meteorology and Geodynamics (ZAMG) in Austria was founded in 1851 and is the oldest weather service in the world). The first daily weather forecasts made by FitzRoy's Office were published in The Times newspaper in 1860. The following year a system was introduced of hoisting storm warning cones at principal ports when a gale was expected.
FitzRoy coined the term "weather forecast" and tried to separate scientific approaches from prophetic ones.
Over the next 50 years, many countries established national meteorological services. The India Meteorological Department (1875) was established to follow tropical cyclones and monsoons. The Finnish Meteorological Central Office (1881) was formed from part of the Magnetic Observatory of Helsinki University. Japan's Tokyo Meteorological Observatory, the forerunner of the Japan Meteorological Agency, began constructing surface weather maps in 1883. The United States Weather Bureau (1890) was established under the United States Department of Agriculture. The Australian Bureau of Meteorology (1906) was established by a Meteorology Act to unify existing state meteorological services.
Numerical weather prediction
In 1904, Norwegian scientist Vilhelm Bjerknes first argued in his paper Weather Forecasting as a Problem in Mechanics and Physics that it should be possible to forecast weather from calculations based upon natural laws.
It was not until later in the 20th century that advances in the understanding of atmospheric physics led to the foundation of modern numerical weather prediction. In 1922, Lewis Fry Richardson published "Weather Prediction By Numerical Process," after finding notes and derivations he worked on as an ambulance driver in World War I. He described how small terms in the prognostic fluid dynamics equations that govern atmospheric flow could be neglected, and how a numerical calculation scheme could be devised to allow predictions. Richardson envisioned a large auditorium of thousands of people performing the calculations. However, the sheer number of calculations required was too large to complete without electronic computers, and the size of the grid and time steps used in the calculations led to unrealistic results; numerical analysis later showed that this was due to numerical instability.
Starting in the 1950s, numerical forecasts with computers became feasible. The first weather forecasts derived this way used barotropic (single-vertical-level) models, and could successfully predict the large-scale movement of midlatitude Rossby waves, that is, the pattern of atmospheric lows and highs. In 1959, the UK Meteorological Office received its first computer, a Ferranti Mercury.
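As an illustration of the kind of grid-based calculation such forecasts rest on (not Richardson's actual scheme or any operational model), the following Python sketch advances a one-dimensional field by repeated upwind finite-difference steps; the grid spacing, wind speed and initial disturbance are made-up values.

import numpy as np

def advect_upwind(q, u, dx, dt):
    # One forward-Euler time step of 1D linear advection with an upwind
    # (backward) spatial difference, stable for u > 0 when u*dt/dx < 1.
    dqdx = (q - np.roll(q, 1)) / dx
    return q - u * dt * dqdx

x = np.linspace(0.0, 1000.0, 200)          # grid points (km), treated as periodic
q = np.exp(-((x - 300.0) / 50.0) ** 2)     # initial "disturbance"
for _ in range(100):
    q = advect_upwind(q, u=10.0, dx=x[1] - x[0], dt=0.25)

Real numerical weather prediction solves coupled three-dimensional equations for wind, pressure, temperature and moisture, but the time-step stability constraint noted in the comment is the same general kind of issue as the instability described above.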
In the 1960s, the chaotic nature of the atmosphere was first observed and mathematically described by Edward Lorenz, founding the field of chaos theory. These advances have led to the current use of ensemble forecasting in most major forecasting centers, to take into account uncertainty arising from the chaotic nature of the atmosphere. Mathematical models used to predict the long term weather of the Earth (climate models), have been developed that have a resolution today that are as coarse as the older weather prediction models. These climate models are used to investigate long-term climate shifts, such as what effects might be caused by human emission of greenhouse gases.
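A minimal sketch of the Lorenz (1963) system, using the classic parameter values, illustrates the sensitivity to initial conditions that motivates ensemble forecasting; the perturbation size and integration settings below are arbitrary choices for the example.

import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One simple Euler step of the Lorenz-63 equations.
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])    # almost identical starting state
for _ in range(5000):
    a, b = lorenz_step(a), lorenz_step(b)
print(np.linalg.norm(a - b))          # the two trajectories have diverged widely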
Meteorologists
Meteorologists are scientists who study and work in the field of meteorology. The American Meteorological Society publishes and continually updates an authoritative electronic Meteorology Glossary. Meteorologists work in government agencies, private consulting and research services, industrial enterprises, utilities, radio and television stations, and in education. In the United States, meteorologists held about 10,000 jobs in 2018.
Although weather forecasts and warnings are the best known products of meteorologists for the public, weather presenters on radio and television are not necessarily professional meteorologists. They are most often reporters with little formal meteorological training, using unregulated titles such as weather specialist or weatherman. The American Meteorological Society and National Weather Association issue "Seals of Approval" to weather broadcasters who meet certain requirements but this is not mandatory to be hired by the media.
Equipment
Each science has its own unique sets of laboratory equipment. In the atmosphere, there are many qualities that can be measured. Rain, which can be observed or seen almost anywhere and at any time, was one of the first atmospheric qualities measured historically. Two other accurately measured qualities are wind and humidity; neither of these can be seen but can be felt. The devices to measure these three sprang up in the mid-15th century and were, respectively, the rain gauge, the anemometer, and the hygrometer. Many attempts had been made prior to the 15th century to construct adequate equipment to measure the many atmospheric variables, but many were faulty in some way or were simply not reliable. Even Aristotle noted the difficulty of measuring the air in some of his work.
Sets of surface measurements are important data to meteorologists. They give a snapshot of a variety of weather conditions at one single location and are usually at a weather station, a ship or a weather buoy. The measurements taken at a weather station can include any number of atmospheric observables. Usually, temperature, pressure, wind measurements, and humidity are the variables that are measured by a thermometer, barometer, anemometer, and hygrometer, respectively. Professional stations may also include air quality sensors (carbon monoxide, carbon dioxide, methane, ozone, dust, and smoke), ceilometer (cloud ceiling), falling precipitation sensor, flood sensor, lightning sensor, microphone (explosions, sonic booms, thunder), pyranometer/pyrheliometer/spectroradiometer (IR/Vis/UV photodiodes), rain gauge/snow gauge, scintillation counter (background radiation, fallout, radon), seismometer (earthquakes and tremors), transmissometer (visibility), and a GPS clock for data logging. Upper air data are of crucial importance for weather forecasting. The most widely used technique is launches of radiosondes. Supplementing the radiosondes a network of aircraft collection is organized by the World Meteorological Organization.
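As a simple illustration of how such a set of surface measurements might be grouped in software, the following Python sketch defines a hypothetical observation record; the field names are illustrative only and do not follow any standard exchange format such as METAR or BUFR.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class SurfaceObservation:
    station_id: str
    time: datetime
    temperature_c: float          # thermometer
    pressure_hpa: float           # barometer
    wind_speed_ms: float          # anemometer
    wind_direction_deg: float
    relative_humidity_pct: float  # hygrometer

obs = SurfaceObservation("TEST1", datetime(2024, 1, 1, 12, 0),
                         temperature_c=4.2, pressure_hpa=1013.8,
                         wind_speed_ms=3.5, wind_direction_deg=270.0,
                         relative_humidity_pct=81.0)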
Remote sensing, as used in meteorology, is the concept of collecting data from remote weather events and subsequently producing weather information. The common types of remote sensing are Radar, Lidar, and satellites (or photogrammetry). Each collects data about the atmosphere from a remote location and, usually, stores the data where the instrument is located. Radar and Lidar are not passive because both use EM radiation to illuminate a specific portion of the atmosphere. Weather satellites along with more general-purpose Earth-observing satellites circling the earth at various altitudes have become an indispensable tool for studying a wide range of phenomena from forest fires to El Niño.
Spatial scales
The study of the atmosphere can be divided into distinct areas that depend on both time and spatial scales. At one extreme of this scale is climatology. In the timescales of hours to days, meteorology separates into micro-, meso-, and synoptic scale meteorology. The geospatial size of each of these three scales relates directly to the appropriate timescale.
Other subclassifications are used to describe the unique, local, or broad effects within those subclasses.
Microscale
Microscale meteorology is the study of atmospheric phenomena on a scale of about 1 km or less. Individual thunderstorms, clouds, and local turbulence caused by buildings and other obstacles (such as individual hills) are modeled on this scale. Misoscale meteorology is an informal subdivision.
Mesoscale
Mesoscale meteorology is the study of atmospheric phenomena that have horizontal scales ranging from 1 km to 1000 km and a vertical scale that starts at the Earth's surface and includes the atmospheric boundary layer, troposphere, tropopause, and the lower section of the stratosphere. Mesoscale timescales last from less than a day to multiple weeks. The events typically of interest are thunderstorms, squall lines, fronts, precipitation bands in tropical and extratropical cyclones, and topographically generated weather systems such as mountain waves and sea and land breezes.
Synoptic scale
Synoptic scale meteorology predicts atmospheric changes at scales up to 1000 km and 10^5 seconds (28 days), in time and space. At the synoptic scale, the Coriolis acceleration acting on moving air masses (outside of the tropics) plays a dominant role in predictions. The phenomena typically described by synoptic meteorology include events such as extratropical cyclones, baroclinic troughs and ridges, frontal zones, and to some extent jet streams. All of these are typically given on weather maps for a specific time. The minimum horizontal scale of synoptic phenomena is limited to the spacing between surface observation stations.
Global scale
Global scale meteorology is the study of weather patterns related to the transport of heat from the tropics to the poles. Very large scale oscillations are of importance at this scale. These oscillations have time periods typically on the order of months, such as the Madden–Julian oscillation, or years, such as the El Niño–Southern Oscillation and the Pacific decadal oscillation. Global scale meteorology pushes into the range of climatology. The traditional definition of climate is pushed into larger timescales, and with the understanding of the longer time scale global oscillations, their effect on climate and weather disturbances can be included in predictions at synoptic and mesoscale timescales.
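A rough Python helper, using the approximate horizontal-scale boundaries given in the subsections above, shows how a phenomenon might be bucketed by scale; the cutoffs are indicative only, since the scales overlap in practice.

def meteorological_scale(horizontal_km: float) -> str:
    # Approximate boundaries from the text: microscale below about 1 km,
    # mesoscale from 1 km to 1000 km, larger phenomena synoptic to global.
    if horizontal_km < 1.0:
        return "microscale"
    if horizontal_km <= 1000.0:
        return "mesoscale"
    return "synoptic to global scale"

print(meteorological_scale(0.5))     # local turbulence
print(meteorological_scale(300.0))   # squall line
print(meteorological_scale(2000.0))  # extratropical cyclone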
Numerical Weather Prediction is a main focus in understanding air–sea interaction, tropical meteorology, atmospheric predictability, and tropospheric/stratospheric processes. The Naval Research Laboratory in Monterey, California, developed a global atmospheric model called Navy Operational Global Atmospheric Prediction System (NOGAPS). NOGAPS is run operationally at Fleet Numerical Meteorology and Oceanography Center for the United States Military. Many other global atmospheric models are run by national meteorological agencies.
Some meteorological principles
Boundary layer meteorology
Boundary layer meteorology is the study of processes in the air layer directly above Earth's surface, known as the atmospheric boundary layer (ABL). The effects of the surface – heating, cooling, and friction – cause turbulent mixing within the air layer. Significant movement of heat, matter, or momentum on time scales of less than a day is caused by turbulent motions. Boundary layer meteorology includes the study of all types of surface–atmosphere boundary, including ocean, lake, urban land and non-urban land surfaces.
Dynamic meteorology
Dynamic meteorology generally focuses on the fluid dynamics of the atmosphere. The idea of an air parcel is used to define the smallest element of the atmosphere, while ignoring the discrete molecular and chemical nature of the atmosphere. An air parcel is defined as an infinitesimal region in the fluid continuum of the atmosphere. The fundamental laws of fluid dynamics, thermodynamics, and motion are used to study the atmosphere. The physical quantities that characterize the state of the atmosphere are temperature, density, pressure, etc. These variables have unique values in the continuum.
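As a small worked example of the kind of relation used for an air parcel, the following Python sketch evaluates the ideal gas law for dry air, rho = p / (R_d T); R_d is the standard specific gas constant for dry air, and the sample numbers are the sea-level standard atmosphere.

R_D = 287.05  # specific gas constant for dry air, J kg^-1 K^-1

def air_density(pressure_pa: float, temperature_k: float) -> float:
    # Density of a dry-air parcel from the ideal gas law.
    return pressure_pa / (R_D * temperature_k)

print(air_density(101325.0, 288.15))  # about 1.225 kg/m^3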
Applications
Weather forecasting
Weather forecasting is the application of science and technology to predict the state of the atmosphere at a future time and given location. Humans have attempted to predict the weather informally for millennia and formally since at least the 19th century. Weather forecasts are made by collecting quantitative data about the current state of the atmosphere and using scientific understanding of atmospheric processes to project how the atmosphere will evolve.
Once an all-human endeavor based mainly upon changes in barometric pressure, current weather conditions, and sky condition, forecast models are now used to determine future conditions. Human input is still required to pick the best possible forecast model to base the forecast upon, which involves pattern recognition skills, teleconnections, knowledge of model performance, and knowledge of model biases. The chaotic nature of the atmosphere, the massive computational power required to solve the equations that describe the atmosphere, error involved in measuring the initial conditions, and an incomplete understanding of atmospheric processes mean that forecasts become less accurate as the difference between the current time and the time for which the forecast is being made (the range of the forecast) increases. The use of ensembles and model consensus helps narrow the error and pick the most likely outcome.
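A toy Python illustration of model consensus: several forecast values from different ensemble members are summarised by their mean and spread; the numbers are invented for the example.

import statistics

member_forecasts_c = [12.1, 13.4, 11.8, 12.9, 13.0]   # e.g. 2 m temperature from 5 members
mean = statistics.mean(member_forecasts_c)
spread = statistics.stdev(member_forecasts_c)
print(f"ensemble mean {mean:.1f} C, spread {spread:.1f} C")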
There are a variety of end uses to weather forecasts. Weather warnings are important forecasts because they are used to protect life and property. Forecasts based on temperature and precipitation are important to agriculture, and therefore to commodity traders within stock markets. Temperature forecasts are used by utility companies to estimate demand over coming days. On an everyday basis, people use weather forecasts to determine what to wear. Since outdoor activities are severely curtailed by heavy rain, snow, and wind chill, forecasts can be used to plan activities around these events, and to plan ahead and survive them.
Aviation meteorology
Aviation meteorology deals with the impact of weather on air traffic management. It is important for air crews to understand the implications of weather on their flight plan as well as their aircraft, as noted by the Aeronautical Information Manual:
The effects of ice on aircraft are cumulative—thrust is reduced, drag increases, lift lessens, and weight increases. The results are an increase in stall speed and a deterioration of aircraft performance. In extreme cases, 2 to 3 inches of ice can form on the leading edge of the airfoil in less than 5 minutes. It takes but 1/2 inch of ice to reduce the lifting power of some aircraft by 50 percent and increases the frictional drag by an equal percentage.
Agricultural meteorology
Meteorologists, soil scientists, agricultural hydrologists, and agronomists are people concerned with studying the effects of weather and climate on plant distribution, crop yield, water-use efficiency, phenology of plant and animal development, and the energy balance of managed and natural ecosystems. They are also interested in the role of vegetation in climate and weather.
Hydrometeorology
Hydrometeorology is the branch of meteorology that deals with the hydrologic cycle, the water budget, and the rainfall statistics of storms. A hydrometeorologist prepares and issues forecasts of accumulating (quantitative) precipitation, heavy rain, heavy snow, and highlights areas with the potential for flash flooding. Typically the range of knowledge that is required overlaps with climatology, mesoscale and synoptic meteorology, and other geosciences.
The multidisciplinary nature of the branch can result in technical challenges, since tools and solutions from each of the individual disciplines involved may behave slightly differently, be optimized for different hard- and software platforms and use different data formats. There are some initiatives – such as the DRIHM project – that are trying to address this issue.
Nuclear meteorology
Nuclear meteorology investigates the distribution of radioactive aerosols and gases in the atmosphere.
Maritime meteorology
Maritime meteorology deals with air and wave forecasts for ships operating at sea. Organizations such as the Ocean Prediction Center, Honolulu National Weather Service forecast office, United Kingdom Met Office, KNMI and JMA prepare high seas forecasts for the world's oceans.
Military meteorology
Military meteorology is the research and application of meteorology for military purposes. In the United States, the United States Navy's Commander, Naval Meteorology and Oceanography Command oversees meteorological efforts for the Navy and Marine Corps while the United States Air Force's Air Force Weather Agency is responsible for the Air Force and Army.
Environmental meteorology
Environmental meteorology mainly analyzes industrial pollution dispersion physically and chemically based on meteorological parameters such as temperature, humidity, wind, and various weather conditions.
Renewable energy
Meteorology applications in renewable energy include basic research, "exploration," and potential mapping of wind power and solar radiation for wind and solar energy.
See also
References
Further reading
Byers, Horace. General Meteorology. New York: McGraw-Hill, 1994.
Dictionaries and encyclopedias
History
External links
Please see weather forecasting for weather forecast sites.
Air Quality Meteorology – Online course that introduces the basic concepts of meteorology and air quality necessary to understand meteorological computer models. Written at a bachelor's degree level.
The GLOBE Program – (Global Learning and Observations to Benefit the Environment) An international environmental science and education program that links students, teachers, and the scientific research community in an effort to learn more about the environment through student data collection and observation.
Glossary of Meteorology – From the American Meteorological Society, an excellent reference of nomenclature, equations, and concepts for the more advanced reader.
JetStream – An Online School for Weather – National Weather Service
Learn About Meteorology – Australian Bureau of Meteorology
The Weather Guide – Weather Tutorials and News at About.com
Meteorology Education and Training (MetEd) – The COMET Program
NOAA Central Library – National Oceanic & Atmospheric Administration
The World Weather 2010 Project – The University of Illinois at Urbana–Champaign
Ogimet – online data from meteorological stations of the world, obtained through NOAA free services
National Center for Atmospheric Research Archives, documents the history of meteorology
Weather forecasting and Climate science – United Kingdom Meteorological Office
Meteorology, BBC Radio 4 discussion with Vladimir Janković, Richard Hamblyn and Liba Taub (In Our Time, 6 March 2003)
Virtual exhibition about meteorology on the digital library of Paris Observatory
Applied and interdisciplinary physics
Oceanography
Physical geography
Greek words and phrases | Meteorology | [
"Physics",
"Environmental_science"
] | 8,461 | [
"Oceanography",
"Hydrology",
"Meteorology",
"Applied and interdisciplinary physics"
] |
19,916 | https://en.wikipedia.org/wiki/Meitnerium | Meitnerium is a synthetic chemical element; it has symbol Mt and atomic number 109. It is an extremely radioactive synthetic element (an element not found in nature, but can be created in a laboratory). The most stable known isotope, meitnerium-278, has a half-life of 4.5 seconds, although the unconfirmed meitnerium-282 may have a longer half-life of 67 seconds. The element was first synthesized in August 1982 by the GSI Helmholtz Centre for Heavy Ion Research near Darmstadt, Germany, and it was named after Lise Meitner in 1997.
In the periodic table, meitnerium is a d-block transactinide element. It is a member of the 7th period and is placed in the group 9 elements, although no chemical experiments have yet been carried out to confirm that it behaves as the heavier homologue to iridium in group 9 as the seventh member of the 6d series of transition metals. Meitnerium is calculated to have properties similar to its lighter homologues, cobalt, rhodium, and iridium.
Introduction
History
Discovery
Meitnerium was first synthesized on August 29, 1982, by a German research team led by Peter Armbruster and Gottfried Münzenberg at the Institute for Heavy Ion Research (Gesellschaft für Schwerionenforschung) in Darmstadt. The team bombarded a target of bismuth-209 with accelerated nuclei of iron-58 and detected a single atom of the isotope meitnerium-266:
209Bi + 58Fe → 266Mt + n
This work was confirmed three years later at the Joint Institute for Nuclear Research at Dubna (then in the Soviet Union).
Naming
Using Mendeleev's nomenclature for unnamed and undiscovered elements, meitnerium should be known as eka-iridium. In 1979, during the Transfermium Wars (but before the synthesis of meitnerium), IUPAC published recommendations according to which the element was to be called unnilennium (with the corresponding symbol of Une), a systematic element name as a placeholder, until the element was discovered (and the discovery then confirmed) and a permanent name was decided on. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who either called it "element 109", with the symbol of E109, (109) or even simply 109, or used the proposed name "meitnerium".
The naming of meitnerium was discussed in the element naming controversy regarding the names of elements 104 to 109, but meitnerium was the only proposal and thus was never disputed. The name meitnerium (Mt) was suggested by the GSI team in September 1992 in honor of the Austrian physicist Lise Meitner, a co-discoverer of protactinium (with Otto Hahn), and one of the discoverers of nuclear fission. In 1994 the name was recommended by IUPAC, and was officially adopted in 1997. It is thus the only element named specifically after a non-mythological woman (curium being named for both Pierre and Marie Curie).
Isotopes
Meitnerium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Eight different isotopes of meitnerium have been reported with mass numbers 266, 268, 270, and 274–278, two of which, meitnerium-268 and meitnerium-270, have unconfirmed metastable states. A ninth isotope with mass number 282 is unconfirmed. Most of these decay predominantly through alpha decay, although some undergo spontaneous fission.
Stability and half-lives
All meitnerium isotopes are extremely unstable and radioactive; in general, heavier isotopes are more stable than the lighter. The most stable known meitnerium isotope, 278Mt, is also the heaviest known; it has a half-life of 4.5 seconds. The unconfirmed 282Mt is even heavier and appears to have a longer half-life of 67 seconds. With a half-life of 0.8 seconds, the next most stable known isotope is 270Mt. The isotopes 276Mt and 274Mt have half-lives of 0.62 and 0.64 seconds respectively.
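As a hedged arithmetic sketch, the surviving fraction of a radioactive sample after time t follows N/N0 = 2^(-t / t_half); the Python lines below use the 4.5-second half-life of 278Mt quoted above, bearing in mind that for the single atoms actually produced the decay is a probabilistic event rather than a smooth curve.

def surviving_fraction(t_seconds: float, half_life_seconds: float) -> float:
    # Exponential decay law expressed with the half-life.
    return 2.0 ** (-t_seconds / half_life_seconds)

print(surviving_fraction(9.0, 4.5))   # 0.25 after two half-lives
print(surviving_fraction(60.0, 4.5))  # roughly 1e-4 after one minute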
The isotope 277Mt, created as the final decay product of 293Ts for the first time in 2012, was observed to undergo spontaneous fission with a half-life of 5 milliseconds. Preliminary data analysis considered the possibility of this fission event instead originating from 277Hs, for it also has a half-life of a few milliseconds, and could be populated following undetected electron capture somewhere along the decay chain. This possibility was later deemed very unlikely based on observed decay energies of 281Ds and 281Rg and the short half-life of 277Mt, although there is still some uncertainty of the assignment. Regardless, the rapid fission of 277Mt and 277Hs is strongly suggestive of a region of instability for superheavy nuclei with N = 168–170. The existence of this region, characterized by a decrease in fission barrier height between the deformed shell closure at N = 162 and spherical shell closure at N = 184, is consistent with theoretical models.
Predicted properties
Other than nuclear properties, no properties of meitnerium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that meitnerium and its parents decay very quickly. Properties of meitnerium metal remain unknown and only predictions are available.
Chemical
Meitnerium is the seventh member of the 6d series of transition metals, and should be much like the platinum group metals. Calculations on its ionization potentials and atomic and ionic radii are similar to that of its lighter homologue iridium, thus implying that meitnerium's basic properties will resemble those of the other group 9 elements, cobalt, rhodium, and iridium.
Prediction of the probable chemical properties of meitnerium has not received much attention recently. Meitnerium is expected to be a noble metal. The standard electrode potential for the Mt3+/Mt couple is expected to be 0.8 V. Based on the most stable oxidation states of the lighter group 9 elements, the most stable oxidation states of meitnerium are predicted to be the +6, +3, and +1 states, with the +3 state being the most stable in aqueous solutions. In comparison, rhodium and iridium show a maximum oxidation state of +6, while the most stable states are +4 and +3 for iridium and +3 for rhodium. The oxidation state +9, represented only by iridium in [IrO4]+, might be possible for its congener meitnerium in the nonafluoride (MtF9) and the [MtO4]+ cation, although [IrO4]+ is expected to be more stable than these meitnerium compounds. The tetrahalides of meitnerium have also been predicted to have similar stabilities to those of iridium, thus also allowing a stable +4 state. It is further expected that the maximum oxidation states of elements from bohrium (element 107) to darmstadtium (element 110) may be stable in the gas phase but not in aqueous solution.
Physical and atomic
Meitnerium is expected to be a solid under normal conditions and assume a face-centered cubic crystal structure, similarly to its lighter congener iridium. It should be a very heavy metal with a density of around 27–28 g/cm3, which would be among the highest of any of the 118 known elements. Meitnerium is also predicted to be paramagnetic.
Theoreticians have predicted the covalent radius of meitnerium to be 6 to 10 pm larger than that of iridium. The atomic radius of meitnerium is expected to be around 128 pm.
Experimental chemistry
Meitnerium is the first element on the periodic table whose chemistry has not yet been investigated. Unambiguous determination of the chemical characteristics of meitnerium has yet to be established due to the short half-lives of meitnerium isotopes and a limited number of likely volatile compounds that could be studied on a very small scale. One of the few meitnerium compounds that are likely to be sufficiently volatile is meitnerium hexafluoride (MtF6), as its lighter homologue iridium hexafluoride (IrF6) is volatile above 60 °C and therefore the analogous compound of meitnerium might also be sufficiently volatile; a volatile octafluoride (MtF8) might also be possible. For chemical studies to be carried out on a transactinide, at least four atoms must be produced, the half-life of the isotope used must be at least 1 second, and the rate of production must be at least one atom per week. Even though the half-life of 278Mt, the most stable confirmed meitnerium isotope, is 4.5 seconds, long enough to perform chemical studies, another obstacle is the need to increase the rate of production of meitnerium isotopes and allow experiments to carry on for weeks or months so that statistically significant results can be obtained. Separation and detection must be carried out continuously to separate out the meitnerium isotopes and have automated systems experiment on the gas-phase and solution chemistry of meitnerium, as the yields for heavier elements are predicted to be smaller than those for lighter elements; some of the separation techniques used for bohrium and hassium could be reused. However, the experimental chemistry of meitnerium has not received as much attention as that of the heavier elements from copernicium to livermorium.
The Lawrence Berkeley National Laboratory attempted to synthesize the isotope 271Mt in 2002–2003 for a possible chemical investigation of meitnerium, because it was expected that it might be more stable than nearby isotopes due to having 162 neutrons, a magic number for deformed nuclei; its half-life was predicted to be a few seconds, long enough for a chemical investigation. However, no atoms of 271Mt were detected; this isotope of meitnerium is currently unknown.
An experiment determining the chemical properties of a transactinide would need to compare a compound of that transactinide with analogous compounds of some of its lighter homologues: for example, in the chemical characterization of hassium, hassium tetroxide (HsO4) was compared with the analogous osmium compound, osmium tetroxide (OsO4). In a preliminary step towards determining the chemical properties of meitnerium, the GSI attempted sublimation of the rhodium compounds rhodium(III) oxide (Rh2O3) and rhodium(III) chloride (RhCl3). However, macroscopic amounts of the oxide would not sublimate until 1000 °C and the chloride would not until 780 °C, and then only in the presence of carbon aerosol particles: these temperatures are far too high for such procedures to be used on meitnerium, as most of the current methods used for the investigation of the chemistry of superheavy elements do not work above 500 °C.
Following the 2014 successful synthesis of seaborgium hexacarbonyl, Sg(CO)6, studies were conducted with the stable transition metals of groups 7 through 9, suggesting that carbonyl formation could be extended to further probe the chemistries of the early 6d transition metals from rutherfordium to meitnerium inclusive. Nevertheless, the challenges of low half-lives and difficult production reactions make meitnerium difficult to access for radiochemists, though the isotopes 278Mt and 276Mt are long-lived enough for chemical research and may be produced in the decay chains of 294Ts and 288Mc respectively. 276Mt is likely more suitable, since producing tennessine requires a rare and rather short-lived berkelium target. The isotope 270Mt, observed in the decay chain of 278Nh with a half-life of 0.69 seconds, may also be sufficiently long-lived for chemical investigations, though a direct synthesis route leading to this isotope and more precise measurements of its decay properties would be required.
Notes
References
Bibliography
External links
Meitnerium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Chemical elements with face-centered cubic structure
Synthetic elements
Transition metals | Meitnerium | [
"Physics",
"Chemistry"
] | 2,557 | [
"Chemical elements",
"Synthetic materials",
"Synthetic elements",
"Radioactivity",
"Atoms",
"Matter"
] |
19,951 | https://en.wikipedia.org/wiki/Pressure%20measurement | Pressure measurement is the measurement of an applied force by a fluid (liquid or gas) on a surface. Pressure is typically measured in units of force per unit of surface area. Many techniques have been developed for the measurement of pressure and vacuum. Instruments used to measure and display pressure mechanically are called pressure gauges, vacuum gauges or compound gauges (vacuum & pressure). The widely used Bourdon gauge is a mechanical device, which both measures and indicates and is probably the best known type of gauge.
A vacuum gauge is used to measure pressures lower than the ambient atmospheric pressure, which is set as the zero point, in negative values (for instance, −1 bar or −760 mmHg equals total vacuum). Most gauges measure pressure relative to atmospheric pressure as the zero point, so this form of reading is simply referred to as "gauge pressure". However, anything greater than total vacuum is technically a form of pressure. For very low pressures, a gauge that uses total vacuum as the zero point reference must be used, giving pressure reading as an absolute pressure.
Other methods of pressure measurement involve sensors that can transmit the pressure reading to a remote indicator or control system (telemetry).
Absolute, gauge and differential pressures — zero reference
Everyday pressure measurements, such as for vehicle tire pressure, are usually made relative to ambient air pressure. In other cases measurements are made relative to a vacuum or to some other specific reference. When distinguishing between these zero references, the following terms are used:
Absolute pressure is zero-referenced against a perfect vacuum, using an absolute scale, so it is equal to gauge pressure plus atmospheric pressure. Absolute pressure sensors are used in applications where a constant reference is required, like for example, high-performance industrial applications such as monitoring vacuum pumps, liquid pressure measurement, industrial packaging, industrial process control and aviation inspection.
Gauge pressure is zero-referenced against ambient air pressure, so it is equal to absolute pressure minus atmospheric pressure. A tire pressure gauge is an example of gauge pressure measurement; when it indicates zero, then the pressure it is measuring is the same as the ambient pressure. Most sensors for measuring up to 50 bar are manufactured in this way, since otherwise the atmospheric pressure fluctuation (weather) is reflected as an error in the measurement result.
Differential pressure is the difference in pressure between two points. Differential pressure sensors are used to measure many properties, such as pressure drops across oil filters or air filters, fluid levels (by comparing the pressure above and below the liquid) or flow rates (by measuring the change in pressure across a restriction). Technically speaking, most pressure sensors are really differential pressure sensors; for example a gauge pressure sensor is merely a differential pressure sensor in which one side is open to the ambient atmosphere. A DP cell is a device that measures the differential pressure between two inputs.
The zero reference in use is usually implied by context, and these words are added only when clarification is needed. Tire pressure and blood pressure are gauge pressures by convention, while atmospheric pressures, deep vacuum pressures, and altimeter pressures must be absolute.
For most working fluids where a fluid exists in a closed system, gauge pressure measurement prevails. Pressure instruments connected to the system will indicate pressures relative to the current atmospheric pressure. The situation changes when extreme vacuum pressures are measured, then absolute pressures are typically used instead and measuring instruments used will be different.
Differential pressures are commonly used in industrial process systems. Differential pressure gauges have two inlet ports, each connected to one of the volumes whose pressure is to be monitored. In effect, such a gauge performs the mathematical operation of subtraction through mechanical means, obviating the need for an operator or control system to watch two separate gauges and determine the difference in readings.
Moderate vacuum pressure readings can be ambiguous without the proper context, as they may represent absolute pressure or gauge pressure without a negative sign. Thus a vacuum of 26 inHg gauge is equivalent to an absolute pressure of 4 inHg, calculated as 30 inHg (typical atmospheric pressure) − 26 inHg (gauge pressure).
Atmospheric pressure is typically about 100 kPa at sea level, but is variable with altitude and weather. If the absolute pressure of a fluid stays constant, the gauge pressure of the same fluid will vary as atmospheric pressure changes. For example, when a car drives up a mountain, the (gauge) tire pressure goes up because atmospheric pressure goes down. The absolute pressure in the tire is essentially unchanged.
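A minimal Python sketch of the zero-reference relationships described above, assuming a typical sea-level atmospheric pressure of 101.325 kPa as the default; the tire and mountain numbers are invented for the example.

SEA_LEVEL_KPA = 101.325

def absolute_from_gauge(p_gauge_kpa: float, p_atm_kpa: float = SEA_LEVEL_KPA) -> float:
    return p_gauge_kpa + p_atm_kpa

def gauge_from_absolute(p_abs_kpa: float, p_atm_kpa: float = SEA_LEVEL_KPA) -> float:
    return p_abs_kpa - p_atm_kpa

# A tire filled to 220 kPa gauge at sea level keeps the same absolute pressure,
# but shows a higher gauge reading where atmospheric pressure is only 85 kPa.
p_abs = absolute_from_gauge(220.0)
print(gauge_from_absolute(p_abs, p_atm_kpa=85.0))   # about 236 kPa gauge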
Using atmospheric pressure as reference is usually signified by a "g" for gauge after the pressure unit, e.g. 70 psig, which means that the pressure measured is the total pressure minus atmospheric pressure. There are two types of gauge reference pressure: vented gauge (vg) and sealed gauge (sg).
A vented-gauge pressure transmitter, for example, allows the outside air pressure to be exposed to the negative side of the pressure-sensing diaphragm, through a vented cable or a hole on the side of the device, so that it always measures the pressure referred to ambient barometric pressure. Thus a vented-gauge reference pressure sensor should always read zero pressure when the process pressure connection is held open to the air.
A sealed gauge reference is very similar, except that atmospheric pressure is sealed on the negative side of the diaphragm. This is usually adopted on high pressure ranges, such as hydraulics, where atmospheric pressure changes will have a negligible effect on the accuracy of the reading, so venting is not necessary. This also allows some manufacturers to provide secondary pressure containment as an extra precaution for pressure equipment safety if the burst pressure of the primary pressure sensing diaphragm is exceeded.
There is another way of creating a sealed gauge reference, and this is to seal a high vacuum on the reverse side of the sensing diaphragm. Then the output signal is offset, so the pressure sensor reads close to zero when measuring atmospheric pressure.
A sealed gauge reference pressure transducer will never read exactly zero because atmospheric pressure is always changing and the reference in this case is fixed at 1 bar.
To produce an absolute pressure sensor, the manufacturer seals a high vacuum behind the sensing diaphragm. If the process-pressure connection of an absolute-pressure transmitter is open to the air, it will read the actual barometric pressure.
A sealed pressure sensor is similar to a gauge pressure sensor except that it measures pressure relative to some fixed pressure rather than the ambient atmospheric pressure (which varies according to the location and the weather).
History
For much of human history, the pressure of gases like air was ignored, denied, or taken for granted, but as early as the 6th century BC, Greek philosopher Anaximenes of Miletus claimed that all things are made of air that is simply changed by varying levels of pressure. He could observe water evaporating, changing to a gas, and felt that this applied even to solid matter. More condensed air made colder, heavier objects, and expanded air made lighter, hotter objects. This was akin to how gases really do become less dense when warmer, more dense when cooler.
In the 17th century, Evangelista Torricelli conducted experiments with mercury that allowed him to measure the presence of air. He would dip a glass tube, closed at one end, into a bowl of mercury and raise the closed end up out of it, keeping the open end submerged. The weight of the mercury would pull it down, leaving a partial vacuum at the far end. This validated his belief that air/gas has mass, creating pressure on things around it. Previously, the more popular conclusion, even for Galileo, was that air was weightless and it is vacuum that provided force, as in a siphon. The discovery helped bring Torricelli to the conclusion:
This test, known as Torricelli's experiment, was essentially the first documented pressure gauge.
Blaise Pascal went further, having his brother-in-law try the experiment at different altitudes on a mountain, and finding indeed that the farther down in the ocean of atmosphere, the higher the pressure.
Units
The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N·m−2 or kg·m−1·s−2). This special name for the unit was added in 1971; before that, pressure in SI was expressed in units such as N·m−2. When indicated, the zero reference is stated in parentheses following the unit, for example 101 kPa (abs). The pound per square inch (psi) is still in widespread use in the US and Canada, for measuring, for instance, tire pressure. A letter is often appended to the psi unit to indicate the measurement's zero reference; psia for absolute, psig for gauge, psid for differential, although this practice is discouraged by the NIST.
Because pressure was once commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., inches of water). Manometric measurement is the subject of pressure head calculations. The most common choices for a manometer's fluid are mercury (Hg) and water; water is nontoxic and readily available, while mercury's density allows for a shorter column (and so a smaller manometer) to measure a given pressure. The abbreviation "W.C." or the words "water column" are often printed on gauges and measurements that use water for the manometer.
Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. So measurements in "millimetres of mercury" or "inches of mercury" can be converted to SI units as long as attention is paid to the local factors of fluid density and gravity. Temperature fluctuations change the value of fluid density, while location can affect gravity.
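Manometric readings rest on the pressure-head relation p = rho g h. The Python sketch below converts a column height to pascals using conventional values for mercury density, water density and standard gravity; as noted above, the real values vary slightly with temperature and location.

STANDARD_GRAVITY = 9.80665  # m/s^2

def column_pressure_pa(height_m: float, density_kg_m3: float,
                       g: float = STANDARD_GRAVITY) -> float:
    # Pressure exerted by a fluid column of the given height and density.
    return density_kg_m3 * g * height_m

print(column_pressure_pa(0.760, 13595.1))   # 760 mmHg, about 101325 Pa
print(column_pressure_pa(0.0254, 1000.0))   # 1 inch of water, about 249 Pa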
Although no longer preferred, these manometric units are still encountered in many fields. Blood pressure is measured in millimetres of mercury (see torr) in most of the world, central venous pressure and lung pressures in centimeters of water are still common, as in settings for CPAP machines. Natural gas pipeline pressures are measured in inches of water, expressed as "inches W.C."
Underwater divers use manometric units: the ambient pressure is measured in units of metres sea water (msw), which is defined as equal to one tenth of a bar. The unit used in the US is the foot sea water (fsw), based on standard gravity and a sea-water density of 64 lb/ft3. According to the US Navy Diving Manual, one fsw equals 0.30643 msw, though elsewhere it states that 33 fsw is 14.7 psi (one atmosphere), which gives one fsw equal to about 0.445 psi. The msw and fsw are the conventional units for measurement of diver pressure exposure used in decompression tables and the unit of calibration for pneumofathometers and hyperbaric chamber pressure gauges. Both msw and fsw are measured relative to normal atmospheric pressure.
In vacuum systems, the units torr (millimeter of mercury), micron (micrometer of mercury), and inch of mercury (inHg) are most commonly used. Torr and micron usually indicates an absolute pressure, while inHg usually indicates a gauge pressure.
Atmospheric pressures are usually stated using hectopascal (hPa), kilopascal (kPa), millibar (mbar) or atmospheres (atm). In American and Canadian engineering, stress is often measured in kips per square inch (ksi). Stress is not a true pressure since it is not scalar. In the cgs system the unit of pressure was the barye (ba), equal to 1 dyn·cm−2. In the mts system, the unit of pressure was the pieze, equal to 1 sthene per square metre.
Many other hybrid units are used such as mmHg/cm2 or grams-force/cm2 (sometimes as kg/cm2 without properly identifying the force units). Using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as a unit of force is prohibited in SI; the unit of force in SI is the newton (N).
Static and dynamic pressure
Static pressure is uniform in all directions, so pressure measurements are independent of direction in an immovable (static) fluid. Flow, however, applies additional pressure on surfaces perpendicular to the flow direction, while having little impact on surfaces parallel to the flow direction. This directional component of pressure in a moving (dynamic) fluid is called dynamic pressure. An instrument facing the flow direction measures the sum of the static and dynamic pressures; this measurement is called the total pressure or stagnation pressure. Since dynamic pressure is referenced to static pressure, it is neither gauge nor absolute; it is a differential pressure.
While static gauge pressure is of primary importance to determining net loads on pipe walls, dynamic pressure is used to measure flow rates and airspeed. Dynamic pressure can be measured by taking the differential pressure between instruments parallel and perpendicular to the flow. Pitot-static tubes, for example perform this measurement on airplanes to determine airspeed. The presence of the measuring instrument inevitably acts to divert flow and create turbulence, so its shape is critical to accuracy and the calibration curves are often non-linear.
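For low-speed (incompressible) flow, the Pitot-static relation gives dynamic pressure q = total minus static = (1/2) rho v^2, so speed can be recovered as v = sqrt(2 q / rho). The Python sketch below assumes sea-level air density and ignores the compressibility and calibration corrections that real airspeed systems apply.

import math

def speed_from_dynamic_pressure(q_pa: float, rho_kg_m3: float = 1.225) -> float:
    # Invert q = 0.5 * rho * v**2 for the flow speed.
    return math.sqrt(2.0 * q_pa / rho_kg_m3)

print(speed_from_dynamic_pressure(2000.0))   # about 57 m/s for a 2 kPa differential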
Example: A water tank has an absolute pressure of 10 atm, and the atmospheric pressure is 1 atm. What is the gauge pressure?
P_gauge = P_absolute - P_atmospheric
= 10 atm - 1 atm
= 9 atm
Therefore, the gauge pressure is 9 atm.
Instruments
A pressure sensor is a device for pressure measurement of gases or liquids.
Pressure sensors can alternatively be called pressure transducers, pressure transmitters, pressure senders, pressure indicators, piezometers and manometers, among other names.
Pressure is an expression of the force required to stop a fluid from expanding, and is usually stated in terms of force per unit area. A pressure sensor usually acts as a transducer; it generates a signal as a function of the pressure imposed.
Pressure sensors can vary drastically in technology, design, performance, application suitability and cost. A conservative estimate would be that there may be over 50 technologies and at least 300 companies making pressure sensors worldwide. There is also a category of pressure sensors that are designed to measure in a dynamic mode for capturing very high speed changes in pressure. Example applications for this type of sensor would be in the measuring of combustion pressure in an engine cylinder or in a gas turbine. These sensors are commonly manufactured out of piezoelectric materials such as quartz.
Some pressure sensors are pressure switches, which turn on or off at a particular pressure. For example, a water pump can be controlled by a pressure switch so that it starts when water is released from the system, reducing the pressure in a reservoir.
Pressure range, sensitivity, dynamic response and cost all vary by several orders of magnitude from one instrument design to the next. The oldest type is the liquid column (a vertical tube filled with mercury) manometer invented by Evangelista Torricelli in 1643. The U-Tube was invented by Christiaan Huygens in 1661.
There are two basic categories of analog pressure sensors: force collector and other types.
Force collector types These types of electronic pressure sensors generally use a force collector (such as a diaphragm, piston, Bourdon tube, or bellows) to measure strain (or deflection) due to applied force over an area (pressure).
Piezoresistive strain gauge: Uses the piezoresistive effect of bonded or formed strain gauges to detect strain due to an applied pressure, electrical resistance increasing as pressure deforms the material. Common technology types are silicon (monocrystalline), polysilicon thin film, bonded metal foil, thick film, silicon-on-sapphire and sputtered thin film. Generally, the strain gauges are connected to form a Wheatstone bridge circuit to maximize the output of the sensor and to reduce sensitivity to errors (a rough numerical sketch of such a bridge readout appears after this list). This is the most commonly employed sensing technology for general purpose pressure measurement.
Capacitive: Uses a diaphragm and pressure cavity to create a variable capacitor to detect strain due to applied pressure, capacitance decreasing as pressure deforms the diaphragm. Common technologies use metal, ceramic, and silicon diaphragms. Capacitive pressure sensors are being integrated into CMOS technology and it is being explored if thin 2D materials can be used as diaphragm material.
Electromagnetic: Measures the displacement of a diaphragm by means of changes in inductance (reluctance), linear variable differential transformer (LVDT), Hall effect, or by eddy current principle.
Piezoelectric: Uses the piezoelectric effect in certain materials such as quartz to measure the strain upon the sensing mechanism due to pressure. This technology is commonly employed for the measurement of highly dynamic pressures. As the basic principle is dynamic, no static pressures can be measured with piezoelectric sensors.
Strain gauge: Strain gauge based pressure sensors also use a pressure-sensitive element onto which metal strain gauges are glued or thin-film gauges are applied by sputtering. This measuring element can be a diaphragm, or, for metal foil gauges, a measuring body of can-type design can be used. The big advantages of this monolithic can-type design are an improved rigidity and the capability to measure very high pressures, up to 15,000 bar. The electrical connection is normally made via a Wheatstone bridge, which allows good amplification of the signal and precise, stable measuring results.
Optical: Techniques include the use of the physical change of an optical fiber to detect strain due to applied pressure. A common example of this type utilizes Fiber Bragg Gratings. This technology is employed in challenging applications where the measurement may be highly remote, under high temperature, or may benefit from technologies inherently immune to electromagnetic interference. Another analogous technique utilizes an elastic film constructed in layers that can change reflected wavelengths according to the applied pressure (strain).
Potentiometric: Uses the motion of a wiper along a resistive mechanism to detect the strain caused by applied pressure.
Force balancing: Force-balanced fused quartz Bourdon tubes use a spiral Bourdon tube to exert force on a pivoting armature carrying a mirror. The reflection of a light beam from the mirror senses the angular displacement, and current is applied to electromagnets on the armature to balance the force from the tube and bring the angular displacement to zero; the current applied to the coils is used as the measurement. Because of the extremely stable and repeatable mechanical and thermal properties of fused quartz, and because force balancing eliminates most non-linear effects, these sensors can be accurate to around 1 ppm of full scale. Because the extremely fine fused quartz structures are made by hand and require expert skill to construct, these sensors are generally limited to scientific and calibration purposes. Non-force-balanced sensors have lower accuracy, since reading the angular displacement cannot be done with the same precision as a force-balanced measurement; although easier to construct because of their larger size, they are no longer used.
Other types
These types of electronic pressure sensors use other properties (such as density) to infer the pressure of a gas or liquid.
Resonant: Uses the changes in resonant frequency in a sensing mechanism to measure stress, or changes in gas density, caused by applied pressure. This technology may be used in conjunction with a force collector, such as those in the category above. Alternatively, resonant technology may be employed by exposing the resonating element itself to the media, whereby the resonant frequency is dependent upon the density of the media. Sensors have been made out of vibrating wire, vibrating cylinders, quartz, and silicon MEMS. Generally, this technology is considered to provide very stable readings over time. The squeeze-film pressure sensor is a type of MEMS resonant pressure sensor that operates by a thin membrane that compresses a thin film of gas at high frequency. Since the compressibility and stiffness of the gas film are pressure dependent, the resonance frequency of the squeeze-film pressure sensor is used as a measure of the gas pressure.
Thermal: Uses the changes in thermal conductivity of a gas due to density changes to measure pressure. A common example of this type is the Pirani gauge.
Ionization: Measures the flow of charged gas particles (ions) which varies due to density changes to measure pressure. Common examples are the Hot and Cold Cathode gauges.
A pressure sensor, a resonant quartz crystal strain gauge with a Bourdon tube force collector, is the critical sensor of DART (Deep-ocean Assessment and Reporting of Tsunamis). DART detects tsunami waves from the bottom of the open ocean. It has a pressure resolution of approximately 1 mm of water when measuring pressure at a depth of several kilometers.
Hydrostatic
Hydrostatic gauges (such as the mercury column manometer) compare pressure to the hydrostatic force per unit area at the base of a column of fluid. Hydrostatic gauge measurements are independent of the type of gas being measured, and can be designed to have a very linear calibration. They have poor dynamic response.
Piston
Piston-type gauges counterbalance the pressure of a fluid with a spring (for example tire-pressure gauges of comparatively low accuracy) or a solid weight, in which case it is known as a deadweight tester and may be used for calibration of other gauges.
Liquid column (manometer)
Liquid-column gauges consist of a column of liquid in a tube whose ends are exposed to different pressures. The column will rise or fall until its weight (a force applied due to gravity) is in equilibrium with the pressure differential between the two ends of the tube (a force applied due to fluid pressure). A very simple version is a U-shaped tube half-full of liquid, one side of which is connected to the region of interest while the reference pressure (which might be the atmospheric pressure or a vacuum) is applied to the other. The difference in liquid levels represents the applied pressure. The pressure exerted by a column of fluid of height h and density ρ is given by the hydrostatic pressure equation, P = hgρ. Therefore, the pressure difference between the applied pressure Pa and the reference pressure P0 in a U-tube manometer can be found by solving Pa − P0 = hgρ. In other words, since the liquid is static, the pressure on either end of the liquid column must be balanced, and so Pa = P0 + hgρ.
In most liquid-column measurements, the result of the measurement is the height h, expressed typically in mm, cm, or inches. The h is also known as the pressure head. When expressed as a pressure head, pressure is specified in units of length and the measurement fluid must be specified. When accuracy is critical, the temperature of the measurement fluid must likewise be specified, because liquid density is a function of temperature. So, for example, pressure head might be written "742.2 mmHg" or "4.2 inH2O at 59 °F" for measurements taken with mercury or water as the manometric fluid respectively. The word "gauge" or "vacuum" may be added to such a measurement to distinguish between a pressure above or below the atmospheric pressure. Both mm of mercury and inches of water are common pressure heads, which can be converted to S.I. units of pressure using unit conversion and the above formulas.
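To make the head-to-pressure conversion concrete, here is a hedged Python sketch of P = hgρ; the density values and fluid names are nominal assumptions rather than figures from the text, and a real conversion would correct the density for the stated fluid temperature.

```python
# Hedged sketch: converting a liquid-column reading (pressure head) to pascals
# via P = h * g * rho. Densities are nominal, illustrative values.
G = 9.80665                  # standard gravity, m/s^2
RHO = {"mercury": 13_534.0,  # kg/m^3, near room temperature
       "water":      999.0}  # kg/m^3, near 15 C (59 F)

def head_to_pascal(height_m: float, fluid: str) -> float:
    """Pressure exerted by a fluid column of the given height."""
    return height_m * G * RHO[fluid]

print(head_to_pascal(0.7422, "mercury"))  # ~742.2 mmHg, roughly 98.5 kPa
print(head_to_pascal(0.1067, "water"))    # ~4.2 inH2O, roughly 1.05 kPa
```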
If the fluid being measured is significantly dense, hydrostatic corrections may have to be made for the height between the moving surface of the manometer working fluid and the location where the pressure measurement is desired, except when measuring differential pressure of a fluid (for example, across an orifice plate or venturi), in which case the density ρ should be corrected by subtracting the density of the fluid being measured.
Although any fluid can be used, mercury is preferred for its high density (13.534 g/cm3) and low vapour pressure. Its convex meniscus is advantageous since this means there will be no pressure errors from wetting the glass, though under exceptionally clean circumstances, the mercury will stick to glass and the barometer may become stuck (the mercury can sustain a negative absolute pressure) even under a strong vacuum. For low pressure differences, light oil or water are commonly used (the latter giving rise to units of measurement such as inches water gauge and millimetres H2O). Liquid-column pressure gauges have a highly linear calibration. They have poor dynamic response because the fluid in the column may react slowly to a pressure change.
When measuring vacuum, the working liquid may evaporate and contaminate the vacuum if its vapor pressure is too high. When measuring liquid pressure, a loop filled with gas or a light fluid can isolate the liquids to prevent them from mixing, but this can be unnecessary, for example, when mercury is used as the manometer fluid to measure differential pressure of a fluid such as water. Simple hydrostatic gauges can measure pressures ranging from a few torr (a few hundred pascals) to a few atmospheres (a few hundred kilopascals).
A single-limb liquid-column manometer has a larger reservoir instead of one side of the U-tube and has a scale beside the narrower column. The column may be inclined to further amplify the liquid movement. Based on their use and structure, the following types of manometers are used:
Simple manometer
Micromanometer
Differential manometer
Inverted differential manometer
McLeod gauge
A McLeod gauge isolates a sample of gas and compresses it in a modified mercury manometer until the pressure is a few millimetres of mercury. The technique is very slow and unsuited to continual monitoring, but is capable of good accuracy. Unlike other manometer gauges, the McLeod gauge reading is dependent on the composition of the gas, since the interpretation relies on the sample compressing as an ideal gas. Due to the compression process, the McLeod gauge completely ignores partial pressures from non-ideal vapors that condense, such as pump oils, mercury, and even water if compressed enough.
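As a hedged illustration of the ideal-gas interpretation described above, the following Python sketch applies Boyle's law to the trapped sample; the bulb volume, capillary cross-section and column reading are purely illustrative assumptions.

```python
# Hedged sketch of the McLeod gauge relation via Boyle's law (ideal gas assumed).
# Working in millimetres of mercury: a sample of volume V at unknown pressure p
# is compressed into a capillary of cross-section A until the mercury difference
# reads h mm, so p * V = (p + h) * (A * h). All numbers are illustrative.

def mcleod_pressure_mmhg(V_mm3: float, A_mm2: float, h_mm: float) -> float:
    """Original sample pressure in mmHg from the compressed-column reading."""
    v = A_mm2 * h_mm                 # compressed volume, mm^3
    return (h_mm * v) / (V_mm3 - v)  # exact form; ~ A*h^2/V when V >> v

print(mcleod_pressure_mmhg(V_mm3=100_000, A_mm2=1.0, h_mm=5.0))  # ~2.5e-4 mmHg
```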
0.1 mPa is the lowest direct measurement of pressure that is possible with current technology. Other vacuum gauges can measure lower pressures, but only indirectly by measurement of other pressure-dependent properties. These indirect measurements must be calibrated to SI units by a direct measurement, most commonly a McLeod gauge.
Aneroid
Aneroid gauges are based on a metallic pressure-sensing element that flexes elastically under the effect of a pressure difference across the element. "Aneroid" means "without fluid", and the term originally distinguished these gauges from the hydrostatic gauges described above. However, aneroid gauges can be used to measure the pressure of a liquid as well as a gas, and they are not the only type of gauge that can operate without fluid. For this reason, they are often called mechanical gauges in modern language. Aneroid gauges are not dependent on the type of gas being measured, unlike thermal and ionization gauges, and are less likely to contaminate the system than hydrostatic gauges. The pressure sensing element may be a Bourdon tube, a diaphragm, a capsule, or a set of bellows, which will change shape in response to the pressure of the region in question. The deflection of the pressure sensing element may be read by a linkage connected to a needle, or it may be read by a secondary transducer. The most common secondary transducers in modern vacuum gauges measure a change in capacitance due to the mechanical deflection. Gauges that rely on a change in capacitance are often referred to as capacitance manometers.
Bourdon tube
The Bourdon pressure gauge uses the principle that a flattened tube tends to straighten or regain its circular form in cross-section when pressurized. (A party horn illustrates this principle.) This change in cross-section may be hardly noticeable, involving moderate stresses within the elastic range of easily workable materials. The strain of the material of the tube is magnified by forming the tube into a C shape or even a helix, such that the entire tube tends to straighten out or uncoil elastically as it is pressurized. Eugène Bourdon patented his gauge in France in 1849, and it was widely adopted because of its superior simplicity, linearity, and accuracy; Bourdon is now part of the Baumer group and still manufactures Bourdon tube gauges in France. Edward Ashcroft purchased Bourdon's American patent rights in 1852 and became a major manufacturer of gauges. Also in 1849, Bernard Schaeffer in Magdeburg, Germany, patented a successful diaphragm (see below) pressure gauge, which, together with the Bourdon gauge, revolutionized pressure measurement in industry. In 1875, after Bourdon's patents expired, the company Schaeffer and Budenberg also manufactured Bourdon tube gauges.
In practice, a flattened thin-wall, closed-end tube is connected at the hollow end to a fixed pipe containing the fluid pressure to be measured. As the pressure increases, the closed end moves in an arc, and this motion is converted into the rotation of a (segment of a) gear by a connecting link that is usually adjustable. A small-diameter pinion gear is on the pointer shaft, so the motion is magnified further by the gear ratio. The positioning of the indicator card behind the pointer, the initial pointer shaft position, the linkage length and initial position, all provide means to calibrate the pointer to indicate the desired range of pressure for variations in the behavior of the Bourdon tube itself. Differential pressure can be measured by gauges containing two different Bourdon tubes, with connecting linkages (but is more usually measured via diaphragms or bellows and a balance system).
Bourdon tubes measure gauge pressure, relative to ambient atmospheric pressure, as opposed to absolute pressure; vacuum is sensed as a reverse motion. Some aneroid barometers use Bourdon tubes closed at both ends (but most use diaphragms or capsules, see below). When the measured pressure is rapidly pulsing, such as when the gauge is near a reciprocating pump, an orifice restriction in the connecting pipe is frequently used to avoid unnecessary wear on the gears and provide an average reading; when the whole gauge is subject to mechanical vibration, the case (including the pointer and dial) can be filled with an oil or glycerin. Typical high-quality modern gauges provide an accuracy of ±1% of span (nominal diameter 100 mm, Class 1 EN 837-1), and a special high-accuracy gauge can be as accurate as 0.1% of full scale.
Force-balanced fused quartz Bourdon tube sensors work on the same principle, but use the reflection of a beam of light from a mirror to sense the angular displacement; current is applied to electromagnets to balance the force of the tube and bring the angular displacement back to zero, and the current applied to the coils is used as the measurement. Because of the extremely stable and repeatable mechanical and thermal properties of quartz, and because force balancing eliminates nearly all physical movement, these sensors can be accurate to around 1 ppm of full scale. Because the extremely fine fused quartz structures must be made by hand, these sensors are generally limited to scientific and calibration purposes.
In the following illustrations of a compound gauge (vacuum and gauge pressure), the case and window have been removed to show only the dial, pointer and process connection. This particular gauge is a combination vacuum and pressure gauge used for automotive diagnosis:
The left side of the face, used for measuring vacuum, is calibrated in inches of mercury on its outer scale and centimetres of mercury on its inner scale.
The right portion of the face is used to measure fuel pump pressure or turbo boost and is scaled in pounds per square inch on its outer scale and kg/cm2 on its inner scale.
Mechanical details include stationary and moving parts.
Stationary parts:
Moving parts:
Stationary end of Bourdon tube. This communicates with the inlet pipe through the receiver block.
Moving end of Bourdon tube. This end is sealed.
Pivot and pivot pin
Link joining pivot pin to lever (5) with pins to allow joint rotation
Lever, an extension of the sector gear (7)
Sector gear axle pin
Sector gear
Indicator needle axle. This has a spur gear that engages the sector gear (7) and extends through the face to drive the indicator needle. Due to the short distance between the lever arm link boss and the pivot pin and the difference between the effective radius of the sector gear and that of the spur gear, any motion of the Bourdon tube is greatly amplified. A small motion of the tube results in a large motion of the indicator needle.
Hair spring to preload the gear train to eliminate gear lash and hysteresis
Diaphragm (membrane)
A second type of aneroid gauge uses deflection of a flexible membrane that separates regions of different pressure. The amount of deflection is repeatable for known pressures so the pressure can be determined by using calibration. The deformation of a thin diaphragm is dependent on the difference in pressure between its two faces. The reference face can be open to atmosphere to measure gauge pressure, open to a second port to measure differential pressure, or can be sealed against a vacuum or other fixed reference pressure to measure absolute pressure. The deformation can be measured using mechanical, optical or capacitive techniques. Ceramic and metallic diaphragms are used.
The useful range is above 10−2 Torr (roughly 1 Pa).
For absolute measurements, welded pressure capsules with diaphragms on either side are often used.
Membrane shapes include:
Flat
Corrugated
Flattened tube
Capsule
Bellows
In gauges intended to sense small pressures or pressure differences, or when an absolute pressure must be measured, the gear train and needle may be driven by an enclosed and sealed bellows chamber, called an aneroid. (Early barometers used a column of liquid such as water or the liquid metal mercury suspended by a vacuum.) This bellows configuration is used in aneroid barometers (barometers with an indicating needle and dial card), altimeters, altitude-recording barographs, and the altitude telemetry instruments used in weather balloon radiosondes. These devices use the sealed chamber as a reference pressure and are driven by the external pressure. Other sensitive aircraft instruments such as air speed indicators and rate-of-climb indicators (variometers) have connections both to the internal part of the aneroid chamber and to an external enclosing chamber.
Magnetic coupling
These gauges use the attraction of two magnets to translate differential pressure into motion of a dial pointer. As differential pressure increases, a magnet attached to either a piston or rubber diaphragm moves. A rotary magnet that is attached to a pointer then moves in unison. To create different pressure ranges, the spring rate can be increased or decreased.
Spinning-rotor gauge
The spinning-rotor gauge works by measuring how a rotating ball is slowed by the viscosity of the gas being measured. The ball is made of steel and is magnetically levitated inside a steel tube closed at one end and exposed to the gas to be measured at the other. The ball is brought up to speed (about 2500 or 3800 rad/s), and the deceleration rate is measured after switching off the drive, by electromagnetic transducers. The range of the instrument is 10−5 to 102 Pa (103 Pa with less accuracy). It is accurate and stable enough to be used as a secondary standard. In recent years this type of gauge has become much more user-friendly and easier to operate; in the past the instrument was known for requiring some skill and knowledge to use correctly. For high-accuracy measurements various corrections must be applied, and the ball must be spun at a pressure well below the intended measurement pressure for five hours before use. It is most useful in calibration and research laboratories where high accuracy is required and qualified technicians are available. Insulation vacuum monitoring of cryogenic liquids is also a well-suited application for this system. With an inexpensive, long-term stable, weldable sensor that can be separated from the more costly electronics, it is a good fit for all static vacuums.
Electronic pressure instruments
Metal strain gauge
The strain gauge is generally glued (foil strain gauge) or deposited (thin-film strain gauge) onto a membrane. Membrane deflection due to pressure causes a resistance change in the strain gauge which can be electronically measured.
Piezoresistive strain gauge
Uses the piezoresistive effect of bonded or formed strain gauges to detect strain due to applied pressure.
Piezoresistive silicon pressure sensor
The sensor is generally a temperature-compensated, piezoresistive silicon pressure sensor chosen for its excellent performance and long-term stability. Integral temperature compensation is provided over a range of 0–50 °C using laser-trimmed resistors. An additional laser-trimmed resistor is included to normalize pressure-sensitivity variations by programming the gain of an external differential amplifier. This provides good sensitivity and long-term stability. The two ports of the sensor apply pressure to the same single transducer; see the pressure flow diagram below.
This is an over-simplified description, but it conveys the fundamental design of the internal ports in the sensor. The important item to note is the diaphragm, as this is the sensing element itself. It is slightly convex in shape (highly exaggerated in the drawing); this is important, as it affects the accuracy of the sensor in use.
The shape of the sensor is important because it is calibrated to work in the direction of air flow shown by the red arrows. This is normal operation for the pressure sensor, providing a positive reading on the display of the digital pressure meter. Applying pressure in the reverse direction can induce errors in the results, as the moving air pressure tries to force the diaphragm in the opposite direction. The errors induced by this are small but can be significant, so it is preferable to ensure that the more positive pressure is always applied to the positive (+ve) port and the lower pressure is applied to the negative (-ve) port for normal gauge-pressure applications. The same applies to measuring the difference between two vacuums: the larger vacuum should always be applied to the negative (-ve) port.
The measurement of pressure via the Wheatstone bridge works as follows.
The effective electrical model of the transducer, together with a basic signal conditioning circuit, is shown in the application schematic. The pressure sensor is a fully active Wheatstone bridge which has been temperature compensated and offset adjusted by means of thick-film, laser-trimmed resistors. The excitation to the bridge is applied via a constant current. The low-level bridge output is at +O and -O, and the amplified span is set by the gain-programming resistor (r). The electrical design is microprocessor controlled, which allows for calibration and additional user functions such as scale selection, data hold, zero and filter functions, and a record function that stores and displays maximum and minimum values.
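As a hedged sketch only, the conversion from a conditioned bridge output to a pressure value can be illustrated with a simple two-point linear calibration; the sensitivity, offset and names below are assumptions, not values from any particular instrument.

```python
# Hedged sketch: turning a piezoresistive bridge reading into a pressure value.
# Real instruments use laser-trimmed resistors and microprocessor calibration;
# here a simple zero/span linear calibration stands in for that.
SENSITIVITY_MV_PER_KPA = 0.20   # amplified output per unit pressure (assumed)
OFFSET_MV = 1.5                 # output with both ports at equal pressure (assumed)

def bridge_mv_to_kpa(output_mv: float) -> float:
    """Differential pressure implied by the conditioned bridge output."""
    return (output_mv - OFFSET_MV) / SENSITIVITY_MV_PER_KPA

print(bridge_mv_to_kpa(21.5))   # -> 100.0 kPa of differential pressure
```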
Capacitive
Uses a diaphragm and pressure cavity to create a variable capacitor to detect strain due to applied pressure.
Magnetic
Measures the displacement of a diaphragm by means of changes in inductance (reluctance), LVDT, Hall effect, or by eddy current principle.
Piezoelectric
Uses the piezoelectric effect in certain materials such as quartz to measure the strain upon the sensing mechanism due to pressure.
Optical
Uses the physical change of an optical fiber to detect strain due to applied pressure.
Potentiometric
Uses the motion of a wiper along a resistive mechanism to detect the strain caused by applied pressure.
Resonant
Uses the changes in resonant frequency in a sensing mechanism to measure stress, or changes in gas density, caused by applied pressure.
Thermal conductivity
Generally, as a real gas increases in density (which may indicate an increase in pressure), its ability to conduct heat increases. In this type of gauge, a wire filament is heated by running current through it. A thermocouple or resistance thermometer (RTD) can then be used to measure the temperature of the filament. This temperature is dependent on the rate at which the filament loses heat to the surrounding gas, and therefore on the thermal conductivity. A common variant is the Pirani gauge, which uses a single platinum filament as both the heated element and the RTD. These gauges are accurate from 10−3 Torr to 10 Torr, but their calibration is sensitive to the chemical composition of the gases being measured.
Pirani (one wire)
A Pirani gauge consists of a metal wire open to the pressure being measured. The wire is heated by a current flowing through it and cooled by the gas surrounding it. If the gas pressure is reduced, the cooling effect will decrease, hence the equilibrium temperature of the wire will increase. The resistance of the wire is a function of its temperature: by measuring the voltage across the wire and the current flowing through it, the resistance (and so the gas pressure) can be determined. This type of gauge was invented by Marcello Pirani.
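The following Python sketch illustrates the Pirani principle in a hedged way: resistance from measured voltage and current, then pressure from a calibration table. The calibration pairs are fictitious, and the gas-specific nature of the calibration noted above is ignored.

```python
# Hedged sketch of the Pirani principle: measure the hot wire's resistance from
# voltage and current, then look the pressure up in a (purely illustrative)
# calibration table. Real gauges calibrate per gas species.
from bisect import bisect_left

def wire_resistance(voltage_v: float, current_a: float) -> float:
    return voltage_v / current_a

# (resistance in ohms, pressure in Pa) pairs -- fictitious calibration data
CAL = [(12.0, 100.0), (13.0, 10.0), (14.0, 1.0), (15.0, 0.1)]

def pressure_from_resistance(r_ohm: float) -> float:
    """Piecewise-linear interpolation over the calibration table."""
    rs = [r for r, _ in CAL]
    i = min(max(bisect_left(rs, r_ohm), 1), len(CAL) - 1)
    (r0, p0), (r1, p1) = CAL[i - 1], CAL[i]
    return p0 + (p1 - p0) * (r_ohm - r0) / (r1 - r0)

print(pressure_from_resistance(wire_resistance(0.675, 0.05)))  # R = 13.5 ohm
```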
Two-wire
In two-wire gauges, one wire coil is used as a heater, and the other is used to measure temperature due to convection. Thermocouple gauges and thermistor gauges work in this manner using a thermocouple or thermistor, respectively, to measure the temperature of the heated wire.
Ionization gauge
Ionization gauges are the most sensitive gauges for very low pressures (also referred to as hard or high vacuum). They sense pressure indirectly by measuring the electrical ions produced when the gas is bombarded with electrons. Fewer ions will be produced by lower density gases. The calibration of an ion gauge is unstable and dependent on the nature of the gases being measured, which is not always known. They can be calibrated against a McLeod gauge which is much more stable and independent of gas chemistry.
Thermionic emission generates electrons, which collide with gas atoms and generate positive ions. The ions are attracted to a suitably biased electrode known as the collector. The current in the collector is proportional to the rate of ionization, which is a function of the pressure in the system. Hence, measuring the collector current gives the gas pressure. There are several sub-types of ionization gauge.
Most ion gauges come in two types: hot cathode and cold cathode. In the hot cathode version, an electrically heated filament produces an electron beam. The electrons travel through the gauge and ionize gas molecules around them. The resulting ions are collected at a negative electrode. The current depends on the number of ions, which depends on the pressure in the gauge. Hot cathode gauges are accurate from 10−3 Torr to 10−10 Torr. The principle behind the cold cathode version is the same, except that electrons are produced in the discharge of a high voltage. Cold cathode gauges are accurate from 10−2 Torr to 10−9 Torr. Ionization gauge calibration is very sensitive to construction geometry, chemical composition of gases being measured, corrosion and surface deposits. Their calibration can be invalidated by activation at atmospheric pressure or low vacuum. The composition of gases at high vacuums will usually be unpredictable, so a mass spectrometer must be used in conjunction with the ionization gauge for accurate measurement.
Hot cathode
A hot-cathode ionization gauge is composed mainly of three electrodes acting together as a triode, wherein the cathode is the filament. The three electrodes are a collector or plate, a filament, and a grid. The collector current is measured in picoamperes by an electrometer. The filament voltage to ground is usually at a potential of 30 volts, while the grid voltage is 180–210 volts DC, unless there is an optional electron-bombardment feature (heating the grid), in which case the grid may be at a high potential of approximately 565 volts.
The most common ion gauge is the hot-cathode Bayard–Alpert gauge, with a small ion collector inside the grid. A glass envelope with an opening to the vacuum can surround the electrodes, but usually the nude gauge is inserted in the vacuum chamber directly, the pins being fed through a ceramic plate in the wall of the chamber. Hot-cathode gauges can be damaged or lose their calibration if they are exposed to atmospheric pressure or even low vacuum while hot. The measurements of a hot-cathode ionization gauge are always logarithmic.
Electrons emitted from the filament move several times in back-and-forth movements around the grid before finally entering the grid. During these movements, some electrons collide with a gaseous molecule to form a pair of an ion and an electron (electron ionization). The number of these ions is proportional to the gaseous molecule density multiplied by the electron current emitted from the filament, and these ions pour into the collector to form an ion current. Since the gaseous molecule density is proportional to the pressure, the pressure is estimated by measuring the ion current.
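As a hedged illustration of the proportionality described above, the sketch below estimates pressure from the ion (collector) current and the emission current; the sensitivity value (of order 10 per torr, a typical figure for nitrogen in a Bayard–Alpert gauge) and the example currents are assumptions.

```python
# Hedged sketch of the hot-cathode ion gauge relation: the collector (ion)
# current is proportional to pressure times the emission current,
#   I_ion = S * P * I_emission,
# where S is a gas- and geometry-dependent sensitivity (assumed value below).

def pressure_torr(ion_current_a: float, emission_current_a: float,
                  sensitivity_per_torr: float = 10.0) -> float:
    return ion_current_a / (sensitivity_per_torr * emission_current_a)

print(pressure_torr(ion_current_a=4e-10, emission_current_a=4e-3))  # 1e-8 Torr
```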
The low-pressure sensitivity of hot-cathode gauges is limited by the photoelectric effect. Electrons hitting the grid produce x-rays that produce photoelectric noise in the ion collector. This limits the range of older hot-cathode gauges to 10−8 Torr and the Bayard–Alpert to about 10−10 Torr. Additional wires at cathode potential in the line of sight between the ion collector and the grid prevent this effect. In the extraction type the ions are not attracted by a wire, but by an open cone. As the ions cannot decide which part of the cone to hit, they pass through the hole and form an ion beam. This ion beam can be passed on to a:
Faraday cup
Microchannel plate detector with Faraday cup
Quadrupole mass analyzer with Faraday cup
Quadrupole mass analyzer with microchannel plate detector and Faraday cup
Ion lens and acceleration voltage and directed at a target to form a sputter gun. In this case a valve lets gas into the grid-cage.
Cold cathode
There are two subtypes of cold-cathode ionization gauges: the Penning gauge (invented by Frans Michel Penning), and the inverted magnetron, also called a Redhead gauge. The major difference between the two is the position of the anode with respect to the cathode. Neither has a filament, and each may require a DC potential of about 4 kV for operation. Inverted magnetrons can measure down to about 10−12 Torr.
Likewise, cold-cathode gauges may be reluctant to start at very low pressures, in that the near-absence of a gas makes it difficult to establish an electrode current - in particular in Penning gauges, which use an axially symmetric magnetic field to create path lengths for electrons that are of the order of metres. In ambient air, suitable ion-pairs are ubiquitously formed by cosmic radiation; in a Penning gauge, design features are used to ease the set-up of a discharge path. For example, the electrode of a Penning gauge is usually finely tapered to facilitate the field emission of electrons.
Maintenance cycles of cold cathode gauges are, in general, measured in years, depending on the gas type and pressure that they are operated in. Using a cold cathode gauge in gases with substantial organic components, such as pump oil fractions, can result in the growth of delicate carbon films and shards within the gauge that eventually either short-circuit the electrodes of the gauge or impede the generation of a discharge path.
Dynamic transients
When fluid flows are not in equilibrium, local pressures may be higher or lower than the average pressure in a medium. These disturbances propagate from their source as longitudinal pressure variations along the path of propagation. This is also called sound. Sound pressure is the instantaneous local pressure deviation from the average pressure caused by a sound wave. Sound pressure can be measured using a microphone in air and a hydrophone in water. The effective sound pressure is the root mean square of the instantaneous sound pressure over a given interval of time. Sound pressures are normally small and are often expressed in units of microbar.
Frequency response of pressure sensors
Resonance
Calibration and standards
The American Society of Mechanical Engineers (ASME) has developed two separate and distinct standards on pressure measurement, B40.100 and PTC 19.2. B40.100 provides guidelines on Pressure Indicated Dial Type and Pressure Digital Indicating Gauges, Diaphragm Seals, Snubbers, and Pressure Limiter Valves. PTC 19.2 provides instructions and guidance for the accurate determination of pressure values in support of the ASME Performance Test Codes. The choice of method, instruments, required calculations, and corrections to be applied depends on the purpose of the measurement, the allowable uncertainty, and the characteristics of the equipment being tested.
The methods for pressure measurement and the protocols used for data transmission are also provided. Guidance is given for setting up the instrumentation and determining the uncertainty of the measurement. Information regarding the instrument type, design, applicable pressure range, accuracy, output, and relative cost is provided. Information is also provided on pressure-measuring devices that are used in field environments i.e., piston gauges, manometers, and low-absolute-pressure (vacuum) instruments.
These methods are designed to assist in the evaluation of measurement uncertainty based on current technology and engineering knowledge, taking into account published instrumentation specifications and measurement and application techniques. This Supplement provides guidance in the use of methods to establish the pressure-measurement uncertainty.
European (CEN) Standard
EN 472 : Pressure gauge - Vocabulary.
EN 837-1 : Pressure gauges. Bourdon tube pressure gauges. Dimensions, metrology, requirements and testing.
EN 837-2 : Pressure gauges. Selection and installation recommendations for pressure gauges.
EN 837-3 : Pressure gauges. Diaphragm and capsule pressure gauges. Dimensions, metrology, requirements, and testing.
US ASME Standards
B40.100-2013: Pressure gauges and Gauge attachments.
PTC 19.2-2010 : The Performance test code for pressure measurement.
Applications
There are many applications for pressure sensors:
Pressure sensing
This is where the measurement of interest is pressure, expressed as a force per unit area. This is useful in weather instrumentation, aircraft, automobiles, and any other machinery that has pressure functionality implemented.
Altitude sensing
This is useful in aircraft, rockets, satellites, weather balloons, and many other applications. All these applications make use of the relationship between changes in pressure and altitude, which is governed by the barometric formula.
The formula used in altimeters is calibrated up to 36,090 feet (11,000 m). Outside that range, an error will be introduced, which can be calculated differently for each different pressure sensor. These error calculations will factor in the error introduced by the change in temperature as altitude increases.
Barometric pressure sensors can have an altitude resolution of less than 1 meter, which is significantly better than GPS systems (about 20 meters altitude resolution). In navigation applications altimeters are used to distinguish between stacked road levels for car navigation and floor levels in buildings for pedestrian navigation.
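A hedged Python sketch of the pressure–altitude relationship in the ISA troposphere follows; the standard-atmosphere constants are well-known values, but the function name is illustrative and the formula applies only up to the roughly 11,000 m limit noted above.

```python
# Hedged sketch of the barometric (pressure-altitude) formula for the ISA
# troposphere: h = (T0 / L) * (1 - (P / P0)^(R*L / (g*M))).
# Constants are standard ISA values; names are illustrative.
P0 = 101_325.0   # sea-level standard pressure, Pa
T0 = 288.15      # sea-level standard temperature, K
L  = 0.0065      # temperature lapse rate, K/m
g  = 9.80665     # standard gravity, m/s^2
R  = 8.31446     # universal gas constant, J/(mol K)
M  = 0.0289644   # molar mass of dry air, kg/mol

def pressure_altitude_m(p_pa: float) -> float:
    return (T0 / L) * (1.0 - (p_pa / P0) ** (R * L / (g * M)))

print(round(pressure_altitude_m(22_632.0)))  # ~11,000 m (tropopause pressure)
```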
Flow sensing
This is the use of pressure sensors in conjunction with the venturi effect to measure flow. Differential pressure is measured between two segments of a venturi tube that have different apertures. The flow rate through the venturi tube is proportional to the square root of the pressure difference between the two segments. A low-pressure sensor is almost always required, as the pressure difference is relatively small.
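A hedged sketch of the venturi relation (Bernoulli plus continuity, incompressible flow assumed) is given below; the discharge coefficient, pipe and throat diameters, and fluid density are illustrative assumptions.

```python
# Hedged sketch of venturi flow sensing: volumetric flow from the measured
# differential pressure, assuming incompressible flow. Values are illustrative.
from math import pi, sqrt

def venturi_flow_m3s(dp_pa: float, d1_m: float, d2_m: float,
                     rho: float = 1000.0, cd: float = 0.98) -> float:
    a1, a2 = pi * d1_m**2 / 4, pi * d2_m**2 / 4
    return cd * a2 * sqrt(2.0 * dp_pa / (rho * (1.0 - (a2 / a1) ** 2)))

# 100 mm pipe, 50 mm throat, 5 kPa differential, water:
print(venturi_flow_m3s(5_000.0, 0.10, 0.05))  # ~6.3e-3 m^3/s
```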
Level / depth sensing
A pressure sensor may also be used to calculate the level of a fluid. This technique is commonly employed to measure the depth of a submerged body (such as a diver or submarine), or the level of contents in a tank (such as in a water tower). For most practical purposes, fluid level is directly proportional to pressure. In the case of fresh water, where the contents are under atmospheric pressure, 1 psi ≈ 27.7 inH2O and 1 kPa ≈ 102 mmH2O (1 mmH2O ≈ 9.81 Pa). The basic equation for such a measurement is P = ρgh,
where P = pressure, ρ = density of the fluid, g = standard gravity, and h = height of the fluid column above the pressure sensor.
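As a hedged illustration of P = ρgh rearranged for level, consider the following sketch; the density value and example pressure are assumptions.

```python
# Hedged sketch: fluid level from hydrostatic pressure, h = P / (rho * g),
# for a vented (gauge-referenced) tank. Values are illustrative.
G = 9.80665  # standard gravity, m/s^2

def level_m(gauge_pressure_pa: float, density_kg_m3: float = 999.0) -> float:
    return gauge_pressure_pa / (density_kg_m3 * G)

print(level_m(24_500.0))  # ~2.5 m of fresh water above the sensor
```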
Leak testing
A pressure sensor may be used to sense the decay of pressure due to a system leak. This is commonly done by either comparison to a known leak using differential pressure, or by means of utilizing the pressure sensor to measure pressure change over time.
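A hedged sketch of the pressure-decay approach follows, using the common apparent-leak-rate expression Q = V·ΔP/Δt for an isothermal test volume; the volume, pressures and duration are illustrative assumptions.

```python
# Hedged sketch of pressure-decay leak testing: an apparent leak rate from the
# pressure drop of a known, isothermal test volume, Q = V * dP / dt.

def leak_rate_pa_m3_s(volume_m3: float, p_start_pa: float,
                      p_end_pa: float, seconds: float) -> float:
    return volume_m3 * (p_start_pa - p_end_pa) / seconds

# 2 L vessel dropping 150 Pa over 10 minutes:
print(leak_rate_pa_m3_s(0.002, 200_150.0, 200_000.0, 600.0))  # 5e-4 Pa*m^3/s
```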
Groundwater measurement
A piezometer is either a device used to measure liquid pressure in a system by measuring the height to which a column of the liquid rises against gravity, or a device which measures the pressure (more precisely, the piezometric head) of groundwater at a specific point. A piezometer is designed to measure static pressures, and thus differs from a pitot tube by not being pointed into the fluid flow. Observation wells give some information on the water level in a formation, but must be read manually. Electrical pressure transducers of several types can be read automatically, making data acquisition more convenient.
The first piezometers in geotechnical engineering were open wells or standpipes (sometimes called Casagrande piezometers) installed into an aquifer. A Casagrande piezometer will typically have a solid casing down to the depth of interest, and a slotted or screened casing within the zone where water pressure is being measured. The casing is sealed into the drillhole with clay, bentonite or concrete to prevent surface water from contaminating the groundwater supply. In an unconfined aquifer, the water level in the piezometer would not be exactly coincident with the water table, especially when the vertical component of flow velocity is significant. In a confined aquifer under artesian conditions, the water level in the piezometer indicates the pressure in the aquifer, but not necessarily the water table. Piezometer wells can be much smaller in diameter than production wells, and a 5 cm diameter standpipe is common.
Piezometers in durable casings can be buried or pushed into the ground to measure the groundwater pressure at the point of installation. The pressure gauges (transducer) can be vibrating-wire, pneumatic, or strain-gauge in operation, converting pressure into an electrical signal. These piezometers are cabled to the surface where they can be read by data loggers or portable readout units, allowing faster or more frequent reading than is possible with open standpipe piezometers.
See also
Applications
References
Sources
External links
Home Made Manometer
Manometer
Underwater diving safety equipment
Vacuum | Pressure measurement | [
"Physics",
"Technology",
"Engineering"
] | 11,413 | [
"Pressure gauges",
"Vacuum",
"Matter",
"Measuring instruments"
] |
19,957 | https://en.wikipedia.org/wiki/Maser | A maser is a device that produces coherent electromagnetic waves (microwaves), through amplification by stimulated emission. The term is an acronym for microwave amplification by stimulated emission of radiation. Nikolay Basov, Alexander Prokhorov and Joseph Weber introduced the concept of the maser in 1952, and Charles H. Townes, James P. Gordon, and Herbert J. Zeiger built the first maser at Columbia University in 1953. Townes, Basov and Prokhorov won the 1964 Nobel Prize in Physics for theoretical work leading to the maser. Masers are used as timekeeping devices in atomic clocks, and as extremely low-noise microwave amplifiers in radio telescopes and deep-space spacecraft communication ground-stations.
Modern masers can be designed to generate electromagnetic waves at microwave frequencies and radio and infrared frequencies. For this reason, Townes suggested replacing "microwave" with "molecular" as the first word in the acronym "maser".
The laser works by the same principle as the maser, but produces higher-frequency coherent radiation at visible wavelengths. The maser was the precursor to the laser, inspiring theoretical work by Townes and Arthur Leonard Schawlow that led to the invention of the laser in 1960 by Theodore Maiman. When the coherent optical oscillator was first imagined in 1957, it was originally called the "optical maser". This was ultimately changed to laser, for "light amplification by stimulated emission of radiation". Gordon Gould is credited with creating this acronym in 1957.
History
The theoretical principles governing the operation of a maser were first described by Joseph Weber of the University of Maryland, College Park at the Electron Tube Research Conference in June 1952 in Ottawa, with a summary published in the June 1953 Transactions of the Institute of Radio Engineers Professional Group on Electron Devices, and simultaneously by Nikolay Basov and Alexander Prokhorov from Lebedev Institute of Physics, at an All-Union Conference on Radio-Spectroscopy held by the USSR Academy of Sciences in May 1952, published in October 1954.
Independently, Charles Hard Townes, James P. Gordon, and H. J. Zeiger built the first ammonia maser at Columbia University in 1953. This device used stimulated emission in a stream of energized ammonia molecules to produce amplification of microwaves at a frequency of about 24.0 gigahertz. Townes later worked with Arthur L. Schawlow to describe the principle of the optical maser, or laser, of which Theodore H. Maiman created the first working model in 1960.
For their research in the field of stimulated emission, Townes, Basov and Prokhorov were awarded the Nobel Prize in Physics in 1964.
Technology
The maser is based on the principle of stimulated emission proposed by Albert Einstein in 1917. When atoms have been induced into an excited energy state, they can amplify radiation at a frequency particular to the element or molecule used as the masing medium (similar to what occurs in the lasing medium in a laser).
By putting such an amplifying medium in a resonant cavity, feedback is created that can produce coherent radiation.
Some common types
Atomic beam masers
Ammonia maser
Free electron maser
Hydrogen maser
Gas masers
Rubidium maser
Liquid-dye and chemical laser
Solid state masers
Ruby maser
Whispering-gallery modes iron-sapphire maser
Dual noble gas maser (a maser whose masing medium is a nonpolar noble gas)
21st-century developments
In 2012, a research team from the National Physical Laboratory and Imperial College London developed a solid-state maser that operated at room temperature by using optically pumped, pentacene-doped p-Terphenyl as the amplifier medium. It produced pulses of maser emission lasting for a few hundred microseconds.
In 2018, a research team from Imperial College London and University College London demonstrated continuous-wave maser oscillation using synthetic diamonds containing nitrogen-vacancy defects.
Uses
Masers serve as high precision frequency references. These "atomic frequency standards" are one of the many forms of atomic clocks. Masers were also used as low-noise microwave amplifiers in radio telescopes, though these have largely been replaced by amplifiers based on FETs.
During the early 1960s, the Jet Propulsion Laboratory developed a maser to provide ultra-low-noise amplification of S-band microwave signals received from deep space probes. This maser used deeply refrigerated helium to chill the amplifier down to a temperature of 4 kelvin. Amplification was achieved by exciting a ruby comb with a 12.0 gigahertz klystron. In the early years, it took days to chill and remove the impurities from the hydrogen lines.
Refrigeration was a two-stage process, with a large Linde unit on the ground and a crosshead compressor within the antenna. The final injection was made through a micrometer-adjustable entry to the chamber. The whole-system noise temperature looking at cold sky (2.7 kelvin in the microwave band) was 17 kelvin. This gave such a low noise figure that the Mariner IV space probe could send still pictures from Mars back to the Earth, even though the output power of its radio transmitter was only 15 watts, and hence the total signal power received was only −169 decibels with respect to a milliwatt (dBm).
Hydrogen maser
The hydrogen maser is used as an atomic frequency standard. Together with other kinds of atomic clocks, these help make up the International Atomic Time standard ("Temps Atomique International" or "TAI" in French). This is the international time scale coordinated by the International Bureau of Weights and Measures. Norman Ramsey and his colleagues first conceived of the maser as a timing standard. More recent masers are practically identical to their original design. Maser oscillations rely on the stimulated emission between two hyperfine energy levels of atomic hydrogen.
Here is a brief description of how they work:
First, a beam of atomic hydrogen is produced. This is done by submitting the gas at low pressure to a high-frequency radio wave discharge (see the picture on this page).
The next step is "state selection"—in order to get some stimulated emission, it is necessary to create a population inversion of the atoms. This is done in a way that is very similar to the Stern–Gerlach experiment. After passing through an aperture and a magnetic field, many of the atoms in the beam are left in the upper energy level of the lasing transition. From this state, the atoms can decay to the lower state and emit some microwave radiation.
A high Q factor (quality factor) microwave cavity confines the microwaves and reinjects them repeatedly into the atom beam. The stimulated emission amplifies the microwaves on each pass through the beam. This combination of amplification and feedback is what defines all oscillators. The resonant frequency of the microwave cavity is tuned to the frequency of the hyperfine energy transition of hydrogen: 1,420,405,752 hertz.
A small fraction of the signal in the microwave cavity is coupled into a coaxial cable and then sent to a coherent radio receiver.
The microwave signal coming out of the maser is very weak, a few picowatts. The frequency of the signal is fixed and extremely stable. The coherent receiver is used to amplify the signal and change the frequency. This is done using a series of phase-locked loops and a high performance quartz oscillator.
Astrophysical masers
Maser-like stimulated emission has also been observed in nature from interstellar space, and it is frequently called "superradiant emission" to distinguish it from laboratory masers. Such emission is observed from molecules such as water (H2O), hydroxyl radicals (•OH), methanol (CH3OH), formaldehyde (HCHO), silicon monoxide (SiO), and carbodiimide (HNCNH). Water molecules in star-forming regions can undergo a population inversion and emit radiation at about 22.0 GHz, creating the brightest spectral line in the radio universe. Some water masers also emit radiation from a rotational transition at a frequency of 96 GHz.
Extremely powerful masers, associated with active galactic nuclei, are known as megamasers and are up to a million times more powerful than stellar masers.
Terminology
The meaning of the term maser has changed slightly since its introduction. Initially the acronym was universally given as "microwave amplification by stimulated emission of radiation", which described devices which emitted in the microwave region of the electromagnetic spectrum.
The principle and concept of stimulated emission has since been extended to more devices and frequencies. Thus, the original acronym is sometimes modified, as suggested by Charles H. Townes, to "molecular amplification by stimulated emission of radiation." Some have asserted that Townes's efforts to extend the acronym in this way were primarily motivated by the desire to increase the importance of his invention, and his reputation in the scientific community.
When the laser was developed, Townes and Schawlow and their colleagues at Bell Labs pushed the use of the term optical maser, but this was largely abandoned in favor of laser, coined by their rival Gordon Gould. In modern usage, devices that emit in the X-ray through infrared portions of the spectrum are typically called lasers, and devices that emit in the microwave region and below are commonly called masers, regardless of whether they emit microwaves or other frequencies.
Gould originally proposed distinct names for devices that emit in each portion of the spectrum, including grasers (gamma ray lasers), xasers (x-ray lasers), uvasers (ultraviolet lasers), lasers (visible lasers), irasers (infrared lasers), masers (microwave masers), and rasers (RF masers). Most of these terms never caught on, however, and all have now become (apart from in science fiction) obsolete except for maser and laser.
See also
List of laser types
X-ray laser
Gamma-ray laser (graser)
Sonic laser (saser)
Spaser
References
Further reading
J.R. Singer, Masers, John Whiley and Sons Inc., 1959.
J. Vanier, C. Audoin, The Quantum Physics of Atomic Frequency Standards, Adam Hilger, Bristol, 1989.
External links
The Feynman Lectures on Physics Vol. III Ch. 9: The Ammonia Maser
arXiv.org search for "maser"
Bright Idea: The First Lasers
Invention of the Maser and Laser, American Physical Society
Shawlow and Townes Invent the Laser, Bell Labs
American inventions
Laser types
Microwave technology
Optical devices
Russian inventions
20th-century inventions | Maser | [
"Materials_science",
"Engineering"
] | 2,224 | [
"Glass engineering and science",
"Optical devices"
] |
20,051 | https://en.wikipedia.org/wiki/Mach%20number | The Mach number (M or Ma), often only Mach, (; ) is a dimensionless quantity in fluid dynamics representing the ratio of flow velocity past a boundary to the local speed of sound.
It is named after the Austrian physicist and philosopher Ernst Mach.
The Mach number is defined as M = u / c, where:
M is the local Mach number,
u is the local flow velocity with respect to the boundaries (either internal, such as an object immersed in the flow, or external, like a channel), and
c is the speed of sound in the medium, which in air varies with the square root of the thermodynamic temperature.
By definition, at Mach 1, the local flow velocity u is equal to the speed of sound. At Mach 0.65, u is 65% of the speed of sound (subsonic), and, at Mach 1.35, u is 35% faster than the speed of sound (supersonic).
The local speed of sound, and hence the Mach number, depends on the temperature of the surrounding gas. The Mach number is primarily used to determine the approximation with which a flow can be treated as an incompressible flow. The medium can be a gas or a liquid. The boundary can be travelling in the medium, or it can be stationary while the medium flows along it, or they can both be moving, with different velocities: what matters is their relative velocity with respect to each other. The boundary can be the boundary of an object immersed in the medium, or of a channel such as a nozzle, diffuser or wind tunnel channelling the medium. As the Mach number is defined as the ratio of two speeds, it is a dimensionless quantity. If M < 0.2–0.3 and the flow is quasi-steady and isothermal, compressibility effects will be small and simplified incompressible flow equations can be used.
Etymology
The Mach number is named after the physicist and philosopher Ernst Mach, in honour of his achievements, according to a proposal by the aeronautical engineer Jakob Ackeret in 1929. The word Mach is always capitalized since it derives from a proper name, and since the Mach number is a dimensionless quantity rather than a unit of measure, the number comes after the word Mach. It was also known as Mach's number by Lockheed when reporting the effects of compressibility on the P-38 aircraft in 1942.
Overview
Mach number is a measure of the compressibility characteristics of fluid flow: the fluid (air) behaves under the influence of compressibility in a similar manner at a given Mach number, regardless of other variables. As modeled in the International Standard Atmosphere, for dry air at mean sea level and a standard temperature of 15 °C (59 °F), the speed of sound is 340.3 m/s (1,116.5 ft/s). The speed of sound is not a constant; in a gas, it increases proportionally to the square root of the absolute temperature, and since atmospheric temperature generally decreases with increasing altitude between sea level and 11,000 m (36,089 ft), the speed of sound also decreases. For example, the standard atmosphere model lapses temperature to −56.5 °C (−69.7 °F) at 11,000 m (36,089 ft) altitude, with a corresponding speed of sound (Mach 1) of 295.0 m/s (967.8 ft/s), 86.7% of the sea level value.
Classification of Mach regimes
The terms subsonic and supersonic are used to refer to speeds below and above the local speed of sound, and to particular ranges of Mach values. This occurs because of the presence of a transonic regime around flight (free stream) M = 1 where approximations of the Navier-Stokes equations used for subsonic design no longer apply; the simplest explanation is that the flow around an airframe locally begins to exceed M = 1 even though the free stream Mach number is below this value.
Meanwhile, the supersonic regime is usually used to talk about the set of Mach numbers for which linearised theory may be used, where for example the (air) flow is not chemically reacting, and where heat-transfer between air and vehicle may be reasonably neglected in calculations.
Generally, NASA defines high hypersonic as any Mach number from 10 to 25, and re-entry speeds as anything greater than Mach 25. Aircraft operating in this regime include the Space Shuttle and various space planes in development.
High-speed flow around objects
Flight can be roughly classified in six categories: subsonic (M < 0.8), transonic (0.8 < M < 1.2), supersonic (1.2 < M < 5), hypersonic (5 < M < 10), high-hypersonic (10 < M < 25), and re-entry speeds (M > 25).
At transonic speeds, the flow field around the object includes both sub- and supersonic parts. The transonic period begins when first zones of M > 1 flow appear around the object. In case of an airfoil (such as an aircraft's wing), this typically happens above the wing. Supersonic flow can decelerate back to subsonic only in a normal shock; this typically happens before the trailing edge. (Fig.1a)
As the speed increases, the zone of M > 1 flow increases towards both leading and trailing edges. As M = 1 is reached and passed, the normal shock reaches the trailing edge and becomes a weak oblique shock: the flow decelerates over the shock, but remains supersonic. A normal shock is created ahead of the object, and the only subsonic zone in the flow field is a small area around the object's leading edge. (Fig.1b)
When an aircraft exceeds Mach 1 (i.e. the sound barrier), a large pressure difference is created just in front of the aircraft. This abrupt pressure difference, called a shock wave, spreads backward and outward from the aircraft in a cone shape (a so-called Mach cone). It is this shock wave that causes the sonic boom heard as a fast moving aircraft travels overhead. A person inside the aircraft will not hear this. The higher the speed, the more narrow the cone; at just over M = 1 it is hardly a cone at all, but closer to a slightly concave plane.
At fully supersonic speed, the shock wave starts to take its cone shape and flow is either completely supersonic, or (in case of a blunt object), only a very small subsonic flow area remains between the object's nose and the shock wave it creates ahead of itself. (In the case of a sharp object, there is no air between the nose and the shock wave: the shock wave starts from the nose.)
As the Mach number increases, so does the strength of the shock wave and the Mach cone becomes increasingly narrow. As the fluid flow crosses the shock wave, its speed is reduced and temperature, pressure, and density increase. The stronger the shock, the greater the changes. At high enough Mach numbers the temperature increases so much over the shock that ionization and dissociation of gas molecules behind the shock wave begin. Such flows are called hypersonic.
It is clear that any object travelling at hypersonic speeds will likewise be exposed to the same extreme temperatures as the gas behind the nose shock wave, and hence choice of heat-resistant materials becomes important.
High-speed flow in a channel
As a flow in a channel becomes supersonic, one significant change takes place. The conservation of mass flow rate leads one to expect that contracting the flow channel would increase the flow speed (i.e. making the channel narrower results in faster air flow) and at subsonic speeds this holds true. However, once the flow becomes supersonic, the relationship of flow area and speed is reversed: expanding the channel actually increases the speed.
The obvious result is that in order to accelerate a flow to supersonic, one needs a convergent-divergent nozzle, where the converging section accelerates the flow to sonic speed and the diverging section continues the acceleration. Such nozzles are called de Laval nozzles, and in extreme cases they are able to reach hypersonic speeds.
Calculation
When the speed of sound is known, the Mach number at which an aircraft is flying can be calculated by

\[ M = \frac{u}{c} \]
where:
M is the Mach number
u is velocity of the moving aircraft and
c is the speed of sound at the given altitude (more properly temperature)
and the speed of sound varies with the thermodynamic temperature as:

\[ c = \sqrt{\gamma R T} \]
where:
γ is the ratio of specific heat of a gas at a constant pressure to heat at a constant volume (1.4 for air),
R is the specific gas constant for air, and
T is the static air temperature.
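A minimal Python sketch of these two relations (the value used for R is the commonly quoted constant for dry air; function and variable names are illustrative):

```python
import math

GAMMA_AIR = 1.4     # ratio of specific heats for air
R_AIR = 287.058     # specific gas constant for dry air, J/(kg*K) (assumed value)

def speed_of_sound(T_kelvin: float) -> float:
    """Speed of sound in air modelled as an ideal gas: c = sqrt(gamma * R * T)."""
    return math.sqrt(GAMMA_AIR * R_AIR * T_kelvin)

def mach_number(true_airspeed_ms: float, T_kelvin: float) -> float:
    """Mach number M = u / c from true airspeed (m/s) and static air temperature (K)."""
    return true_airspeed_ms / speed_of_sound(T_kelvin)

# 250 m/s at the 216.65 K typical of ~11 km altitude is roughly Mach 0.85:
print(mach_number(250.0, 216.65))
```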
If the speed of sound is not known, Mach number may be determined by measuring the various air pressures (static and dynamic) and using the following formula that is derived from Bernoulli's equation for Mach numbers less than 1.0. Assuming air to be an ideal gas, the formula to compute Mach number in a subsonic compressible flow is:

\[ M = \sqrt{\frac{2}{\gamma - 1}\left[\left(\frac{q_c}{p} + 1\right)^{\frac{\gamma - 1}{\gamma}} - 1\right]} \]
where:
qc is impact pressure (dynamic pressure) and
p is static pressure
γ is the ratio of specific heat of a gas at a constant pressure to heat at a constant volume (1.4 for air)
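A short Python sketch of the subsonic formula (the pressure values in the example are illustrative only):

```python
def mach_subsonic(qc: float, p: float, gamma: float = 1.4) -> float:
    """Subsonic Mach number from impact pressure qc and static pressure p
    (compressible-flow form of Bernoulli's equation; valid for M < 1)."""
    return (2.0 / (gamma - 1.0) * ((qc / p + 1.0) ** ((gamma - 1.0) / gamma) - 1.0)) ** 0.5

# Illustrative values: qc = 12 kPa against a static pressure of 26.5 kPa gives M ~ 0.75.
print(mach_subsonic(12000.0, 26500.0))
```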
The formula to compute Mach number in a supersonic compressible flow is derived from the Rayleigh supersonic pitot equation:

\[ \frac{q_c}{p} = \left[\frac{(\gamma + 1)^2 M^2}{4\gamma M^2 - 2(\gamma - 1)}\right]^{\frac{\gamma}{\gamma - 1}} \cdot \frac{1 - \gamma + 2\gamma M^2}{\gamma + 1} - 1 \]
Calculating Mach number from pitot tube pressure
Mach number is a function of temperature and true airspeed.
Aircraft flight instruments, however, operate using pressure differential to compute Mach number, not temperature.
Assuming air to be an ideal gas, the formula to compute Mach number in a subsonic compressible flow is found from Bernoulli's equation for M < 1 (above):

\[ M = \sqrt{5\left[\left(\frac{q_c}{p} + 1\right)^{2/7} - 1\right]} \]
The formula to compute Mach number in a supersonic compressible flow can be found from the Rayleigh supersonic pitot equation (above) using parameters for air:

\[ M = 0.88128485\sqrt{\left(\frac{q_c}{p} + 1\right)\left(1 - \frac{1}{7M^2}\right)^{2.5}} \]
where:
qc is the dynamic pressure measured behind a normal shock.
As can be seen, M appears on both sides of the equation, and for practical purposes a root-finding algorithm must be used for a numerical solution (the equation is a septic equation in M2 and, though some of these may be solved explicitly, the Abel–Ruffini theorem guarantees that there exists no general form for the roots of these polynomials). It is first determined whether M is indeed greater than 1.0 by calculating M from the subsonic equation. If M is greater than 1.0 at that point, then the value of M from the subsonic equation is used as the initial condition for fixed point iteration of the supersonic equation, which usually converges very rapidly. Alternatively, Newton's method can also be used.
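A minimal Python sketch of this procedure for air (γ = 1.4), seeding the fixed-point iteration of the supersonic relation with the subsonic estimate; the pressure values in the example are illustrative:

```python
def mach_subsonic(qc: float, p: float, gamma: float = 1.4) -> float:
    """Subsonic estimate used as the initial guess (same formula as above)."""
    return (2.0 / (gamma - 1.0) * ((qc / p + 1.0) ** ((gamma - 1.0) / gamma) - 1.0)) ** 0.5

def mach_from_pitot(qc: float, p: float, tol: float = 1e-10, max_iter: int = 100) -> float:
    """Mach number from pitot measurements for air (gamma = 1.4).
    qc is the impact pressure measured behind the normal shock, p the static pressure.
    If the subsonic estimate exceeds 1, the supersonic (Rayleigh) form is solved by
    fixed-point iteration, seeded with that estimate."""
    M = mach_subsonic(qc, p)
    if M <= 1.0:
        return M  # flow is subsonic; the subsonic result already applies
    for _ in range(max_iter):
        M_new = 0.88128485 * ((qc / p + 1.0) * (1.0 - 1.0 / (7.0 * M * M)) ** 2.5) ** 0.5
        if abs(M_new - M) < tol:
            return M_new
        M = M_new
    return M

# qc/p = 5 corresponds to roughly Mach 2.07; the iteration usually converges in a handful of steps.
print(mach_from_pitot(5.0, 1.0))
```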
See also
Notes
External links
Gas Dynamics Toolbox Calculate Mach number and normal shock wave parameters for mixtures of perfect and imperfect gases.
NASA's page on Mach Number Interactive calculator for Mach number.
NewByte standard atmosphere calculator and speed converter
Ernst Mach
Aerodynamics
Airspeed
Dimensionless numbers of fluid mechanics
Fluid dynamics | Mach number | [
"Physics",
"Chemistry",
"Engineering"
] | 2,136 | [
"Physical quantities",
"Chemical engineering",
"Aerodynamics",
"Airspeed",
"Aerospace engineering",
"Piping",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
20,306 | https://en.wikipedia.org/wiki/Mole%20fraction | In chemistry, the mole fraction or molar fraction, also called mole proportion or molar proportion, is a quantity defined as the ratio between the amount of a constituent substance, ni (expressed in unit of moles, symbol mol), and the total amount of all constituents in a mixture, ntot (also expressed in moles):

\[ x_i = \frac{n_i}{n_\mathrm{tot}} \]
It is denoted xi (lowercase Roman letter x), sometimes χ (lowercase Greek letter chi). (For mixtures of gases, the letter y is recommended.)
It is a dimensionless quantity with dimension of 1 and dimensionless unit of moles per mole (mol/mol or mol⋅mol−1) or simply 1; metric prefixes may also be used (e.g., nmol/mol for 10−9).
When expressed in percent, it is known as the mole percent or molar percentage (unit symbol %, sometimes "mol%", equivalent to cmol/mol for 10−2).
The mole fraction is called amount fraction by the International Union of Pure and Applied Chemistry (IUPAC) and amount-of-substance fraction by the U.S. National Institute of Standards and Technology (NIST). This nomenclature is part of the International System of Quantities (ISQ), as standardized in ISO 80000-9, which deprecates "mole fraction" based on the unacceptability of mixing information with units when expressing the values of quantities.
The sum of all the mole fractions in a mixture is equal to 1:

\[ \sum_{i=1}^{N} n_i = n_\mathrm{tot}, \qquad \sum_{i=1}^{N} x_i = 1 \]
Mole fraction is numerically identical to the number fraction, which is defined as the number of particles (molecules) of a constituent Ni divided by the total number of all molecules Ntot.
Whereas mole fraction is a ratio of amounts to amounts (in units of moles per moles), molar concentration is a quotient of amount to volume (in units of moles per litre).
Other ways of expressing the composition of a mixture as a dimensionless quantity are mass fraction and volume fraction.
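A minimal Python sketch of the definition (the dry-air composition used in the example is approximate and purely illustrative):

```python
def mole_fractions(amounts_mol: dict[str, float]) -> dict[str, float]:
    """Mole fraction x_i = n_i / n_tot for every constituent (amounts in moles)."""
    n_tot = sum(amounts_mol.values())
    return {species: n / n_tot for species, n in amounts_mol.items()}

# Approximate composition of dry air; the fractions are dimensionless and sum to 1.
x = mole_fractions({"N2": 78.08, "O2": 20.95, "Ar": 0.93, "CO2": 0.04})
print(x)
print(sum(x.values()))  # 1.0
```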
Properties
Mole fraction is used very frequently in the construction of phase diagrams. It has a number of advantages:
it is not temperature dependent (as is molar concentration) and does not require knowledge of the densities of the phase(s) involved
a mixture of known mole fraction can be prepared by weighing off the appropriate masses of the constituents
the measure is symmetric: in the mole fractions x = 0.1 and x = 0.9, the roles of 'solvent' and 'solute' are reversed.
In a mixture of ideal gases, the mole fraction can be expressed as the ratio of partial pressure to total pressure of the mixture:

\[ x_i = \frac{p_i}{p_\mathrm{tot}} \]
In a ternary mixture one can express mole fractions of a component as functions of other components mole fraction and binary mole ratios:
Differential quotients can be formed at constant ratios like those above:
or
The ratios X, Y, and Z of mole fractions can be written for ternary and multicomponent systems:
These can be used for solving PDEs like:
or
This equality can be rearranged to have differential quotient of mole amounts or fractions on one side.
or
Mole amounts can be eliminated by forming ratios:
Thus the ratio of chemical potentials becomes:
Similarly the ratio for the multicomponents system becomes
Related quantities
Mass fraction
The mass fraction wi can be calculated using the formula

\[ w_i = x_i\,\frac{M_i}{\bar{M}} \]
where Mi is the molar mass of the component i and M̄ is the average molar mass of the mixture.
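A short Python sketch of this conversion, computing the average molar mass as the mole-fraction-weighted sum of the component molar masses (the air values used are approximate and only for illustration):

```python
def mass_fractions(x: dict[str, float], M: dict[str, float]) -> dict[str, float]:
    """Mass fraction w_i = x_i * M_i / M_bar, where M_bar = sum_j x_j * M_j
    is the average molar mass of the mixture."""
    M_bar = sum(x[s] * M[s] for s in x)
    return {s: x[s] * M[s] / M_bar for s in x}

# Approximate dry air (mole fractions; molar masses in g/mol); mass fractions also sum to 1:
w = mass_fractions({"N2": 0.781, "O2": 0.209, "Ar": 0.010},
                   {"N2": 28.014, "O2": 31.998, "Ar": 39.948})
print(w, sum(w.values()))
```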
Molar mixing ratio
The mixing of two pure components can be expressed by introducing the amount or molar mixing ratio of them, \( r_n = \frac{n_2}{n_1} \). Then the mole fractions of the components will be:

\[ x_1 = \frac{1}{1 + r_n}, \qquad x_2 = \frac{r_n}{1 + r_n} \]
The amount ratio equals the ratio of mole fractions of components:

\[ \frac{n_2}{n_1} = \frac{x_2}{x_1} \]
due to division of both numerator and denominator by the sum of molar amounts of components. This property has consequences for representations of phase diagrams using, for instance, ternary plots.
Mixing binary mixtures with a common component to form ternary mixtures
Mixing binary mixtures with a common component gives a ternary mixture with certain mixing ratios between the three components. These mixing ratios from the ternary and the corresponding mole fractions of the ternary mixture x1(123), x2(123), x3(123) can be expressed as a function of several mixing ratios involved, the mixing ratios between the components of the binary mixtures and the mixing ratio of the binary mixtures to form the ternary one.
Mole percentage
Multiplying mole fraction by 100 gives the mole percentage, also referred to as amount/amount percent [abbreviated as (n/n)% or mol %].
Mass concentration
The conversion to and from mass concentration ρi is given by:

\[ x_i = \frac{\rho_i}{\rho}\,\frac{\bar{M}}{M_i}, \qquad \rho_i = x_i\,\rho\,\frac{M_i}{\bar{M}} \]
where M̄ is the average molar mass of the mixture.
Molar concentration
The conversion to molar concentration ci is given by:

\[ c_i = x_i\,c = x_i\,\frac{\rho}{\bar{M}} \]
where M̄ is the average molar mass of the solution, c is the total molar concentration and ρ is the density of the solution.
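A small Python sketch combining both conversions (the sea-level air values in the example are assumed, illustrative figures):

```python
def to_concentrations(x_i: float, M_i: float, M_bar: float, rho: float):
    """From a mole fraction x_i, return the molar concentration c_i (mol/m^3)
    and the mass concentration rho_i (kg/m^3), given the mixture density rho
    (kg/m^3), the component molar mass M_i and the average molar mass M_bar (kg/mol)."""
    c_total = rho / M_bar   # total molar concentration of the mixture, c = rho / M_bar
    c_i = x_i * c_total     # c_i = x_i * c
    rho_i = c_i * M_i       # equivalent to x_i * rho * M_i / M_bar
    return c_i, rho_i

# Oxygen in sea-level air (illustrative values: rho ~ 1.225 kg/m^3, M_bar ~ 0.02897 kg/mol):
print(to_concentrations(0.209, 0.031998, 0.02897, 1.225))  # ~8.8 mol/m^3, ~0.28 kg/m^3
```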
Mass and molar mass
The mole fraction can be calculated from the masses mi and molar masses Mi of the components:

\[ x_i = \frac{\dfrac{m_i}{M_i}}{\sum_j \dfrac{m_j}{M_j}} \]
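A short Python sketch of this mass-based calculation (the ethanol-water example and its values are illustrative only):

```python
def mole_fractions_from_masses(masses: dict[str, float],
                               molar_masses: dict[str, float]) -> dict[str, float]:
    """x_i = (m_i / M_i) / sum_j (m_j / M_j): convert each mass to an amount, then normalise."""
    amounts = {s: m / molar_masses[s] for s, m in masses.items()}
    n_tot = sum(amounts.values())
    return {s: n / n_tot for s, n in amounts.items()}

# 10 g of ethanol dissolved in 90 g of water (molar masses in g/mol):
print(mole_fractions_from_masses({"ethanol": 10.0, "water": 90.0},
                                 {"ethanol": 46.07, "water": 18.015}))
# -> ethanol ~ 0.042, water ~ 0.958
```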
Spatial variation and gradient
In a spatially non-uniform mixture, the mole fraction gradient triggers the phenomenon of diffusion.
References
Dimensionless quantities of chemistry | Mole fraction | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,071 | [
"Physical quantities",
"Quantity",
"Chemical quantities",
"Dimensionless quantities of chemistry",
"Dimensionless quantities",
"Dimensionless numbers of chemistry"
] |