Fermat's little theorem
https://en.wikipedia.org/wiki/Fermat%27s%20little%20theorem
In number theory, Fermat's little theorem states that if p is a prime number, then for any integer a, the number a^p − a is an integer multiple of p. In the notation of modular arithmetic, this is expressed as
a^p ≡ a (mod p).
For example, if a = 2 and p = 7, then 2^7 = 128, and 128 − 2 = 126 = 7 × 18 is an integer multiple of 7.
If a is not divisible by p, that is, if a is coprime to p, then Fermat's little theorem is equivalent to the statement that a^(p − 1) − 1 is an integer multiple of p, or in symbols:
a^(p − 1) ≡ 1 (mod p).
For example, if a = 2 and p = 7, then 2^6 = 64, and 64 − 1 = 63 = 7 × 9 is a multiple of 7.
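The two examples above can be checked directly with Python's built-in modular exponentiation; the following minimal sketch assumes nothing beyond the values a = 2 and p = 7 used in the text.

p, a = 7, 2
assert (a**p - a) % p == 0      # 2^7 - 2 = 126 is a multiple of 7
assert pow(a, p - 1, p) == 1    # 2^6 = 64 is congruent to 1 mod 7
print((a**p - a) // p)          # prints 18, since 126 = 7 * 18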
Fermat's little theorem is the basis for the Fermat primality test and is one of the fundamental results of elementary number theory. The theorem is named after Pierre de Fermat, who stated it in 1640. It is called the "little theorem" to distinguish it from Fermat's Last Theorem.
History
Pierre de Fermat first stated the theorem in a letter dated October 18, 1640, to his friend and confidant Frénicle de Bessy. His formulation is equivalent to the following:
If p is a prime and a is any integer not divisible by p, then a^(p − 1) − 1 is divisible by p.
Fermat's original statement may be translated, with explanations and formulas added in brackets for easier understanding, as:
Every prime number [p] divides necessarily one of the powers minus one of any [geometric] progression [a, a^2, a^3, …] [that is, there exists t such that p divides a^t − 1], and the exponent of this power [t] divides the given prime minus one [divides p − 1]. After one has found the first power [t] that satisfies the question, all those whose exponents are multiples of the exponent of the first one satisfy similarly the question [that is, all multiples of the first t have the same property].
Fermat did not consider the case where a is a multiple of p, nor did he prove his assertion, only stating (in translation):
And this proposition is generally true for all series [sic] and for all prime numbers; I would send you a demonstration of it, if I did not fear going on for too long.
Euler provided the first published proof in 1736, in a paper titled "Theorematum Quorundam ad Numeros Primos Spectantium Demonstratio" (in English: "Demonstration of Certain Theorems Concerning Prime Numbers") in the Proceedings of the St. Petersburg Academy, but Leibniz had given virtually the same proof in an unpublished manuscript from sometime before 1683.
The term "Fermat's little theorem" was probably first used in print in 1913 in Zahlentheorie by Kurt Hensel:
(There is a fundamental theorem holding in every finite group, usually called Fermat's little theorem because Fermat was the first to have proved a very special part of it.)
An early use in English occurs in A.A. Albert's Modern Higher Algebra (1937), which refers to "the so-called 'little' Fermat theorem" on page 206.
Further history
Some mathematicians independently made the related hypothesis (sometimes incorrectly called the Chinese hypothesis) that 2^p ≡ 2 (mod p) if and only if p is prime. Indeed, the "if" part is true, and it is a special case of Fermat's little theorem. However, the "only if" part is false: for example, 2^341 ≡ 2 (mod 341), but 341 = 11 × 31 is composite; it is a pseudoprime to base 2. See below.
Proofs
Several proofs of Fermat's little theorem are known. It is frequently proved as a corollary of Euler's theorem.
Generalizations
Euler's theorem is a generalization of Fermat's little theorem: for any modulus n and any integer a coprime to n, one has
a^φ(n) ≡ 1 (mod n),
where φ(n) denotes Euler's totient function (which counts the integers from 1 to n that are coprime to n). Fermat's little theorem is indeed a special case, because if n is a prime number, then φ(n) = n − 1.
A corollary of Euler's theorem is: for every positive integer n, if the integer a is coprime with n, then
x ≡ y (mod φ(n)) implies a^x ≡ a^y (mod n),
for any integers x and y.
This follows from Euler's theorem, since, if x ≡ y (mod φ(n)), then x = y + kφ(n) for some integer k, and one has
a^x = a^(y + kφ(n)) = a^y (a^φ(n))^k ≡ a^y · 1^k ≡ a^y (mod n).
If n is prime, this is also a corollary of Fermat's little theorem. This is widely used in modular arithmetic, because it allows reducing modular exponentiation with large exponents to exponents smaller than n.
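As an illustration of this exponent reduction, the following Python sketch compares a direct modular exponentiation with one whose exponent has been reduced modulo p − 1; the values a = 5, p = 13 and the large exponent are arbitrary choices, with a coprime to p.

p, a, x = 13, 5, 10**9 + 7
full = pow(a, x, p)               # direct modular exponentiation
reduced = pow(a, x % (p - 1), p)  # exponent reduced using Fermat's little theorem
assert full == reduced
print(full, reduced)              # both equal 8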
Euler's theorem is used with n not prime in public-key cryptography, specifically in the RSA cryptosystem, typically in the following way: if
y = x^e (mod n),
retrieving x from the values of y, e and n is easy if one knows φ(n). In fact, the extended Euclidean algorithm allows computing the modular inverse of e modulo φ(n), that is, the integer f such that
ef ≡ 1 (mod φ(n)).
It follows that
x ≡ x^(ef) ≡ (x^e)^f ≡ y^f (mod n).
On the other hand, if n = pq is the product of two distinct prime numbers, then φ(n) = (p − 1)(q − 1). In this case, finding f from n and e is as difficult as computing φ(n) (this has not been proven, but no algorithm is known for computing f without knowing φ(n)). Knowing only n, the computation of φ(n) has essentially the same difficulty as the factorization of n, since φ(n) = (p − 1)(q − 1), and conversely, the factors p and q are the (integer) solutions of the equation x^2 − (n − φ(n) + 1)x + n = 0.
The basic idea of the RSA cryptosystem is thus: if a message x is encrypted as y = x^e (mod n), using public values of n and e, then, with the current knowledge, it cannot be decrypted without finding the (secret) factors p and q of n.
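The following toy Python sketch illustrates this idea with deliberately tiny, insecure parameters; the primes 61 and 53 and the exponent 17 are arbitrary illustrative values, and the modular inverse is computed with pow(e, -1, phi), which requires Python 3.8 or later.

from math import gcd

p, q = 61, 53
n = p * q                   # public modulus
phi = (p - 1) * (q - 1)     # Euler's totient of n
e = 17                      # public exponent, coprime to phi
assert gcd(e, phi) == 1
f = pow(e, -1, phi)         # private exponent: modular inverse of e mod phi
x = 65                      # the "message"
y = pow(x, e, n)            # encryption
assert pow(y, f, n) == x    # decryption recovers x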
Fermat's little theorem is also related to the Carmichael function and Carmichael's theorem, as well as to Lagrange's theorem in group theory.
Converse
The converse of Fermat's little theorem fails for Carmichael numbers. However, a slightly weaker variant of the converse is Lehmer's theorem:
If there exists an integer a such that
a^(n − 1) ≡ 1 (mod n)
and for all primes q dividing n − 1 one has
a^((n − 1)/q) ≢ 1 (mod n),
then n is prime.
This theorem forms the basis for the Lucas primality test, an important primality test, and Pratt's primality certificate.
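A naive Python sketch of the resulting test is given below; the trial-division factorisation of n − 1 used here only makes sense for small n (a real implementation needs the full factorisation of n − 1 from elsewhere), and the choice of base a is left to the caller.

def prime_factors(m):
    factors, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def lucas_test(n, a):
    """Return True if the base a certifies n prime via Lehmer's criterion."""
    if pow(a, n - 1, n) != 1:
        return False
    return all(pow(a, (n - 1) // q, n) != 1 for q in prime_factors(n - 1))

print(lucas_test(101, 2))   # True: 2 is a primitive root modulo the prime 101
print(lucas_test(91, 2))    # False: 91 = 7 * 13 is composite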
Pseudoprimes
If a and p are coprime numbers such that a^(p − 1) − 1 is divisible by p, then p need not be prime. If it is not, then p is called a (Fermat) pseudoprime to base a. The first pseudoprime to base 2 was found in 1820 by Pierre Frédéric Sarrus: 341 = 11 × 31.
A number p that is a Fermat pseudoprime to base a for every number a coprime to p is called a Carmichael number. Alternately, any number p satisfying the equality
gcd(p, 1^(p−1) + 2^(p−1) + ⋯ + (p−1)^(p−1)) = 1
is either a prime or a Carmichael number.
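The following Python sketch illustrates the phenomenon: 341 passes the Fermat test to base 2 but fails to base 3, while 561 = 3 × 11 × 17 passes it for every coprime base. The use of 561 (the smallest Carmichael number) is an added example, not something stated above.

from math import gcd

def fermat_probable_prime(n, a):
    return pow(a, n - 1, n) == 1

print(fermat_probable_prime(341, 2))   # True, although 341 is composite
print(fermat_probable_prime(341, 3))   # False: base 3 exposes 341
print(all(fermat_probable_prime(561, a)
          for a in range(2, 561) if gcd(a, 561) == 1))   # True: 561 is a Carmichael number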
Miller–Rabin primality test
The Miller–Rabin primality test uses the following extension of Fermat's little theorem:
If p is an odd prime and p − 1 = 2^s d with d odd and s > 0, then for every a coprime to p, either a^d ≡ 1 (mod p) or there exists r such that 0 ≤ r < s and a^(2^r d) ≡ −1 (mod p).
This result may be deduced from Fermat's little theorem and the fact that, if p is an odd prime, then the integers modulo p form a finite field, in which 1 has exactly two square roots, 1 and −1 modulo p.
Note that a^d ≡ 1 (mod p) holds trivially for a ≡ 1 (mod p), because the congruence relation is compatible with exponentiation. And a^d ≡ −1 (mod p) holds trivially for a ≡ −1 (mod p), since d is odd, for the same reason. That is why one usually chooses a random a in the interval 1 < a < p − 1.
The Miller–Rabin test uses this property in the following way: given an odd integer n for which primality has to be tested, write n − 1 = 2^s d with d odd and s > 0, and choose a random a such that 1 < a < n − 1; then compute b = a^d mod n; if b is not 1 nor n − 1, then square it repeatedly modulo n until you get n − 1 or have squared s − 1 times. If n − 1 has not been obtained by squaring, then n is composite and a is a witness for the compositeness of n. Otherwise, n is a strong probable prime to base a; that is, it may be prime or not. If n is composite, the probability that the test declares it a strong probable prime anyway is at most 1/4, in which case n is a strong pseudoprime and a is a strong liar. Therefore, after k non-conclusive random tests, the probability that n is composite is at most 4^(−k), and may thus be made as low as desired by increasing k.
In summary, the test either proves that a number is composite or asserts that it is prime with a probability of error that may be chosen as low as desired. The test is very simple to implement and computationally more efficient than all known deterministic tests. Therefore, it is generally used before starting a proof of primality.
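A direct, unoptimized transcription of this procedure into Python might look as follows; the default of k = 20 rounds and the test values at the end are arbitrary choices.

import random

def miller_rabin(n, k=20):
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    s, d = 0, n - 1
    while d % 2 == 0:           # write n - 1 = 2^s * d with d odd
        s += 1
        d //= 2
    for _ in range(k):
        a = random.randrange(2, n - 1)
        b = pow(a, d, n)
        if b in (1, n - 1):
            continue
        for _ in range(s - 1):  # square repeatedly, looking for n - 1
            b = pow(b, 2, n)
            if b == n - 1:
                break
        else:
            return False        # a is a witness: n is composite
    return True                 # n is a strong probable prime for all k bases

print(miller_rabin(341), miller_rabin(561), miller_rabin(2**61 - 1))
# False False True  (2^61 - 1 is a Mersenne prime)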
See also
Fermat quotient
Frobenius endomorphism
p-derivation
Fractions with prime denominators: numbers with behavior relating to Fermat's little theorem
RSA
Table of congruences
Modular multiplicative inverse
Notes
References
Further reading
Paulo Ribenboim (1995). The New Book of Prime Number Records (3rd ed.). New York: Springer-Verlag. pp. 22–25, 49.
External links
János Bolyai and the pseudoprimes (in Hungarian)
Fermat's Little Theorem at cut-the-knot
Euler Function and Theorem at cut-the-knot
Fermat's Little Theorem and Sophie's Proof
Modular arithmetic
Theorems about prime numbers
Minkowski's theorem
https://en.wikipedia.org/wiki/Minkowski%27s%20theorem
In mathematics, Minkowski's theorem is the statement that every convex set in R^n which is symmetric with respect to the origin and which has volume greater than 2^n contains a non-zero integer point (meaning a point in Z^n that is not the origin). The theorem was proved by Hermann Minkowski in 1889 and became the foundation of the branch of number theory called the geometry of numbers. It can be extended from the integers to any lattice L and to any symmetric convex set with volume greater than 2^n d(L), where d(L) denotes the covolume of the lattice (the absolute value of the determinant of any of its bases).
Formulation
Suppose that L is a lattice of determinant d(L) in the n-dimensional real vector space R^n and S is a convex subset of R^n that is symmetric with respect to the origin, meaning that if x is in S then −x is also in S. Minkowski's theorem states that if the volume of S is strictly greater than 2^n d(L), then S must contain at least one lattice point other than the origin. (Since the set S is symmetric, it would then contain at least three lattice points: the origin 0 and a pair of points ±x, where x is a nonzero lattice point in S.)
Example
The simplest example of a lattice is the integer lattice Z^n of all points with integer coefficients; its determinant is 1. For n = 2, the theorem claims that a convex figure in the Euclidean plane symmetric about the origin and with area greater than 4 encloses at least one lattice point in addition to the origin. The area bound is sharp: if S is the interior of the square with vertices (±1, ±1), then S is symmetric and convex and has area 4, but the only lattice point it contains is the origin. This example, showing that the bound of the theorem is sharp, generalizes to hypercubes in every dimension n.
Proof
The following argument proves Minkowski's theorem for the specific case of L = Z^2.
Proof of the Z^2 case: Consider the map
f(x, y) = (x mod 2, y mod 2), defined on S.
Intuitively, this map cuts the plane into 2 by 2 squares, then stacks the squares on top of each other. Clearly f(S) has area less than or equal to 4, because this set lies within a 2 by 2 square. Assume for a contradiction that f could be injective, which means the pieces of S cut out by the squares stack up in a non-overlapping way. Because f is locally area-preserving, this non-overlapping property would make it area-preserving for all of S, so the area of f(S) would be the same as that of S, which is greater than 4. That is not the case, so the assumption must be false: f is not injective, meaning that there exist at least two distinct points p1, p2 in S that are mapped by f to the same point: f(p1) = f(p2).
Because of the way f was defined, the only way that f(p1) can equal f(p2) is for p2 to equal p1 + (2i, 2j) for some integers i and j, not both zero. That is, the coordinates of the two points differ by two even integers.
Since S is symmetric about the origin, −p1 is also a point in S. Since S is convex, the line segment between −p1 and p2 lies entirely in S, and in particular the midpoint of that segment lies in S. In other words,
(p2 − p1)/2 = (i, j)
is a point in S. But this point is an integer point, and it is not the origin since i and j are not both zero.
Therefore, contains a nonzero integer point.
Remarks:
The argument above proves the theorem that any set of volume greater than d(L) contains two distinct points that differ by a lattice vector. This is a special case of Blichfeldt's theorem.
The argument above highlights that the term 2^n d(L) is the covolume of the lattice 2L.
To obtain a proof for general lattices, it suffices to prove Minkowski's theorem only for Z^n; this is because every full-rank lattice can be written as B Z^n for some linear transformation B, and the properties of being convex and symmetric about the origin are preserved by linear transformations, while the covolume of B Z^n is |det(B)| and the volume of a body scales by exactly |det(B)| under an application of B.
Applications
Bounding the shortest vector
Minkowski's theorem gives an upper bound for the length of the shortest nonzero vector. This result has applications in lattice cryptography and number theory.
Theorem (Minkowski's bound on the shortest vector): Let L be a lattice. Then there is a nonzero x ∈ L with ‖x‖_∞ ≤ |det(L)|^(1/n). In particular, by the standard comparison between the ℓ2 and ℓ∞ norms, ‖x‖_2 ≤ √n |det(L)|^(1/n).
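As a sanity check of this bound, the following Python sketch brute-forces the shortest vector of a small 2-dimensional example lattice and compares it with √n · |det(L)|^(1/n); the basis vectors and the search radius are assumptions chosen for illustration.

import itertools, math

b1, b2 = (3.0, 1.0), (1.0, 2.0)                # example lattice basis (assumed)
det = abs(b1[0] * b2[1] - b1[1] * b2[0])       # covolume of the lattice
bound = math.sqrt(2) * det ** 0.5              # sqrt(n) * det^(1/n) with n = 2

shortest = min(
    math.hypot(i * b1[0] + j * b2[0], i * b1[1] + j * b2[1])
    for i, j in itertools.product(range(-10, 11), repeat=2)
    if (i, j) != (0, 0))
print(shortest, bound, shortest <= bound)      # the bound holds for this example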
Remarks:
The constant in the ℓ2 bound can be improved, for instance by using an open ball rather than a cube in the above argument. The optimal constant is known as the Hermite constant.
The bound given by the theorem can be very loose, as can be seen by considering the lattice generated by (1, 0) and (0, n), whose shortest vector has length 1 while the bound is of order √n. But it cannot be improved for all lattices, in the sense that there exists a global constant c such that, for every n, there exists an n-dimensional lattice L satisfying λ1(L) ≥ c √n |det(L)|^(1/n). Furthermore, such a lattice can be self-dual.
Even though Minkowski's theorem guarantees a short lattice vector within a certain magnitude bound, finding this vector is in general a hard computational problem. Finding a vector within the factor guaranteed by Minkowski's bound is referred to as Minkowski's Vector Problem (MVP), and it is known that approximating SVP reduces to it using transference properties of the dual lattice. The computational problem is also sometimes referred to as Hermite SVP.
The LLL basis reduction algorithm can be seen as a weak but efficiently computable version of Minkowski's bound on the shortest vector. This is because a 3/4-LLL reduced basis b1, …, bn for L has the property that ‖b1‖ ≤ 2^((n−1)/4) det(L)^(1/n); see the lecture notes of Micciancio for more on this. Proofs of bounds on the Hermite constant contain some of the key ideas in the LLL reduction algorithm.
Applications to number theory
Primes that are sums of two squares
The difficult implication in Fermat's theorem on sums of two squares can be proven using Minkowski's bound on the shortest vector.
Theorem: Every prime p with p ≡ 1 (mod 4) can be written as a sum of two squares.
Additionally, the lattice perspective gives a computationally efficient approach to Fermat's theorem on sums of squares:
First, recall that finding any nonzero vector with squared norm less than 2p in L, the lattice of the proof, gives a decomposition of p as a sum of two squares. Such vectors can be found efficiently, for instance using the LLL algorithm. In particular, if b1 is the first vector of a 3/4-LLL reduced basis, then, by the property that ‖b1‖ ≤ 2^((n−1)/4) det(L)^(1/n) with n = 2 and det(L) = p, one has ‖b1‖^2 ≤ √2 · p < 2p. Thus, by running the LLL lattice basis reduction algorithm, we obtain a decomposition of p as a sum of two squares. Note that because every vector in L has squared norm a multiple of p, the vector returned by the LLL algorithm in this case is in fact a shortest vector.
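A self-contained Python sketch of this approach follows; since the lattice has rank 2, the Lagrange–Gauss reduction is used here as a stand-in for LLL, and the prime p = 10009 ≡ 1 (mod 4) is an arbitrary example (10009 = 100^2 + 3^2).

def norm2(w):
    return w[0] * w[0] + w[1] * w[1]

def lagrange_gauss(u, v):
    # Lagrange-Gauss reduction: returns a shortest nonzero vector of the
    # rank-2 lattice spanned by u and v.
    if norm2(u) < norm2(v):
        u, v = v, u
    while norm2(v) < norm2(u):
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(v))
        u, v = v, (u[0] - m * v[0], u[1] - m * v[1])
    return u

def sqrt_of_minus_one(p):
    # r = a^((p-1)/4) is a square root of -1 mod p whenever a is a
    # quadratic non-residue mod p (Euler's criterion).
    for a in range(2, p):
        if pow(a, (p - 1) // 2, p) == p - 1:
            return pow(a, (p - 1) // 4, p)

p = 10009                        # assumed example prime with p = 1 (mod 4)
r = sqrt_of_minus_one(p)
x, y = lagrange_gauss((p, 0), (r, 1))   # lattice of the proof: (p, 0), (r, 1)
assert x * x + y * y == p
print(f"{p} = {abs(x)}**2 + {abs(y)}**2")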
Lagrange's four-square theorem
Minkowski's theorem is also useful to prove Lagrange's four-square theorem, which states that every natural number can be written as the sum of the squares of four natural numbers.
Dirichlet's theorem on simultaneous rational approximation
Minkowski's theorem can be used to prove Dirichlet's theorem on simultaneous rational approximation.
Algebraic number theory
Another application of Minkowski's theorem is the result that every class in the ideal class group of a number field K contains an integral ideal of norm not exceeding a certain bound, depending on K, called Minkowski's bound: the finiteness of the class number of an algebraic number field follows immediately.
Complexity theory
The complexity of finding the point guaranteed by Minkowski's theorem, or the closely related Blichfeldt's theorem, has been studied from the perspective of TFNP search problems. In particular, it is known that a computational analogue of Blichfeldt's theorem, a corollary of the proof of Minkowski's theorem, is PPP-complete. It is also known that the computational analogue of Minkowski's theorem is in the class PPP, and it was conjectured to be PPP-complete.
See also
Danzer set
Pick's theorem
Dirichlet's unit theorem
Minkowski's second theorem
Ehrhart's volume conjecture
References
Further reading
Wolfgang M. Schmidt. Diophantine Approximations and Diophantine Equations, Lecture Notes in Mathematics, Springer-Verlag, 2000.
External links
Stevenhagen, Peter. Number Rings.
Geometry of numbers
Convex analysis
Theorems in number theory
Articles containing proofs
Hermann Minkowski
Statcoulomb
https://en.wikipedia.org/wiki/Statcoulomb
The statcoulomb (statC), franklin (Fr), or electrostatic unit of charge (esu) is the unit of measurement for electrical charge used in the centimetre–gram–second electrostatic units variant (CGS-ESU) and Gaussian systems of units. In terms of the Gaussian base units, it is
1 statC = 1 g^(1/2)·cm^(3/2)·s^(−1) = 1 dyn^(1/2)·cm.
That is, it is defined so that the proportionality constant in Coulomb's law using CGS-ESU quantities is a dimensionless quantity equal to 1.
Definition and relation to CGS base units
Coulomb's law in the CGS-Gaussian system takes the form
F = q1 q2 / r^2,
where F is the force, q1 and q2 are the two electric charges, and r is the distance between the charges. This serves to define charge as a quantity in the Gaussian system.
The statcoulomb is defined such that if two electric charges of 1 statC each are separated by 1 cm, the force of mutual electrical repulsion is 1 dyne. Substituting F = 1 dyn, q1 = q2 = 1 statC, and r = 1 cm, we get:
1 statC = 1 dyn^(1/2)·cm = 1 g^(1/2)·cm^(3/2)·s^(−1).
From this it is also evident that the quantity dimension of electric charge as defined in the CGS-ESU and Gaussian systems is M^(1/2) L^(3/2) T^(−1).
Conversion between systems
A quantity is converted to the corresponding quantity of the International System of Quantities (ISQ), which underlies the International System of Units (SI), by using the defining equations of each system.
The SI uses the coulomb (C) as its unit of electric charge. The conversion factor between corresponding quantities with the units coulomb and statcoulomb depends on which quantity is to be converted. The most common cases are:
For electric charge: 1 C ≘ 2 997 924 580 statC ≈ 3.00 × 10^9 statC.
For electric flux (Φ_E): 1 C ≘ 4π × 2 997 924 580 statC ≈ 3.77 × 10^10 statC.
For electric flux density (D): 1 C/m^2 ≘ 4π × 2 997 924 580 × 10^−4 statC/cm^2 ≈ 3.77 × 10^6 statC/cm^2.
The symbol "≘" ('corresponds to') is used instead of "=" because the two sides cannot be equated.
References
Units of electrical charge
Centimetre–gram–second system of units
Soyuz programme
https://en.wikipedia.org/wiki/Soyuz%20programme
The Soyuz programme (Russian: Союз, meaning "Union") is a human spaceflight programme initiated by the Soviet Union in the early 1960s. The Soyuz spacecraft was originally part of a Moon landing project intended to put a Soviet cosmonaut on the Moon. It was the third Soviet human spaceflight programme, after the Vostok (1961–1963) and Voskhod (1964–1965) programmes.
The programme consists of the Soyuz capsule and the Soyuz rocket and is now the responsibility of the Russian Roscosmos. After the retirement of the Space Shuttle in 2011, Soyuz was the only way for humans to get to the International Space Station (ISS) until 30 May 2020, when Crew Dragon flew to the ISS for the first time with astronauts.
Soyuz rocket
The launch vehicles used in the Soyuz expendable launch system are manufactured at the Progress State Research and Production Rocket Space Center (TsSKB-Progress) in Samara, Russia. As well as being used in the Soyuz programme as the launcher for the crewed Soyuz spacecraft, Soyuz launch vehicles are now also used to launch robotic Progress supply spacecraft to the International Space Station and commercial launches marketed and operated by TsSKB-Progress and the Starsem company. Currently Soyuz vehicles are launched from the Baikonur Cosmodrome in Kazakhstan and the Plesetsk Cosmodrome in northwest Russia and, since 2011, Soyuz launch vehicles are also launched from the Guiana Space Centre in French Guiana. The spaceport's new Soyuz launch site has been handling Soyuz launches since 21 October 2011, the date of the first launch there. As of December 2019, 19 Soyuz launches had been made from the Guiana Space Centre, all successful.
The Soyuz rocket family is one of the most dependable and widely utilized launch vehicles in the history of space travel. It has been in operation for nearly six decades, having been developed by the Soviet Union and presently run by Russia. The Soyuz rockets have played an important role in both crewed and uncrewed space missions, launching people to the International Space Station (ISS) and delivering satellites and scientific payloads.
Soyuz spacecraft
The basic Soyuz spacecraft design was the basis for many projects, many of which were never developed. Its earliest form was intended to travel to the Moon without employing a huge booster like the Saturn V or the Soviet N-1 by repeatedly docking with upper stages that had been put in orbit using the same rocket as the Soyuz. This and the initial civilian designs were done under the Soviet Chief Designer Sergei Pavlovich Korolev, who did not live to see the craft take flight. Several military derivatives took precedence in the Soviet design process, though they never came to pass.
A Soyuz spacecraft consists of three parts (from front to back):
a spheroid orbital module
a small aerodynamic reentry module
a cylindrical service module with solar panels attached
There have been many variants of the Soyuz spacecraft, including:
Sever, early crewed spacecraft proposal to replace Vostok (1959)
L1-1960, crewed circumlunar spacecraft proposal (1960); evolved into the Soyuz-A design
L4-1960, crewed lunar orbiter proposal (1960)
L1-1962, crewed lunar flyby spacecraft proposal (1962); early design led to Soyuz
OS-1962, space station proposal (1962)
Soyuz-A, 7K-9K-11K circumlunar complex proposal (1963)
Soyuz 7K, crewed spacecraft concept; cancelled in 1964 in favor of the LK-1
Soyuz 9K, proposed orbital tug; cancelled in 1964 when the Soyuz 7K and Soyuz P were cancelled
Soyuz 11K, proposed fuel tanker; cancelled in 1964 when the Soyuz 7K and Soyuz P were cancelled
L3-1963, crewed lunar lander proposal (1963)
L4-1963, crewed lunar orbiter proposal; modified 7K (1963)
Soyuz 7K-OK (1967–1970)
Soyuz 7K-L1 Zond (1967–1970)
Soyuz 7K-L3 LOK (1971–1972)
Soyuz 7K-OKS (1971); also known as 7KT-OK
Soyuz 7K-T or "ferry" (1973–1981)
Soyuz 7K-T-AF (1973); 7K-T modified for space station flight with Orion 2 space telescope
Soyuz 7K-T/A9 (1974–1978); 7K-T modified for flights to military Almaz space stations
Soyuz 7K-TM (1974–1976)
7K-MF6 (1976); 7K-TM modified for space station flight with MKF-6 camera
Soyuz-T (1976–1986)
Zarya, planned 'Super Soyuz' replacement for Soyuz and Progress (1985)
Alpha Lifeboat, rescue spacecraft based on Zarya (1995); cancelled in favor of a modified Soyuz TM
Big Soyuz, enlarged version of the Soyuz reentry vehicle (2008)
Soyuz-TM (1986–2003)
Soyuz TMA (2003–2012)
Soyuz-ACTS (2006)
Soyuz TMA-M (2010–2016)
Soyuz MS (since 2016)
Military Soyuz (P, PPK, R, 7K-VI Zvezda, and OIS)
Soyuz P, crewed satellite interceptor proposal (1962); cancelled in 1964 in favor of the Istrebitel Sputnikov program
Soyuz R, command-reconnaissance spacecraft proposal (1962); cancelled in 1966 and replaced by Almaz
Soyuz 7K-TK, transport spacecraft proposal for delivering cosmonauts to Soyuz R military stations (1966); cancelled in 1970 in favor of the TKS spacecraft
Soyuz PPK, revised version of Soyuz P (1964)
Soyuz 7K-VI Zvezda, space station proposal (1964)
Soyuz-VI, crewed combat spacecraft proposal; cancelled in 1965
Soyuz OIS (1967)
Soyuz OB-VI, space station proposal (1967)
Soyuz 7K-S, military transport proposal (1974)
Soyuz 7K-ST, concept for Soyuz T and TM (1974)
Derivatives
The Zond spacecraft was designed to take a crew around the Moon, but never achieved the required degree of safety or political need. Zond 5 did circle the Moon in September 1968, with two tortoises and other life forms, and returned safely to Earth although in an atmospheric entry which probably would have killed human travelers.
The Progress series of robotic cargo ships for the Salyut, Mir, and ISS use the engine section, orbital module, automatic navigation, docking mechanism, and overall layout of the Soyuz spacecraft, but are incapable of reentry.
While not a direct derivative, the Chinese Shenzhou spacecraft follows the basic template originally pioneered by Soyuz.
Soyuz crewed flights
Soviet human spaceflight missions started in 1961 and ended in 1991 with the dissolution of the Soviet Union.
The Russian human spaceflight programme started in 1991 and continues to this day. From the end of the Space Shuttle program in 2011 until the launch of Crew Dragon Demo-2 on 30 May 2020, Soyuz was the only spacecraft carrying crews to the International Space Station. At least one Soyuz spacecraft is always kept docked at the International Space Station for use as an escape craft.
Soyuz uncrewed flights
Kosmos 133 - launch failure
Kosmos 140 - reentry damage
Kosmos 186
Kosmos 188
Kosmos 212
Kosmos 213
Kosmos 238
Soyuz 2 - failed to dock
Kosmos 379
Kosmos 396
Kosmos 434
Kosmos 496
Kosmos 573
Kosmos 613
Kosmos 638
Kosmos 656
Kosmos 670
Kosmos 672
Kosmos 772 - partial failure
Soyuz 20
Kosmos 869
Kosmos 1001
Kosmos 1074
Soyuz 34
Soyuz T-1
Soyuz TM-1
Soyuz MS-14
Soyuz MS-23
Gallery
See also
Shenzhou, a Chinese spacecraft influenced by Soyuz
Space Shuttle
Buran (spacecraft)
List of spaceflight-related accidents and incidents
References
Human spaceflight programs
Crewed space program of Russia
Crewed space program of the Soviet Union
Projects established in 1963
1963 establishments in the Soviet Union
Metal casting
https://en.wikipedia.org/wiki/Metal%20casting
In metalworking and jewelry making, casting is a process in which a liquid metal is delivered into a mold (usually by a crucible) that contains a negative impression (i.e., a three-dimensional negative image) of the intended shape. The metal is poured into the mold through a hollow channel called a sprue. The metal and mold are then cooled, and the metal part (the casting) is extracted. Casting is most often used for making complex shapes that would be difficult or uneconomical to make by other methods.
Casting processes have been known for thousands of years, and have been widely used for sculpture (especially in bronze), jewelry in precious metals, and weapons and tools. Highly engineered castings are found in 90 percent of durable goods, including cars, trucks, aerospace, trains, mining and construction equipment, oil wells, appliances, pipes, hydrants, wind turbines, nuclear plants, medical devices, defense products, toys, and more.
Traditional techniques include lost-wax casting (which may be further divided into centrifugal casting, and vacuum assist direct pour casting), plaster mold casting and sand casting.
The modern casting process is subdivided into two main categories: expendable and non-expendable casting. It is further broken down by the mold material, such as sand or metal, and pouring method, such as gravity, vacuum, or low pressure.
Expendable mold casting
Expendable mold casting is a generic classification that includes sand, plastic, shell, plaster, and investment (lost-wax technique) moldings. This method of mold casting involves the use of temporary, non-reusable molds.
Sand casting
Sand casting is one of the most popular and simplest types of casting, and has been used for centuries. Sand casting allows for smaller batches than permanent mold casting and at a very reasonable cost. Not only does this method allow manufacturers to create products at a low cost, but there are other benefits to sand casting, such as very small-size operations. The process allows for castings ranging from ones small enough to fit in the palm of one's hand to ones large enough for a train car bed (one casting can create the entire bed for one rail car). Sand casting also allows most metals to be cast, depending on the type of sand used for the molds.
Sand casting requires a lead time of days, or sometimes even weeks, for production at high output rates (1–20 pieces/hr-mold) and is unsurpassed for large-part production. Green (moist) sand, which is black in color, has almost no upper limit on part weight, whereas dry sand has a practical upper limit on part mass; there is also a practical minimum part weight. The sand is bonded using clays, chemical binders, or polymerized oils (such as motor oil). Sand can be recycled many times in most operations and requires little maintenance.
Loam molding
Loam molding has been used to produce large symmetrical objects such as cannons and church bells. Loam is a mixture of clay and sand with straw or dung. A model of the article to be produced is formed in a friable material (the chemise). The mold is formed around this chemise by covering it with loam. This is then baked (fired) and the chemise removed. The mold is then stood upright in a pit in front of the furnace for the molten metal to be poured. Afterwards the mold is broken off. Molds can thus only be used once, so that other methods are preferred for most purposes.
Plaster mold casting
Plaster casting is similar to sand casting except that plaster of paris is used instead of sand as a mold material. Generally, the form takes less than a week to prepare, after which a production rate of 1–10 units/hr-mold is achieved, with items ranging from very small parts to ones weighing tens of kilograms, with very good surface finish and close tolerances. Plaster casting is an inexpensive alternative to other molding processes for complex parts due to the low cost of the plaster and its ability to produce near net shape castings. The biggest disadvantage is that it can only be used with low melting point non-ferrous materials, such as aluminium, copper, magnesium, and zinc.
Shell molding
Shell molding is similar to sand casting, but the molding cavity is formed by a hardened "shell" of sand instead of a flask filled with sand. The sand used is finer than sand casting sand and is mixed with a resin so that it can be heated by the pattern and hardened into a shell around the pattern. Because of the resin and finer sand, it gives a much finer surface finish. The process is easily automated and more precise than sand casting. Common metals that are cast include cast iron, aluminium, magnesium, and copper alloys. This process is ideal for complex items that are small to medium-sized.
Investment casting
Investment casting (known as lost-wax casting in art) is a process that has been practiced for thousands of years, with the lost-wax process being one of the oldest known metal forming techniques. From 5000 years ago, when beeswax formed the pattern, to today's high technology waxes, refractory materials, and specialist alloys, the castings ensure high-quality components are produced with the key benefits of accuracy, repeatability, versatility, and integrity.
Investment casting derives its name from the fact that the pattern is invested, or surrounded, with a refractory material. The wax patterns require extreme care for they are not strong enough to withstand forces encountered during the mold making. One advantage of investment casting is that the wax can be reused.
The process is suitable for repeatable production of net shape components from a variety of different metals and high performance alloys. Although generally used for small castings, this process has been used to produce complete aircraft door frames, with steel castings of up to 300 kg and aluminium castings of up to 30 kg. Compared to other casting processes such as die casting or sand casting, it can be an expensive process. However, the components that can be produced using investment casting can incorporate intricate contours, and in most cases the components are cast near net shape, so require little or no rework once cast.
Waste molding of plaster
A durable plaster intermediate is often used as a stage toward the production of a bronze sculpture or as a pointing guide for the creation of a carved stone. With the completion of a plaster, the work is more durable (if stored indoors) than a clay original which must be kept moist to avoid cracking. With the low cost plaster at hand, the expensive work of bronze casting or stone carving may be deferred until a patron is found, and as such work is considered to be a technical, rather than artistic process, it may even be deferred beyond the lifetime of the artist.
In waste molding a simple and thin plaster mold, reinforced by sisal or burlap, is cast over the original clay mixture. When cured, it is then removed from the damp clay, incidentally destroying the fine details in undercuts present in the clay, but which are now captured in the mold. The mold may then at any later time (but only once) be used to cast a plaster positive image, identical to the original clay. The surface of this plaster may be further refined and may be painted and waxed to resemble a finished bronze casting.
Evaporative-pattern casting
This is a class of casting processes that use pattern materials that evaporate during the pour, which means there is no need to remove the pattern material from the mold before casting. The two main processes are lost-foam casting and full-mold casting.
Lost-foam casting
Lost-foam casting is a type of evaporative-pattern casting process that is similar to investment casting except foam is used for the pattern instead of wax. This process takes advantage of the low boiling point of foam to simplify the investment casting process by removing the need to melt the wax out of the mold.
Full-mold casting
Full-mold casting is an evaporative-pattern casting process which is a combination of sand casting and lost-foam casting. It uses an expanded polystyrene foam pattern which is then surrounded by sand, much like sand casting. The metal is then poured directly into the mold, which vaporizes the foam upon contact.
Non-expendable mold casting
Non-expendable mold casting differs from expendable processes in that the mold need not be reformed after each production cycle. This technique includes at least four different methods: permanent, die, centrifugal, and continuous casting. This form of casting also results in improved repeatability in parts produced and delivers near net shape results.
Permanent mold casting
Permanent mold casting is a metal casting process that employs reusable molds ("permanent molds"), usually made from metal. The most common process uses gravity to fill the mold; however, gas pressure or a vacuum are also used. A variation on the typical gravity casting process, called slush casting, produces hollow castings. Common casting metals are aluminium, magnesium, and copper alloys. Other materials include tin, zinc, and lead alloys; iron and steel are also cast, in graphite molds. Permanent molds, while lasting more than one casting, still have a limited life before wearing out.
Die casting
The die casting process forces molten metal under high pressure into mold cavities (which are machined into dies). Most die castings are made from nonferrous metals, specifically zinc, copper, and aluminium-based alloys, but ferrous metal die castings are possible. The die casting method is especially suited for applications where many small to medium-sized parts are needed with good detail, a fine surface quality and dimensional consistency.
Semi-solid metal casting
Semi-solid metal (SSM) casting is a modified die casting process that reduces or eliminates the residual porosity present in most die castings. Rather than using liquid metal as the feed material, SSM casting uses a higher viscosity feed material that is partially solid and partially liquid. A modified die casting machine is used to inject the semi-solid slurry into reusable hardened steel dies. The high viscosity of the semi-solid metal, along with the use of controlled die filling conditions, ensures that the semi-solid metal fills the die in a non-turbulent manner so that harmful porosity can be essentially eliminated.
Used commercially mainly for aluminium and magnesium alloys, SSM castings can be heat treated to the T4, T5 or T6 tempers. The combination of heat treatment, fast cooling rates (from using uncoated steel dies) and minimal porosity provides excellent combinations of strength and ductility. Other advantages of SSM casting include the ability to produce complex shaped parts net shape, pressure tightness, tight dimensional tolerances and the ability to cast thin walls.
Centrifugal casting
In this process molten metal is poured in the mold and allowed to solidify while the mold is rotating. Metal is poured into the center of the mold at its axis of rotation. Due to inertial force, the liquid metal is thrown out toward the periphery.
Centrifugal casting is both gravity and pressure independent since it creates its own force feed using a temporary sand mold held in a spinning chamber. Lead time varies with the application. Semi- and true-centrifugal processing permit 30–50 pieces/hr-mold to be produced, with a practical limit for batch processing of approximately 9000 kg total mass with a typical per-item limit of 2.3–4.5 kg.
Industrially, the centrifugal casting of railway wheels was an early application of the method developed by the German industrial company Krupp and this capability enabled the rapid growth of the enterprise.
Small art pieces such as jewelry are often cast by this method using the lost wax process, as the forces enable the rather viscous liquid metals to flow through very small passages and into fine details such as leaves and petals. This effect is similar to the benefits from vacuum casting, also applied to jewelry casting.
Continuous casting
Continuous casting is a refinement of the casting process for the continuous, high-volume production of metal sections with a constant cross-section. It is primarily used to produce semi-finished products for further processing. Molten metal is poured into an open-ended, water-cooled mold, which allows a 'skin' of solid metal to form over the still-liquid center, gradually solidifying the metal from the outside in. After solidification, the strand, as it is sometimes called, is continuously withdrawn from the mold. Predetermined lengths of the strand can be cut off by either mechanical shears or traveling oxyacetylene torches and transferred to further forming processes, or to a stockpile. Cast sizes can range from strip (a few millimeters thick by about five meters wide) to billets (90 to 160 mm square) to slabs (1.25 m wide by 230 mm thick). Sometimes, the strand may undergo an initial hot rolling process before being cut.
Continuous casting is used due to the lower costs associated with continuous production of a standard product, and also increased quality of the final product. Metals such as steel, copper, aluminum and lead are continuously cast, with steel being the metal with the greatest tonnages cast using this method.
Upcasting
Upcasting (also up-casting, upstream, or upward casting) is a method of either vertical or horizontal continuous casting of rods and pipes of various profiles (cylindrical, square, hexagonal, slabs, etc.) of 8–30 mm in diameter. Copper (Cu), bronze (Cu·Sn alloy), and nickel alloys are usually used, because of the greater casting speed (in the case of vertical upcasting) and the better physical properties obtained. An advantage of this method is that the metal is almost oxygen-free and that the rate of product crystallization (solidification) may be adjusted in a crystallizer, a high-temperature-resistant device that cools the growing metal rod or pipe with water.
The method is comparable to Czochralski method of growing silicon (Si) crystals, which is a metalloid.
Terminology
Metal casting processes uses the following terminology:
Pattern: An approximate duplicate of the final casting used to form the mold cavity.
Molding material: The material that is packed around the pattern and then the pattern is removed to leave the cavity where the casting material will be poured.
Flask: The rigid wood or metal frame that holds the molding material.
Cope: The top half of the pattern, flask, mold, or core.
Drag: The bottom half of the pattern, flask, mold, or core.
Core: An insert in the mold that produces internal features in the casting, such as holes.
Core print: The region added to the pattern, core, or mold used to locate and support the core.
Mold cavity: The combined open area of the molding material and core, where the metal is poured to produce the casting.
Riser: An extra void in the mold that fills with molten material to compensate for shrinkage during solidification.
Gating system: The network of connected channels that deliver the molten material to the mold cavities.
Pouring cup or pouring basin: The part of the gating system that receives the molten material from the pouring vessel.
Sprue: The pouring cup attaches to the sprue, which is the vertical part of the gating system. The other end of the sprue attaches to the runners.
Runners: The horizontal portion of the gating system that connects the sprues to the gates.
Gates: The controlled entrances from the runners into the mold cavities.
Vents: Additional channels that provide an escape for gases generated during the pour.
Parting line or parting surface: The interface between the cope and drag halves of the mold, flask, or pattern.
Draft: The taper on the casting or pattern that allows it to be withdrawn from the mold.
Core box: The mold or die used to produce the cores.
Chaplet: A metal support that holds the core in position in the mold; after casting it becomes an integral part of the casting.
Some specialized processes, such as die casting, use additional terminology.
Theory
Casting is a solidification process, which means the solidification phenomenon controls most of the properties of the casting. Moreover, most of the casting defects occur during solidification, such as gas porosity and solidification shrinkage.
Solidification occurs in two steps: nucleation and crystal growth. In the nucleation stage, solid particles form within the liquid. When these particles form, their internal energy is lower than that of the surrounding liquid, which creates an energy interface between the two. The formation of the surface at this interface requires energy, so as nucleation occurs, the material actually undercools (i.e. cools below its solidification temperature) because of the extra energy required to form the interface surfaces. It then recalesces, or heats back up to its solidification temperature, for the crystal growth stage. Nucleation occurs on a pre-existing solid surface, because not as much energy is required for a partial interface surface as for a complete spherical interface surface. This can be advantageous, because fine-grained castings possess better properties than coarse-grained castings. A fine grain structure can be induced by grain refinement or inoculation, which is the process of adding impurities to induce nucleation.
All of the nucleations represent a crystal, which grows as the heat of fusion is extracted from the liquid until there is no liquid left. The direction, rate, and type of growth can be controlled to maximize the properties of the casting. Directional solidification is when the material solidifies at one end and proceeds to solidify to the other end; this is the most ideal type of grain growth because it allows liquid material to compensate for shrinkage.
Cooling curves
Cooling curves are important in controlling the quality of a casting. The most important part of the cooling curve is the cooling rate, which affects the microstructure and properties. Generally speaking, an area of the casting which is cooled quickly will have a fine grain structure, and an area which cools slowly will have a coarse grain structure. The following terminology applies to the cooling curve of a pure metal or eutectic alloy.
Note that before the thermal arrest the material is a liquid and after it the material is a solid; during the thermal arrest the material is converting from a liquid to a solid. Also, note that the greater the superheat the more time there is for the liquid material to flow into intricate details.
The above describes the basic situation for a pure metal; however, most castings are of alloys, whose cooling curves have a different shape.
Note that there is no longer a thermal arrest, instead there is a freezing range. The freezing range corresponds directly to the liquidus and solidus found on the phase diagram for the specific alloy.
Chvorinov's rule
The local solidification time can be calculated using Chvorinov's rule, which is:
t = B (V/A)^n,
where t is the solidification time, V is the volume of the casting, A is the surface area of the casting that contacts the mold, n is a constant, and B is the mold constant. It is most useful in determining if a riser will solidify before the casting, because if the riser solidifies first then it is worthless.
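A minimal application of the rule, sketched in Python below, checks whether a riser solidifies after the plate casting it feeds; the plate and riser dimensions, the exponent n = 2 and the mold constant B are illustrative assumptions, not recommended values.

import math

def solidification_time(volume, area, B=2.0, n=2.0):
    """Chvorinov's rule: local solidification time (here in minutes)."""
    return B * (volume / area) ** n

# A 10 x 10 x 2 cm plate casting ...
casting_V = 10 * 10 * 2
casting_A = 2 * (10 * 10 + 10 * 2 + 10 * 2)
# ... fed by a cylindrical riser, 4 cm diameter by 8 cm tall
riser_V = math.pi * 2**2 * 8
riser_A = 2 * math.pi * 2**2 + 2 * math.pi * 2 * 8

t_casting = solidification_time(casting_V, casting_A)
t_riser = solidification_time(riser_V, riser_A)
print(t_casting, t_riser, t_riser > t_casting)   # the riser must freeze last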
The gating system
The gating system serves many purposes, the most important being conveying the liquid material to the mold, but also controlling shrinkage, the speed of the liquid, turbulence, and trapping dross. The gates are usually attached to the thickest part of the casting to assist in controlling shrinkage. In especially large castings multiple gates or runners may be required to introduce metal to more than one point in the mold cavity. The speed of the material is important because if the material is traveling too slowly it can cool before completely filling, leading to misruns and cold shuts. If the material is moving too fast then the liquid material can erode the mold and contaminate the final casting. The shape and length of the gating system can also control how quickly the material cools; short round or square channels minimize heat loss.
The gating system may be designed to minimize turbulence, depending on the material being cast. For example, steel, cast iron, and most copper alloys are turbulent insensitive, but aluminium and magnesium alloys are turbulent sensitive. The turbulent insensitive materials usually have a short and open gating system to fill the mold as quickly as possible. However, for turbulent sensitive materials short sprues are used to minimize the distance the material must fall when entering the mold. Rectangular pouring cups and tapered sprues are used to prevent the formation of a vortex as the material flows into the mold; these vortices tend to suck gas and oxides into the mold. A large sprue well is used to dissipate the kinetic energy of the liquid material as it falls down the sprue, decreasing turbulence. The choke, which is the smallest cross-sectional area in the gating system used to control flow, can be placed near the sprue well to slow down and smooth out the flow. Note that on some molds the choke is still placed on the gates to make separation of the part easier, but induces extreme turbulence. The gates are usually attached to the bottom of the casting to minimize turbulence and splashing.
The gating system may also be designed to trap dross. One method is to take advantage of the fact that some dross has a lower density than the base material so it floats to the top of the gating system. Therefore, long flat runners with gates that exit from the bottom of the runners can trap dross in the runners; note that long flat runners will cool the material more rapidly than round or square runners. For materials where the dross is a similar density to the base material, such as aluminium, runner extensions and runner wells can be advantageous. These take advantage of the fact that the dross is usually located at the beginning of the pour, therefore the runner is extended past the last gate(s) and the contaminates are contained in the wells. Screens or filters may also be used to trap contaminates.
It is important to keep the size of the gating system small, because it all must be cut from the casting and remelted to be reused. The efficiency, or yield, of a casting system can be calculated by dividing the weight of the casting by the weight of the metal poured. Therefore, the higher the number, the more efficient the gating system/risers.
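For example, in the following Python sketch a hypothetical 12 kg casting poured from 18 kg of metal gives a yield of about 67%; the weights are assumed values.

def casting_yield(casting_weight, poured_weight):
    """Yield of a casting system: casting weight divided by metal poured."""
    return casting_weight / poured_weight

print(f"{casting_yield(12.0, 18.0):.0%}")   # 67%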
Shrinkage
There are three types of shrinkage: shrinkage of the liquid, solidification shrinkage and patternmaker's shrinkage. The shrinkage of the liquid is rarely a problem because more material is flowing into the mold behind it. Solidification shrinkage occurs because metals are less dense as a liquid than a solid, so during solidification the metal density dramatically increases. Patternmaker's shrinkage refers to the shrinkage that occurs when the material is cooled from the solidification temperature to room temperature, which occurs due to thermal contraction.
Solidification shrinkage
Most materials shrink as they solidify, but a few materials do not, such as gray cast iron. For the materials that do shrink upon solidification, the type of shrinkage depends on how wide the freezing range is for the material. For materials with a narrow freezing range, less than about 50 °C, a cavity, known as a pipe, forms in the center of the casting, because the outer shell freezes first and progressively solidifies to the center. Pure and eutectic metals usually have narrow solidification ranges. These materials tend to form a skin in open air molds, therefore they are known as skin forming alloys. For materials with a wide freezing range, greater than about 110 °C, much more of the casting occupies the mushy or slushy zone (the temperature range between the solidus and the liquidus), which leads to small pockets of liquid trapped throughout and ultimately porosity. These castings tend to have poor ductility, toughness, and fatigue resistance. Moreover, for these types of materials to be fluid-tight, a secondary operation is required to impregnate the casting with a lower melting point metal or resin.
For the materials that have narrow solidification ranges, pipes can be overcome by designing the casting to promote directional solidification, which means the casting freezes first at the point farthest from the gate, then progressively solidifies toward the gate. This allows a continuous feed of liquid material to be present at the point of solidification to compensate for the shrinkage. Note that there is still a shrinkage void where the final material solidifies, but if designed properly, this will be in the gating system or riser.
Risers and riser aids
Risers, also known as feeders, are the most common way of providing directional solidification. A riser supplies liquid metal to the solidifying casting to compensate for solidification shrinkage. For a riser to work properly, the riser must solidify after the casting, otherwise it cannot supply liquid metal to shrinkage within the casting. Risers add cost to the casting because they lower the yield of each casting; i.e. more metal is lost as scrap for each casting. Another way to promote directional solidification is by adding chills to the mold. A chill is any material which will conduct heat away from the casting more rapidly than the material used for molding.
Risers are classified by three criteria. The first is if the riser is open to the atmosphere, if it is then it is called an open riser, otherwise it is known as a blind type. The second criterion is where the riser is located; if it is located on the casting then it is known as a top riser and if it is located next to the casting it is known as a side riser. Finally, if the riser is located on the gating system so that it fills after the molding cavity, it is known as a live riser or hot riser, but if the riser fills with materials that have already flowed through the molding cavity it is known as a dead riser or cold riser.
Riser aids are items used to assist risers in creating directional solidification or reducing the number of risers required. One of these items are chills which accelerate cooling in a certain part of the mold. There are two types: external and internal chills. External chills are masses of high-heat-capacity and high-thermal-conductivity material that are placed on an edge of the molding cavity. Internal chills are pieces of the same metal that is being poured, which are placed inside the mold cavity and become part of the casting. Insulating sleeves and toppings may also be installed around the riser cavity to slow the solidification of the riser. Heater coils may also be installed around or above the riser cavity to slow solidification.
Patternmaker's shrink
Shrinkage after solidification can be dealt with by using an oversized pattern designed specifically for the alloy used. Contraction rules, or shrink rules, are used to make the patterns oversized to compensate for this type of shrinkage. These rulers are up to 2.5% oversize, depending on the material being cast, and are mainly referred to by their percentage change. A pattern made to match an existing part would be made as follows: first, the existing part would be measured using a standard ruler; then, when constructing the pattern, the pattern maker would use a contraction rule, ensuring that the casting would contract to the correct size.
Note that patternmaker's shrinkage does not take phase change transformations into account. For example, eutectic reactions, martensitic reactions, and graphitization can cause expansions or contractions.
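As a small illustration of applying the contraction allowance described above, the Python sketch below oversizes a 250 mm pattern dimension by an assumed 1.3% shrink figure; the percentage is illustrative, not a recommendation for any particular alloy.

def pattern_dimension(finished_dim_mm, shrink_percent):
    """Oversize a pattern dimension to compensate for patternmaker's shrinkage."""
    return finished_dim_mm * (1 + shrink_percent / 100)

print(pattern_dimension(250.0, 1.3))   # a 250 mm feature is patterned at 253.25 mm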
Mold cavity
The mold cavity of a casting does not reflect the exact dimensions of the finished part due to a number of reasons. These modifications to the mold cavity are known as allowances and account for patternmaker's shrinkage, draft, machining, and distortion. In non-expendable processes, these allowances are imparted directly into the permanent mold, but in expendable mold processes they are imparted into the patterns, which later form the mold cavity. Note that for non-expendable molds an allowance is required for the dimensional change of the mold due to heating to operating temperatures.
For surfaces of the casting that are perpendicular to the parting line of the mold a draft must be included. This is so that the casting can be released in non-expendable processes or the pattern can be released from the mold without destroying the mold in expendable processes. The required draft angle depends on the size and shape of the feature, the depth of the mold cavity, how the part or pattern is being removed from the mold, the pattern or part material, the mold material, and the process type. Usually the draft is not less than 1%.
The machining allowance varies drastically from one process to another. Sand castings generally have a rough surface finish, therefore need a greater machining allowance, whereas die casting has a very fine surface finish, which may not need any machining tolerance. Also, the draft may provide enough of a machining allowance to begin with.
The distortion allowance is only necessary for certain geometries. For instance, U-shaped castings will tend to distort with the legs splaying outward, because the base of the shape can contract while the legs are constrained by the mold. This can be overcome by designing the mold cavity to slope the leg inward to begin with. Also, long horizontal sections tend to sag in the middle if ribs are not incorporated, so a distortion allowance may be required.
Cores may be used in expendable mold processes to produce internal features. The core can be of metal but it is usually done in sand.
Filling
There are a few common methods for filling the mold cavity: gravity, low-pressure, high-pressure, and vacuum.
Vacuum filling, also known as counter-gravity filling, is more metal efficient than gravity pouring because less material solidifies in the gating system. Gravity pouring only has a 15 to 50% metal yield as compared to 60 to 95% for vacuum pouring. There is also less turbulence, so the gating system can be simplified since it does not have to control turbulence. Plus, because the metal is drawn from below the top of the pool the metal is free from dross and slag, as these are lower density (lighter) and float to the top of the pool. The pressure differential helps the metal flow into every intricacy of the mold. Finally, lower temperatures can be used, which improves the grain structure. The first patented vacuum casting machine and process dates to 1879.
Low-pressure filling uses 5 to 15 psig (35 to 100 kPag) of air pressure to force liquid metal up a feed tube into the mold cavity. This eliminates turbulence found in gravity casting and increases density, repeatability, tolerances, and grain uniformity. After the casting has solidified the pressure is released and any remaining liquid returns to the crucible, which increases yield.
Tilt filling
Tilt filling, also known as tilt casting, is an uncommon filling technique where the crucible is attached to the gating system and both are slowly rotated so that the metal enters the mold cavity with little turbulence. The goal is to reduce porosity and inclusions by limiting turbulence. For most uses tilt filling is not feasible because of the following inherent problem: if the system is rotated slowly enough not to induce turbulence, the front of the metal stream begins to solidify, which results in misruns. If the system is rotated faster, it induces turbulence, which defeats the purpose. Durville of France was the first to try tilt casting, in the 1800s. He tried to use it to reduce surface defects when casting coinage from aluminium bronze.
Macrostructure
The grain macrostructure in ingots and most castings has three distinct regions or zones: the chill zone, the columnar zone, and the equiaxed zone.
The chill zone is so named because it occurs at the walls of the mold, where the wall chills the material. This is where the nucleation phase of the solidification process takes place. As more heat is removed, the grains grow toward the center of the casting, forming the columnar zone: thin, long columns perpendicular to the casting surface, which are undesirable because they have anisotropic properties. Finally, in the center, the equiaxed zone contains spherical, randomly oriented crystals. These are desirable because they have isotropic properties. The creation of this zone can be promoted by using a low pouring temperature, alloy inclusions, or inoculants.
Inspection
Common inspection methods for steel castings are magnetic particle testing and liquid penetrant testing. Common inspection methods for aluminum castings are radiography, ultrasonic testing, and liquid penetrant testing.
Defects
There are a number of problems that can be encountered during the casting process. The main types are: gas porosity, shrinkage defects, mold material defects, pouring metal defects, and metallurgical defects.
Casting process simulation
Casting process simulation uses numerical methods to calculate cast component quality, considering mold filling, solidification and cooling, and provides a quantitative prediction of casting mechanical properties, thermal stresses and distortion. Simulation accurately describes a cast component's quality up-front, before production starts. The casting rigging can be designed with respect to the required component properties. This has benefits beyond a reduction in pre-production sampling, as the precise layout of the complete casting system also leads to energy, material, and tooling savings.
The software supports the user in component design, the determination of melting practice and casting methoding through to pattern and mold making, heat treatment, and finishing. This saves costs along the entire casting manufacturing route.
Casting process simulation was initially developed at universities starting from the early 1970s, mainly in Europe and in the U.S., and is regarded as the most important innovation in casting technology over the last 50 years. Since the late 1980s, commercial programs have been available which make it possible for foundries to gain new insight into what is happening inside the mold or die during the casting process.
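Commercial casting simulation solves coupled filling, solidification, and stress problems in three dimensions; the toy sketch below is only a one-dimensional explicit finite-difference cooling model with assumed material properties, meant to illustrate the kind of numerical calculation involved, not how any particular commercial code works:

```python
import numpy as np

# Toy 1D explicit finite-difference cooling model of a metal slab in a mold.
# All property values are placeholders; real casting codes also handle latent
# heat, fluid flow during filling, and stress/distortion.
alpha = 1.2e-5             # thermal diffusivity of the metal, m^2/s (assumed)
L, n = 0.05, 51            # slab half-thickness (m) and node count
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha   # explicit stability limit (factor < 0.5)

T = np.full(n, 700.0)      # initial melt temperature, deg C (assumed)
T_mold = 25.0              # mold/ambient boundary temperature

for _ in range(2000):
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    T_new[0] = T_mold      # chilled surface held at mold temperature
    T_new[-1] = T_new[-2]  # insulated centreline (symmetry)
    T = T_new

print(f"centreline temperature after {2000*dt:.0f} s: {T[-1]:.0f} C")
```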
See also
Bronze and brass ornamental work
Bronze sculpture
Forging
Foundry
Porosity sealing
Spin casting
Spray forming
Stone mould
References
Notes
External links
Interactive casting design/manufacturing examples
Castings or Forgings? A look at the advantages of each manufacturing process
Video clip of a 50 gram arc cast alloy solidifying
Metalworking
Jewellery making
History of metallurgy
Metallurgy
Sculpture techniques
| Metal casting | [
"Chemistry",
"Materials_science",
"Engineering"
] | 7,172 | [
"Metallurgy",
"History of metallurgy",
"Materials science",
"nan"
] |
53,683 | https://en.wikipedia.org/wiki/Nuclear%20fallout | Nuclear fallout is residual radioactive material propelled into the upper atmosphere following a nuclear blast, so called because it "falls out" of the sky after the explosion and the shock wave has passed. It commonly refers to the radioactive dust and ash created when a nuclear weapon explodes. The amount and spread of fallout is a product of the size of the weapon and the altitude at which it is detonated. Fallout may get entrained with the products of a pyrocumulus cloud and when combined with precipitation falls as black rain (rain darkened by soot and other particulates), which occurred within 30–40 minutes of the atomic bombings of Hiroshima and Nagasaki. This radioactive dust, usually consisting of fission products mixed with bystanding atoms that are neutron-activated by exposure, is a form of radioactive contamination.
Types of fallout
Fallout comes in two varieties. The first is a small amount of carcinogenic material with a long half-life. The second, depending on the height of detonation, is a large quantity of radioactive dust and sand with a short half-life.
All nuclear explosions produce fission products, un-fissioned nuclear material, and weapon residues vaporized by the heat of the fireball. These materials are limited to the original mass of the device, but include radioisotopes with long lives. When the nuclear fireball does not reach the ground, this is the only fallout produced. Its amount can be estimated from the fission-fusion design and yield of the weapon.
Global fallout
After the detonation of a weapon at or above the fallout-free altitude (an air burst), fission products, un-fissioned nuclear material, and weapon residues vaporized by the heat of the fireball condense into a suspension of particles 10 nm to 20 μm in diameter. This size of particulate matter, lifted to the stratosphere, may take months or years to settle, and may do so anywhere in the world. Its radioactive characteristics increase the statistical cancer risk, with up to 2.4 million people having died by 2020 from the measurable elevated atmospheric radioactivity after the widespread nuclear weapons testing of the 1950s, peaking in 1963 (the Bomb pulse). Levels reached about 0.15 mSv per year worldwide, or about 7% of average background radiation dose from all sources, and have slowly decreased since, with natural background radiation levels being around 1 mSv.
Radioactive fallout has occurred around the world; for example, people have been exposed to iodine-131 from atmospheric nuclear testing. Fallout accumulates on vegetation, including fruits and vegetables. Starting in 1951, people may have been exposed, depending on whether they were outside, the weather, and whether they consumed contaminated milk, vegetables or fruit. Exposure can occur on an intermediate time scale or over the long term. Intermediate exposure results from fallout that has been deposited in the troposphere and brought down by precipitation during the first month. Long-term fallout can sometimes occur from the deposition of tiny particles carried in the stratosphere. By the time stratospheric fallout begins to reach the earth, the radioactivity is very much decreased. Also, after a year it is estimated that a sizable quantity of fission products moves from the northern to the southern stratosphere. The intermediate time scale is between 1 and 30 days, with long-term fallout occurring after that.
Examples of both intermediate and long term fallout occurred after the 1986 Chernobyl accident, which contaminated over of land in Ukraine and Belarus. The main fuel of the reactor was uranium, and surrounding this was graphite, both of which were vaporized by the hydrogen explosion that destroyed the reactor and breached its containment. An estimated 31 people died within a few weeks after this happened, including two plant workers killed at the scene. Although residents were evacuated within 36 hours, people started to complain of vomiting, migraines and other major signs of radiation sickness. The officials of Ukraine had to close off an area with an radius. Long term effects included at least 6,000 cases of thyroid cancer, mainly among children. Fallout spread throughout Europe, with Northern Scandinavia receiving a heavy dose, contaminating reindeer herds in Lapland, and salad greens becoming almost unavailable in France. Some sheep farms in North Wales and the North of England were required to monitor radioactivity levels in their flocks until the control was lifted in 2012.
Local fallout
During detonations of devices at ground level (surface burst), below the fallout-free altitude, or in shallow water, heat vaporizes large amounts of earth or water, which is drawn up into the radioactive cloud. This material becomes radioactive when it combines with fission products or other radio-contaminants, or when it is neutron-activated.
Common isotopes differ in their ability to form fallout. Some radiation taints large amounts of land and drinking water, causing mutations throughout animal and human life.
A surface burst generates large amounts of particulate matter, composed of particles from less than 100 nm to several millimeters in diameter—in addition to very fine particles that contribute to worldwide fallout. The larger particles spill out of the stem and cascade down the outside of the fireball in a downdraft even as the cloud rises, so fallout begins to arrive near ground zero within an hour. More than half the total bomb debris lands on the ground within about 24 hours as local fallout. Chemical properties of the elements in the fallout control the rate at which they are deposited on the ground. Less volatile elements deposit first.
Severe local fallout contamination can extend far beyond the blast and thermal effects, particularly in the case of high yield surface detonations. The ground track of fallout from an explosion depends on the weather from the time of detonation onward. In stronger winds, fallout travels faster but takes the same time to descend, so although it covers a larger path, it is more spread out or diluted. Thus, the width of the fallout pattern for any given dose rate is reduced where the downwind distance is increased by higher winds. The total amount of activity deposited up to any given time is the same irrespective of the wind pattern, so overall casualty figures from fallout are generally independent of winds. But thunderstorms can bring down activity as rain allows fallout to drop more rapidly, particularly if the mushroom cloud is low enough to be below ("washout"), or mixed with ("rainout"), the thunderstorm.
Whenever individuals remain in a radiologically contaminated area, such contamination leads to an immediate external radiation exposure as well as a possible later internal hazard from inhalation and ingestion of radiocontaminants, such as the rather short-lived iodine-131, which is accumulated in the thyroid.
Factors affecting fallout
Location
There are two main considerations for the location of an explosion: height and surface composition. A nuclear weapon detonated in the air, called an air burst, produces less fallout than a comparable explosion near the ground. A nuclear explosion in which the fireball touches the ground pulls soil and other materials into the cloud and neutron activates it before it falls back to the ground. An air burst produces a relatively small amount of the highly radioactive heavy metal components of the device itself.
In case of water surface bursts, the particles tend to be rather lighter and smaller, producing less local fallout but extending over a greater area. The particles contain mostly sea salts with some water; these can have a cloud seeding effect causing local rainout and areas of high local fallout. Fallout from a seawater burst is difficult to remove once it has soaked into porous surfaces because the fission products are present as metallic ions that chemically bond to many surfaces. Water and detergent washing effectively removes less than 50% of this chemically bonded activity from concrete or steel. Complete decontamination requires aggressive treatment like sandblasting, or acidic treatment. After the Crossroads underwater test, it was found that wet fallout must be immediately removed from ships by continuous water washdown (such as from the fire sprinkler system on the decks).
Parts of the sea bottom may become fallout. After the Castle Bravo test, white dust—contaminated calcium oxide particles originating from pulverized and calcined corals—fell for several hours, causing beta burns and radiation exposure to the inhabitants of the nearby atolls and the crew of the Daigo Fukuryū Maru fishing boat. The scientists called the fallout Bikini snow.
For subsurface bursts, there is an additional phenomenon present called "base surge". The base surge is a cloud that rolls outward from the bottom of the subsiding column, which is caused by an excessive density of dust or water droplets in the air. For underwater bursts, the visible surge is, in effect, a cloud of liquid (usually water) droplets with the property of flowing almost as if it were a homogeneous fluid. After the water evaporates, an invisible base surge of small radioactive particles may persist.
For subsurface land bursts, the surge is made up of small solid particles, but it still behaves like a fluid. A soil earth medium favors base surge formation in an underground burst. Although the base surge typically contains only about 10% of the total bomb debris in a subsurface burst, it can create larger radiation doses than fallout near the detonation, because it arrives sooner than fallout, before much radioactive decay has occurred.
Meteorological
Meteorological conditions greatly influence fallout, particularly local fallout. Atmospheric winds are able to bring fallout over large areas. For example, as a result of a Castle Bravo surface burst of a 15 Mt thermonuclear device at Bikini Atoll on 1 March 1954, a roughly cigar-shaped area of the Pacific extending over 500 km downwind and varying in width to a maximum of 100 km was severely contaminated. There are three very different versions of the fallout pattern from this test, because the fallout was measured only on a small number of widely spaced Pacific Atolls. The two alternative versions both ascribe the high radiation levels at north Rongelap to a downwind hot spot caused by the large amount of radioactivity carried on fallout particles of about 50–100 micrometres size.
After Bravo, it was discovered that fallout landing on the ocean disperses in the top water layer (above the thermocline at 100 m depth), and the land equivalent dose rate can be calculated by multiplying the ocean dose rate at two days after burst by a factor of about 530. In other 1954 tests, including Yankee and Nectar, hot spots were mapped out by ships with submersible probes, and similar hot spots occurred in 1956 tests such as Zuni and Tewa.
However, the major U.S. "DELFIC" (Defence Land Fallout Interpretive Code) computer calculations use the natural size distributions of particles in soil instead of the afterwind sweep-up spectrum, and this results in more straightforward fallout patterns lacking the downwind hot spot.
Snow and rain, especially if they come from considerable heights, accelerate local fallout. Under special meteorological conditions, such as a local rain shower that originates above the radioactive cloud, limited areas of heavy contamination just downwind of a nuclear blast may be formed.
Effects
A wide range of biological changes may follow the irradiation of animals. These vary from rapid death following high doses of penetrating whole-body radiation, to essentially normal lives for a variable period of time until the development of delayed radiation effects, in a portion of the exposed population, following low dose exposures.
The unit of actual exposure is the röntgen, defined in terms of ionisation per unit volume of air. All ionisation-based instruments (including Geiger counters and ionisation chambers) measure exposure. However, effects depend on the energy absorbed per unit mass of tissue, not the exposure measured in air. A deposit of 1 joule per kilogram has the unit of 1 gray (Gy). For 1 MeV energy gamma rays, an exposure of 1 röntgen in air produces a dose of about 0.01 gray (1 centigray, cGy) in water or surface tissue. Because of shielding by the tissue surrounding the bones, the bone marrow only receives about 0.67 cGy when the air exposure is 1 röntgen and the surface skin dose is 1 cGy. Some lower values reported for the amount of radiation that would kill 50% of personnel (the LD50) refer to bone marrow dose, which is only 67% of the air dose.
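A minimal sketch applying the conversion factors quoted above (about 1 cGy of surface-tissue dose per röntgen for ~1 MeV gamma rays, with bone marrow receiving roughly 67% of that); the 450 R exposure in the example is an arbitrary illustrative value:

```python
def doses_from_exposure(roentgen, tissue_cgy_per_r=1.0, marrow_factor=0.67):
    """Convert an air exposure in roentgen to rough tissue and marrow doses.

    Uses the factors quoted above for ~1 MeV gamma rays: about 1 cGy of
    surface-tissue dose per roentgen, with bone marrow receiving ~67% of that.
    """
    tissue_cgy = roentgen * tissue_cgy_per_r
    marrow_cgy = tissue_cgy * marrow_factor
    return tissue_cgy, marrow_cgy

tissue, marrow = doses_from_exposure(450.0)   # illustrative exposure in R
print(f"surface tissue ~{tissue/100:.1f} Gy, marrow ~{marrow/100:.1f} Gy")
```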
Short term
The dose that would be lethal to 50% of a population (the LD50) is a common parameter used to compare the effects of various fallout types or circumstances. Usually, the term is defined for a specific time, and limited to studies of acute lethality. The common time periods used are 30 days or less for most small laboratory animals and up to 60 days for large animals and humans. The LD50 figure assumes that the individuals did not receive other injuries or medical treatment.
In the 1950s, the LD50 for gamma rays was set at 3.5 Gy, while under more dire conditions of war (a bad diet, little medical care, poor nursing) the LD50 was 2.5 Gy (250 rad). There have been few documented cases of survival beyond 6 Gy. One person at Chernobyl survived a dose of more than 10 Gy, but many of the persons exposed there were not uniformly exposed over their entire body. If a person is exposed in a non-homogeneous manner then a given dose (averaged over the entire body) is less likely to be lethal. For instance, if a person gets a hand/low arm dose of 100 Gy, which gives them an overall dose of 4 Gy, they are more likely to survive than a person who gets a 4 Gy dose over their entire body. A hand dose of 10 Gy or more would likely result in loss of the hand. A British industrial radiographer who was estimated to have received a hand dose of 100 Gy over the course of his lifetime lost his hand because of radiation dermatitis. Most people become ill after an exposure to 1 Gy or more. Fetuses are often more vulnerable to radiation and may miscarry, especially in the first trimester.
Because of the large amount of short-lived fission products, the activity and radiation levels of nuclear fallout decrease very quickly after being released; it is reduced by 50% in the first hour after a detonation, then by 80% during the first day. As a result, early gross decontamination, such as removing contaminated articles of outer clothing, is more effective than delayed but more thorough cleaning. Most areas become fairly safe for travel and decontamination after three to five weeks.
One hour after a surface burst, the radiation from fallout in the crater region is 30 grays per hour (Gy/h). Civilian dose rates in peacetime range from 30 to 100 μGy per year.
For yields of up to 10 kt, prompt radiation is the dominant producer of casualties on the battlefield. Humans receiving an acute incapacitating dose (30 Gy) have their performance degraded almost immediately and become ineffective within several hours. However, they do not die until five to six days after exposure, assuming they do not receive any other injuries. Individuals receiving less than a total of 1.5 Gy are not incapacitated. People receiving doses greater than 1.5 Gy become disabled, and some eventually die.
A dose of 5.3 Gy to 8.3 Gy is considered lethal but not immediately incapacitating. Personnel exposed to this amount of radiation have their cognitive performance degraded in two to three hours, depending on how physically demanding the tasks they must perform are, and remain in this disabled state at least two days. However, at that point they experience a recovery period and can perform non-demanding tasks for about six days, after which they relapse for about four weeks. At this time they begin exhibiting symptoms of radiation poisoning of sufficient severity to render them totally ineffective. Death follows at approximately six weeks after exposure, although outcomes may vary.
Long term
Late or delayed effects of radiation occur following a wide range of doses and dose rates. Delayed effects may appear months to years after irradiation and include a wide variety of effects involving almost all tissues or organs. Some of the possible delayed consequences of radiation injury, with the rates above the background prevalence, depending on the absorbed dose, include carcinogenesis, cataract formation, chronic radiodermatitis, decreased fertility, and genetic mutations.
Presently, the only teratological effect observed in humans following the nuclear attacks on highly populated areas is microcephaly, the only proven malformation or congenital abnormality found in the in utero developing human fetuses present during the Hiroshima and Nagasaki bombings. Of all the pregnant women who were close enough to be exposed to the prompt burst of intense neutron and gamma doses in the two cities, the total number of children born with microcephaly was below 50. No statistically demonstrable increase in congenital malformations was found among the later-conceived children born to survivors of the nuclear detonations at Hiroshima and Nagasaki. The surviving women of Hiroshima and Nagasaki who could conceive and were exposed to substantial amounts of radiation went on to have children with no higher incidence of abnormalities than the Japanese average.
The Baby Tooth Survey, founded by the husband-and-wife team of physicians Eric Reiss and Louise Reiss, was a research effort focused on detecting the presence of strontium-90, a cancer-causing radioactive isotope created by the more than 400 atomic tests conducted above ground. Because of its chemical similarity to calcium, strontium-90 is absorbed from water and dairy products into the bones and teeth. The team sent collection forms to schools in the St. Louis, Missouri area, hoping to gather 50,000 teeth each year. Ultimately, the project collected over 300,000 teeth from children of various ages before it was ended in 1970.
Preliminary results of the Baby Tooth Survey were published in the 24 November 1961, edition of the journal Science, and showed that levels of strontium-90 had risen steadily in children born in the 1950s, with those born later showing the most pronounced increases. The results of a more comprehensive study of the elements found in the teeth collected showed that children born after 1963 had levels of strontium-90 in their baby teeth that was 50 times higher than that found in children born before large-scale atomic testing began. The findings helped convince U.S. President John F. Kennedy to sign the Partial Nuclear Test Ban Treaty with the United Kingdom and Soviet Union, which ended the above-ground nuclear weapons testing that created the greatest amounts of atmospheric nuclear fallout.
Some considered the Baby Tooth Survey a "campaign [that] effectively employed a variety of media advocacy strategies" to alarm the public and "galvanized" support against atmospheric nuclear testing, and putting an end to such testing was commonly viewed as a positive outcome for a myriad of reasons. The survey could not show at the time, nor in the decades that have elapsed, that the levels of global strontium-90 or fallout in general were life-threatening, primarily because "50 times the strontium-90 from before nuclear testing" is a minuscule number, and multiplication of minuscule numbers results in only a slightly larger minuscule number. Moreover, the Radiation and Public Health Project that currently retains the teeth has had its stance and publications criticized: a 2003 article in The New York Times states that many scientists consider the group's work controversial, with little credibility with the scientific establishment, while some scientists consider it "good, careful work". In an April 2014 article in Popular Science, Sarah Fecht argues that the group's work, specifically the widely discussed case of cherry-picking data to suggest that fallout from the 2011 Fukushima accident caused infant deaths in America, is "junk science", as despite their papers being peer-reviewed, independent attempts to corroborate their results return findings that are not in agreement with what the organization suggests. The organization had earlier suggested the same thing occurred after the 1979 Three Mile Island accident, though the Atomic Energy Commission argued this was unfounded. The tooth survey, and the organization's more recent campaigning aimed at US nuclear electric power stations, is detailed and critically labelled as the "Tooth Fairy issue" by the Nuclear Regulatory Commission.
Effects on the environment
In the event of a large-scale nuclear exchange, the effects would be drastic on the environment as well as directly to the human population. Within direct blast zones everything would be vaporized and destroyed. Cities damaged but not completely destroyed would lose their water system due to the loss of power and supply lines rupturing. Within the local nuclear fallout pattern suburban areas' water supplies would become extremely contaminated. At this point stored water would be the only safe water to use. All surface water within the fallout would be contaminated by falling fission products.
Within the first few months of the nuclear exchange the nuclear fallout will continue to develop and detriment the environment. Dust, smoke, and radioactive particles will fall hundreds of kilometers downwind of the explosion point and pollute surface water supplies. Iodine-131 would be the dominant fission product within the first few weeks, and in the months following the dominant fission product would be strontium-90. These fission products would remain in the fallout dust, resulting in rivers, lakes, sediments, and soils being contaminated with the fallout.
Rural areas' water supplies would be slightly less polluted by fission particles in intermediate and long-term fallout than cities and suburban areas. Without additional contamination, the lakes, reservoirs, rivers, and runoff would be gradually less contaminated as water continued to flow through its system.
Groundwater supplies such as aquifers would however remain unpolluted initially in the event of a nuclear fallout. Over time the groundwater could become contaminated with fallout particles, and would remain contaminated for over 10 years after a nuclear engagement. It would take hundreds or thousands of years for an aquifer to become completely pure. Groundwater would still be safer than surface water supplies and would need to be consumed in smaller doses. Long term, cesium-137 and strontium-90 would be the major radionuclides affecting the fresh water supplies.
The dangers of nuclear fallout do not stop at increased risks of cancer and radiation sickness, but also include the presence of radionuclides in human organs from food. A fallout event would leave fission particles in the soil for animals to consume, followed by humans. Radioactively contaminated milk, meat, fish, vegetables, grains and other food would all be dangerous because of fallout.
From 1945 to 1967 the U.S. conducted hundreds of nuclear weapon tests. Atmospheric testing took place over the US mainland during this time and as a consequence scientists have been able to study the effect of nuclear fallout on the environment. Detonations conducted near the surface of the earth irradiated thousands of tons of soil. Of the material drawn into the atmosphere, portions of radioactive material will be carried by low altitude winds and deposited in surrounding areas as radioactive dust. The material intercepted by high altitude winds will continue to travel. When a radiation cloud at high altitude is exposed to rainfall, the radioactive fallout will contaminate the downwind area below.
Agricultural fields and plants will absorb the contaminated material and animals will consume the radioactive material. As a result, the nuclear fallout may cause livestock to become ill or die, and if consumed the radioactive material will be passed on to humans.
The damage to other living organisms as a result of nuclear fallout depends on the species. Mammals particularly are extremely sensitive to nuclear radiation, followed by birds, plants, fish, reptiles, crustaceans, insects, moss, lichen, algae, bacteria, mollusks, and viruses.
Climatologist Alan Robock and atmospheric and oceanic sciences professor Brian Toon created a model of a hypothetical small-scale nuclear war in which approximately 100 weapons are used. In this scenario, the fires would inject enough soot into the atmosphere to block sunlight, lowering global temperatures by more than one degree Celsius. The result would have the potential to create widespread food insecurity (nuclear famine). Precipitation across the globe would be disrupted as a result. If enough soot were introduced into the upper atmosphere, the planet's ozone layer could potentially be depleted, affecting plant growth and human health.
Radiation from the fallout would linger in soil, plants, and food chains for years. Marine food chains are more vulnerable to the nuclear fallout and the effects of soot in the atmosphere.
The detrimental effect of fallout radionuclides in the human food chain is apparent in the lichen-caribou-Eskimo studies in Alaska, where the primary effect observed in humans was thyroid dysfunction. Nuclear fallout is severely detrimental to human survival and the biosphere: it degrades the quality of the atmosphere, soil, and water and can drive species to extinction.
Fallout protection
During the Cold War, the governments of the U.S., the USSR, Great Britain, and China attempted to educate their citizens about surviving a nuclear attack by providing procedures on minimizing short-term exposure to fallout. This effort commonly became known as Civil Defense.
Fallout protection is almost exclusively concerned with protection from radiation. Radiation from fallout is encountered in the forms of alpha, beta, and gamma radiation, and as ordinary clothing affords protection from alpha and beta radiation, most fallout protection measures deal with reducing exposure to gamma radiation. For the purposes of radiation shielding, many materials have a characteristic halving thickness: the thickness of a layer of a material sufficient to reduce gamma radiation exposure by 50%. Halving thicknesses of common materials include: 1 cm (0.4 inch) of lead, 6 cm (2.4 inches) of concrete, 9 cm (3.6 inches) of packed earth or 150 m (500 ft) of air. When multiple thicknesses are built, the shielding multiplies. A practical fallout shield is ten halving-thicknesses of a given material, such as 90 cm (36 inches) of packed earth, which reduces gamma ray exposure by approximately 1024 times (2^10). A shelter built with these materials for the purposes of fallout protection is known as a fallout shelter.
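A small sketch of the halving-thickness arithmetic described above; the material values are the approximate figures quoted in this section:

```python
def attenuation_factor(thickness_cm, halving_thickness_cm):
    """Factor by which gamma exposure is reduced by a shield of given thickness."""
    return 2 ** (thickness_cm / halving_thickness_cm)

# Halving thicknesses quoted above (approximate values).
halving = {"lead": 1.0, "concrete": 6.0, "packed earth": 9.0}

# 90 cm of packed earth = 10 halving thicknesses -> about 1024x reduction.
print(attenuation_factor(90.0, halving["packed earth"]))   # 1024.0

# A 30 cm concrete wall gives 2^(30/6) = 32x reduction.
print(attenuation_factor(30.0, halving["concrete"]))       # 32.0
```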
Personal protective equipment
As the nuclear energy sector continues to grow, the international rhetoric surrounding nuclear warfare intensifies, and the ever-present threat of radioactive materials falling into the hands of dangerous people persists, many scientists are working hard to find the best way to protect human organs from the harmful effects of high energy radiation. Acute radiation syndrome (ARS) is the most immediate risk to humans when exposed to ionizing radiation in dosages greater than around 0.1 Gy/hr. Radiation in the low energy spectrum (alpha and beta radiation) with minimal penetrating power is unlikely to cause significant damage to internal organs (although if contamination is ingested, inhaled or on the skin, and thus in close proximity to tissues and organs, the effect of these 'massive' particles may be catastrophic). The high penetrating power of gamma and neutron radiation, however, easily penetrates the skin and many thin shielding mechanisms to cause cellular degeneration in the stem cells found in bone marrow. While full body shielding in a secure fallout shelter as described above is the most optimal form of radiation protection, it requires being locked in a very thick bunker for a significant amount of time. In the event of a nuclear catastrophe of any kind, it is imperative to have mobile protection equipment for medical and security personnel to perform necessary containment, evacuation, and any number of other important public safety objectives. The mass of the shielding material required to properly protect the entire body from high energy radiation would make functional movement essentially impossible. This has led scientists to begin researching the idea of partial body protection: a strategy inspired by hematopoietic stem cell transplantation (HSCT). The idea is to use enough shielding material to sufficiently protect the high concentration of bone marrow in the pelvic region, which contains enough regenerative stem cells to repopulate the body with unaffected bone marrow. More information on bone marrow shielding can be found in the Health Physics Radiation Safety Journal article Selective Shielding of Bone Marrow: An Approach to Protecting Humans from External Gamma Radiation, or in the Organisation for Economic Co-operation and Development (OECD) and the Nuclear Energy Agency (NEA)'s 2015 report: Occupational Radiation Protection in Severe Accident Management.
The seven-ten rule
The danger of radiation from fallout also decreases rapidly with time due in large part to the exponential decay of the individual radionuclides. A book by Cresson H. Kearny presents data showing that for the first few days after the explosion, the radiation dose rate is reduced by a factor of ten for every seven-fold increase in the number of hours since the explosion. He presents data showing that "it takes about seven times as long for the dose rate to decay from 1000 roentgens per hour (1000 R/hr) to 10 R/hr (48 hours) as to decay from 1000 R/hr to 100 R/hr (7 hours)." This is a rule of thumb based on observed data, not a precise relation.
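A sketch of the seven-ten rule, implemented here as the commonly used t^-1.2 decay approximation (an assumption consistent with, but not stated in, the rule as quoted); it reproduces Kearny's 1000 R/hr example to within rounding:

```python
def dose_rate_seven_ten(rate_at_1h, hours):
    """Rough fallout dose rate at a given time using the seven-ten rule.

    The rule of thumb (equivalent to decay roughly proportional to t^-1.2)
    says the rate drops tenfold for every sevenfold increase in time since
    the explosion. Valid only for the first days to weeks after the burst.
    """
    return rate_at_1h * hours ** -1.2

# Starting from 1000 R/hr at 1 hour:
for t in (1, 7, 49):
    print(f"t = {t:3d} h: ~{dose_rate_seven_ten(1000.0, t):6.1f} R/hr")
```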
United States government guides for fallout protection
The United States government, often the Office of Civil Defense in the Department of Defense, provided guides to fallout protection in the 1960s, frequently in the form of booklets. These booklets provided information on how to best survive nuclear fallout. They also included instructions for various fallout shelters, whether for a family, a hospital, or a school shelter were provided. There were also instructions for how to create an improvised fallout shelter, and what to do to best increase a person's chances for survival if they were unprepared.
The central idea in these guides is that materials like concrete, soil, and sand are necessary to shield a person from fallout particles and radiation. A significant amount of materials of this type are necessary to protect a person from fallout radiation, so safety clothing cannot protect a person from fallout radiation. However, protective clothing can keep fallout particles off a person's body, but the radiation from these particles will still permeate through the clothing. For safety clothing to be able to block the fallout radiation, it would have to be so thick and heavy that a person could not function.
These guides indicated that fallout shelters should contain enough resources to keep their occupants alive for up to two weeks. Community shelters were preferred over single-family shelters. The more people in a shelter, the greater the quantity and variety of resources that shelter would be equipped with. These community shelters would also help facilitate efforts to recuperate the community in the future. Single-family shelters should be built below ground if possible. Many different types of fallout shelters could be made for a relatively small amount of money. A common format for fallout shelters was to build the shelter underground, with solid concrete blocks to act as the roof. If a shelter could only be partially underground, it was recommended to mound over that shelter with as much soil as possible. If a house had a basement, it was best for a fallout shelter to be constructed in a corner of the basement. The center of a basement is where the most radiation will be, because the easiest way for radiation to enter a basement is from the floor above. Two of the walls of the shelter in a basement corner will be the basement walls that are surrounded by soil outside. Cinder blocks filled with sand or soil were highly recommended for the other two walls. Concrete blocks, or some other dense material, should be used as a roof for a basement fallout shelter because the floor of a house is not an adequate roof for a fallout shelter. These shelters should contain water, food, tools, and a method for dealing with human waste.
If a person did not have a shelter previously built, these guides recommended trying to get underground. If a person had a basement but no shelter, they should put food, water, and a waste container in the corner of the basement. Then items such as furniture should be piled up to create walls around the person in the corner. If the underground cannot be reached, a tall apartment building at least ten miles from the blast was recommended as a good fallout shelter. People in these buildings should get as close to the center of the building as possible and avoid the top and ground floors.
Schools were the preferred fallout shelters according to the Office of Civil Defense. Schools, not including universities, contained around one-quarter of the population of the United States when they were in session at that time. The distribution of schools across the nation reflected the population density, and they were often the most suitable building in a community to act as a fallout shelter. Schools also already had organization with leaders in place. The Office of Civil Defense recommended altering current schools and the construction of future schools to include thicker walls and roofs, better-protected electrical systems, a purifying ventilation system, and a protected water pump. The Office of Civil Defense determined that around 10 square feet of net area per person were necessary in schools that were to function as a fallout shelter. A normal classroom could provide 180 people with area to sleep. If an attack were to happen, all the unnecessary furniture was to be moved out of the classrooms to make more room for people. It was recommended to keep one or two tables in the room if possible to use as a food-serving station.
The Office of Civil Defense conducted four case studies to find the cost of turning four standing schools into fallout shelters and what their capacity would be. The cost of the schools per occupant in the 1960s were $66.00, $127.00, $50.00, and $180.00. The capacity of people these schools could house as shelters were 735, 511, 484, and 460 respectively.
The US Department of Homeland Security and the Federal Emergency Management Agency in coordination with other agencies concerned with public protection in the aftermath of a nuclear detonation have developed more recent guidance documents that build on the older Civil Defense frameworks. Planning Guidance for Response to a Nuclear Detonation was published in 2022 and provided in-depth analysis and response planning for local government jurisdictions.
Nuclear reactor accident
Fallout can also be produced by nuclear accidents, although a nuclear reactor does not explode like a nuclear weapon. The isotopic signature of bomb fallout is very different from the fallout from a serious power reactor accident (such as Chernobyl or Fukushima).
The key differences are in volatility and half-life.
Volatility
The boiling point of an element (or its compounds) determines the percentage of that element that a power reactor accident releases. The ability of an element to form a solid determines the rate at which it is deposited on the ground after having been injected into the atmosphere by a nuclear detonation or accident.
Half-life
A half-life is the time it takes for the activity of a radioactive substance to decay to half its initial value. A large amount of short-lived isotopes such as 97Zr is present in bomb fallout. This isotope and other short-lived isotopes are constantly generated in a power reactor, but because the reactor remains critical over a long period of time, the majority of these short-lived isotopes decay before they can be released.
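A minimal sketch of the half-life relationship; the ~17 hour half-life used here for 97Zr is an approximate value assumed for illustration:

```python
def activity_remaining(initial, elapsed_hours, half_life_hours):
    """Activity of a radionuclide left after a given time (exponential decay)."""
    return initial * 0.5 ** (elapsed_hours / half_life_hours)

# 97Zr has a half-life of roughly 17 hours (approximate value).
for t in (17, 34, 170):
    print(f"after {t:3d} h: {activity_remaining(100.0, t, 17.0):.3f} % of initial")
```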
Preventive measures
Nuclear fallout can occur due to a number of different sources. One of the most common potential sources of nuclear fallout is that of nuclear reactors. Because of this, steps must be taken to ensure the risk of nuclear fallout at nuclear reactors is controlled.
In the 1950s and 1960s, the United States Atomic Energy Commission (AEC) began developing safety regulations against nuclear fallout for civilian nuclear reactors. Because the effects of nuclear fallout are more widespread and longer lasting than other forms of energy production accidents, the AEC desired a more proactive response towards potential accidents than ever before. One step to prevent nuclear reactor accidents was the Price-Anderson Act. Passed by Congress in 1957, the Price-Anderson Act ensured government assistance above the $60 million covered by private insurance companies in the case of a nuclear reactor accident. The main goal of the Price-Anderson Act was to protect the multi-billion-dollar companies overseeing the production of nuclear reactors. Without this protection, the nuclear reactor industry could potentially come to a halt, and the protective measures against nuclear fallout would be reduced. However, because of the limited experience in nuclear reactor technology, engineers had a difficult time calculating the potential risk of released radiation. Engineers were forced to imagine every unlikely accident, and the potential fallout associated with each accident. The AEC's regulations against potential nuclear reactor fallout were centered on the ability of the power plant to withstand the Maximum Credible Accident (MCA). The MCA involved a "large release of radioactive isotopes after a substantial meltdown of the reactor fuel when the reactor coolant system failed through a Loss-of-Coolant Accident". The prevention of the MCA enabled a number of new nuclear fallout preventive measures. Static safety systems, or systems without power sources or user input, were enabled to prevent potential human error. Containment buildings, for example, were reliably effective at containing a release of radiation and did not need to be powered or turned on to operate. Active protective systems, although far less dependable, can do many things that static systems cannot. For example, a system to replace the escaping steam of a cooling system with cooling water could prevent reactor fuel from melting. However, this system would need a sensor to detect the presence of escaping steam. Sensors can fail, and a lack of preventive measures would result in a local nuclear fallout. The AEC had to choose, then, between active and static systems to protect the public from nuclear fallout. With a lack of set standards and probabilistic calculations, the AEC and the industry became divided on the best safety precautions to use.
This division gave rise to the Nuclear Regulatory Commission (NRC). The NRC was committed to 'regulations through research', which gave the regulatory body a knowledge bank of research on which to base its regulations. Much of the research done by the NRC sought to move safety systems from a deterministic viewpoint to a new probabilistic approach. The deterministic approach sought to foresee all problems before they arose. The probabilistic approach uses a more mathematical method to weigh the risks of potential radiation leaks. Much of the probabilistic safety approach can be drawn from radiative transfer theory in physics, which describes how radiation travels in free space and through barriers. Today, the NRC is still the leading regulatory body for nuclear reactor power plants.
Determining extent of nuclear fallout
The International Nuclear and Radiological Event Scale (INES) is the primary form of categorizing the potential health and environmental effects of a nuclear or radiological event and communicating it to the public. The scale, which was developed in 1990 by the International Atomic Energy Agency and the Nuclear Energy Agency of the Organization for Economic Co-operation and Development, classifies these nuclear accidents based on the potential impact of the fallout:
Defence-in-Depth: This is the lowest form of nuclear accidents and refers to events that have no direct impact on people or the environment but must be taken note of to improve future safety measures.
Radiological Barriers and Control: This category refers to events that have no direct impact on people or the environment and only refer to the damage caused within major facilities.
People and the Environment: This section of the scale consists of more serious nuclear accidents. Events in this category could potentially cause radiation to spread to people close to the location of the accident. This also includes an unplanned, widespread release of the radioactive material.
The INES scale is composed of seven steps that categorize the nuclear events, ranging from anomalies that must be recorded to improve upon safety measures to serious accidents that require immediate action.
Chernobyl
The 1986 nuclear reactor explosion at Chernobyl was categorized as a Level 7 accident, which is the highest possible ranking on the INES scale, due to widespread environmental and health effects and "external release of a significant fraction of reactor core inventory". The nuclear accident still stands as the only accident in commercial nuclear power that led to radiation-related deaths. The steam explosion and fires released approximately 5200 PBq, or at least 5 percent of the reactor core, into the atmosphere. The explosion itself resulted in the deaths of two plant workers, while 28 people died over the weeks that followed of severe radiation poisoning. Furthermore, young children and adolescents in the areas most contaminated by the radiation exposure showed an increase in the risk for thyroid cancer, although the United Nations Scientific Committee on the Effects of Atomic Radiation stated that "there is no evidence of a major public health impact" apart from that. The nuclear accident also took a heavy toll on the environment, including contamination in urban environments caused by the deposition of radionuclides and the contamination of "different crop types, in particular, green leafy vegetables ... depending on the deposition levels, and time of the growing season".
Three Mile Island
The nuclear meltdown at Three Mile Island in 1979 was categorized as a Level 5 accident on the INES scale because of the "severe damage to the reactor core" and the radiation leak caused by the incident. Three Mile Island was the most serious accident in the history of American commercial nuclear power plants, yet the effects were different from those of the Chernobyl accident. A study done by the Nuclear Regulatory Commission following the incident reveals that the nearly 2 million people surrounding the Three Mile Island plant "are estimated to have received an average radiation dose of only 1 millirem above the usual background dose". Furthermore, unlike those affected by radiation in the Chernobyl accident, the development of thyroid cancer in the people around Three Mile Island was "less aggressive and less advanced".
Fukushima
Like the Three Mile Island incident, the incident at Fukushima was initially categorized as a Level 5 accident on the INES scale after a tsunami disabled the power supply and cooling of three reactors, which then suffered significant melting in the days that followed. However, after combining the events at the three reactors rather than assessing them individually, the accident was upgraded to an INES Level 7. The radiation exposure from the incident caused a recommended evacuation for inhabitants up to 30 km away from the plant. However, it was also hard to track such exposure because 23 out of the 24 radioactive monitoring stations were also disabled by the tsunami. Removing contaminated water, both in the plant itself and run-off water that spread into the sea and nearby areas, became a huge challenge for the Japanese government and plant workers. During the containment period following the accident, thousands of cubic meters of slightly contaminated water were released in the sea to free up storage for more contaminated water in the reactor and turbine buildings. However, the fallout from the Fukushima accident had a minimal impact on the surrounding population. According to the Institut de Radioprotection et de Sûreté Nucléaire, over 62 percent of assessed residents within the Fukushima prefecture received external doses of less than 1 mSv in the four months following the accident. In addition, comparing screening campaigns for children inside the Fukushima prefecture and in the rest of the country revealed no significant difference in the risk of thyroid cancer.
International nuclear safety standards
Founded in 1957, the International Atomic Energy Agency (IAEA) was created to set forth international standards for nuclear reactor safety. However, without a proper policing force, the guidelines set forth by the IAEA were often treated lightly or ignored completely. In 1986, the disaster at Chernobyl was evidence that international nuclear reactor safety was not to be taken lightly. Even in the midst of the Cold War, the Nuclear Regulatory Commission sought to improve the safety of Soviet nuclear reactors. As noted by IAEA Director General Hans Blix, "A radiation cloud doesn't know international boundaries." The NRC showed the Soviets the safety guidelines used in the US: capable regulation, safety-minded operations, and effective plant designs. The Soviets, however, had their own priority: keeping the plant running at all costs. In the end, the same shift from deterministic safety designs to probabilistic safety designs prevailed. In 1989, the World Association of Nuclear Operators (WANO) was formed to cooperate with the IAEA to ensure the same three pillars of reactor safety across international borders. In 1991, WANO concluded (using a probabilistic safety approach) that all former communist-controlled nuclear reactors could not be trusted and should be closed. Compared to a "Nuclear Marshall Plan", efforts were taken throughout the 1990s and 2000s to ensure international standards of safety for all nuclear reactors.
See also
Debris fallout
Dirty bomb
Fallout: An American Nuclear Tragedy
Fallout Protection—U.S. government booklet
Effects of nuclear explosions
Fallout (RTÉ drama)—Irish drama exploring scenarios following a nuclear accident at Sellafield.
Fallout (series)
Fallout shelter
Fission product
Hot particle
Human radiation experiments
List of nuclear accidents
Lists of nuclear disasters and radioactive incidents
Neutron bomb
Mutation breeding#Radiation breeding
Nuclear fallout effects on an ecosystem
Nuclear terrorism
Nuclear War Survival Skills by Cresson Kearny
Nuclear weapon design
Potassium iodide
Project GABRIEL
Protect and Survive, a series of booklets and a public information film series produced for the British government in the 1970s and 1980s.
Radioactive contamination
Radiation poisoning
Radiation biology
Radioactive waste
Radiological weapon
Joseph Rotblat
Salted bomb
Survival Under Atomic Attack, an official U.S. government booklet regarding the effects of a nuclear attack.
References
Further reading
Glasstone, Samuel and Dolan, Philip J., The Effects of Nuclear Weapons (third edition), U.S. Government Printing Office, 1977. (Available Online)
NATO Handbook on the Medical Aspects of NBC Defensive Operations (Part I – Nuclear), Departments of the Army, Navy, and Air Force, Washington, D.C., 1996, (Available Online)
Smyth, H. DeW., Atomic Energy for Military Purposes, Princeton University Press, 1945. (Smyth Report)
The Effects of Nuclear War, Office of Technology Assessment (May 1979), (Available Online )
T. Imanaka, S. Fukutani, M. Yamamoto, A. Sakaguchi and M. Hoshi, J. Radiation Research, 2006, 47, Suppl A121–A127.
Sheldon Novick, The Careless Atom (Boston MA: Houghton Mifflin Co., 1969), p. 98
External links
NUKEMAP3D – a 3D nuclear weapons effects simulator powered by Google Maps. It simulates the effects of nuclear weapons upon geographic areas.
Aftermath of war
Fallout
Radiation health effects
Environmental impact of nuclear power
Radiobiology
Radioactive contamination
Radiological weapons | Nuclear fallout | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Biology"
] | 9,507 | [
"Nuclear fission",
"Radiation health effects",
"Radioactive contamination",
"Nuclear chemistry",
"Radiobiology",
"Fission products",
"Nuclear fallout",
"Environmental impact of nuclear power",
"nan",
"Nuclear physics",
"Radiation effects",
"Radioactivity"
] |
53,686 | https://en.wikipedia.org/wiki/Archimedes%27%20screw | The Archimedes' screw, also known as the Archimedean screw, hydrodynamic screw, water screw or Egyptian screw, is one of the earliest hydraulic machines named after Greek mathematician Archimedes who first described it around 234 BC, although the device had been used in Ancient Egypt. It is a reversible hydraulic machine, and there are several examples of Archimedes screw installations where the screw can operate at different times as either pump or generator, depending on needs for power and watercourse flow.
As a machine used for lifting water from a low-lying body of water into irrigation ditches, the Archimedes screw raises water by turning a screw-shaped surface inside a pipe. In the modern world, Archimedes screw pumps are widely used in wastewater treatment plants and for dewatering low-lying regions. Run in reverse, Archimedes screw turbines act as a new form of small hydroelectric powerplant that can be applied even in low head sites. Such generators operate in a wide range of flows (0.01 to 14.5 ) and heads (0.1 m to 10 m), including the low heads and moderate flow rates that are not ideal for traditional turbines and not occupied by high-performance technologies.
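As a rough sense of scale for the operating range quoted above, the sketch below estimates shaft power from the standard hydropower relation P = η·ρ·g·Q·H, assuming the flow figures are in m³/s and an overall efficiency of about 75% (both assumptions for illustration only):

```python
def screw_turbine_power_kw(flow_m3s, head_m, efficiency=0.75, rho=1000.0, g=9.81):
    """Rough hydraulic power of a screw turbine: P = eta * rho * g * Q * H (in kW)."""
    return efficiency * rho * g * flow_m3s * head_m / 1000.0

# Ends of the operating range quoted above (efficiency is an assumed value).
print(f"{screw_turbine_power_kw(0.01, 0.1):.4f} kW")   # very small installation
print(f"{screw_turbine_power_kw(14.5, 10.0):.0f} kW")  # large installation
```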
History
Earliest records
The screw pump is the oldest positive displacement pump. The first records of a water screw, or screw pump, date back to Hellenistic Egypt before the 3rd century BC. The Egyptian screw, used to lift water from the Nile, was composed of tubes wound round a cylinder; as the entire unit rotates, water is lifted within the spiral tube to the higher elevation. A later screw pump design from Egypt had a spiral groove cut on the outside of a solid wooden cylinder and then the cylinder was covered by boards or sheets of metal closely covering the surfaces between the grooves.
Some researchers have proposed this device was used to irrigate the Hanging Gardens of Babylon, one of the Seven Wonders of the Ancient World. A cuneiform inscription of Assyrian King Sennacherib (704–681 BC) has been interpreted by Stephanie Dalley to describe casting water screws in bronze some 350 years earlier. This is consistent with Greek historian Strabo, who describes the Hanging Gardens as irrigated by screws.
Archimedes' role
The screw pump was later introduced from Hellenistic Egypt to Greece. It was described by Archimedes, on the occasion of his visit to Egypt, circa 234 BC. This tradition may reflect only that the apparatus was unknown to the Greeks before Hellenistic times. Athenaeus of Naucratis quotes a certain Moschion in a description of how Hiero II of Syracuse commissioned the design of the Syracusia, a luxury ship which would be a display of naval power. It is said to have been the largest ship built in classical antiquity and was launched by Archimedes, who designed a device with a revolving screw-shaped blade inside a cylinder to remove any water leaking through the hull. Archimedes' screw was turned by hand, and could also be used to transfer water from a low-lying body of water into irrigation canals.
Archimedes never claimed credit for its invention, but it was attributed to him 200 years later by Diodorus, who believed that Archimedes invented the screw pump in Egypt. Depictions of Greek and Roman water screws show them being powered by a human treading on the outer casing to turn the entire apparatus as one piece, which would require that the casing be rigidly attached to the screw.
Development and modern use
German engineer Konrad Kyeser equipped the Archimedes screw with a crank mechanism in his Bellifortis (1405). This mechanism quickly replaced the ancient practice of working the pipe by treading. The world's first seagoing steamship driven by a screw propeller was the SS Archimedes, which was launched in 1839 and named in honor of Archimedes and his work on the screw. Developments in maritime propulsion continued over the next 180 years, from the double-blade design of Fawcett, Preston and Company to patents by Sharrow Marine addressing rotary propulsion and flow control on boating vessels through loop propellers. Hydroelectric installations such as the Meriden project operated by New England Hydropower also use an Archimedes screw, directing water into the top, rather than the bottom, of the screw, which forces it to rotate.
Archimedes screws are used in sewage treatment plants because they cope well with varying rates of flow and with suspended solids. Screw turbines (ASTs) are a new form of generator for small hydroelectric powerplants that could be applied even in low-head sites. The low rotation speed of ASTs reduces negative impacts on aquatic life and fish. This technology is used primarily at fish hatcheries to lift fish safely from ponds and transport them to another location. An Archimedes screw was used in the successful 2001 stabilization of the Leaning Tower of Pisa. Small amounts of subsoil saturated by groundwater were removed from far below the north side of the tower, and the weight of the tower itself corrected the lean.
Other inventions using Archimedes screws include the auger conveyor in a snow blower, grain elevator, concrete mixer and chocolate fountain.
Design
The Archimedes screw consists of a screw (a helical surface surrounding a central cylindrical shaft) inside a hollow pipe. The screw is usually turned by windmill, manual labor, cattle, or by modern means, such as a motor. As the shaft turns, the bottom end scoops up a volume of water. This water is then pushed up the tube by the rotating helicoid until it pours out from the top of the tube.
The contact surface between the screw and the pipe does not need to be perfectly watertight, as long as the amount of water being scooped with each turn is large compared to the amount of water leaking out of each section of the screw per turn. If water from one section leaks into the next lower one, it will be transferred upwards by the next segment of the screw.
In some designs, the screw is fused to the casing and they both rotate together, instead of the screw turning within a stationary casing. The screw could be sealed to the casing with pitch resin or other adhesive, or the screw and casing could be cast together as a single piece in bronze.
The design of the everyday Greek and Roman water screw, in contrast to the heavy bronze device of Sennacherib, with its problematic drive chains, has a powerful simplicity. A double or triple helix was built of wood strips (or occasionally bronze sheeting) around a heavy wooden pole. A cylinder was built around the helices using long, narrow boards fastened to their periphery and waterproofed with pitch.
Studies show that the volume of flow passing through an Archimedes screw is a function of the inlet depth, the diameter and the rotation speed of the screw. The following analytical equation can therefore be used to design Archimedes screws:
where is in and:
: Rotation speed of the Archimedes screw (rad/s)
: Volumetric flow rate
Based on the common standards that Archimedes screw designers use, this analytical equation can be simplified as:
The value of η can simply be determined from the corresponding graph. Once η is determined, the other design parameters of Archimedes screws can be calculated using a step-by-step analytical method.
Variants
A screw conveyor is a similar device which transports bulk materials such as powders and cereal grains. It is contained within a tube and turned by a motor to deliver material from one end of the conveyor to the other, and is particularly suitable for transporting granular materials such as the plastic granules used in injection moulding. It may also be used to transport liquids. In industrial control applications, the conveyor may be used as a rotary feeder or variable rate feeder to deliver a measured rate or quantity of material into a process.
A variant of the Archimedes screw can also be found in some injection moulding machines, die casting machines and extrusion of plastics, which employ a screw of decreasing pitch to compress and melt the material. It is also used in a rotary-screw air compressor. On a much larger scale, Archimedes's screws of decreasing pitch are used for the compaction of waste material.
Reverse action
If water is fed into the top of an Archimedes screw, it will force the screw to rotate. The rotating shaft can then be used to drive an electric generator. Such an installation has the same benefits as using the screw for pumping: the ability to handle very dirty water and widely varying rates of flow at high efficiency. Settle Hydro and Torrs Hydro are two reverse screw micro hydro schemes operating in England. The screw works well as a generator at low heads, commonly found in English rivers, including the Thames, powering Windsor Castle.
See also
Archimedean spiral
Screw-propelled vehicle
Screw (simple machine)
Spiral pump
Toroidal propeller
Vitruvius
Notes
Sources
P. J. Kantert: "Manual for Archimedean Screw Pump", Hirthammer Verlag 2008, .
P. J. Kantert: "Praxishandbuch Schneckenpumpe", Hirthammer Verlag 2008, .
P. J. Kantert: "Praxishandbuch Schneckenpumpe" - 2nd edition 2020, DWA, .
Nuernbergk, D. and Rorres, C.: "An Analytical Model for the Water Inflow of an Archimedes Screw Used in Hydropower Generation", ASCE Journal of Hydraulic Engineering, published 23 July 2012
Nuernbergk, D. M.: "Wasserkraftschnecken – Berechnung und optimaler Entwurf von archimedischen Schnecken als Wasserkraftmaschine", Verlag Moritz Schäfer, Detmold, 1st edition, 2012, 272 pages,
Rorres, C.: "The Turn of the Screw: Optimum Design of an Archimedes Screw", ASCE Journal of Hydraulic Engineering, Volume 126, Number 1, Jan. 2000, pp. 72–80
Nagel, G.; Radlik, K.: Wasserförderschnecken – Planung, Bau und Betrieb von Wasserhebeanlagen; Udo Pfriemer Buchverlag in der Bauverlag GmbH, Wiesbaden, Berlin (1988)
External links
The Turn of the Screw: Optimal Design of an Archimedes Screw, by Chris Rorres, PhD.
"Archimedean Screw" by Sándor Kabai, Wolfram Demonstrations Project, 2007.
"Archimedes Screw Examples Various sources, 2021
Pumps
Screws
Screw, Archimedes
History of mining
Rotating machines
Egyptian inventions
Ancient inventions
Hanging Gardens of Babylon
Ptolemaic Kingdom
Sennacherib | Archimedes' screw | [
"Physics",
"Chemistry",
"Technology"
] | 2,227 | [
"Pumps",
"Machines",
"Turbomachinery",
"Physical systems",
"Rotating machines",
"Hydraulics"
] |
53,702 | https://en.wikipedia.org/wiki/Boltzmann%20constant | The Boltzmann constant ( or ) is the proportionality factor that relates the average relative thermal energy of particles in a gas with the thermodynamic temperature of the gas. It occurs in the definitions of the kelvin (K) and the gas constant, in Planck's law of black-body radiation and Boltzmann's entropy formula, and is used in calculating thermal noise in resistors. The Boltzmann constant has dimensions of energy divided by temperature, the same as entropy and heat capacity. It is named after the Austrian scientist Ludwig Boltzmann.
As part of the 2019 revision of the SI, the Boltzmann constant is one of the seven "defining constants" that have been defined so as to have exact finite decimal values in SI units. They are used in various combinations to define the seven SI base units. The Boltzmann constant is defined to be exactly 1.380649×10⁻²³ joules per kelvin. Correspondingly, the SI units for temperature and energy are calibrated to one another so that a change of 1 kelvin corresponds to a change of exactly 1.380649×10⁻²³ joules.
Roles of the Boltzmann constant
Macroscopically, the ideal gas law states that, for an ideal gas, the product of pressure and volume is proportional to the product of amount of substance and absolute temperature :
where is the molar gas constant (). Introducing the Boltzmann constant as the gas constant per molecule (NA being the Avogadro constant) transforms the ideal gas law into an alternative form:
where is the number of molecules of gas.
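As a small illustration of the two forms (a hypothetical example with assumed values for n, T and V, not taken from the article), the following Python sketch evaluates the pressure from both pV = nRT and pV = NkT and confirms that they agree because R = NAk.

```python
# Ideal gas law in molar form (pV = nRT) and in molecular form (pV = NkT).
R = 8.314462618        # molar gas constant, J/(mol*K)
k_B = 1.380649e-23     # Boltzmann constant, J/K
N_A = 6.02214076e23    # Avogadro constant, 1/mol

n = 0.5                # amount of substance, mol (assumed example value)
T = 300.0              # temperature, K (assumed example value)
V = 0.01               # volume, m^3 (assumed example value)

p_molar = n * R * T / V           # pressure from pV = nRT
N = n * N_A                       # number of molecules
p_molecular = N * k_B * T / V     # pressure from pV = NkT

print(f"p from nRT/V:  {p_molar:.1f} Pa")
print(f"p from NkT/V:  {p_molecular:.1f} Pa")
# The two agree because R = N_A * k_B.
```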
Role in the equipartition of energy
Given a thermodynamic system at an absolute temperature T, the average thermal energy carried by each microscopic degree of freedom in the system is kT/2 (i.e., about 2.07×10⁻²¹ J, or 0.013 eV, at room temperature). This is generally true only for classical systems with a large number of particles, and in which quantum effects are negligible.
In classical statistical mechanics, this average is predicted to hold exactly for homogeneous ideal gases. Monatomic ideal gases (the six noble gases) possess three degrees of freedom per atom, corresponding to the three spatial directions. According to the equipartition of energy this means that there is a thermal energy of 3kT/2 per atom. This corresponds very well with experimental data. The thermal energy can be used to calculate the root-mean-square speed of the atoms, which turns out to be inversely proportional to the square root of the atomic mass. The root mean square speeds found at room temperature accurately reflect this, ranging from about 1370 m/s for helium, down to about 240 m/s for xenon.
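A minimal sketch of the root-mean-square speed just described, using vrms = √(3kT/m); the room temperature and the atomic masses below are assumed approximate values.

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
u = 1.66053906660e-27     # atomic mass unit, kg
T = 300.0                 # room temperature, K (assumed)

# Approximate atomic masses of two monatomic gases, in atomic mass units.
masses_u = {"helium": 4.0026, "xenon": 131.293}

for name, m_u in masses_u.items():
    m = m_u * u
    # Equipartition: (1/2) m <v^2> = (3/2) k_B T, so v_rms = sqrt(3 k_B T / m).
    v_rms = math.sqrt(3 * k_B * T / m)
    print(f"{name}: v_rms ≈ {v_rms:.0f} m/s")
```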
Kinetic theory gives the average pressure for an ideal gas as
Combination with the ideal gas law
shows that the average translational kinetic energy is 3kT/2.
Considering that the translational motion velocity vector has three degrees of freedom (one for each dimension) gives the average energy per degree of freedom equal to one third of that, i.e. kT/2.
The ideal gas equation is also obeyed closely by molecular gases; but the form for the heat capacity is more complicated, because the molecules possess additional internal degrees of freedom, as well as the three degrees of freedom for movement of the molecule as a whole. Diatomic gases, for example, possess a total of six degrees of simple freedom per molecule that are related to atomic motion (three translational, two rotational, and one vibrational). At lower temperatures, not all these degrees of freedom may fully participate in the gas heat capacity, due to quantum mechanical limits on the availability of excited states at the relevant thermal energy per molecule.
Role in Boltzmann factors
More generally, systems in equilibrium at temperature have probability of occupying a state with energy weighted by the corresponding Boltzmann factor:
where is the partition function. Again, it is the energy-like quantity that takes central importance.
Consequences of this include (in addition to the results for ideal gases above) the Arrhenius equation in chemical kinetics.
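The following sketch applies the Boltzmann factor to a hypothetical two-level system (the 0.05 eV energy gap and the 300 K temperature are assumed example values, not from the article), normalizing by the partition function to obtain occupation probabilities.

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
eV = 1.602176634e-19     # joules per electronvolt

T = 300.0                     # temperature, K (assumed example value)
energies = [0.0, 0.05 * eV]   # hypothetical two-level system, energies in J

# Boltzmann factor for each state and the partition function Z.
weights = [math.exp(-E / (k_B * T)) for E in energies]
Z = sum(weights)

for i, w in enumerate(weights):
    print(f"state {i}: probability {w / Z:.3f}")
```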
Role in the statistical definition of entropy
In statistical mechanics, the entropy of an isolated system at thermodynamic equilibrium is defined as the natural logarithm of , the number of distinct microscopic states available to the system given the macroscopic constraints (such as a fixed total energy ):
This equation, which relates the microscopic details, or microstates, of the system (via ) to its macroscopic state (via the entropy ), is the central idea of statistical mechanics. Such is its importance that it is inscribed on Boltzmann's tombstone.
The constant of proportionality serves to make the statistical mechanical entropy equal to the classical thermodynamic entropy of Clausius:
One could choose instead a rescaled dimensionless entropy in microscopic terms such that
This is a more natural form and this rescaled entropy exactly corresponds to Shannon's subsequent information entropy.
The characteristic energy is thus the energy required to increase the rescaled entropy by one nat.
Thermal voltage
In semiconductors, the Shockley diode equation—the relationship between the flow of electric current and the electrostatic potential across a p–n junction—depends on a characteristic voltage called the thermal voltage, denoted by VT. The thermal voltage depends on absolute temperature T as VT = kT/q,
where q is the magnitude of the electrical charge on the electron, with a value of exactly 1.602176634×10⁻¹⁹ C. Equivalently,
At room temperature (300 K), VT is approximately 25.85 mV, which can be derived by plugging these values into VT = kT/q.
At the standard state temperature of 25 °C (298.15 K), it is approximately 25.69 mV. The thermal voltage is also important in plasmas and electrolyte solutions (e.g. the Nernst equation); in both cases it provides a measure of how much the spatial distribution of electrons or ions is affected by a boundary held at a fixed voltage.
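A short worked sketch of the thermal voltage VT = kT/q at the two temperatures mentioned above.

```python
k_B = 1.380649e-23       # Boltzmann constant, J/K
q = 1.602176634e-19      # elementary charge, C

def thermal_voltage(T_kelvin):
    """Thermal voltage V_T = k_B * T / q, in volts."""
    return k_B * T_kelvin / q

for T in (300.0, 298.15):          # "room temperature" and the 25 degree C standard state
    print(f"T = {T} K: V_T ≈ {thermal_voltage(T) * 1e3:.2f} mV")
# Prints approximately 25.85 mV and 25.69 mV.
```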
History
The Boltzmann constant is named after its 19th century Austrian discoverer, Ludwig Boltzmann. Although Boltzmann first linked entropy and probability in 1877, the relation was never expressed with a specific constant until Max Planck first introduced , and gave a more precise value for it (, about 2.5% lower than today's figure), in his derivation of the law of black-body radiation in 1900–1901. Before 1900, equations involving Boltzmann factors were not written using the energies per molecule and the Boltzmann constant, but rather using a form of the gas constant , and macroscopic energies for macroscopic quantities of the substance. The iconic terse form of the equation on Boltzmann's tombstone is in fact due to Planck, not Boltzmann. Planck actually introduced it in the same work as his eponymous .
In 1920, Planck wrote in his Nobel Prize lecture:
This "peculiar state of affairs" is illustrated by reference to one of the great scientific debates of the time. There was considerable disagreement in the second half of the nineteenth century as to whether atoms and molecules were real or whether they were simply a heuristic tool for solving problems. There was no agreement whether chemical molecules, as measured by atomic weights, were the same as physical molecules, as measured by kinetic theory. Planck's 1920 lecture continued:
In versions of SI prior to the 2019 revision of the SI, the Boltzmann constant was a measured quantity rather than having a fixed numerical value. Its exact definition also varied over the years due to redefinitions of the kelvin (see ) and other SI base units (see ).
In 2017, the most accurate measures of the Boltzmann constant were obtained by acoustic gas thermometry, which determines the speed of sound of a monatomic gas in a triaxial ellipsoid chamber using microwave and acoustic resonances. This decade-long effort was undertaken with different techniques by several laboratories; it is one of the cornerstones of the 2019 revision of the SI. Based on these measurements, the CODATA recommended to be the final fixed value of the Boltzmann constant to be used for the International System of Units.
As a precondition for redefining the Boltzmann constant, there must be one experimental value with a relative uncertainty below 1 ppm, and at least one measurement from a second technique with a relative uncertainty below 3 ppm. The acoustic gas thermometry reached 0.2 ppm, and Johnson noise thermometry reached 2.8 ppm.
Value in different units
Since is a proportionality factor between temperature and energy, its numerical value depends on the choice of units for energy and temperature. The small numerical value of the Boltzmann constant in SI units means a change in temperature by 1 K only changes a particle's energy by a small amount. A change of is defined to be the same as a change of . The characteristic energy is a term encountered in many physical relationships.
The Boltzmann constant sets up a relationship between wavelength and temperature (dividing hc/k by a wavelength gives a temperature), with one micrometer being related to 14387.77 K, and also a relationship between voltage and temperature (kT in units of eV corresponds to a voltage), with one volt being related to 11604.52 K. The ratio of these two temperatures, 14387.77 K / 11604.52 K ≈ 1.239842, is the numerical value of hc in units of eV⋅μm.
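A sketch of these two mappings: dividing hc/k by a wavelength gives a characteristic temperature, and multiplying a voltage by q/k does the same; the ratio of the two results reproduces the quoted 1.239842.

```python
k_B = 1.380649e-23       # Boltzmann constant, J/K
h = 6.62607015e-34       # Planck constant, J*s
c = 2.99792458e8         # speed of light in vacuum, m/s
q = 1.602176634e-19      # elementary charge, C

# Temperature associated with a wavelength: T = (h c / k_B) / wavelength.
wavelength = 1e-6                      # one micrometre
T_from_wavelength = h * c / (k_B * wavelength)

# Temperature associated with a voltage: T = q V / k_B.
voltage = 1.0                          # one volt
T_from_voltage = q * voltage / k_B

print(f"1 um corresponds to {T_from_wavelength:.1f} K")
print(f"1 V  corresponds to {T_from_voltage:.1f} K")
print(f"ratio = {T_from_wavelength / T_from_voltage:.6f}")   # ~1.239842 = hc in eV*um
```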
Natural units
The Boltzmann constant provides a mapping from the characteristic microscopic energy to the macroscopic temperature scale . In fundamental physics, this mapping is often simplified by using the natural units of setting to unity. This convention means that temperature and energy quantities have the same dimensions. In particular, the SI unit kelvin becomes superfluous, being defined in terms of joules as . With this convention, temperature is always given in units of energy, and the Boltzmann constant is not explicitly needed in formulas.
This convention simplifies many physical relationships and formulas. For example, the equipartition formula for the energy associated with each classical degree of freedom ( above) becomes
As another example, the definition of thermodynamic entropy coincides with the form of information entropy:
where is the probability of each microstate.
See also
Committee on Data of the International Science Council
Thermodynamic beta
List of scientists whose names are used in physical constants
Notes
References
External links
Draft Chapter 2 for SI Brochure, following redefinitions of the base units (prepared by the Consultative Committee for Units)
Big step towards redefining the kelvin: Scientists find new way to determine Boltzmann constant
Constant
Fundamental constants
Statistical mechanics
Thermodynamics | Boltzmann constant | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,091 | [
"Physical quantities",
"Physical constants",
"Thermodynamics",
"Statistical mechanics",
"Fundamental constants",
"Dynamical systems"
] |
53,741 | https://en.wikipedia.org/wiki/Symmetry | Symmetry () in everyday life refers to a sense of harmonious and beautiful proportion and balance. In mathematics, the term has a more precise definition and is usually used to refer to an object that is invariant under some transformations, such as translation, reflection, rotation, or scaling. Although these two meanings of the word can sometimes be told apart, they are intricately related, and hence are discussed together in this article.
Mathematical symmetry may be observed with respect to the passage of time; as a spatial relationship; through geometric transformations; through other kinds of functional transformations; and as an aspect of abstract objects, including theoretic models, language, and music.
This article describes symmetry from three perspectives: in mathematics, including geometry, the most familiar type of symmetry for many people; in science and nature; and in the arts, covering architecture, art, and music.
The opposite of symmetry is asymmetry, which refers to the absence of symmetry.
In mathematics
In geometry
A geometric shape or object is symmetric if it can be divided into two or more identical pieces that are arranged in an organized fashion. This means that an object is symmetric if there is a transformation that moves individual pieces of the object, but doesn't change the overall shape. The type of symmetry is determined by the way the pieces are organized, or by the type of transformation:
An object has reflectional symmetry (line or mirror symmetry) if there is a line (or in 3D a plane) going through it which divides it into two pieces that are mirror images of each other.
An object has rotational symmetry if the object can be rotated about a fixed point (or in 3D about a line) without changing the overall shape.
An object has translational symmetry if it can be translated (moving every point of the object by the same distance) without changing its overall shape.
An object has helical symmetry if it can be simultaneously translated and rotated in three-dimensional space along a line known as a screw axis.
An object has scale symmetry if it does not change shape when it is expanded or contracted. Fractals also exhibit a form of scale symmetry, where smaller portions of the fractal are similar in shape to larger portions.
Other symmetries include glide reflection symmetry (a reflection followed by a translation) and rotoreflection symmetry (a combination of a rotation and a reflection).
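As a computational illustration of invariance under a transformation (a hypothetical example, not part of the article), the sketch below checks a small 2-D pattern against reflected and rotated copies of itself; the pattern is unchanged exactly when it has the corresponding symmetry.

```python
import numpy as np

# A small binary pattern (1 = filled cell, 0 = empty), chosen to be symmetric.
pattern = np.array([
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
])

# An object is symmetric under a transformation if the transformation leaves it unchanged.
mirror_vertical_axis   = np.array_equal(pattern, np.fliplr(pattern))  # left-right reflection
mirror_horizontal_axis = np.array_equal(pattern, np.flipud(pattern))  # up-down reflection
rotation_90_degrees    = np.array_equal(pattern, np.rot90(pattern))   # quarter-turn rotation

print("reflection (vertical axis):  ", mirror_vertical_axis)
print("reflection (horizontal axis):", mirror_horizontal_axis)
print("rotation by 90 degrees:      ", rotation_90_degrees)
```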
In logic
A dyadic relation R ⊆ S × S is symmetric if for all elements a, b in S, whenever it is true that Rab, it is also true that Rba. Thus, the relation "is the same age as" is symmetric, for if Paul is the same age as Mary, then Mary is the same age as Paul.
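A minimal sketch of this definition: representing a relation as a set of ordered pairs, it is symmetric exactly when every pair (a, b) is accompanied by (b, a). The names below are assumed example data.

```python
def is_symmetric(relation):
    """Return True if the relation (a set of ordered pairs) is symmetric."""
    return all((b, a) in relation for (a, b) in relation)

# "Is the same age as" on a toy set of people (assumed example data).
same_age = {("Paul", "Mary"), ("Mary", "Paul")}
print(is_symmetric(same_age))     # True

# "Is older than" is not symmetric.
older_than = {("Paul", "Mary")}
print(is_symmetric(older_than))   # False
```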
In propositional logic, symmetric binary logical connectives include and (∧, or &), or (∨, or |) and if and only if (↔), while the connective if (→) is not symmetric. Other symmetric logical connectives include nand (not-and, or ⊼), xor (not-biconditional, or ⊻), and nor (not-or, or ⊽).
Other areas of mathematics
Generalizing from geometrical symmetry in the previous section, one can say that a mathematical object is symmetric with respect to a given mathematical operation, if, when applied to the object, this operation preserves some property of the object. The set of operations that preserve a given property of the object form a group.
In general, every kind of structure in mathematics will have its own kind of symmetry. Examples include even and odd functions in calculus, symmetric groups in abstract algebra, symmetric matrices in linear algebra, and Galois groups in Galois theory. In statistics, symmetry also manifests as symmetric probability distributions, and as skewness—the asymmetry of distributions.
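Two of these examples can be checked numerically: a matrix is symmetric when it equals its transpose, and a function is even when f(−x) = f(x). The sketch below uses assumed sample inputs.

```python
import numpy as np

# A symmetric matrix is invariant under transposition.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print("A equals its transpose:", np.allclose(A, A.T))

# An even function is invariant under x -> -x (checked here on sample points).
xs = np.linspace(-3.0, 3.0, 101)
print("cos is even:", np.allclose(np.cos(xs), np.cos(-xs)))
print("sin is even:", np.allclose(np.sin(xs), np.sin(-xs)))   # False: sine is odd
```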
In science and nature
In physics
Symmetry in physics has been generalized to mean invariance—that is, lack of change—under any kind of transformation, for example arbitrary coordinate transformations. This concept has become one of the most powerful tools of theoretical physics, as it has become evident that practically all laws of nature originate in symmetries. In fact, this role inspired the Nobel laureate PW Anderson to write in his widely read 1972 article More is Different that "it is only slightly overstating the case to say that physics is the study of symmetry." See Noether's theorem (which, in greatly simplified form, states that for every continuous mathematical symmetry, there is a corresponding conserved quantity such as energy or momentum; a conserved current, in Noether's original language); and also, Wigner's classification, which says that the symmetries of the laws of physics determine the properties of the particles found in nature.
Important symmetries in physics include continuous symmetries and discrete symmetries of spacetime; internal symmetries of particles; and supersymmetry of physical theories.
In biology
In biology, the notion of symmetry is mostly used explicitly to describe body shapes. Bilateral animals, including humans, are more or less symmetric with respect to the sagittal plane which divides the body into left and right halves. Animals that move in one direction necessarily have upper and lower sides, head and tail ends, and therefore a left and a right. The head becomes specialized with a mouth and sense organs, and the body becomes bilaterally symmetric for the purpose of movement, with symmetrical pairs of muscles and skeletal elements, though internal organs often remain asymmetric.
Plants and sessile (attached) animals such as sea anemones often have radial or rotational symmetry, which suits them because food or threats may arrive from any direction. Fivefold symmetry is found in the echinoderms, the group that includes starfish, sea urchins, and sea lilies.
In biology, the notion of symmetry is also used as in physics, that is to say to describe the properties of the objects studied, including their interactions. A remarkable property of biological evolution is the changes of symmetry corresponding to the appearance of new parts and dynamics.
In chemistry
Symmetry is important to chemistry because it undergirds essentially all specific interactions between molecules in nature (i.e., via the interaction of natural and human-made chiral molecules with inherently chiral biological systems). The control of the symmetry of molecules produced in modern chemical synthesis contributes to the ability of scientists to offer therapeutic interventions with minimal side effects. A rigorous understanding of symmetry explains fundamental observations in quantum chemistry, and in the applied areas of spectroscopy and crystallography. The theory and application of symmetry to these areas of physical science draws heavily on the mathematical area of group theory.
In psychology and neuroscience
For a human observer, some symmetry types are more salient than others, in particular the most salient is a reflection with a vertical axis, like that present in the human face. Ernst Mach made this observation in his book "The analysis of sensations" (1897), and this implies that perception of symmetry is not a general response to all types of regularities. Both behavioural and neurophysiological studies have confirmed the special sensitivity to reflection symmetry in humans and also in other animals. Early studies within the Gestalt tradition suggested that bilateral symmetry was one of the key factors in perceptual grouping. This is known as the Law of Symmetry. The role of symmetry in grouping and figure/ground organization has been confirmed in many studies. For instance, detection of reflectional symmetry is faster when this is a property of a single object. Studies of human perception and psychophysics have shown that detection of symmetry is fast, efficient and robust to perturbations. For example, symmetry can be detected with presentations between 100 and 150 milliseconds.
More recent neuroimaging studies have documented which brain regions are active during perception of symmetry. Sasaki et al. used functional magnetic resonance imaging (fMRI) to compare responses for patterns with symmetrical or random dots. A strong activity was present in extrastriate regions of the occipital cortex but not in the primary visual cortex. The extrastriate regions included V3A, V4, V7, and the lateral occipital complex (LOC). Electrophysiological studies have found a late posterior negativity that originates from the same areas. In general, a large part of the visual system seems to be involved in processing visual symmetry, and these areas involve similar networks to those responsible for detecting and recognising objects.
In social interactions
People observe the symmetrical nature, often including asymmetrical balance, of social interactions in a variety of contexts. These include assessments of reciprocity, empathy, sympathy, apology, dialogue, respect, justice, and revenge.
Reflective equilibrium is the balance that may be attained through deliberative mutual adjustment among general principles and specific judgments.
Symmetrical interactions send the moral message "we are all the same" while asymmetrical interactions may send the message "I am special; better than you." Peer relationships, such as can be governed by the Golden Rule, are based on symmetry, whereas power relationships are based on asymmetry. Symmetrical relationships can to some degree be maintained by simple (game theory) strategies seen in symmetric games such as tit for tat.
In the arts
There exists a list of journals and newsletters known to deal, at least in part, with symmetry and the arts.
In architecture
Symmetry finds its ways into architecture at every scale, from the overall external views of buildings such as Gothic cathedrals and The White House, through the layout of the individual floor plans, and down to the design of individual building elements such as tile mosaics. Islamic buildings such as the Taj Mahal and the Lotfollah mosque make elaborate use of symmetry both in their structure and in their ornamentation. Moorish buildings like the Alhambra are ornamented with complex patterns made using translational and reflection symmetries as well as rotations.
It has been said that only bad architects rely on a "symmetrical layout of blocks, masses and structures"; Modernist architecture, starting with International style, relies instead on "wings and balance of masses".
In pottery and metal vessels
Since the earliest uses of pottery wheels to help shape clay vessels, pottery has had a strong relationship to symmetry. Pottery created using a wheel acquires full rotational symmetry in its cross-section, while allowing substantial freedom of shape in the vertical direction. Upon this inherently symmetrical starting point, potters from ancient times onwards have added patterns that modify the rotational symmetry to achieve visual objectives.
Cast metal vessels lacked the inherent rotational symmetry of wheel-made pottery, but otherwise provided a similar opportunity to decorate their surfaces with patterns pleasing to those who used them. The ancient Chinese, for example, used symmetrical patterns in their bronze castings as early as the 17th century BC. Bronze vessels exhibited both a bilateral main motif and a repetitive translated border design.
In carpets and rugs
A long tradition of the use of symmetry in carpet and rug patterns spans a variety of cultures. American Navajo Indians used bold diagonals and rectangular motifs. Many Oriental rugs have intricate reflected centers and borders that translate a pattern. Not surprisingly, rectangular rugs have typically the symmetries of a rectangle—that is, motifs that are reflected across both the horizontal and vertical axes (see ).
In quilts
As quilts are made from square blocks (usually 9, 16, or 25 pieces to a block) with each smaller piece usually consisting of fabric triangles, the craft lends itself readily to the application of symmetry.
In other arts and crafts
Symmetries appear in the design of objects of all kinds. Examples include beadwork, furniture, sand paintings, knotwork, masks, and musical instruments. Symmetries are central to the art of M.C. Escher and the many applications of tessellation in art and craft forms such as wallpaper, ceramic tilework such as in Islamic geometric decoration, batik, ikat, carpet-making, and many kinds of textile and embroidery patterns.
Symmetry is also used in designing logos. By creating a logo on a grid and using the theory of symmetry, designers can organize their work, create a symmetric or asymmetrical design, determine the space between letters, determine how much negative space is required in the design, and how to accentuate parts of the logo to make it stand out.
In music
Symmetry is not restricted to the visual arts. Its role in the history of music touches many aspects of the creation and perception of music.
Musical form
Symmetry has been used as a formal constraint by many composers, such as the arch (swell) form (ABCBA) used by Steve Reich, Béla Bartók, and James Tenney. In classical music, Johann Sebastian Bach used the symmetry concepts of permutation and invariance.
Pitch structures
Symmetry is also an important consideration in the formation of scales and chords, traditional or tonal music being made up of non-symmetrical groups of pitches, such as the diatonic scale or the major chord. Symmetrical scales or chords, such as the whole tone scale, augmented chord, or diminished seventh chord (diminished-diminished seventh), are said to lack direction or a sense of forward motion, are ambiguous as to the key or tonal center, and have a less specific diatonic functionality. However, composers such as Alban Berg, Béla Bartók, and George Perle have used axes of symmetry and/or interval cycles in an analogous way to keys or non-tonal tonal centers. George Perle explains that "C–E, D–F♯, [and] Eb–G, are different instances of the same interval … the other kind of identity. … has to do with axes of symmetry. C–E belongs to a family of symmetrically related dyads as follows:"
Thus in addition to being part of the interval-4 family, C–E is also a part of the sum-4 family (with C equal to 0).
Interval cycles are symmetrical and thus non-diatonic. However, a seven pitch segment of C5 (the cycle of fifths, which are enharmonic with the cycle of fourths) will produce the diatonic major scale. Cyclic tonal progressions in the works of Romantic composers such as Gustav Mahler and Richard Wagner form a link with the cyclic pitch successions in the atonal music of Modernists such as Bartók, Alexander Scriabin, Edgard Varèse, and the Vienna school. At the same time, these progressions signal the end of tonality.
The first extended composition consistently based on symmetrical pitch relations was probably Alban Berg's Quartet, Op. 3 (1910).
Equivalency
Tone rows or pitch class sets which are invariant under retrograde are horizontally symmetrical, under inversion vertically. See also Asymmetric rhythm.
In aesthetics
The relationship of symmetry to aesthetics is complex. Humans find bilateral symmetry in faces physically attractive; it indicates health and genetic fitness. Opposed to this is the tendency for excessive symmetry to be perceived as boring or uninteresting. Rudolf Arnheim suggested that people prefer shapes that have some symmetry, and enough complexity to make them interesting.
In literature
Symmetry can be found in various forms in literature, a simple example being the palindrome where a brief text reads the same forwards or backwards. Stories may have a symmetrical structure, such as the rise and fall pattern of Beowulf.
See also
Automorphism
Burnside's lemma
Chirality
Even and odd functions
Fixed points of isometry groups in Euclidean space – center of symmetry
Isotropy
Palindrome
Spacetime symmetries
Spontaneous symmetry breaking
Symmetry-breaking constraints
Symmetric relation
Symmetries of polyiamonds
Symmetries of polyominoes
Symmetry group
Wallpaper group
Explanatory notes
References
Further reading
The Equation That Couldn't Be Solved: How Mathematical Genius Discovered the Language of Symmetry, Mario Livio, Souvenir Press, 2006, .
External links
International Symmetry Association (ISA)
Dutch: Symmetry Around a Point in the Plane
Chapman: Aesthetics of Symmetry
ISIS Symmetry
Symmetry, BBC Radio 4 discussion with Fay Dowker, Marcus du Sautoy & Ian Stewart (In Our Time, Apr. 19, 2007)
Aesthetics
Artistic techniques
Geometry
Theoretical physics | Symmetry | [
"Physics",
"Mathematics"
] | 3,318 | [
"Theoretical physics",
"Geometry",
"Symmetry"
] |
53,781 | https://en.wikipedia.org/wiki/Relative%20permittivity | The relative permittivity (in older texts, dielectric constant) is the permittivity of a material expressed as a ratio with the electric permittivity of a vacuum. A dielectric is an insulating material, and the dielectric constant of an insulator measures the ability of the insulator to store electric energy in an electrical field.
Permittivity is a material's property that affects the Coulomb force between two point charges in the material. Relative permittivity is the factor by which the electric field between the charges is decreased relative to vacuum.
Likewise, relative permittivity is the ratio of the capacitance of a capacitor using that material as a dielectric, compared with a similar capacitor that has vacuum as its dielectric. Relative permittivity is also commonly known as the dielectric constant, a term still used but deprecated by standards organizations in engineering as well as in chemistry.
Definition
Relative permittivity is typically denoted as εr(ω) (sometimes κ, lowercase kappa) and is defined as
where ε(ω) is the complex frequency-dependent permittivity of the material, and ε0 is the vacuum permittivity.
Relative permittivity is a dimensionless number that is in general complex-valued; its real and imaginary parts are denoted as:
The relative permittivity of a medium is related to its electric susceptibility, χe, as εr(ω) = 1 + χe.
In anisotropic media (such as non cubic crystals) the relative permittivity is a second rank tensor.
The relative permittivity of a material for a frequency of zero is known as its static relative permittivity.
Terminology
The historical term for the relative permittivity is dielectric constant. It is still commonly used, but has been deprecated by standards organizations, because of its ambiguity, as some older reports used it for the absolute permittivity ε. The permittivity may be quoted either as a static property or as a frequency-dependent variant, in which case it is also known as the dielectric function. It has also been used to refer to only the real component ε′r of the complex-valued relative permittivity.
Physics
In the causal theory of waves, permittivity is a complex quantity. The imaginary part corresponds to a phase shift of the polarization relative to and leads to the attenuation of electromagnetic waves passing through the medium. By definition, the linear relative permittivity of vacuum is equal to 1, that is , although there are theoretical nonlinear quantum effects in vacuum that become non-negligible at high field strengths.
The following table gives some typical values.
The relative low frequency permittivity of ice is ~96 at −10.8 °C, falling to 3.15 at high frequency, which is independent of temperature. It remains in the range 3.12–3.19 for frequencies between about 1 MHz and the far infrared region.
Measurement
The relative static permittivity, εr, can be measured for static electric fields as follows: first the capacitance of a test capacitor, C0, is measured with vacuum between its plates. Then, using the same capacitor and distance between its plates, the capacitance C with a dielectric between the plates is measured. The relative permittivity can then be calculated as εr = C/C0.
For time-variant electromagnetic fields, this quantity becomes frequency-dependent. An indirect technique to calculate εr is conversion of radio frequency S-parameter measurement results. A description of frequently used S-parameter conversions for determination of the frequency-dependent εr of dielectrics can be found in this bibliographic source. Alternatively, resonance based effects may be employed at fixed frequencies.
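A minimal sketch of the static measurement just described, with assumed capacitance readings; the relative permittivity is simply the ratio of the two measured capacitances.

```python
def relative_permittivity(C_dielectric, C_vacuum):
    """Static relative permittivity as the ratio of two capacitance measurements."""
    return C_dielectric / C_vacuum

C0 = 12.0e-12   # capacitance with vacuum between the plates, F (assumed reading)
C = 54.0e-12    # capacitance with the dielectric inserted, F (assumed reading)

print(f"epsilon_r = {relative_permittivity(C, C0):.2f}")   # 4.50 for these sample readings
```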
Applications
Energy
The relative permittivity is an essential piece of information when designing capacitors, and in other circumstances where a material might be expected to introduce capacitance into a circuit. If a material with a high relative permittivity is placed in an electric field, the magnitude of that field will be measurably reduced within the volume of the dielectric. This fact is commonly used to increase the capacitance of a particular capacitor design. The layers beneath etched conductors in printed circuit boards (PCBs) also act as dielectrics.
Communication
Dielectrics are used in radio frequency (RF) transmission lines. In a coaxial cable, polyethylene can be used between the center conductor and outside shield. It can also be placed inside waveguides to form filters. Optical fibers are examples of dielectric waveguides. They consist of dielectric materials that are purposely doped with impurities so as to control the precise value of εr within the cross-section. This controls the refractive index of the material and therefore also the optical modes of transmission. However, in these cases it is technically the relative permittivity that matters, as they are not operated in the electrostatic limit.
Environment
The relative permittivity of air changes with temperature, humidity, and barometric pressure. Sensors can be constructed to detect changes in capacitance caused by changes in the relative permittivity. Most of this change is due to effects of temperature and humidity as the barometric pressure is fairly stable. Using the capacitance change, along with the measured temperature, the relative humidity can be obtained using engineering formulas.
Chemistry
The relative static permittivity of a solvent is a relative measure of its chemical polarity. For example, water is very polar, and has a relative static permittivity of 80.10 at 20 °C while n-hexane is non-polar, and has a relative static permittivity of 1.89 at 20 °C. This information is important when designing separation, sample preparation and chromatography techniques in analytical chemistry.
The correlation should, however, be treated with caution. For instance, dichloromethane has a value of εr of 9.08 (20 °C) and is rather poorly soluble in water (13g/L or 9.8mL/L at 20 °C); at the same time, tetrahydrofuran has its εr = 7.52 at 22 °C, but it is completely miscible with water. In the case of tetrahydrofuran, the oxygen atom can act as a hydrogen bond acceptor; whereas dichloromethane cannot form hydrogen bonds with water.
This is even more remarkable when comparing the εr values of acetic acid (6.2528) and that of iodoethane (7.6177). The large numerical value of εr is not surprising in the second case, as the iodine atom is easily polarizable; nevertheless, this does not imply that it is polar, too (electronic polarizability prevails over the orientational one in this case).
Lossy medium
Again, similar as for absolute permittivity, relative permittivity for lossy materials can be formulated as:
in terms of a "dielectric conductivity" σ (units S/m, siemens per meter), which "sums over all the dissipative effects of the material; it may represent an actual [electrical] conductivity caused by migrating charge carriers and it may also refer to an energy loss associated with the dispersion of ε′ [the real-valued permittivity]" ( p. 8). Expanding the angular frequency and the electric constant , which reduces to:
where λ is the wavelength, c is the speed of light in vacuum and = 59.95849 Ω ≈ 60.0 Ω is a newly introduced constant (units ohms, or reciprocal siemens, such that σλκ = εr remains unitless).
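The sketch below illustrates a lossy relative permittivity under the common engineering convention εr = ε′r − jσ/(ωε0); the material parameters and frequency are assumed example values, and the sign of the imaginary part depends on the time-harmonic convention used.

```python
import math

EPS_0 = 8.8541878128e-12   # vacuum permittivity, F/m

def lossy_relative_permittivity(eps_real, sigma, frequency_hz):
    """Complex relative permittivity with a dielectric-conductivity loss term.

    Uses the convention eps_r = eps_r' - j*sigma/(omega*eps_0); some texts use
    the opposite sign for the imaginary part.
    """
    omega = 2.0 * math.pi * frequency_hz
    return complex(eps_real, -sigma / (omega * EPS_0))

# Assumed example values: eps_r' = 4.5 and sigma = 0.01 S/m at 1 GHz.
eps_r = lossy_relative_permittivity(4.5, 0.01, 1.0e9)
loss_tangent = -eps_r.imag / eps_r.real
print(f"eps_r ≈ {eps_r.real:.2f} - j{-eps_r.imag:.3f}")
print(f"loss tangent ≈ {loss_tangent:.3f}")
```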
Metals
Permittivity is typically associated with dielectric materials, however metals are described as having an effective permittivity, with real relative permittivity equal to one. In the high-frequency region, which extends from radio frequencies to the far infrared and terahertz region, the plasma frequency of the electron gas is much greater than the electromagnetic propagation frequency, so the refractive index n of a metal is very nearly a purely imaginary number. In the low frequency regime, the effective relative permittivity is also almost purely imaginary: It has a very large imaginary value related to the conductivity and a comparatively insignificant real-value.
See also
Curie temperature
Dielectric spectroscopy
Dielectric strength
Electret
Ferroelectricity
Green–Kubo relations
High-κ dielectric
Kramers–Kronig relation
Linear response function
Low-κ dielectric
Loss tangent
Permittivity
Refractive index
Permeability (electromagnetism)
References
Electricity
Electric and magnetic fields in matter
Colloidal chemistry | Relative permittivity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,804 | [
"Colloidal chemistry",
"Electric and magnetic fields in matter",
"Colloids",
"Surface science",
"Materials science",
"Condensed matter physics"
] |
10,938,074 | https://en.wikipedia.org/wiki/Sch%C3%B6nberg%E2%80%93Chandrasekhar%20limit | In stellar astrophysics, the Schönberg–Chandrasekhar limit is the maximum mass of a non-fusing, isothermal core that can support an enclosing envelope. It is expressed as the ratio of the core mass to the total mass of the core and envelope. Estimates of the limit depend on the models used and the assumed chemical compositions of the core and envelope; typical values given are from 0.10 to 0.15 (10% to 15% of the total stellar mass). This is the maximum to which a helium-filled core can grow, and if this limit is exceeded, as can only happen in massive stars, the core collapses, releasing energy that causes the outer layers of the star to expand to become a red giant. It is named after the astrophysicists Subrahmanyan Chandrasekhar and Mario Schönberg, who estimated its value in a 1942 paper. They estimated it to be:
where is the mass, is the mean molecular weight, index c denotes the core, and index e is the envelope.
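A sketch of the estimate using the commonly quoted form qSC ≈ 0.37 (μe/μc)²; the coefficient and the assumed mean molecular weights (an ionized, hydrogen-rich envelope and an ionized helium core) are typical textbook values rather than figures from this article.

```python
def schoenberg_chandrasekhar_limit(mu_envelope, mu_core, coefficient=0.37):
    """Approximate maximum core-to-total mass ratio, q_SC ~ coefficient*(mu_e/mu_c)**2.

    The 0.37 coefficient is the commonly quoted estimate; detailed stellar models
    give somewhat different values depending on the assumed physics.
    """
    return coefficient * (mu_envelope / mu_core) ** 2

# Assumed typical compositions: ionized hydrogen-rich envelope (~0.6),
# ionized helium core (~1.3).
q_sc = schoenberg_chandrasekhar_limit(mu_envelope=0.6, mu_core=1.3)
print(f"q_SC ≈ {q_sc:.2f}")   # about 0.08 for these values, of the same order as 10-15%
```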
The Schönberg–Chandrasekhar limit comes into play when fusion in a main-sequence star exhausts the hydrogen at the center of the star. The star then contracts until hydrogen fuses in a shell surrounding a helium-rich core, both of which are surrounded by an envelope consisting primarily of hydrogen. The core increases in mass as the shell burns its way outwards through the star. If the star's mass is less than approximately 1.5 solar masses, the core will become degenerate before the Schönberg–Chandrasekhar limit is reached, and, on the other hand, if the mass is greater than approximately 6 solar masses, the star leaves the main sequence with a core mass already greater than the Schönberg–Chandrasekhar limit so its core is never isothermal before helium fusion. In the remaining case, where the mass is between 1.5 and 6 solar masses, the core will grow until the limit is reached, at which point it will contract rapidly until helium starts to fuse in the core.
References
Astrophysics
Stellar astronomy
Stellar dynamics | Schönberg–Chandrasekhar limit | [
"Physics",
"Astronomy"
] | 433 | [
"Stellar astronomy",
"Astronomical sub-disciplines",
"Astrophysics",
"Stellar dynamics"
] |
10,939,045 | https://en.wikipedia.org/wiki/Carboxypeptidase%20E | Carboxypeptidase E (CPE), also known as carboxypeptidase H (CPH) and enkephalin convertase, is an enzyme that in humans is encoded by the CPE gene. This enzyme catalyzes the release of C-terminal arginine or lysine residues from polypeptides.
CPE is involved in the biosynthesis of most neuropeptides and peptide hormones. The production of neuropeptides and peptide hormones typically requires two sets of enzymes that cleave the peptide precursors, which are small proteins. First, proprotein convertases cut the precursor at specific sites to generate intermediates containing C-terminal basic residues (lysine and/or arginine). These intermediates are then cleaved by CPE to remove the basic residues. For some peptides, additional processing steps, such as C-terminal amidation, are subsequently required to generate the bioactive peptide, although for many peptides the action of the proprotein convertases and CPE is sufficient to produce the bioactive peptide.
Tissue distribution
Carboxypeptidase E is found in brain and throughout the neuroendocrine system, including the endocrine pancreas, pituitary, and adrenal gland chromaffin cells. Within cells, carboxypeptidase E is present in the secretory granules along with its peptide substrates and products. Carboxypeptidase E is a glycoprotein that exists in both membrane-associated and soluble forms. The membrane-binding is due to an amphiphilic α-helix within the C-terminal region of the protein.
Species distribution
Carboxypeptidase E is found in all species of vertebrates that have been examined, and is also present in many other organisms that have been studied (nematode, sea slug). Carboxypeptidase E is not found in the fruit fly (Drosophila), and another enzyme (presumably carboxypeptidase D) fills in for carboxypeptidase E in this organism. In humans, CPE is encoded by the CPE gene.
Function
Carboxypeptidase E functions in the production of nearly all neuropeptides and peptide hormones. The enzyme acts as an exopeptidase to activate neuropeptides. It does that by cleaving off basic C-terminal amino acids, producing the active form of the peptide. Products of carboxypeptidase E include insulin, the enkephalins, vasopressin, oxytocin, and most other neuroendocrine peptide hormones and neuropeptides.
It has been proposed that membrane-associated carboxypeptidase E acts as a sorting signal for regulated secretory proteins in the trans-Golgi network of the pituitary and in secretory granules; regulated secretory proteins are mostly hormones and neuropeptides. However, this role for carboxypeptidase E remains controversial, and evidence shows that this enzyme is not necessary for the sorting of regulated secretory proteins.
Clinical significance
Mice with mutant carboxypeptidase E, Cpefat, display endocrine disorders like obesity and infertility. In some strains of mice, the fat mutation also causes hyperproinsulinemia in adult male mice, but this is not found in all strains of mice. The obesity and infertility in the Cpefat mice develop with age; young mice (<8 weeks of age) are fertile and have normal body weight. Peptide processing in Cpefat mice is impaired, with a large accumulation of peptides with C-terminal lysine and/or arginine extensions. Levels of the mature forms of peptides are generally reduced in these mice, but not eliminated. It is thought that a related enzyme (carboxypeptidase D) also contributes to neuropeptide processing and gives rise to the mature peptides in the Cpefat mice.
Mutations in the CPE gene are not common within the human population, but have been identified. One patient with extreme obesity (Body Mass Index >50) was found to have a mutation that deleted nearly the entire CPE gene. This patient had intellectual disability (inability to read or write) and had abnormal glucose homeostasis, similar to mice lacking CPE activity.
In obesity, high levels of circulating free fatty acids have been reported to cause a decrease in the amount of carboxypeptidase E protein in pancreatic beta-cells, leading to beta-cell dysfunction (hyperproinsulinemia) and increased beta-cell apoptosis (via an increase in ER stress). However, because CPE is not a rate-limiting enzyme for the production of most neuropeptides and peptide hormones, it is not clear how relatively modest decreases in CPE activity can cause physiological effects.
See also
Carboxypeptidase
Carboxypeptidase A
References
Further reading
External links
The MEROPS online database for peptidases and their inhibitors: M14.005
Proteins
EC 3.4.17
Metabolism | Carboxypeptidase E | [
"Chemistry",
"Biology"
] | 1,079 | [
"Biomolecules by chemical classification",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins",
"Metabolism"
] |
10,940,032 | https://en.wikipedia.org/wiki/Hydrogen%20anion | The hydrogen anion, H−, is a negative ion of hydrogen, that is, a hydrogen atom that has captured an extra electron. The hydrogen anion is an important constituent of the atmosphere of stars, such as the Sun. In chemistry, this ion is called hydride. The ion has two electrons bound by the electromagnetic force to a nucleus containing one proton.
The binding energy of H− equals the binding energy of an extra electron to a hydrogen atom, called the electron affinity of hydrogen. It is measured to be about 0.754 eV, or 72.8 kJ/mol (see Electron affinity (data page)). The total ground state energy thus becomes about −14.36 eV.
Occurrence
The hydrogen anion is the dominant bound-free opacity source at visible and near-infrared wavelengths in the atmospheres of stars like the Sun and cooler; its importance was first noted in the 1930s. The ion absorbs photons with energies in the range 0.75–4.0 eV, which ranges from the infrared into the visible spectrum. Most of the electrons in these negative ions come from the ionization of metals with low first ionization potentials, including the alkali metals and alkaline earths. The process which ejects the electron from the ion is properly called photodetachment rather than photoionization because the result is a neutral atom (rather than an ion) and a free electron.
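A small worked conversion of the quoted photon-energy range into wavelengths via λ = hc/E, showing that 0.75–4.0 eV covers the visible band and extends well into the infrared.

```python
h = 6.62607015e-34       # Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

def wavelength_nm(energy_eV):
    """Photon wavelength in nanometres for a photon energy given in eV."""
    return h * c / (energy_eV * eV) * 1e9

for E in (0.75, 4.0):
    print(f"{E} eV -> {wavelength_nm(E):.0f} nm")
# 0.75 eV corresponds to ~1650 nm (infrared); 4.0 eV to ~310 nm, just beyond the
# violet end of the visible spectrum.
```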
H− also occurs in the Earth's ionosphere and can be produced in particle accelerators.
Its existence was first proven theoretically by Hans Bethe in 1929. H− is unusual because, in its free form, it has no bound excited states, as was finally proven in 1977.
In chemistry, hydrogen has the formal oxidation state −1 in the hydride anion.
The term hydride is probably most often used to describe compounds of hydrogen with other elements in which the hydrogen is in the formal −1 oxidation state. In most such compounds the bonding between the hydrogen and its nearest neighbor is covalent. An example of a hydride is the borohydride anion ().
See also
Hydron (hydrogen cation)
Electride, another very simple anion
Hydrogen ion
References
Hydrogen physics
Astrophysics
Anions | Hydrogen anion | [
"Physics",
"Chemistry",
"Astronomy"
] | 445 | [
"Matter",
"Anions",
"Astrophysics",
"Ions",
"Astronomical sub-disciplines"
] |
10,940,802 | https://en.wikipedia.org/wiki/Precursor%20%28chemistry%29 | In chemistry, a precursor is a compound that participates in a chemical reaction that produces another compound.
In biochemistry, the term "precursor" often refers more specifically to a chemical compound preceding another in a metabolic pathway, such as a protein precursor.
Illicit drug precursors
In 1988, the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances introduced detailed provisions and requirements relating the control of precursors used to produce drugs of abuse.
In Europe the Regulation (EC) No. 273/2004 of the European Parliament and of the Council on drug precursors was adopted on 11 February 2004. (European law on drug precursors)
Illicit explosives precursors
On January 15, 2013, the Regulation (EU) No. 98/2013 of the European Parliament and of the Council on the marketing and use of explosives precursors was adopted.
The Regulation harmonises rules across Europe on the making available, introduction, possession and use, of certain substances or mixtures that could be misused for the illicit manufacture of explosives.
Detection
A portable, advanced sensor based on infrared spectroscopy in a hollow fiber matched to a silicon-micromachined fast gas chromatography column can analyze illegal stimulants and precursors with nanogram-level sensitivity.
Raman spectroscopy has been successfully tested to detect explosives and their precursors.
Technologies able to detect precursors in the environment could contribute to an early location of sites where illegal substances (both explosives and drugs of abuse) are produced.
See also
Binary chemical weapon
Chemical synthesis
DEA list of chemicals
Derivative (chemistry)
Educt, a reagent or reactant
Metabolism#Anabolism
Monoamine precursor
Prodrug
Protein precursor
References
Biochemical reactions
Chemical synthesis
Metabolism | Precursor (chemistry) | [
"Chemistry",
"Biology"
] | 346 | [
"Biochemical reactions",
"Cellular processes",
"nan",
"Chemical synthesis",
"Biochemistry",
"Metabolism"
] |
10,941,726 | https://en.wikipedia.org/wiki/Fructose-bisphosphate%20aldolase | Fructose-bisphosphate aldolase (), often just aldolase, is an enzyme catalyzing a reversible reaction that splits the aldol, fructose 1,6-bisphosphate, into the triose phosphates dihydroxyacetone phosphate (DHAP) and glyceraldehyde 3-phosphate (G3P). Aldolase can also produce DHAP from other (3S,4R)-ketose 1-phosphates such as fructose 1-phosphate and sedoheptulose 1,7-bisphosphate. Gluconeogenesis and the Calvin cycle, which are anabolic pathways, use the reverse reaction. Glycolysis, a catabolic pathway, uses the forward reaction. Aldolase is divided into two classes by mechanism.
The word aldolase also refers, more generally, to an enzyme that performs an aldol reaction (creating an aldol) or its reverse (cleaving an aldol), such as Sialic acid aldolase, which forms sialic acid. See the list of aldolases.
Mechanism and structure
Class I proteins form a protonated Schiff base intermediate linking a highly conserved active site lysine with the DHAP carbonyl carbon. Additionally, tyrosine residues are crucial to this mechanism in acting as stabilizing hydrogen acceptors. Class II proteins use a different mechanism which polarizes the carbonyl group with a divalent cation like Zn2+. The Escherichia coli galactitol operon protein, gatY, and N-acetyl galactosamine operon protein, agaY, which are tagatose-bisphosphate aldolase, are homologs of class II fructose-bisphosphate aldolase. Two histidine residues in the first half of the sequence of these homologs have been shown to be involved in binding zinc.
The protein subunits of both classes each have an α/β domain folded into a TIM barrel containing the active site. Several subunits are assembled into the complete protein. The two classes share little sequence identity.
With few exceptions only class I proteins have been found in animals, plants, and green algae. With few exceptions only class II proteins have been found in fungi. Both classes have been found widely in other eukaryotes and in bacteria. The two classes are often present together in the same organism. Plants and algae have plastidal aldolase, sometimes a relic of endosymbiosis, in addition to the usual cytosolic aldolase. A bifunctional fructose-bisphosphate aldolase/phosphatase, with class I mechanism, has been found widely in archaea and in some bacteria. The active site of this archaeal aldolase is also in a TIM barrel.
In gluconeogenesis and glycolysis
Gluconeogenesis and glycolysis share a series of six reversible reactions. In gluconeogenesis, aldolase condenses glyceraldehyde 3-phosphate and dihydroxyacetone phosphate into fructose 1,6-bisphosphate. In glycolysis, fructose 1,6-bisphosphate is split into glyceraldehyde 3-phosphate and dihydroxyacetone phosphate through the use of aldolase. The aldolase used in gluconeogenesis and glycolysis is a cytoplasmic protein.
Three forms of class I protein are found in vertebrates.
Aldolase A is preferentially expressed in muscle and brain; aldolase B in liver, kidney, and in enterocytes; and aldolase C in brain. Aldolases A and C are mainly involved in glycolysis, while aldolase B is involved in both glycolysis and gluconeogenesis. Some defects in aldolase B cause hereditary fructose intolerance. The metabolism of free fructose in liver exploits the ability of aldolase B to use fructose 1-phosphate as a substrate. Archaeal fructose-bisphosphate aldolase/phosphatase is presumably involved in gluconeogenesis because its product is fructose 6-phosphate.
In the Calvin cycle
The Calvin cycle is a carbon fixation pathway; it is part of photosynthesis, which converts carbon dioxide and other compounds into glucose. It and gluconeogenesis share a series of four reversible reactions. In both pathways 3-phosphoglycerate (3-PGA or 3-PG) is converted to fructose 1,6-bisphosphate, with aldolase catalyzing the last reaction. A fifth reaction, catalyzed in both pathways by fructose 1,6-bisphosphatase, hydrolyzes the fructose 1,6-bisphosphate to fructose 6-phosphate and inorganic phosphate. The large decrease in free energy makes this reaction irreversible. In the Calvin cycle aldolase also catalyzes the production of sedoheptulose 1,7-bisphosphate from DHAP and erythrose 4-phosphate. The chief products of the Calvin cycle are triose phosphate (TP), which is a mixture of DHAP and G3P, and fructose 6-phosphate. Both are also needed to regenerate RuBP. The aldolase used by plants and algae in the Calvin cycle is usually a plastid-targeted protein encoded by a nuclear gene.
Reactions
Aldolase catalyzes
fructose 1,6-bisphosphate ⇌ DHAP + G3P
and also
sedoheptulose 1,7-bisphosphate ⇌ DHAP + erythrose 4-phosphate
fructose 1-phosphate ⇌ DHAP + glyceraldehyde
Aldolase is used in the reversible trunk of gluconeogenesis/glycolysis
2(PEP + NADH + H+ + ATP + H2O) ⇌ fructose 1,6-bisphosphate + 2(NAD+ + ADP + Pi)
Aldolase is also used in the part of the Calvin cycle shared with gluconeogenesis, with the irreversible phosphate hydrolysis at the end catalyzed by fructose 1,6-bisphosphatase
2(3-PG + NADPH + H+ + ATP + H2O) ⇌ fructose 1,6-bisphosphate + 2(NADP+ + ADP + Pi)
fructose 1,6-bisphosphate + H2O → fructose 6-phosphate + Pi
In gluconeogenesis 3-PG is produced by enolase and phosphoglycerate mutase acting in series
PEP + H2O ⇌ 2-PG ⇌ 3-PG
In the Calvin cycle 3-PG is produced by RuBisCO
RuBP + CO2 + H2O → 2(3-PG)
G3P is produced by phosphoglycerate kinase acting in series with glyceraldehyde-3-phosphate dehydrogenase (GAPDH) in gluconeogenesis, and in series with glyceraldehyde-3-phosphate dehydrogenase (NADP+) (phosphorylating) in the Calvin cycle
3-PG + ATP ⇌ 1,3-bisphosphoglycerate + ADP
1,3-bisphosphoglycerate + NAD(P)H + H+ ⇌ G3P + Pi + NAD(P)+
Triose-phosphate isomerase maintains DHAP and G3P in near equilibrium, producing the mixture called triose phosphate (TP)
G3P ⇌ DHAP
Thus both DHAP and G3P are available to aldolase.
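As a quick cross-check of the trunk equation above, the following sketch (Python, not part of the source; species abbreviations follow the text) sums the individual reactions, counting reactants as negative and products as positive, and confirms that everything except the overall inputs and outputs cancels.

```python
from collections import Counter

def add(net, reaction, times=1):
    # Accumulate a reaction's stoichiometry into the running total.
    for species, coeff in reaction.items():
        net[species] += coeff * times
    return net

# Reactants negative, products positive (one turnover each).
enolase_mutase = {"PEP": -1, "H2O": -1, "3-PG": +1}                 # PEP + H2O <=> 2-PG <=> 3-PG
pg_kinase      = {"3-PG": -1, "ATP": -1, "1,3-BPG": +1, "ADP": +1}
gapdh          = {"1,3-BPG": -1, "NADH": -1, "H+": -1, "G3P": +1, "Pi": +1, "NAD+": +1}
tpi            = {"G3P": -1, "DHAP": +1}
aldolase       = {"G3P": -1, "DHAP": -1, "F1,6BP": +1}              # reverse of the cleavage

net = Counter()
for reaction, times in [(enolase_mutase, 2), (pg_kinase, 2), (gapdh, 2), (tpi, 1), (aldolase, 1)]:
    add(net, reaction, times)

print({s: c for s, c in net.items() if c != 0})
# Consumed: 2 PEP, 2 H2O, 2 ATP, 2 NADH, 2 H+; produced: 1 F1,6BP, 2 ADP, 2 NAD+, 2 Pi
```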
Moonlighting properties
Aldolase has also been implicated in many "moonlighting" or non-catalytic functions, based upon its binding affinity for many other proteins including F-actin, α-tubulin, light chain dynein, WASP, Band 3 anion exchanger, phospholipase D (PLD2), glucose transporter GLUT4, inositol trisphosphate, V-ATPase and ARNO (a guanine nucleotide exchange factor of ARF6). These associations are thought to be predominantly involved in cellular structure, however, involvement in endocytosis, parasite invasion, cytoskeleton rearrangement, cell motility, membrane protein trafficking and recycling, signal transduction and tissue compartmentalization have been explored.
References
Further reading
External links
Tolan Laboratory at Boston University
Protein domains
Lyases
Moonlighting proteins
Glycolysis enzymes
Glycolysis | Fructose-bisphosphate aldolase | [
"Chemistry",
"Biology"
] | 1,830 | [
"Carbohydrate metabolism",
"Glycolysis",
"Protein domains",
"Protein classification"
] |
10,943,640 | https://en.wikipedia.org/wiki/Haploscope | A haploscope is an optical device for presenting one image to one eye and another image to the other eye. The word derives from two Greek roots: haploieides, single and skopeo, to view. The word is often used interchangeably with stereoscope, but it is more general than that. A stereoscope is a type of haploscope, but not vice versa. The word has more currency in the medical field than elsewhere, where it refers to instruments designed to test binocular vision. These instruments include Worth's amblyoscope and the synoptophore.
Commonly haploscopes employ front-surfaced mirrors placed at different angles close to the eyes to reflect the images into the eyes. Reputedly the largest haploscope, with images of over a meter (in fact, 4 feet) square and a viewing distance for each eye of nearly five meters (16 feet), was constructed by Vaegan in about 1975 to research stereoacuity. The large images allowed very small retinal disparities to be presented.
See also
Cheiroscope
Diplopia
Amblyopia
Orthoptist
References
External links
International Orthoptics Association
Orthoptics Association of Australia
Optical devices | Haploscope | [
"Materials_science",
"Engineering"
] | 257 | [
"Glass engineering and science",
"Optical devices"
] |
1,566,768 | https://en.wikipedia.org/wiki/Material%20properties%20of%20diamond | Diamond is the allotrope of carbon in which the carbon atoms are arranged in the specific type of cubic lattice called diamond cubic. It is a crystal that is transparent to opaque and which is generally isotropic (no or very weak birefringence). Diamond is the hardest naturally occurring material known. Yet, due to important structural brittleness, bulk diamond's toughness is only fair to good. The precise tensile strength of bulk diamond is little known; however, compressive strength up to has been observed, and it could be as high as in the form of micro/nanometer-sized wires or needles (~ in diameter, micrometers long), with a corresponding maximum tensile elastic strain in excess of 9%. The anisotropy of diamond hardness is carefully considered during diamond cutting. Diamond has a high refractive index (2.417) and moderate dispersion (0.044) properties that give cut diamonds their brilliance. Scientists classify diamonds into four main types according to the nature of crystallographic defects present. Trace impurities substitutionally replacing carbon atoms in a diamond's crystal structure, and in some cases structural defects, are responsible for the wide range of colors seen in diamond. Most diamonds are electrical insulators and extremely efficient thermal conductors. Unlike many other minerals, the specific gravity of diamond crystals (3.52) has rather small variation from diamond to diamond.
Hardness and crystal structure
Known to the ancient Greeks as (, 'proper, unalterable, unbreakable') and sometimes called adamant, diamond is the hardest known naturally occurring material, and serves as the definition of 10 on the Mohs scale of mineral hardness. Diamond is extremely strong owing to its crystal structure, known as diamond cubic, in which each carbon atom has four neighbors covalently bonded to it. Bulk cubic boron nitride (c-BN) is nearly as hard as diamond. Diamond reacts with some materials, such as steel, and c-BN wears less when cutting or abrading such material. (Its zincblende structure is like the diamond cubic structure, but with alternating types of atoms.) A currently hypothetical material, beta carbon nitride (β-), may also be as hard or harder in one form. It has been shown that some diamond aggregates having nanometer grain size are harder and tougher than conventional large diamond crystals, thus they perform better as abrasive material. Owing to the use of those new ultra-hard materials for diamond testing, more accurate values are now known for diamond hardness. A surface perpendicular to the [111] crystallographic direction (that is the longest diagonal of a cube) of a pure (i.e., type IIa) diamond has a hardness value of when scratched with a nanodiamond tip, while the nanodiamond sample itself has a value of when tested with another nanodiamond tip. Because the test only works properly with a tip made of harder material than the sample being tested, the true value for nanodiamond is likely somewhat lower than .
The precise tensile strength of diamond is unknown, though strength up to has been observed, and theoretically it could be as high as depending on the sample volume/size, the perfection of diamond lattice and on its orientation: Tensile strength is the highest for the [100] crystal direction (normal to the cubic face), smaller for the [110] and the smallest for the [111] axis (along the longest cube diagonal). Diamond also has one of the smallest compressibilities of any material.
Cubic diamonds have a perfect and easy octahedral cleavage, which means that they only have four planes—weak directions following the faces of the octahedron where there are fewer bonds—along which diamond can easily split upon blunt impact to leave a smooth surface. Similarly, diamond's hardness is markedly directional: the hardest direction is the diagonal on the cube face, 100 times harder than the softest direction, which is the dodecahedral plane. The octahedral plane is intermediate between the two extremes. The diamond cutting process relies heavily on this directional hardness, as without it a diamond would be nearly impossible to fashion. Cleavage also plays a helpful role, especially in large stones where the cutter wishes to remove flawed material or to produce more than one stone from the same piece of rough (e.g. Cullinan Diamond).
Diamonds crystallize in the diamond cubic crystal system (space group Fdm) and consist of tetrahedrally, covalently bonded carbon atoms. A second form called lonsdaleite, with hexagonal symmetry, has also been found, but it is extremely rare and forms only in meteorites or in laboratory synthesis. The local environment of each atom is identical in the two structures. From theoretical considerations, lonsdaleite is expected to be harder than diamond, but the size and quality of the available stones are insufficient to test this hypothesis. In terms of crystal habit, diamonds occur most often as euhedral (well-formed) or rounded octahedra and twinned, flattened octahedra with a triangular outline. Other forms include dodecahedra and (rarely) cubes. There is evidence that nitrogen impurities play an important role in the formation of well-shaped euhedral crystals. The largest diamonds found, such as the Cullinan Diamond, were shapeless. These diamonds are pure (i.e. type II) and therefore contain little if any nitrogen.
The faces of diamond octahedrons are highly lustrous owing to their hardness; triangular shaped growth defects (trigons) or etch pits are often present on the faces. A diamond's fracture is irregular. Diamonds which are nearly round, due to the formation of multiple steps on octahedral faces, are commonly coated in a gum-like skin (nyf). The combination of stepped faces, growth defects, and nyf produces a "scaly" or corrugated appearance. Many diamonds are so distorted that few crystal faces are discernible. Some diamonds found in Brazil and the Democratic Republic of the Congo are polycrystalline and occur as opaque, darkly colored, spherical, radial masses of tiny crystals; these are known as ballas and are important to industry as they lack the cleavage planes of single-crystal diamond. Carbonado is a similar opaque microcrystalline form which occurs in shapeless masses. Like ballas diamond, carbonado lacks cleavage planes and its specific gravity varies widely from 2.9 to 3.5. Bort diamonds, found in Brazil, Venezuela, and Guyana, are the most common type of industrial-grade diamond. They are also polycrystalline and often poorly crystallized; they are translucent and cleave easily.
Hydrophobia and lipophilia
Due to great hardness and strong molecular bonding, a cut diamond's facets and facet edges appear the flattest and sharpest. A curious side effect of a natural diamond's surface perfection is hydrophobia combined with lipophilia. The former property means a drop of water placed on a diamond forms a coherent droplet, whereas in most other minerals the water would spread out to cover the surface. Additionally, diamond is unusually lipophilic, meaning grease and oil readily collect and spread on a diamond's surface, whereas in other minerals oil would form coherent drops. This property is exploited in the use of grease pencils, which apply a line of grease to the surface of a suspect diamond simulant. Diamond surfaces are hydrophobic when the surface carbon atoms terminate with a hydrogen atom and hydrophilic when the surface atoms terminate with an oxygen atom or hydroxyl radical. Treatment with gases or plasmas containing the appropriate gas, at temperatures of or higher, can change the surface property completely. Naturally occurring diamonds have a surface with less than a half monolayer coverage of oxygen, the balance being hydrogen and the behavior is moderately hydrophobic. This allows for separation from other minerals at the mine using the so-called "grease-belt".
Toughness
Unlike hardness, which denotes only resistance to scratching, diamond's toughness or tenacity is only fair to good. Toughness relates to the ability to resist breakage from falls or impacts. Because of diamond's perfect and easy cleavage, it is vulnerable to breakage. A diamond will shatter if hit with an ordinary hammer. The toughness of natural diamond has been measured as , which is good compared to other gemstones like aquamarine (blue colored), but poor compared to most engineering materials. As with any material, the macroscopic geometry of a diamond contributes to its resistance to breakage. Diamond has a cleavage plane and is therefore more fragile in some orientations than others. Diamond cutters use this attribute to cleave some stones, prior to faceting.
Ballas and carbonado diamond are exceptional, as they are polycrystalline and therefore much tougher than single-crystal diamond; they are used for deep-drilling bits and other demanding industrial applications. Particular faceting shapes of diamonds are more prone to breakage and thus may be uninsurable by reputable insurance companies. The brilliant cut of gemstones is designed specifically to reduce the likelihood of breakage or splintering.
Solid foreign crystals are commonly present in diamond. They are mostly minerals, such as olivine, garnets, ruby, and many others. These and other inclusions, such as internal fractures or "feathers", can compromise the structural integrity of a diamond. Cut diamonds that have been enhanced to improve their clarity via glass infilling of fractures or cavities are especially fragile, as the glass will not stand up to ultrasonic cleaning or the rigors of the jeweler's torch. Fracture-filled diamonds may shatter if treated improperly.
Pressure resistance
Used in so-called diamond anvil experiments to create high-pressure environments, diamonds withstand crushing pressures in excess of 600 gigapascals (6 million atmospheres).
Optical properties
Color and its causes
Diamonds occur in various colors: black, brown, yellow, gray, white, blue, orange, purple to pink, and red. Colored diamonds contain crystallographic defects, including substitutional impurities and structural defects, that cause the coloration. Theoretically, pure diamonds would be transparent and colorless. Diamonds are scientifically classed into two main types and several subtypes, according to the nature of defects present and how they affect light absorption:
Type I diamond has nitrogen (N) atoms as the main impurity, at a concentration of up to 1%. If the N atoms are in pairs or larger aggregates, they do not affect the diamond's color; these are Type Ia. About 98% of gem diamonds are type Ia: these diamonds belong to the Cape series, named after the diamond-rich region formerly known as Cape Province in South Africa, whose deposits are largely Type Ia. If the nitrogen atoms are dispersed throughout the crystal in isolated sites (not paired or grouped), they give the stone an intense yellow or occasionally brown tint (type Ib); the rare canary diamonds belong to this type, which represents only ~0.1% of known natural diamonds. Synthetic diamond containing nitrogen is usually of type Ib. Type Ia and Ib diamonds absorb in both the infrared and ultraviolet region of the electromagnetic spectrum, from . They also have a characteristic fluorescence and visible absorption spectrum.
Type II diamonds have very few if any nitrogen impurities. Pure (type IIa) diamond can be colored pink, red, or brown owing to structural anomalies arising through plastic deformation during crystal growth; these diamonds are rare (1.8% of gem diamonds), but constitute a large percentage of Australian diamonds. Type IIb diamonds, which account for ~0.1% of gem diamonds, are usually a steely blue or gray due to boron atoms scattered within the crystal matrix. These diamonds are also semiconductors, unlike other diamond types (see Electrical properties). Most blue-gray diamonds coming from the Argyle mine of Australia are not of type IIb, but of Ia type. Those diamonds contain large concentrations of defects and impurities (especially hydrogen and nitrogen) and the origin of their color is as yet uncertain. Type II diamonds weakly absorb in a different region of the infrared (the absorption is due to the diamond lattice rather than impurities), and transmit in the ultraviolet below 225 nm, unlike type I diamonds. They also have differing fluorescence characteristics, but no discernible visible absorption spectrum.
Certain diamond enhancement techniques are commonly used to artificially produce an array of colors, including blue, green, yellow, red, and black. Color enhancement techniques usually involve irradiation, including proton bombardment via cyclotrons; neutron bombardment in the piles of nuclear reactors; and electron bombardment by Van de Graaff generators. These high-energy particles physically alter the diamond's crystal lattice, knocking carbon atoms out of place and producing color centers. The depth of color penetration depends on the technique and its duration, and in some cases the diamond may be left radioactive to some degree.
Some irradiated diamonds are completely natural; one famous example is the Dresden Green Diamond. In these natural stones the color is imparted by "radiation burns" (natural irradiation by alpha particles originating from uranium ore) in the form of small patches, usually only micrometers deep. Additionally, Type IIa diamonds can have their structural deformations "repaired" via a high-pressure high-temperature (HPHT) process, removing much or all of the diamond's color.
Luster
The luster of a diamond is described as "adamantine", which simply means diamond-like. Reflections on a properly cut diamond's facets are undistorted, due to their flatness. The refractive index of diamond (as measured via sodium light, ) is 2.417. Because it is cubic in structure, diamond is also isotropic. Its high dispersion of 0.044 (variation of refractive index across the visible spectrum) manifests in the perceptible fire of cut diamonds. This fire—flashes of prismatic colors seen in transparent stones—is perhaps diamond's most important optical property from a jewelry perspective. The prominence or amount of fire seen in a stone is heavily influenced by the choice of diamond cut and its associated proportions (particularly crown height), although the body color of fancy (i.e., unusual) diamonds may hide their fire to some degree.
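The high refractive index also fixes the critical angle for total internal reflection inside the stone, which is central to why a well-proportioned cut returns light through the crown. A minimal illustration (not from the source) using the value quoted above:

```python
import math

n_diamond = 2.417  # refractive index quoted above (sodium light)
# Critical angle for total internal reflection at a diamond-air facet.
theta_c = math.degrees(math.asin(1 / n_diamond))
print(f"critical angle ≈ {theta_c:.1f} degrees")  # ≈ 24.4 degrees
```

Light striking a facet from inside at more than this angle is reflected back into the stone, so cut proportions are chosen to keep most rays above it before they exit through the crown.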
More than 20 other minerals have higher dispersion (that is difference in refractive index for blue and red light) than diamond, such as titanite 0.051, andradite 0.057, cassiterite 0.071, strontium titanate 0.109, sphalerite 0.156, synthetic rutile 0.330, cinnabar 0.4, etc. (see Dispersion (optics)). However, the combination of dispersion with extreme hardness, wear and chemical resistivity, as well as clever marketing, determines the exceptional value of diamond as a gemstone.
Fluorescence
Diamonds exhibit fluorescence, that is, they emit light of various colors and intensities under long-wave ultraviolet light (365 nm): Cape series stones (type Ia) usually fluoresce blue, and these stones may also phosphoresce yellow, a unique property among gemstones. Other possible long-wave fluorescence colors are green (usually in brown stones), yellow, mauve, or red (in type IIb diamonds). In natural diamonds, there is typically little if any response to short-wave ultraviolet, but the reverse is true of synthetic diamonds. Some natural type IIb diamonds phosphoresce blue after exposure to short-wave ultraviolet. In natural diamonds, fluorescence under X-rays is generally bluish-white, yellowish or greenish. Some diamonds, particularly Canadian diamonds, show no fluorescence.
The origin of the luminescence colors is often unclear and not unique. Blue emission from type IIa and IIb diamonds is reliably identified with dislocations by directly correlating the emission with dislocations in an electron microscope. However, blue emission in type Ia diamond could be either due to dislocations or the N3 defects (three nitrogen atoms bordering a vacancy). Green emission in natural diamond is usually due to the H3 center (two substitutional nitrogen atoms separated by a vacancy), whereas in synthetic diamond it usually originates from nickel used as a catalyst (see figure). Orange or red emission could be due to various reasons, one being the nitrogen-vacancy center which is present in sufficient quantities in all types of diamond, even type IIb.
Optical absorption
Cape series (Ia) diamonds have a visible absorption spectrum (as seen through a direct-vision spectroscope) consisting of a fine line in the violet at ; however, this line is often invisible until the diamond has been cooled to very low temperatures. Associated with this are weaker lines at , , , , and .
All those lines are labeled as N3 and N2 optical centers and associated with a defect consisting of three nitrogen atoms bordering a vacancy. Other stones show additional bands: brown, green, or yellow diamonds show a band in the green at (H3 center, see above), sometimes accompanied by two additional weak bands at and (H4 center, a large complex presumably involving 4 substitutional nitrogen atoms and 2 lattice vacancies). Type IIb diamonds may absorb in the far red due to the substitutional boron, but otherwise show no observable visible absorption spectrum.
Gemological laboratories make use of spectrophotometer machines that can distinguish natural, artificial, and color-enhanced diamonds. The spectrophotometers analyze the infrared, visible, and ultraviolet absorption and luminescence spectra of diamonds cooled with liquid nitrogen to detect tell-tale absorption lines that are not normally discernible.
Electrical properties
Diamond is a good electrical insulator, having a resistivity of to ( – ), and is notable for its wide bandgap of 5.47 eV. High carrier mobilities and a high electric breakdown field at room temperature are also important characteristics, and together they make single-crystal diamond a promising semiconductor material. A wide bandgap is advantageous because it allows a semiconductor to maintain high resistivity even at high temperature, which is important for high-power applications. Semiconductors with high carrier mobilities, such as diamond, are easier to use in industry because they do not require a high input voltage, and a high breakdown field prevents a large current from suddenly flowing at typical operating voltages.
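As a small consistency check (illustrative, not from the source), the 5.47 eV bandgap corresponds to an absorption edge in the deep ultraviolet, in line with the ~225 nm transmission limit of type II diamond noted earlier:

```python
h_eV_s = 4.135667696e-15  # Planck constant in eV*s
c_m_s = 2.99792458e8      # speed of light in m/s

E_gap_eV = 5.47
# Photon wavelength corresponding to the bandgap energy.
wavelength_nm = h_eV_s * c_m_s / E_gap_eV * 1e9
print(f"absorption edge ≈ {wavelength_nm:.0f} nm")  # ≈ 227 nm
```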
Most natural blue diamonds are an exception and are semiconductors due to substitutional boron impurities replacing carbon atoms. Natural blue or blue-gray diamonds, common for the Argyle diamond mine in Australia, are rich in hydrogen; these diamonds are not semiconductors and it is unclear whether hydrogen is actually responsible for their blue-gray color. Natural blue diamonds containing boron and synthetic diamonds doped with boron are p-type semiconductors. N-type diamond films are reproducibly synthesized by phosphorus doping during chemical vapor deposition. Diode p-n junctions and UV light emitting diodes (LEDs, at ) have been produced by sequential deposition of p-type (boron-doped) and n-type (phosphorus-doped) layers.
Diamond's electronic properties can be also modulated by strain engineering.
Diamond transistors have been produced (for research purposes). In January 2024, a Japanese research team fabricated a MOSFET using phosphorus-doped n-type diamond, which would have superior characteristics to silicon-based technology in high-temperature, high-frequency or high-electron mobility applications. FETs with SiN dielectric layers, and SC-FETs have been made.
In April 2004, research published in the journal Nature reported that below , synthetic boron-doped diamond is a bulk superconductor. Superconductivity was later observed in heavily boron-doped films grown by various chemical vapor deposition techniques, and the highest reported transition temperature (by 2009) is . (See also Covalent superconductor#Diamond)
Uncommon magnetic properties (spin glass state) were observed in diamond nanocrystals intercalated with potassium. Unlike paramagnetic host material, magnetic susceptibility measurements of intercalated nanodiamond revealed distinct ferromagnetic behavior at . This is essentially different from results of potassium intercalation in graphite or C60 fullerene, and shows that sp3 bonding promotes magnetic ordering in carbon. The measurements presented first experimental evidence of intercalation-induced spin-glass state in a nanocrystalline diamond system.
Thermal conductivity
Unlike most electrical insulators, diamond is a good conductor of heat because of the strong covalent bonding and low phonon scattering. The thermal conductivity of natural diamond has been measured at about 2,200 W/(m·K), which is five times that of silver, the most thermally conductive metal. Monocrystalline synthetic diamond enriched to 99.9% in the isotope 12C had the highest thermal conductivity of any known solid at room temperature: 3,320 W/(m·K), though reports exist of superior thermal conductivity in both carbon nanotubes and graphene. Because diamond has such high thermal conductance it is already used in semiconductor manufacture to prevent silicon and other semiconducting materials from overheating. At lower temperatures conductivity becomes even better, and reaches 41,000 W/(m·K) at (12C-enriched diamond).
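For scale, a quick comparison (a sketch, not from the source; the figure for silver is the commonly quoted room-temperature value of roughly 429 W/(m·K)):

```python
k_natural_diamond = 2200   # W/(m*K), natural diamond (from the text)
k_isotope_enriched = 3320  # W/(m*K), 12C-enriched synthetic diamond (from the text)
k_silver = 429             # W/(m*K), commonly quoted value; assumption, not from the text

print(f"natural diamond / silver  ≈ {k_natural_diamond / k_silver:.1f}x")   # ≈ 5.1x
print(f"enriched diamond / silver ≈ {k_isotope_enriched / k_silver:.1f}x")  # ≈ 7.7x
```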
Diamond's high thermal conductivity is used by jewelers and gemologists who may employ an electronic thermal probe to distinguish diamonds from their imitations. These probes consist of a pair of battery-powered thermistors mounted in a fine copper tip. One thermistor functions as a heating device while the other measures the temperature of the copper tip: if the stone being tested is a diamond, it will conduct the tip's thermal energy rapidly enough to produce a measurable temperature drop. This test takes about 2–3 seconds. However, older probes will be fooled by moissanite, a crystalline mineral form of silicon carbide introduced in 1998 as an alternative to diamonds, which has a similar thermal conductivity.
Technologically, the high thermal conductivity of diamond is used for the efficient heat removal in high-end power electronics. Diamond is especially appealing in situations where electrical conductivity of the heat sinking material cannot be tolerated e.g. for the thermal management of high-power radio-frequency () microcoils that are used to produce strong and local RF fields.
Thermal stability
If heated over in air, diamond, being a form of carbon, oxidizes and its surface blackens, but the surface can be restored by re-polishing. In absence of oxygen, e.g. in a flow of high-purity argon gas, diamond can be heated up to about . At high pressure (~) diamond can be heated up to , and a report published in 2009 suggests that diamond can withstand temperatures of and above.
Diamonds are carbon crystals that form under high temperatures and extreme pressures such as deep within the Earth. At surface air pressure (one atmosphere), diamonds are not as stable as graphite, and so the decay of diamond is thermodynamically favorable (δH = ). However, owing to a very large kinetic energy barrier, diamonds are metastable; they will not decay into graphite under normal conditions.
See also
Chemical vapor deposition of diamond
Crystallographic defects in diamond
Nitrogen-vacancy center
Synthetic diamond
References
Further reading
Pagel-Theisen, Verena. (2001). Diamond grading ABC: The manual (9th ed.), pp. 84–85. Rubin & Son n.v.; Antwerp, Belgium.
Webster, Robert, and Jobbins, E. Allan (Ed.). (1998). Gemmologist's compendium, p. 21, 25, 31. St Edmundsbury Press Ltd, Bury St Edwards.
External links
Properties of diamond
Properties of diamond (S. Sque, PhD thesis, 2005, University of Exeter, UK)
Material properties of diamond
Allotropes of carbon
Native element minerals
Superhard materials | Material properties of diamond | [
"Physics",
"Chemistry"
] | 5,014 | [
"Allotropes of carbon",
"Allotropes",
"Materials",
"Superhard materials",
"Matter"
] |
1,567,681 | https://en.wikipedia.org/wiki/Froth%20flotation | Froth flotation is a process for selectively separating hydrophobic materials from hydrophilic. This is used in mineral processing, paper recycling and waste-water treatment industries. Historically this was first used in the mining industry, where it was one of the great enabling technologies of the 20th century. It has been described as "the single most important operation used for the recovery and upgrading of sulfide ores". The development of froth flotation has improved the recovery of valuable minerals, such as copper- and lead-bearing minerals. Along with mechanized mining, it has allowed the economic recovery of valuable metals from much lower-grade ore than previously possible.
Industries
Froth flotation is applied to a wide range of separations. An estimated one billion tons of materials are processed in this manner annually.
Mineral processing
Froth flotation is a process for separating minerals from gangue by exploiting differences in their hydrophobicity. Hydrophobicity differences between valuable minerals and waste gangue are increased through the use of surfactants and wetting agents. The flotation process is used for the separation of a large range of sulfides, carbonates and oxides prior to further refinement. Phosphates and coal are also upgraded (purified) by flotation technology. "Grade-recovery curves" are tools for weighing the trade-off of producing a high grade of concentrate vs cost. These curves only compare the grade-recovery relations of a specific feed grade and feed rate.
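A single point on such a curve can be computed from feed, concentrate and tailings assays with the standard two-product formula; the sketch below uses hypothetical numbers that are not from the source.

```python
def grade_recovery_point(feed_grade, conc_grade, tail_grade):
    """Two-product formula: fraction of feed mass reporting to concentrate
    and fraction of the valuable metal recovered into it."""
    mass_yield = (feed_grade - tail_grade) / (conc_grade - tail_grade)
    recovery = mass_yield * conc_grade / feed_grade
    return mass_yield, recovery

# Hypothetical copper flotation assays (% Cu) -- illustrative only.
mass_yield, recovery = grade_recovery_point(feed_grade=0.6, conc_grade=25.0, tail_grade=0.05)
print(f"mass yield ≈ {mass_yield:.1%}, metal recovery ≈ {recovery:.1%}")
# Pushing the concentrate grade higher generally pulls recovery down -- the trade-off the curve shows.
```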
Waste water treatment
The flotation process is also widely used in industrial waste water treatment plants, where it removes fats, oil, grease and suspended solids from waste water. These units are called dissolved air flotation (DAF) units. In particular, dissolved air flotation units are used in removing oil from the wastewater effluents of oil refineries, petrochemical and chemical plants, natural gas processing plants and similar industrial facilities.
Principle of operation
The ore to be treated is ground into particles (comminution). In the idealized case, the individual minerals are physically separated, a process known as full liberation. The particle sizes are typically in the range 2–500 micrometers in diameter. For froth flotation, an aqueous slurry of the ground ore is treated with flotation reagents, chiefly a collector and a frother. An example is sodium ethyl xanthate as a collector in the flotation of galena (lead sulfide) to separate it from sphalerite (zinc sulfide). The polar part of the xanthate anion attaches to the ore particles and the non-polar hydrocarbon part forms a hydrophobic layer. The particles are brought to the water surface by air bubbles. About 300 g/t of ore is required for efficient separation. With increasing length of the hydrocarbon chain in xanthates, the efficiency of the hydrophobic action increases, but the selectivity to ore type decreases. The chain is shortest in sodium ethyl xanthate, which makes it highly selective to copper, nickel, lead, gold, and zinc ores. Aqueous solutions (10%) with pH = 7–11 are normally used in the process. This slurry (more properly called the pulp) of hydrophobic particles and hydrophilic particles is then introduced to tanks known as flotation cells that are aerated to produce bubbles. The hydrophobic particles attach to the air bubbles, which rise to the surface, forming a froth. The froth is skimmed from the cell, producing a concentrate ("conc") of the target mineral.
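To put the ~300 g/t collector dosage in perspective, here is a back-of-the-envelope calculation for a hypothetical mill throughput (the tonnage is an assumption, not from the source):

```python
dosage_g_per_t = 300           # collector dosage from the text, grams per tonne of ore
throughput_t_per_day = 20_000  # hypothetical mill throughput, tonnes of ore per day

collector_kg_per_day = dosage_g_per_t * throughput_t_per_day / 1000
print(f"collector consumption ≈ {collector_kg_per_day:,.0f} kg/day")  # ≈ 6,000 kg/day
```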
The minerals that do not float into the froth are referred to as the flotation tailings or flotation tails. These tailings may also be subjected to further stages of flotation to recover the valuable particles that did not float the first time. This is known as scavenging. The final tailings after scavenging are normally pumped for disposal as mine fill or to tailings disposal facilities for long-term storage.
Flotation is normally undertaken in several stages to maximize the recovery of the target mineral or minerals and the concentration of those minerals in the concentrate, while minimizing the energy input.
Flotation stages
The first stage is called roughing, which produces a rougher concentrate. The objective is to remove the maximum amount of the valuable mineral at as coarse a particle size as practical. Grinding costs energy. The goal is to release enough gangue from the valuable mineral to get a high recovery. Some concentrators use a preflotation step to remove low density impurities such as carbonaceous dust. The rougher concentrate is normally subjected to further stages of flotation to reject more of the undesirable minerals that also reported to the froth, in a process known as cleaning. The resulting material is often subject to further grinding (usually called regrinding). Regrinding is often undertaken in specialized regrind mills, such as the IsaMill. The rougher flotation step is often followed by a scavenger flotation step that is applied to the rougher tailings to further recover any of the target minerals.
Science of flotation
To be effective on a given ore slurry, the collectors are chosen based upon their selective wetting of the types of particles to be separated. A good collector will adsorb, physically or chemically, with one of the types of particles. The wetting activity of a surfactant on a particle can in principle be quantified by measuring the contact angles of the liquid/bubble interface. Another important measure for attachment of bubbles to particles is induction time, the time required for the particle and bubble to rupture the thin film separating the particle and bubble. This rupturing is achieved by the surface forces between the particle and bubble.
The mechanisms for the bubble-particle attachment is complex but is viewed as consisting of three steps: collision, attachment, and detachment. The collision is achieved by particles being within the collision tube of a bubble and this is affected by the velocity of the bubble and radius of the bubble. The collision tube corresponds to the region in which a particle will collide with the bubble, with the perimeter of the collision tube corresponding to the grazing trajectory.
The attachment of the particle to the bubble is controlled by the induction time of the particle and bubble. The particle and bubble need to bind and this occurs if the time in which the particle and bubble are in contact with each other is larger than the required induction time. This induction time is affected by the fluid viscosity, particle and bubble size and the forces between the particle and bubbles.
The detachment of a particle and bubble occurs when the force exerted by the surface tension is exceeded by shear forces and gravitational forces. These forces are complex and vary within the cell. High shear will be experienced close to the impeller of a mechanical flotation cell and mostly gravitational force in the collection and cleaning zone of a flotation column.
Significant issues of entrainment of fine particles occurs as these particles experience low collision efficiencies as well as sliming and degradation of the particle surfaces. Coarse particles show a low recovery of the valuable mineral due to the low liberation and high detachment efficiencies.
Flotation equipment
Flotation can be performed in rectangular or cylindrical mechanically agitated cells or tanks, flotation columns, Jameson Cells or deinking flotation machines. Classified by the manner in which air is introduced, flotation equipment falls into two distinct groups: pneumatic and mechanical machines. Generally, pneumatic machines give a low-grade concentrate and few operating troubles.
Mechanical cells use a large mixer and diffuser mechanism at the bottom of the mixing tank to introduce air and provide mixing action. Flotation columns use air spargers to introduce air at the bottom of a tall column while introducing slurry above. The countercurrent motion of the slurry flowing down and the air flowing up provides mixing action. Mechanical cells generally have a higher throughput rate, but produce material that is of lower quality, while flotation columns generally have a low throughput rate but produce higher quality material.
The Jameson cell uses neither impellers nor spargers, instead combining the slurry with air in a downcomer where high shear creates the turbulent conditions required for bubble particle contacting.
Chemicals of flotation
Collectors
For many ores (e.g. those of Cu, Mo, W, Ni), the collectors are anionic sulfur ligands. Particularly popular for sulfide minerals are xanthate salts, including potassium amyl xanthate (PAX), potassium isobutyl xanthate (PIBX), potassium ethyl xanthate (KEX), sodium isobutyl xanthate (SIBX), sodium isopropyl xanthate (SIPX), sodium ethyl xanthate (SEX). Related collectors include related sulfur-based ligands: dithiophosphates, dithiocarbamates. Still other classes of collectors include the thiourea thiocarbanilide. Fatty acid carboxylates, alkyl sulfates, and alkyl sulfonates have also been used for oxide minerals.
For some minerals (e.g., sylvinite for KCl), fatty amines are used as collectors.
Frothers
A variety of compounds are added to stabilize the foams. These additives include pine oil and various alcohols: methyl isobutyl carbinol (MIBC), polyglycols, xylenol (cresylic acid).
Depressants
According to one vendor, depressants "increase the efficiency of the flotation process by selectively inhibiting the interaction of one mineral with the collector." A typical pulverized ore sample consists of many components, of which only one or a few are targets for the collector. Depressants bind to these other components so that the collector is not wasted on them. Depressants are selected for particular ores. Typical depressants are starch, polyphenols, lye, and lime; they are typically cheap and oxygen-rich.
Modifiers
A variety of other compounds are added to optimize the separation process, these additives are called modifiers. Modifying reagents react either with the mineral surfaces or with collectors and other ions in the flotation pulp, resulting in a modified and controlled flotation response.
pH modifiers include lime (used as quicklime CaO, or more commonly as slaked lime, a slurry of Ca(OH)2), soda ash (Na2CO3), caustic soda (NaOH), sulfuric and hydrochloric acid (H2SO4, HCl).
Anionic modifiers include phosphates, silicates, and carbonates.
Organic modifiers include the thickeners dextrin, starch, glue, and carboxymethyl cellulose (CMC).
Specific applications
Sulfide ores
Prior to 1907, nearly all the copper mined in the US came from underground vein deposits, averaging 2.5 percent copper. By 1991, the average grade of copper ore mined in the US had fallen to only 0.6 percent.
Nonsulfide ores
Flotation is used for the purification of potassium chloride from sodium chloride and clay minerals. The crushed mineral is suspended in brine in the presence of fatty ammonium salts. Because the ammonium head group and K+ have very similar ionic radii (ca. 0.135, 0.143 nm respectively), the ammonium centers exchange for the surface potassium sites on the particles of KCl, but not on the NaCl particles. The long alkyl chains then confer hydrophobicity to the particles, which enable them to form foams.
Chemical compounds for deinking of recycled paper
Froth flotation is one of the processes used to recover recycled paper. In the paper industry this step is called deinking or just flotation. The target is to release and remove the hydrophobic contaminants from the recycled paper. The contaminants are mostly printing ink and stickies. Normally the setup is a two-stage system with 3, 4, or 5 flotation cells in series. The chemicals typically used include:
pH control: sodium silicate and sodium hydroxide
Calcium ion source: hard water, lime or calcium chloride
Collector: fatty acid, fatty acid emulsion, fatty acid soap and/or organo-modified siloxane
Environmental considerations
As in any technology that has long been conducted on the multi-million ton per year scale, flotation technologies have the potential to threaten the environment beyond the disruption caused by mining. Froth flotation employs a host of organic chemicals and relies upon elaborate machinery. Some of the chemicals (cyanide) are acutely toxic but hydrolyze to innocuous products. Naturally occurring fatty acids are widely used. Tailings and effluents are contained in lined ponds. Froth flotation is "poised for increased activity due to their potential usefulness in environmental site cleanup operations" including recycling of plastics and metals, not to mention water treatment.
History
Flotation processes are described in ancient Greek and Persian literature. During the late 19th century, the process basics were discovered through a slow evolutionary phase. During the first decade of the 20th century, a more rapid investigation of oils, froths, and agitation led to proven workplace applications, especially in Broken Hill, Australia, that brought the technological innovation known as “froth flotation.” During the early 20th century, froth flotation revolutionized mineral processing.
Initially, naturally occurring chemicals such as fatty acids and oils were used as flotation reagents in large quantities to increase the hydrophobicity of the valuable minerals. Since then, the process has been adapted and applied to a wide variety of materials to be separated, and additional collector agents, including surfactants and synthetic compounds have been adopted for various applications.
19th century
Englishman William Haynes patented a process in 1860 for separating sulfide and gangue minerals using oil. Later writers have pointed to Haynes's as the first "bulk oil flotation" patent, though there is no evidence of its being field tested, or used commercially. In 1877 the brothers Bessel (Adolph and August) of Dresden, Germany, introduced their commercially successful oil and froth flotation process for extracting graphite, considered by some the root of froth flotation. However, the Bessel process became uneconomical after the discovery of high-grade graphite in Sri Lanka and was largely forgotten.
Inventor Hezekiah Bradford of Philadelphia invented a "method of saving floating material in ore-separation” and received US patent No. 345951 on July 20, 1886. He would later go on to patent the Bradford Breaker, currently in use by the coal industry, in 1893. His "Bradford washer," patented 1870, was used to concentrate iron, copper and lead-zinc ores by specific gravity, but lost some of the metal as float off the concentration process. The 1886 patent was to capture this "float" using surface tension, the first of the skin-flotation process patents that were eclipsed by oil froth flotation.
On August 24, 1886, Carrie Everson received a patent for her process calling for oil[s] but also an acid or a salt, a significant step in the evolution of the process history. By 1890, tests of the Everson process had been made at Georgetown and Silver Cliff, Colorado, and Baker, Oregon. She abandoned the work upon the death of her husband, and before perfecting a commercially successful process. Later, during the height of legal disputes over the validity of various patents during the 1910s, Everson's was often pointed to as the initial flotation patent - which would have meant that the process was not patentable again by later contestants. Much confusion has been clarified recently by historian Dawn Bunyak.
First commercial flotation process
The generally recognized first successful commercial flotation process for mineral sulphides was invented by Frank Elmore who worked on the development with his brother, Stanley. The Glasdir copper mine at Llanelltyd, near Dolgellau in North Wales was bought in 1896 by the Elmore brothers in conjunction with their father, William. In 1897, the Elmore brothers installed the world's first industrial-size commercial flotation process for mineral beneficiation at the Glasdir mine. The process was not froth flotation but used oil to agglomerate (make balls of) pulverised sulphides and buoy them to the surface, and was patented in 1898 (revised 1901). The operation and process was described in the April 25, 1900 Transactions of the Institution of Mining and Metallurgy of England, which was reprinted with comment, June 23, 1900, in the Engineering and Mining Journal, New York City. By this time they had recognized the importance of air bubbles in assisting the oil to carry away the mineral particles. As modifications were made to improve the process, it became a success with base metal ores from Norway to Australia.
The Elmores had formed a company known as the Ore Concentration Syndicate Ltd to promote the commercial use of the process worldwide. In 1900, Charles Butters of Berkeley, California, acquired American rights to the Elmore process after seeing a demonstration at Llanelltyd, Wales. Butters, an expert on the cyanide process, built an Elmore process plant in the basement of the Dooley Building, Salt Lake City, and tested the oil process on gold ores throughout the region and tested the tailings of the Mammoth gold mill, Tintic district, Utah, but without success. Because of Butters' reputation and the news of his failure, as well as the unsuccessful attempt at the LeRoi gold mine at Rossland, B. C., the Elmore process was all but ignored in North America.
Developments elsewhere, particularly in Broken Hill, Australia by Minerals Separation, Limited, led to decades of hard-fought legal battles and litigations (e.g. Minerals Separation, Ltd. v. Hyde) for the Elmores who, ultimately, lost as the Elmore process was superseded by more advanced techniques. Another flotation process was independently invented in 1901 in Australia by Charles Vincent Potter and by Guillaume Daniel Delprat around the same time. Potter was a brewer of beer, as well as a chemist, and was likely inspired by the way beer froth lifted up sediment in the beer. This process did not use oil, but relied upon flotation by the generation of gas formed by the introduction of acid into the pulp. In 1903, Potter sued Delprat, then general manager of BHP, for patent infringement. He lost the case for reasons of utility, with Delprat arguing that Potter's process, which used sulphuric acid to generate the bubbles, was not as useful as Delprat's own process, which used salt cake. Despite this, after the case was over BHP began using sulphuric acid for its flotation process.
In 1902, Froment combined oil and gaseous flotation using a modification of the Potter-Delprat process. During the first decade of the twentieth century, Broken Hill became the center of innovation leading to the perfection of the froth flotation process by many technologists there borrowing from each other and building on these first successes.
Yet another process was developed in 1902 by Arthur C. Cattermole, who emulsified the pulp with a small quantity of oil, subjected it to violent agitation, and then slow stirring which coagulated the target minerals into nodules which were separated from the pulp by gravity. The Minerals Separation Ltd., formed in Britain in 1903 to acquire the Cattermole patent, found that it proved unsuccessful. Metallurgists on the staff continued to test and combine other discoveries to patent in 1905 their process, called the Sulman-Picard-Ballot process after company officers and patentees. The process proved successful at their Central Block plant, Broken Hill that year. Significant in their "agitation froth flotation" process was the use of less than 1% oil and an agitation step that created small bubbles, which provided more surface to capture the metal and float into a froth at the surface. Useful work was done by Leslie Bradford at Port Pirie and by William Piper, Sir Herbert Gepp and Auguste de Bavay.
Mineral Separation also bought other patents to consolidate ownership of any potential conflicting rights to the flotation process - except for the Elmore patents. In 1910, when the Zinc Corporation replaced its Elmore process with the Minerals Separation (Sulman-Picard-Ballot) froth flotation process at its Broken Hill plant, the primacy of the Minerals Separation over other process contenders was assured. Henry Livingston Sulman was later recognized by his peers in his election as President of the (British) Institution of Mining and Metallurgy, which also awarded him its gold medal.
20th century
Developments in the United States had been less than spectacular. Butters's failures, as well as others, was followed after 1904, with Scotsman Stanley MacQuisten's process (a surface tension based method), which was developed with a modicum of success in Nevada and Idaho, but this would not work when slimes were present, a major fault. Henry E. Wood of Denver had developed his flotation process along the same lines in 1907, patented 1911, with some success on molybdenum ores. For the most part, however, these were isolated attempts without fanfare for what can only be called marginal successes.
In 1911, James M. Hyde, a former employee of Minerals Separation, Ltd., modified the Minerals Separation process and installed a test plant in the Butte and Superior Mill in Basin, Montana, the first such installation in the USA. In 1912, he designed the Butte & Superior zinc works, Butte, Montana, the first great flotation plant in America. Minerals Separation, Ltd., which had set up an office in San Francisco, sued Hyde for infringement, as well as the Butte & Superior company; both cases were eventually won by the firm in the U.S. Supreme Court. Daniel Cowan Jackling and partners, who controlled Butte & Superior, also contested the Minerals Separation patent and funded the ensuing legal battles that lasted over a decade. They - Utah Copper (Kennecott), Nevada Consolidated, Chino Copper, Ray Con and other Jackling firms - eventually settled, in 1922, paying a substantial fee for licenses to use the Minerals Separation process. One unfortunate result of the dispute was professional divisiveness among the mining engineering community for a generation.
In 1913, Minerals Separation paid for a test plant for the Inspiration Copper Company at Miami, Arizona. Built under the San Francisco office director, Edward Nutter, it proved a success. Inspiration engineer L. D. Ricketts ripped out a gravity concentration mill and replaced it with the Minerals Separation process, the first major use of the process at an American copper mine. Major holders of Inspiration stock were men who controlled the great Anaconda mine of Butte. They immediately followed the Inspiration success to build a Minerals Separation licensed plant at Butte, in 1915–1916, a major statement about the final acceptance of the Minerals Separation patented process.
John M. Callow, of General Engineering of Salt Lake City, had followed flotation from technical papers and the introduction in both the Butte and Superior Mill, and at Inspiration Copper in Arizona and determined that mechanical agitation was a drawback to the existing technology. Introducing a porous brick with compressed air, and a mechanical stirring mechanism, Callow applied for a patent in 1914 (some say that Callow, a Jackling partisan, invented his cell as a means to avoid paying royalties to Minerals Separation, which firms using his cell eventually were forced to do by the courts). This method, known as Pneumatic Flotation, was recognized as an alternative to the Minerals Separation process of flotation concentration. The American Institute of Mining Engineers presented Callow the James Douglas Gold Medal in 1926 for his contributions to the field of flotation. By that time, flotation technology was changing, especially with the discovery of the use of xanthates and other reagents, which made the Callow cell and his process obsolete.
Montana Tech professor Antoine Marc Gaudin defined the early period of flotation as the mechanical phase while by the late 1910s it entered the chemical phase. Discoveries in reagents, especially the use of xanthates patented by Minerals Separations chemist Cornelius H. Keller, not so much increased the capture of minerals through the process as making it far more manageable in day-to-day operations. Minerals Separation's initial flotation patents ended 1923, and new ones for chemical processes gave it a significant position into the 1930s. During this period the company also developed and patented flotation processes for iron out of its Hibbing lab and of phosphate in its Florida lab. Another rapid phase of flotation process innovation did not occur until after 1960.
In the 1960s the froth flotation technique was adapted for deinking recycled paper.
The success of the process is evinced by the number of claimants as "discoverers" of flotation. In 1961, American engineers celebrated "50 years of flotation" and enshrined James Hyde and his Butte & Superior mill. In 1977, German engineers celebrated the "hundredth anniversary of flotation" based on the brothers Bessel patent of 1877. The historic Glasdir copper mine site advertises its tours in Wales as site of the "discovery of flotation" based upon the Elmore brothers work. Recent writers, because of the interest in celebrating women in science, champion Carrie Everson of Denver as mother of the process based on her 1885 patent. Omitted from this list are the engineers, metallurgists and chemists of Minerals Separation, Ltd., which, at least in the American and Australian courts, won control of froth flotation patents as well as right of claimant as discoverers of froth flotation. But, as historian Martin Lynch writes, "Mineral Separation would eventually prevail after taking the case to the US Supreme Court [and the House of Lords], and in so doing earned for itself the cordial detestation of many in the mining world."
Theory
Froth flotation efficiency is determined by a series of probabilities: those of particle–bubble contact, particle–bubble attachment, transport between the pulp and the froth, and froth collection into the product launder. In a conventional mechanically-agitated cell, the void fraction (i.e. volume occupied by air bubbles) is low (5 to 10 percent) and the bubble size is usually greater than 1 mm. This results in a relatively low interfacial area and a low probability of particle–bubble contact. Consequently, several cells in series are required to increase the particle residence time, thus increasing the probability of particle–bubble contact.
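A common textbook way to express this (a sketch of the usual formulation, not an equation given in the source) treats the overall collection probability per bubble encounter as the product of the individual probabilities; running several cells in series then raises the chance that a particle is eventually collected:

```python
# Hypothetical per-pass probabilities -- illustrative values only.
p_collision = 0.20
p_attachment = 0.60
p_no_detachment = 0.90
p_froth_recovery = 0.80

p_single_pass = p_collision * p_attachment * p_no_detachment * p_froth_recovery
n_cells = 6  # cells in series, each giving another chance at collection
p_overall = 1 - (1 - p_single_pass) ** n_cells

print(f"single-pass collection ≈ {p_single_pass:.2f}")        # ≈ 0.09
print(f"after {n_cells} cells in series ≈ {p_overall:.2f}")   # ≈ 0.42
```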
Selective adhesion
Froth flotation depends on the selective adhesion of air bubbles to mineral surfaces in a mineral/water slurry. The air bubbles attach to more hydrophobic particles, as determined by the interfacial energies between the solid, liquid, and gas phases. This energy is described by the Young–Dupré equation:

γlv cos θ = γsv − γsl

where:
γlv is the surface energy of the liquid/vapor interface
γsv is the surface energy of the solid/vapor interface
γsl is the surface energy of the solid/liquid interface,
θ is the contact angle, the angle formed at the junction between vapor, solid, and liquid phases.
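Rearranging gives cos θ = (γsv − γsl)/γlv, so more hydrophobic surfaces show larger contact angles. A minimal numerical illustration follows; all surface-energy values are hypothetical placeholders, not values from the source.

```python
import math

# Hypothetical interfacial energies in mJ/m^2 -- placeholders for illustration.
gamma_lv = 72.0   # liquid/vapor (roughly the water/air value)
gamma_sv = 40.0   # solid/vapor
gamma_sl = 20.0   # solid/liquid

cos_theta = (gamma_sv - gamma_sl) / gamma_lv
theta = math.degrees(math.acos(cos_theta))
print(f"contact angle ≈ {theta:.0f} degrees")  # ≈ 74 degrees for these placeholder values
```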
Minerals targeted for separation may be chemically surface-modified with collectors so that they are more hydrophobic. Collectors are a type of surfactant that increase the natural hydrophobicity of the surface, increasing the separability of the hydrophobic and hydrophilic particles. Collectors either chemically bond via chemisorption to the mineral or adsorb onto the surface via physisorption.
IMFs and surface forces in bubble-particle interactions
Collision
The collision rates for fine particles (50–80 μm) can be accurately modeled, but there is no current theory that accurately models bubble–particle collision for particles as large as 300 μm, which are commonly used in flotation processes.
For fine particles, Stokes' law underestimates the collision probability, while the potential equation based on surface charge overestimates it, so an intermediate equation is used.
It is important to know the collision rates in the system, since this step precedes adsorption, where a three-phase system is formed.
Adsorption (attachment)
The effectiveness of a medium to adsorb to a particle is influenced by the relationship between the surfaces of both materials. Multiple factors affect the efficiency of adsorption in the chemical, thermodynamic, and physical domains, ranging from surface energy and polarity to the shape, size, and roughness of the particle. In froth flotation, adsorption is a strong consequence of surface energy, since small particles have a high surface-area-to-volume ratio, resulting in higher-energy surfaces that form attractions with adsorbates. The air bubbles must selectively adhere to the desired minerals to elevate them to the surface of the slurry while wetting the other minerals and leaving them in the aqueous slurry medium.
Particles that can be easily wetted by water are called hydrophilic, while particles that are not easily wetted by water are called hydrophobic. Hydrophobic particles have a tendency to form a separate phase in aqueous media. In froth flotation the effectiveness of an air bubble to adhere to a particle is based on how hydrophobic the particle is. Hydrophobic particles have an affinity to air bubbles, leading to adsorption. The bubble-particle combinations are elevated to the froth zone driven by buoyancy forces.
The attachment of the bubbles to the particles is determined by the interfacial energies between the solid, liquid, and vapor phases, as modeled by the Young–Dupré equation. The interfacial energies can be based on the natural structure of the materials, or the addition of chemical treatments can improve energy compatibility.
Collectors are the main additives used to improve particle surfaces. They function as surfactants that selectively isolate and aid adsorption between the particles of interest and the bubbles rising through the slurry. Common collectors used in flotation are anionic sulfur ligands, which have a bifunctional structure: an ionic portion that is attracted to the metal, and a hydrophobic portion such as a long hydrocarbon tail. These collectors coat a particle's surface with a monolayer of non-polar substance to aid separation from the aqueous phase by decreasing the solubility of the adsorbed particle in water. The adsorbed ligands can form micelles around the particles, creating small-particle colloids that further improve stability and phase separation.
Desorption (detachment)
The adsorption of particles to bubbles is essential to separating the minerals from the slurry, but the minerals must be purified from the additives used in separation, such as the collectors, frothers, and modifiers. The product of the cleaning, or desorption process, is known as the cleaner concentrate.
The detachment of a particle and bubble requires adsorption bond cleavage driven by shear forces. Depending on the flotation cell type, shear forces are applied by a variety of mechanical systems. Among the most common are impellers and mixers. Some systems combine the functionalities of these components by placing them at key locations where they can take part in multiple froth flotation mechanisms. Cleaning cells also take advantage of gravitational forces to improve separation efficiency.
Desorption itself is a physical rather than a chemical phenomenon: the compounds are only physically attached to each other, without any chemical bond between them.
Performance calculations
Relevant equations
A common quantity used to describe the collection efficiency of a froth flotation process is flotation recovery (). This quantity incorporates the probabilities of collision and attachment of particles to gas flotation bubbles.
where:
, which is the product of the probability of the particle being collected () and the number of possible particle collisions ()
is particle diameter
is bubble diameter
is a specified height within the flotation cell at which the recovery was calculated
is the particle concentration
The following are several additional mathematical methods often used to evaluate the effectiveness of froth flotation processes. These equations are simpler than the calculation for flotation recovery, as they are based solely on the amounts of inputs and outputs of the processes.
For the following equations:
is the weight percent of feed
is the weight percent of concentrate
is the weight percent of tailings
, , and are the metallurgical assays of the concentrate, tailings, and feed, respectively
Ratio of feed weight to concentrate weight (unitless)
Percent of metal recovered () in wt%
Percent of metal lost () in wt%
Percent of weight recovered in wt%
This can be calculated using weights and assays, as . Or, since , the percent of metal recovered () can be calculated from assays alone using .
Percent of metal lost is the opposite of the percent of metal recovered, and represents the material lost to the tailings.
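The following Python sketch implements two-product accounting formulas consistent with the quantities defined above, using only the assays f, c and t. The formulas follow the standard two-product mass balance, and the example assay values are assumptions for illustration.

# Two-product flotation accounting sketch, based on the quantities defined above:
# F, C, T are weights (or weight percents) of feed, concentrate and tailings,
# and f, c, t are the corresponding metallurgical assays.
# The assay values below are illustrative assumptions.

def ratio_of_concentration(f, c, t):
    """Feed weight per unit concentrate weight, F/C, from assays alone."""
    return (c - t) / (f - t)

def percent_metal_recovered(f, c, t):
    """Percentage of the metal in the feed reporting to the concentrate."""
    return 100.0 * c * (f - t) / (f * (c - t))

def percent_weight_recovered(f, c, t):
    """Concentrate weight as a percentage of feed weight (100*C/F)."""
    return 100.0 / ratio_of_concentration(f, c, t)

f, c, t = 2.0, 25.0, 0.15        # assumed assays, in % metal
print("F/C            :", round(ratio_of_concentration(f, c, t), 1))
print("metal recovery :", round(percent_metal_recovered(f, c, t), 1), "%")
print("metal lost     :", round(100 - percent_metal_recovered(f, c, t), 1), "%")
print("weight recovery:", round(percent_weight_recovered(f, c, t), 2), "%")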
See also
Deinking
Dissolved air flotation (DAF)
Flocculation
List of waste-water treatment technologies
References
Further reading
Froth Flotation: A Century of Innovation, by Maurice C. Fuerstenau et al. 2007, SME, 891 pp. . Google Books preview
D N Nihill, C M Stewart and P Bowen, "The McArthur River mine—the first years of operation," in: AusIMM ’98 – The Mining Cycle, Mount Isa, 19–23 April 1998 (The Australasian Institute of Mining and Metallurgy: Melbourne, 1998), 73–82.
E V Manlapig, C Green, J W Parkinson and A S Murphy, "The technology and economic incentives for recovering coal from tailings impoundments," SME Annual Meeting, Denver, Colorado, 26–28 February 2001, Preprint 01-70 (Society of Mining, Metallurgy and Exploration: Littleton, Colorado, 2001).
Industrial processes
Water treatment
Metallurgical processes
Australian inventions | Froth flotation | [
"Chemistry",
"Materials_science",
"Engineering",
"Environmental_science"
] | 7,175 | [
"Water treatment",
"Metallurgical processes",
"Metallurgy",
"Water pollution",
"Environmental engineering",
"Oil refining",
"Flotation processes",
"Water technology"
] |
1,568,513 | https://en.wikipedia.org/wiki/Dendrite%20%28crystal%29 | A crystal dendrite is a crystal that develops with a typical multi-branching form, resembling a fractal. The name comes from the Ancient Greek word (), which means "tree", since the crystal's structure resembles that of a tree. These crystals can be synthesised by using a supercooled pure liquid; however, they are also quite common in nature. The most common crystals in nature that exhibit dendritic growth are snowflakes and frost on windows, but many minerals and metals can also be found in dendritic structures.
History
Maximum velocity principle
The first dendritic patterns were discovered in palaeontology and are often mistaken for fossils because of their appearance. The first theory for the creation of these patterns was published by Nash and Glicksman in 1974; they used a very mathematical method and derived a non-linear integro-differential equation for classical needle growth. However, they found only an inaccurate numerical solution close to the tip of the needle, and they found that under a given growth condition the tip velocity has a unique maximum value. This became known as the maximum velocity principle (MVP), but it was ruled out by Glicksman and Nash themselves very quickly. In the following two years Glicksman improved the numerical methods used, but did not realise that the non-linear integro-differential equation had no mathematical solutions, making his results meaningless.
Marginal stability hypothesis
Four years later, in 1978, Langer and Müller-Krumbhaar proposed the marginal stability hypothesis (MSH). This hypothesis used a stability parameter σ which depended on the thermal diffusivity, the surface tension and the radius of the tip of the dendrite. They claimed that a system would be unstable for small σ, causing it to form dendrites. At the time, however, Langer and Müller-Krumbhaar were unable to obtain a stability criterion for certain growth systems, which led to the MSH theory being abandoned.
Microscopic solvability condition
A decade later several groups of researchers went back to the Nash–Glicksman problem and focused on simplified versions of it. Through this they found that the problem for isotropic surface tension had no solutions. This result meant that a system with a steady needle growth solution necessarily needed some type of anisotropic surface tension. This breakthrough led to the microscopic solvability condition theory (MSC); however, this theory still failed, since although there could be no steady solution for isotropic surface tension, experiments showed nearly steady solutions, which the theory did not predict.
Macroscopic continuum model
Nowadays the best understanding of dendritic crystals comes in the form of the macroscopic continuum model, which assumes that both the solid and the liquid parts of the system are continuous media and that the interface is a surface. This model uses the microscopic structure of the material and the general understanding of nucleation to accurately predict how a dendrite will grow.
Dendrite formation
Dendrite formation starts with some nucleation, i.e. the first appearance of solid growth, in the supercooled liquid. This formation will at first grow spherically until this shape is no longer stable. This instability has two causes: anisotropy in the surface energy of the solid/liquid interface and the attachment kinetics of particles to crystallographic planes when they have formed.
On the solid-liquid interface, we can define a surface energy, , which is the excess energy at the liquid-solid interface to accommodate the structural changes at the interface.
For a spherical interface, the Gibbs–Thomson equation then gives a melting point depression compared to a flat interface , which has the relation
where is the radius of the sphere. This curvature undercooling, the effective lowering of the melting point at the interface, sustains the spherical shape for small radii.
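The following Python sketch illustrates the scale of this curvature undercooling, assuming the commonly quoted spherical form ΔT(r) = 2Γ/r with a Gibbs–Thomson coefficient Γ = γ T_m/(ρ ΔH_f). Both the functional form used here and the nickel-like material constants are assumptions for illustration, not values taken from this article.

# Curvature undercooling sketch, assuming the common Gibbs-Thomson form
#   dT(r) = 2 * Gamma / r,   Gamma = gamma_sl * T_m / (rho_s * dH_f)
# where gamma_sl is the solid/liquid interfacial energy, T_m the melting point,
# rho_s the solid density and dH_f the specific latent heat of fusion.
# The material parameters below are rough, nickel-like assumptions.

gamma_sl = 0.33        # J/m^2, assumed
T_m      = 1728.0      # K
rho_s    = 8900.0      # kg/m^3
dH_f     = 2.9e5       # J/kg

Gamma = gamma_sl * T_m / (rho_s * dH_f)    # Gibbs-Thomson coefficient, K*m

for r in (1e-8, 1e-7, 1e-6):               # tip radii in metres
    dT = 2.0 * Gamma / r
    print(f"r = {r:.0e} m  ->  curvature undercooling ~ {dT:.2f} K")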
However, anisotropy in the surface energy implies that the interface will deform to find the energetically most favourable shape. For cubic symmetry in 2D we can express this anisotropy in the surface energy as
This gives rise to a surface stiffness
where we note that this quantity is positive for all angles when . In this case we speak of "weak anisotropy". For larger values of , the "strong anisotropy" causes the surface stiffness to be negative for some . This means that these orientations cannot appear, leading to so-called 'faceted' crystals, i.e. the interface would be a crystallographic plane inhibiting growth along this part of the interface due to attachment kinetics.
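A short Python sketch of the weak/strong anisotropy distinction follows. It assumes the commonly used four-fold form γ(θ) = γ0(1 + ε cos 4θ), for which the stiffness γ + d²γ/dθ² = γ0(1 − 15ε cos 4θ) stays positive at every angle only while ε < 1/15; this form and the ε values are assumptions for illustration.

import numpy as np

# Weak/strong anisotropy sketch, assuming the four-fold form for cubic symmetry
# in 2D:  gamma(theta) = gamma0 * (1 + eps*cos(4*theta)).  The surface stiffness
# is then gamma + d2(gamma)/d(theta)2 = gamma0 * (1 - 15*eps*cos(4*theta)),
# which remains positive for every angle only while eps < 1/15.

def surface_stiffness(theta, gamma0=1.0, eps=0.04):
    return gamma0 * (1.0 - 15.0 * eps * np.cos(4.0 * theta))

theta = np.linspace(0.0, 2.0 * np.pi, 721)
for eps in (0.04, 0.10):                       # assumed anisotropy strengths
    s = surface_stiffness(theta, eps=eps)
    regime = "weak (all orientations appear)" if s.min() > 0 else "strong (faceting expected)"
    print(f"eps = {eps:.2f}: minimum stiffness = {s.min():+.2f} -> {regime}")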
Wulff construction
For both above and below the critical anisotropy the Wulff construction provides a method to determine the shape of the crystal. In principle, we can understand the deformation as an attempt by the system to minimise the area with the highest effective surface energy.
Growth velocity
Taking into account attachment kinetics, we can derive that both for spherical growth and for flat surface growth, the growth velocity decreases with time by . We do however find stable parabolic growth, where the length grows with and the width with . Therefore, growth mainly takes place at the tip of the parabolic interface, which draws out longer and longer. Eventually, the sides of this parabolic tip will also exhibit instabilities, giving a dendrite its characteristic shape.
Preferred growth direction
When dendrites start to grow with tips in different directions, they display their underlying crystal structure, as this structure causes the anisotropy in surface energy. For instance, a dendrite growing with BCC crystal structure will have a preferred growth direction along the directions. The table below gives an overview of preferred crystallographic directions for dendritic growth. Note that when the strain energy minimisation effect dominates over surface energy minimisation, one might find a different growth direction, such as with Cr, which has as a preferred growth direction , even though it is a BCC lattice.
Metal dendrites
For metals the process of forming dendrites is very similar to other crystals, but the kinetics of attachment play a much smaller role. This is because the interface is atomically rough; because of the small difference in structure between the liquid and the solid state, the transition from liquid to solid is somewhat gradual and one observes some interface thickness. Consequently, the surface energy will become nearly isotropic. For this reason, one would not expect faceted crystals as found for atomically smooth interfaces observed in crystals of more complex molecules.
Mineralogy and paleontology
In paleontology, dendritic mineral crystal forms are often mistaken for fossils. These pseudofossils form as naturally occurring fissures in the rock are filled by percolating mineral solutions. They form when water rich in manganese and iron flows along fractures and bedding planes between layers of limestone and other rock types, depositing dendritic crystals as the solution flows through. A variety of manganese oxides and hydroxides are involved, including:
birnessite ()
coronadite ()
cryptomelane ()
hollandite ()
romanechite ()
todorokite () and others.
A three-dimensional form of dendrite develops in fissures in quartz, forming moss agate
NASA microgravity experiment
The Isothermal Dendritic Growth Experiment (IDGE) was a materials science solidification experiment that researchers used on Space Shuttle missions to investigate dendritic growth in an environment where the effect of gravity (convection in the liquid) could be excluded. The experimental results indicated that at lower supercooling (up to 1.3 K), these convective effects are indeed significant. Compared to the growth in microgravity, the tip velocity during dendritic growth under normal gravity was found to be up to several times greater.
See also
Brownian tree
Monocrystalline whisker
Patterns in nature
STS-87—Space Shuttle mission featuring the Isothermal Dendritic Growth Experiment
Whisker (metallurgy)
References
External links
Mindat Manganese Dendrites
What are manganese dendrites?
The Isothermal Dendritic Growth Experiment
Snow crystals
Dendritic Solidification
Dendritic growth in Local-Nonequilibrium Solidification Model
All About Manganese Dendrites
Crystals | Dendrite (crystal) | [
"Chemistry",
"Materials_science"
] | 1,690 | [
"Crystallography",
"Crystals"
] |
1,568,958 | https://en.wikipedia.org/wiki/Inconel | Inconel is a nickel-chromium-based superalloy often utilized in extreme environments where components are subjected to high temperature, pressure or mechanical loads. Inconel alloys are oxidation- and corrosion-resistant. When heated, Inconel forms a thick, stable, passivating oxide layer protecting the surface from further attack. Inconel retains strength over a wide temperature range, attractive for high-temperature applications where aluminum and steel would succumb to creep as a result of thermally-induced crystal vacancies. Inconel's high-temperature strength is developed by solid solution strengthening or precipitation hardening, depending on the alloy.
Inconel alloys are typically used in high temperature applications. Common trade names for various Inconel alloys include:
Alloy 625: Inconel 625, Chronin 625, Altemp 625, Sanicro 625, Haynes 625, Nickelvac 625, Nicrofer 6020 and UNS designation N06625.
Alloy 600: NA14, BS3076, 2.4816, NiCr15Fe (FR), NiCr15Fe (EU), NiCr15Fe8 (DE) and UNS designation N06600.
Alloy 718: Nicrofer 5219, Superimphy 718, Haynes 718, Pyromet 718, Supermet 718, Udimet 718 and UNS designation N07718.
History
The Inconel family of alloys was first developed before December 1932, when its trademark was registered by the US company International Nickel Company of Delaware and New York. A significant early use was found in support of the development of the Whittle jet engine during the 1940s by research teams at Henry Wiggin & Co of Hereford, England, a subsidiary of the Mond Nickel Company, which merged with Inco in 1928. The Hereford Works and its properties, including the Inconel trademark, were acquired in 1998 by Special Metals Corporation.
Specific data
Composition
Inconel alloys vary widely in their compositions, but all are predominantly nickel, with chromium as the second element.
Properties
When heated, Inconel forms a thick and stable passivating oxide layer protecting the surface from further attack. Inconel retains strength over a wide temperature range, attractive for high-temperature applications where aluminium and steel would succumb to creep as a result of thermally induced crystal vacancies (see Arrhenius equation). Inconel's high temperature strength is developed by solid solution strengthening or precipitation strengthening, depending on the alloy. In age-hardening or precipitation-strengthening varieties, small amounts of niobium combine with nickel to form the intermetallic compound Ni3Nb or gamma double prime (γ″). Gamma prime forms small cubic crystals that inhibit slip and creep effectively at elevated temperatures. The formation of gamma-prime crystals increases over time, especially after three hours of a heat exposure of , and continues to grow after 72 hours of exposure.
Strengthening mechanisms
The most prevalent hardening mechanisms for Inconel alloys are precipitate strengthening and solid solution strengthening. In Inconel alloys, one of the two often dominates. For alloys like Inconel 718, precipitate strengthening is the main strengthening mechanism. The majority of strengthening comes from the presence of gamma double prime (γ″) precipitates. Inconel alloys have a γ matrix phase with an FCC structure. γ″ precipitates are made of Ni and Nb, specifically with a Ni3Nb composition. These precipitates are fine, coherent, disk-shaped, intermetallic particles with a tetragonal structure.
Secondary precipitate strengthening comes from gamma prime (γ') precipitates. The γ' phase can appear in multiple compositions such as Ni3(Al, Ti). The precipitate phase is coherent and has an FCC structure, like the γ matrix; the γ' phase is much less prevalent than γ″. The volume fractions of the γ″ and γ' phases are approximately 15% and 4% after precipitation, respectively. Because of the coherency between the γ matrix and the γ' and γ″ precipitates, strain fields exist that obstruct the motion of dislocations. The presence of MX-type carbides and carbonitrides, where M is Nb or Ti and X is C or N, also helps to strengthen the material. For precipitate strengthening, elements like niobium, titanium, and tantalum play a crucial role.
Because the γ″ phase is metastable, over-aging can result in the transformation of γ″ phase precipitates to delta (δ) phase precipitates, their stable counterparts. The δ phase has an orthorhombic structure, a Ni3(Nb, Mo, Ti) composition, and is incoherent. As a result, the transformation of γ″ to δ in Inconel alloys leads to the loss of coherency strengthening, making for a weaker material. That being said, in appropriate quantities, the δ phase is responsible for grain boundary pinning and strengthening.
Another common phase in Inconel alloys is the Laves intermetallic phase. Its compositions are (Ni, Cr, Fe)x(Nb, Mo, Ti)y and NiyNb, it is brittle, and its presence can be detrimental to the mechanical behavior of Inconel alloys. Sites with large amounts of Laves phase are prone to crack propagation because of their higher potential for stress concentration. Additionally, due to its high Nb, Mo, and Ti content, the Laves phase can exhaust the matrix of these elements, ultimately making precipitate and solid-solution strengthening more difficult.
For alloys like Inconel 625, solid-solution hardening is the main strengthening mechanism. Elements like Mo are important in this process. Nb and Ta can also contribute to solid solution strengthening to a lesser extent. In solid solution strengthening, Mo atoms are substituted into the γ matrix of Inconel alloys. Because Mo atoms have a significantly larger radius than those of Ni (209 pm and 163 pm, respectively), the substitution creates strain fields in the crystal lattice, which hinder the motion of dislocations, ultimately strengthening the material.
The combination of elemental composition and strengthening mechanisms is why Inconel alloys can maintain their favorable mechanical and physical properties, such as high strength and fatigue resistance, at elevated temperatures, specifically those up to 650°C.
Machining
Inconel is a difficult metal to shape and to machine using traditional cold forming techniques due to rapid work hardening. After the first machining pass, work hardening tends to plastically deform either the workpiece or the tool on subsequent passes. For this reason, age-hardened Inconels such as 718 are typically machined using an aggressive but slow cut with a hard tool, minimizing the number of passes required. Alternatively, the majority of the machining can be performed with the workpiece in a "solutionized" form, with only the final steps being performed after age hardening. However, some claim that Inconel can be machined extremely quickly using very fast spindle speeds and a multi-fluted ceramic tool with a small width of cut at high feed rates, as this causes localized heating and softening in front of the flute.
External threads are machined using a lathe to "single-point" the threads or by rolling the threads in the solution treated condition (for hardenable alloys) using a screw machine. Inconel 718 can also be roll-threaded after full aging by using induction heat to without increasing the grain size. Holes with internal threads are made by threadmilling. Internal threads can also be formed using a sinker electrical discharge machining (EDM).
Joining
Welding of some Inconel alloys (especially the gamma prime precipitation hardened family; e.g., Waspaloy and X-750) can be difficult due to cracking and microstructural segregation of alloying elements in the heat-affected zone. However, several alloys such as 625 and 718 have been designed to overcome these problems. The most common welding methods are gas tungsten arc welding and electron-beam welding.
Uses
Inconel is often encountered in extreme environments. It is common in gas turbine blades, seals, and combustors, as well as turbocharger rotors and seals, electric submersible well pump motor shafts, high temperature fasteners, chemical processing and pressure vessels, heat exchanger tubing, steam generators and core components in nuclear pressurized water reactors, natural gas processing with contaminants such as H2S and CO2, firearm sound suppressor blast baffles, and Formula One, NASCAR, NHRA, and APR, LLC exhaust systems. It is also used in the turbo system of the 3rd generation Mazda RX7, and the exhaust systems of high-powered Wankel engine and Norton motorcycles where exhaust temperatures reach more than . Inconel is increasingly used in the boilers of waste incinerators. The Joint European Torus and DIII-D tokamaks' vacuum vessels are made of Inconel. Inconel 718 is commonly used for cryogenic storage tanks, downhole shafts, wellhead parts, and in the aerospace industry, where it has become a prime candidate material for constructing heat-resistant turbines.
Aerospace
The Space Shuttle used four Inconel studs to secure each solid rocket booster to the launch platform; the eight studs in total supported the entire weight of the ready-to-fly Shuttle system. Eight frangible nuts are encased on the outside of the solid rocket boosters; at launch, explosives separated the nuts, releasing the Shuttle from its launch platform.
North American Aviation constructed the skin of the North American X-15 rocket-powered aircraft out of Inconel X/750 alloy.
Rocketdyne used Inconel X-750 for the thrust chamber of the F-1 rocket engine used in the first stage of the Saturn V booster.
SpaceX uses Inconel (Inconel 718) in the engine manifold of their Merlin engine which powers the Falcon 9 launch vehicle.
In a first for 3D printing, the SpaceX SuperDraco rocket engine that provides the launch escape system for the Dragon V2 crew-carrying space capsule is fully printed. In particular, the engine combustion chamber is printed of Inconel using a process of direct metal laser sintering, and operates at very high temperature and a chamber pressure of .
SpaceX cast the Raptor rocket engine manifolds from SX300, later SX500, which are nickel superalloys (improvement over older Inconel alloys).
Automotive
Tesla claims to use Inconel in place of steel in the main battery pack contactor of its Model S so that it remains springy under the heat of heavy current. Tesla claims that this allows these upgraded vehicles to safely increase the maximum pack output from 1300 to 1500 amperes, allowing for an increase in power output (acceleration) that Tesla refers to as "Ludicrous Mode".
Ford Motor Company is using Inconel to make the turbine wheel in the turbocharger of its EcoBlue diesel engines introduced in 2016.
The exhaust valves on NHRA Top Fuel and Funny Car drag racing engines are often made of Inconel.
Ford Australia used Inconel valves in their turbocharged Barra engines. These valves have been proven very reliable, holding in excess of 1900 horsepower.
BMW used Inconel in the exhaust manifold of its high-performance luxury car, the BMW M5 E34 with the S38 engine, to withstand higher temperatures and reduce backpressure.
Jaguar Cars has fitted, in its Jaguar F-Type SVR high-performance sports car, a new lightweight Inconel and titanium exhaust system as standard, which withstands higher peak temperatures, reduces backpressure and reduces the mass of the vehicle.
DeLorean Motor Company offers Inconel replacements for failure prone OE trailing arm bolts on the DMC-12. Failure of these bolts can result in loss of the vehicle.
Rolled Inconel was frequently used as the recording medium (recorded by engraving) in black box recorders on aircraft.
Alternatives to the use of Inconel in chemical applications such as scrubbers, columns, reactors, and pipes are Hastelloy, perfluoroalkoxy (PFA) lined carbon steel or fiber reinforced plastic.
Inconel alloys
Alloys of Inconel include:
Inconel 188: Readily fabricated for commercial gas turbine and aerospace applications.
Inconel 230: Alloy 230 Plate & Sheet mainly used by the power, aerospace, chemical processing and industrial heating industries.
Inconel 600: In terms of high-temperature and corrosion resistance, Inconel 600 excels.
Inconel 601
Inconel 617: Solid solution strengthened (nickel-chromium-cobalt-molybdenum), high-temperature strength, corrosion and oxidation resistant, high workability and weldability. Incorporated into the ASME Boiler and Pressure Vessel Code for high-temperature nuclear applications such as molten salt reactors in April 2020.
Inconel 625: Acid resistant, good weldability. The LCF version is typically used in bellows. It is commonly used for applications in aeronautic, aerospace, marine, chemical and petrochemical industries. It is also used for reactor-core and control-rod components in pressurized water reactors and as heat exchanger tubes in ammonia cracker plants for heavy water production.
Inconel 690: Low cobalt content for nuclear applications, and low resistivity
Inconel 706
Inconel 713C: Precipitation hardenable nickel-chromium base cast alloy
Inconel 718: Gamma double prime strengthened with good weldability
Inconel 738
Inconel X-750: Commonly used for gas turbine components, including blades, seals and rotors.
Inconel 751: Increased aluminum content for improved rupture strength in the 1600 °F range
Inconel 792: Increased aluminum content for improved high temperature corrosion resistant properties, used especially in gas turbines
Inconel 907
Inconel 909
Inconel 925: Inconel 925 is a nonstabilized austenitic stainless steel with low carbon content.
Inconel 939: Gamma prime strengthened to increase weldability
In age hardening or precipitation strengthening varieties, alloying additions of aluminum and titanium combine with nickel to form the intermetallic compound or gamma prime (γ′). Gamma prime forms small cubic crystals that inhibit slip and creep effectively at elevated temperatures.
See also
Hastelloy
Incoloy
Monel
Nichrome
Nimonic
Stellite
References
Nickel–chromium alloys
Refractory metals
Superalloys
Aerospace materials
Nickel alloys
Chromium alloys | Inconel | [
"Chemistry",
"Engineering"
] | 3,036 | [
"Nickel alloys",
"Aerospace materials",
"Refractory metals",
"Superalloys",
"Alloys",
"Aerospace engineering",
"Chromium alloys"
] |
1,569,089 | https://en.wikipedia.org/wiki/Membrane%20gas%20separation | Gas mixtures can be effectively separated by synthetic membranes made from polymers such as polyamide or cellulose acetate, or from ceramic materials.
While polymeric membranes are economical and technologically useful, they are limited in their performance by a trade-off known as the Robeson limit (permeability must be sacrificed for selectivity and vice versa). This limit affects polymeric membrane use for CO2 separation from flue gas streams, since mass transport becomes limiting and CO2 separation becomes very expensive due to low permeabilities. Membrane materials have expanded into the realm of silica, zeolites, metal-organic frameworks, and perovskites due to their strong thermal and chemical resistance as well as high tunability (ability to be modified and functionalized), leading to increased permeability and selectivity. Membranes can be used for separating gas mixtures where they act as a permeable barrier through which different compounds move across at different rates or do not move at all. The membranes can be nanoporous, polymer, etc. and the gas molecules penetrate according to their size, diffusivity, or solubility.
Basic process
Gas separation across a membrane is a pressure-driven process, where the driving force is the difference in pressure between inlet of raw material and outlet of product. The membrane used in the process is a generally non-porous layer, so there will not be a severe leakage of gas through the membrane. The performance of the membrane depends on permeability and selectivity. Permeability is affected by the penetrant size. Larger gas molecules have a lower diffusion coefficient. The polymer chain flexibility and free volume in the polymer of the membrane material influence the diffusion coefficient, as the space within the permeable membrane must be large enough for the gas molecules to diffuse across. The solubility is expressed as the ratio of the concentration of the gas in the polymer to the pressure of the gas in contact with it. Permeability is the ability of the membrane to allow the permeating gas to diffuse through the material of the membrane as a consequence of the pressure difference over the membrane, and can be measured in terms of the permeate flow rate, membrane thickness and area and the pressure difference across the membrane. The selectivity of a membrane is a measure of the ratio of permeability of the relevant gases for the membrane. It can be calculated as the ratio of permeability of two gases in binary separation.
The membrane gas separation equipment typically pumps gas into the membrane module and the targeted gases are separated based on difference in diffusivity and solubility. For example, oxygen will be separated from the ambient air and collected at the upstream side, and nitrogen at the downstream side. As of 2016, membrane technology was reported as capable of producing 10 to 25 tonnes of 25 to 40% oxygen per day.
Membrane governing methodology
There are three main diffusion mechanisms. The first, Knudsen diffusion, holds at very low pressures, where lighter molecules can move across a membrane faster than heavy ones in a material with reasonably large pores. The second, molecular sieving, is the case where the pores of the membrane are too small to let one component pass, a process which is typically not practical in gas applications, as the molecules are too small for relevant pores to be designed. In these cases the movement of molecules is best described by pressure-driven convective flow through capillaries, which is quantified by Darcy's law. However, the more general model in gas applications is solution-diffusion, where particles are first dissolved onto the membrane and then diffuse through it, each at a different rate. This model is employed when the pores in the polymer membrane appear and disappear faster relative to the movement of the particles.
In a typical membrane system the incoming feed stream is separated into two components: permeant and retentate. Permeant is the gas that travels across the membrane and the retentate is what is left of the feed. On both sides of the membrane, a gradient of chemical potential is maintained by a pressure difference, which is the driving force for the gas molecules to pass through. The ease of transport of each species is quantified by the permeability, Pi. With the assumptions of ideal mixing on both sides of the membrane, the ideal gas law, a constant diffusion coefficient and Henry's law, the flux of a species can be related to the pressure difference by Fick's law:
Ji = Di Ki (pi' − pi") / l = Pi (pi' − pi") / l
where, (Ji) is the molar flux of species i across the membrane, (l) is membrane thickness, (Pi) is permeability of species i, (Di) is diffusivity, (Ki) is the Henry coefficient, and (pi') and (pi") represent the partial pressures of the species i at the feed and permeant side respectively. The product of DiKi is often expressed as the permeability of the species i, on the specific membrane being used.
The flow of a second species, j, can be defined as:
Jj = Dj Kj (pj' − pj") / l = Pj (pj' − pj") / l
With the expressions above, a membrane system for a binary mixture can be sufficiently defined. It can be seen that the total flow across the membrane is strongly dependent on the relation between the feed and permeate pressures. The ratio of feed pressure (p') over permeate pressure (p") is defined as the membrane pressure ratio (θ):
θ = p' / p"
It is clear from the above, that a flow of species i or j across the membrane can only occur when:
In other words, the membrane will experience flow across it when there exists a concentration gradient between feed and permeate. If the gradient is positive, the flow will go from the feed to the permeate and species i will be separated from the feed.
Therefore, the maximum separation of species i results from:
Another important coefficient when choosing the optimum membrane for a separation process is the membrane selectivity αij, defined as the ratio of the permeability of species i to that of species j:
αij = Pi / Pj
This coefficient indicates the degree to which the membrane is able to separate species i from j. It is obvious from the expression above that a membrane selectivity of 1 indicates the membrane has no potential to separate the two gases, since both gases will diffuse equally through the membrane.
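The following Python sketch ties the above definitions together: it evaluates the solution-diffusion flux of each species, the pressure ratio and the ideal selectivity for a binary CO2/N2 feed. The permeabilities, thickness, pressures and compositions are illustrative assumptions, not data for any real membrane.

# Sketch using the definitions above: solution-diffusion flux of each species,
# the pressure ratio and the ideal selectivity.  All numbers are assumptions.

BARRER = 3.348e-16          # mol*m / (m^2*s*Pa), unit conversion for convenience

def flux(P_i, l, p_feed_i, p_perm_i):
    """Molar flux of species i across a membrane of thickness l."""
    return P_i * (p_feed_i - p_perm_i) / l

P_CO2 = 100.0 * BARRER      # assumed CO2 permeability
P_N2  = 4.0 * BARRER        # assumed N2 permeability
l     = 1.0e-6              # membrane thickness, m

p_feed, p_perm = 2.0e5, 0.4e5          # total pressures, Pa
x_CO2, y_CO2   = 0.15, 0.60            # assumed mole fractions on feed and permeate side

theta = p_feed / p_perm                # pressure ratio
alpha = P_CO2 / P_N2                   # ideal selectivity
J_CO2 = flux(P_CO2, l, x_CO2 * p_feed, y_CO2 * p_perm)
J_N2  = flux(P_N2, l, (1 - x_CO2) * p_feed, (1 - y_CO2) * p_perm)

print(f"pressure ratio = {theta:.1f}, selectivity = {alpha:.0f}")
print(f"J_CO2 = {J_CO2:.3e} mol/(m^2*s),  J_N2 = {J_N2:.3e} mol/(m^2*s)")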
In the design of a separation process, normally the pressure ratio and the membrane selectivity are prescribed by the pressures of the system and the permeability of the membrane . The level of separation achieved by the membrane (concentration of the species to be separated) needs to be evaluated based on the aforementioned design parameters in order to evaluate the cost-effectiveness of the system.
Membrane performance
The concentration of species i and j across the membrane can be evaluated based on their respective diffusion flows across it.
In the case of a binary mixture, the concentration of species i across the membrane:
This can be further expanded to obtain an expression of the form:
Using the relations:
The expression can be rewritten as:
Then using
The solution to the above quadratic expression can be expressed as:
Finally, an expression for the permeant concentration is obtained by the following:
Along the separation unit, the feed concentration decays with the diffusion across the membrane causing the concentration at the membrane to drop accordingly. As a result, the total permeant flow (q"out) results from the integration of the diffusion flow across the membrane from the feed inlet (q'in) to feed outlet (q'out). A mass balance across a differential length of the separation unit is therefore:
where:
Because of the binary nature of the mixture, only one species needs to be evaluated. Prescribing a function n'i=n'i(x), the species balance can be rewritten as:
Where:
Lastly, the area required per unit membrane length can be obtained by the following expression:
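Because the closed-form expressions are not reproduced here, the following Python sketch instead solves the local permeate-composition balance numerically for a binary mixture: under the same ideal-mixing, solution-diffusion assumptions, the ratio of the two fluxes must equal the ratio of the permeate mole fractions. The feed composition, selectivity and pressure ratio used are assumptions for illustration.

# Numerical sketch of the local permeate composition for a binary mixture.
# At any point, the flux ratio equals the permeate mole-fraction ratio:
#   y/(1-y) = alpha * (x - y/theta) / ((1-x) - (1-y)/theta)
# where x, y are feed- and permeate-side mole fractions of the faster species,
# alpha the selectivity (assumed > 1) and theta the pressure ratio.

def permeate_fraction(x, alpha, theta, tol=1e-10):
    """Solve the balance above for y by bisection on (0, 1)."""
    def residual(y):
        return y * ((1.0 - x) - (1.0 - y) / theta) - alpha * (x - y / theta) * (1.0 - y)
    lo, hi = 1e-12, 1.0 - 1e-12   # residual is negative at lo, positive at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x, alpha, theta = 0.15, 25.0, 5.0      # assumed values
y = permeate_fraction(x, alpha, theta)
print(f"feed x = {x:.2f} -> permeate y = {y:.2f} (alpha = {alpha}, theta = {theta})")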
Membrane materials for carbon capture in flue gas streams
The material of the membrane plays an important role in its ability to provide the desired performance characteristics. It is optimal to have a membrane with a high permeability and sufficient selectivity, and it is also important to match the membrane properties to the system operating conditions (for example, pressures and gas composition).
Synthetic membranes are made from a variety of polymers including polyethylene, polyamides, polyimides, cellulose acetate, polysulphone and polydimethylsiloxane.
Polymer membranes
Polymeric membranes are a common option for use in the capture of CO2 from flue gas because of the maturity of the technology in a variety of industries, namely petrochemicals. The ideal polymer membrane has both a high selectivity and a high permeability. Polymer membranes are examples of systems that are dominated by the solution-diffusion mechanism. The membrane is considered to have holes in which the gas can dissolve (solubility) and through which the molecules can move from one cavity to the other (diffusion).
It was discovered by Robeson in the early 1990s that polymers with a high selectivity have a low permeability, and the opposite is true: materials with a low selectivity have a high permeability. This is best illustrated in a Robeson plot, where the selectivity is plotted as a function of the CO2 permeability. In this plot, the upper bound of selectivity is approximately a linear function of the permeability. It was found that the solubility in polymers is mostly constant but the diffusion coefficients vary significantly, and this is where the engineering of the material occurs. Somewhat intuitively, the materials with the highest diffusion coefficients have a more open pore structure, thus losing selectivity. There are two methods that researchers are using to break the Robeson limit; one of these is the use of glassy polymers, whose phase transition and changes in mechanical properties make it appear that the material is absorbing molecules and thus surpasses the upper limit. The second method of pushing the boundaries of the Robeson limit is the facilitated transport method. As previously stated, the solubility of polymers is typically fairly constant, but the facilitated transport method uses a chemical reaction to enhance the permeability of one component without changing the selectivity.
Nanoporous membranes
Nanoporous membranes are fundamentally different from polymer-based membranes in that their chemistry is different and that they do not follow the Robeson limit for a variety of reasons. A simplified picture of a nanoporous membrane is a small portion of membrane structure made up of cavities and windows, where the open regions are the space through which a molecule can move and the remaining regions are the walls of the structure. In the engineering of these membranes, the size of the cavity (Lcy × Lcz) and window region (Lwy × Lwz) can be modified so that the desired permeation is achieved. It has been shown that the permeability of a membrane is the product of adsorption and diffusion. In low loading conditions, the adsorption can be computed by the Henry coefficient.
If the assumption is made that the energy of a particle does not change when moving through this structure, only the entropy of the molecules changes based on the size of the openings. Consider first changes in the cavity geometry: the larger the cavity, the larger the entropy of the absorbed molecules, which thus makes the Henry coefficient larger. For diffusion, an increase in entropy will lead to a decrease in free energy, which in turn leads to a decrease in the diffusion coefficient. Conversely, changing the window geometry will primarily affect the diffusion of the molecules and not the Henry coefficient.
In summary, by using the above simplified analysis, it is possible to understand why the upper limit of the Robeson line does not hold for nanostructures. In the analysis, both the diffusion and Henry coefficients can be modified without affecting the permeability of the material, which can thus exceed the upper limit for polymer membranes.
Silica membranes
Silica membranes are mesoporous and can be made with high uniformity (the same structure throughout the membrane). The high porosity of these membranes gives them very high permeabilities. Synthesized membranes have smooth surfaces and can be modified on the surface to drastically improve selectivity. Functionalizing silica membrane surfaces with amine containing molecules (on the surface silanol groups) allows the membranes to separate CO2 from flue gas streams more effectively. Surface functionalization (and thus chemistry) can be tuned to be more efficient for wet flue gas streams as compared to dry flue gas streams. While previously, silica membranes were impractical due to their technical scalability and cost (they are very difficult to produce in an economical manner on a large scale), there have been demonstrations of a simple method of producing silica membranes on hollow polymeric supports. These demonstrations indicate that economical materials and methods can effectively separate CO2 and N2. Ordered mesoporous silica membranes have shown considerable potential for surface modification that allows for ease of CO2 separation. Surface functionalization with amines leads to the reversible formation of carbamates (during CO2 flow), increasing CO2 selectivity significantly.
Zeolite membranes
Zeolites are crystalline aluminosilicates with a regular repeating structure of molecular-sized pores. Zeolite membranes selectively separate molecules based on pore size and polarity and are thus highly tunable to specific gas separation processes. In general, smaller molecules and those with stronger zeolite-adsorption properties are adsorbed onto zeolite membranes with larger selectivity. The capacity to discriminate based on both molecular size and adsorption affinity makes zeolite membranes an attractive candidate for CO2 separation from N2, CH4, and H2.
Scientists have found that the gas-phase enthalpy (heat) of adsorption on zeolites increases as follows: H2 < CH4 < N2 < CO2. It is generally accepted that CO2 has the largest adsorption energy because it has the largest quadrupole moment, thereby increasing its affinity for charged or polar zeolite pores. At low temperatures, zeolite adsorption-capacity is large and the high concentration of adsorbed CO2 molecules blocks the flow of other gases. Therefore, at lower temperatures, CO2 selectively permeates through zeolite pores. Several recent research efforts have focused on developing new zeolite membranes that maximize the CO2 selectivity by taking advantage of the low-temperature blocking phenomena.
Researchers have synthesized Y-type (Si:Al>3) zeolite membranes which achieve room-temperature separation factors of 100 and 21 for CO2/N2 and CO2/CH4 mixtures respectively. DDR-type and SAPO-34 membranes have also shown promise in separating CO2 and CH4 at a variety of pressures and feed compositions. The SAPO-34 membranes, being nitrogen selective, are also strong contenders for the natural gas sweetening process.
Researchers have also made an effort to utilize zeolite membranes for the separation of H2 from hydrocarbons. Hydrogen can be separated from larger hydrocarbons such as C4H10 with high selectivity. This is due to the molecular sieving effect since zeolites have pores much larger than H2, but smaller than these large hydrocarbons. Smaller hydrocarbons such as CH4, C2H6, and C3H8 are small enough to not be separated by molecular sieving. Researchers achieved a higher selectivity of hydrogen when performing the separation at high temperatures, likely as a result of a decrease in the competitive adsorption effect.
Metal-organic framework (MOF) membranes
There have been advances in zeolitic-imidazolate frameworks (ZIFs), a subclass of metal-organic frameworks (MOFs), that have allowed them to be useful for carbon dioxide separation from flue gas streams. Extensive modeling has been performed to demonstrate the value of using MOFs as membranes. MOF materials are adsorption-based, and thus can be tuned to achieve selectivity. The drawback to MOF systems is stability in water and other compounds present in flue gas streams. Select materials, such as ZIF-8, have demonstrated stability in water and benzene, components often present in flue gas mixtures. ZIF-8 can be synthesized as a membrane on a porous alumina support and has proven to be effective at separating CO2 from flue gas streams. At similar CO2/CH4 selectivity to Y-type zeolite membranes, ZIF-8 membranes achieve unprecedented CO2 permeance, two orders of magnitude above the previous standard.
Perovskite membranes
Perovskites are mixed metal oxides with a well-defined cubic structure and a general formula of ABO3, where A is an alkaline earth or lanthanide element and B is a transition metal. These materials are attractive for CO2 separation because of the tunability of the metal sites as well as their stability at elevated temperatures.
The separation of CO2 from N2 was investigated with an α-alumina membrane impregnated with BaTiO3. It was found that adsorption of CO2 was favorable at high temperatures due to an endothermic interaction between CO2 and the material, promoting mobile CO2 that enhanced CO2 adsorption-desorption rate and surface diffusion. The experimental separation factor of CO2 to N2 was found to be 1.1-1.2 at 100 °C to 500 °C, which is higher than the separation factor limit of 0.8 predicted by Knudsen diffusion. Though the separation factor was low due to pinholes observed in the membrane, this demonstrates the potential of perovskite materials in their selective surface chemistry for CO2 separation.
Other membrane technologies
In special cases other materials can be utilized; for example, palladium membranes permit transport solely of hydrogen. In addition to palladium membranes (which are typically palladium–silver alloys, to prevent embrittlement of the alloy at lower temperature), there is also a significant research effort looking into finding non-precious-metal alternatives. However, slow exchange kinetics on the surface of the membrane and the tendency of the membranes to crack or disintegrate after a number of duty cycles or during cooling are problems yet to be fully solved.
Construction
Membranes are typically contained in one of three modules:
Hollow fibre bundles in a metal module
Spiral wound bundles in a metal module
Plate and frame module constructed like a plate and frame heat exchanger
Uses
Membranes are employed in:
The separation of nitrogen or oxygen from air (generally only up to 99.5%)
Separation of hydrogen from gases like nitrogen and methane
Recovery of hydrogen from product streams of ammonia plants
Recovery of hydrogen in oil refinery processes
Separation of methane from the other components of biogas
Enrichment of air by oxygen for medical or metallurgical purposes. One of the methods used for commercial production of nitrox breathing gas for underwater diving.
Enrichment of ullage by nitrogen in inerting systems designed to prevent fuel tank explosions
Removal of water vapor from natural gas and other gases
Removal of SO2, CO2 and H2S from natural gas (polyamide membranes)
Removal of volatile organic liquids (VOL) from air of exhaust streams
Air separation
Oxygen-enriched air is in high demand for a range of medical and industrial applications including chemical and combustion processes. Cryogenic distillation is the mature technology for commercial air separation for the production of large quantities of high-purity oxygen and nitrogen. However, it is a complex process, is energy-intensive, and is generally not suitable for small-scale production. Pressure swing adsorption is also commonly used for air separation and can also produce high-purity oxygen at medium production rates, but it still requires considerable space, high investment and high energy consumption. The membrane gas separation method is a relatively low-environmental-impact and sustainable process providing continuous production, simple operation, lower pressure/temperature requirements, and compact space requirements.
Current status of CO2 capture with membranes
A great deal of research has been undertaken to utilize membranes instead of absorption or adsorption for carbon capture from flue gas streams, however, no current projects exist that utilize membranes. Process engineering along with new developments in materials have shown that membranes have the greatest potential for low energy penalty and cost compared to competing technologies.
Background
Today, membranes are used for commercial separations involving: N2 from air, H2 from ammonia in the Haber-Bosch process, natural gas purification, and tertiary-level enhanced oil recovery supply.
Single-stage membrane operations involve a single membrane with one selectivity value. Single-stage membranes were first used in natural gas purification, separating CO2 from methane. A disadvantage of single-stage membranes is the loss of product in the permeate due to the constraints imposed by the single selectivity value. Increasing the selectivity reduces the amount of product lost in the permeate, but comes at the cost of requiring a larger pressure difference to process an equivalent amount of a flue stream. In practice, the maximum pressure ratio economically possible is around 5:1.
To combat the loss of product in the membrane permeate, engineers use “cascade processes” in which the permeate is recompressed and interfaced with additional, higher selectivity membranes. The retentate streams can be recycled, which achieves a better yield of product.
Need for multi-stage process
Single-stage membrane devices are not feasible for obtaining a high concentration of separated material in the permeate stream. This is due to the pressure-ratio limit that is economically unrealistic to exceed. Therefore, the use of multi-stage membranes is required to concentrate the permeate stream. The use of a second stage allows for less membrane area and power to be used. This is because of the higher concentration that passes the second stage, as well as the lower volume of gas for the pump to process. Other measures, such as adding another stage that uses air to concentrate the stream, further reduce cost by increasing the concentration within the feed stream. Additional methods, such as combining multiple types of separation methods, allow for variation in creating economical process designs.
Membrane use in hybrid processes
Hybrid processes have long-standing history with gas separation. Typically, membranes are integrated into already existing processes such that they can be retrofitted into already existing carbon capture systems.
MTR, Membrane Technology and Research Inc., and UT Austin have worked to create hybrid processes, utilizing both absorption and membranes, for CO2 capture. First, an absorption column using piperazine as a solvent absorbs about half the carbon dioxide in the flue gas, then the use of a membrane results in 90% capture. A parallel setup is also used, with the membrane and absorption processes occurring simultaneously. Generally, these processes are most effective when the highest content of carbon dioxide enters the amine absorption column. Incorporating hybrid design processes allows for retrofitting into fossil fuel power plants.
Hybrid processes can also use cryogenic distillation and membranes. For example, hydrogen and carbon dioxide can be separated, first using cryogenic gas separation, whereby most of the carbon dioxide exits first, then using a membrane process to separate the remaining carbon dioxide, after which it is recycled for further attempts at cryogenic separation.
Cost analysis
Cost limits the pressure ratio in a membrane CO2 separation stage to a value of 5; higher pressure ratios eliminate any economic viability for CO2 capture using membrane processes. Recent studies have demonstrated that multi-stage CO2 capture/separation processes using membranes can be economically competitive with older and more common technologies such as amine-based absorption. Currently, both membrane and amine-based absorption processes can be designed to yield a 90% CO2 capture rate. For carbon capture at an average 600 MW coal-fired power plant, the cost of CO2 capture using amine-based absorption is in the $40–100 per ton of CO2 range, while the cost of CO2 capture using current membrane technology (including current process design schemes) is about $23 per ton of CO2. Additionally, running an amine-based absorption process at an average 600 MW coal-fired power plant consumes about 30% of the energy generated by the power plant, while running a membrane process requires about 16% of the energy generated. CO2 transport (e.g. to geologic sequestration sites, or to be used for EOR) costs about $2–5 per ton of CO2. This cost is the same for all types of CO2 capture/separation processes such as membrane separation and absorption. In terms of dollars per ton of captured CO2, the least expensive membrane processes being studied at this time are multi-step counter-current flow/sweep processes.
See also
References
Separation processes
Gas technologies
Membrane technology
Industrial gases
de:Gastrennung#Membranverfahren | Membrane gas separation | [
"Chemistry"
] | 5,084 | [
"Separation processes",
"Membrane technology",
"Industrial gases",
"nan",
"Chemical process engineering"
] |
1,569,292 | https://en.wikipedia.org/wiki/Stochastic%20electrodynamics | Stochastic electrodynamics (SED) extends classical electrodynamics (CED) of theoretical physics by adding the hypothesis of a classical Lorentz invariant radiation field having statistical properties similar to that of the electromagnetic zero-point field (ZPF) of quantum electrodynamics (QED).
Key ingredients
Stochastic electrodynamics combines two conventional classical ideas – electromagnetism derived from point charges obeying Maxwell's equations and particle motion driven by Lorentz forces – with one unconventional hypothesis: the classical field has radiation even at T=0. This zero-point radiation is inferred from observations of the (macroscopic) Casimir effect forces at low temperatures. As temperature approaches zero, experimental measurements of the force between two uncharged, conducting plates in a vacuum do not go to zero as classical electrodynamics would predict. Taking this result as evidence of classical zero-point radiation leads to the stochastic electrodynamics model.
Brief history
Stochastic electrodynamics is a term for a collection of research efforts of many different styles based on the ansatz that there exists a Lorentz invariant random electromagnetic radiation. The basic ideas have been around for a long time, but Marshall (1963) and Brafford seem to have originated the more concentrated efforts that started in the 1960s. Thereafter Timothy Boyer, Luis de la Peña and Ana María Cetto were perhaps the most prolific contributors in the 1970s and beyond.
Others have made contributions, alterations, and proposals concentrating on applying SED to problems in QED. A separate thread has been the investigation of an earlier proposal by Walther Nernst attempting to use the SED notion of a classical ZPF to explain inertial mass as due to a vacuum reaction.
In 2010, Cavalleri et al. introduced SEDS ('pure' SED, as they call it, plus spin) as a fundamental improvement that they claim potentially overcomes all the known drawbacks of SED. They also claim SEDS resolves four observed effects that are so far unexplained by QED, i.e., 1) the physical origin of the ZPF and its natural upper cutoff; 2) an anomaly in experimental studies of the neutrino rest mass; 3) the origin and quantitative treatment of 1/f noise; and 4) the high-energy tail (~ 1021 eV) of cosmic rays. Two double-slit electron diffraction experiments are proposed to discriminate between QM and SEDS.
In 2013, Auñon et al. showed that Casimir and Van der Waals interactions are a particular case of stochastic forces from electromagnetic sources when the broad Planck's spectrum is chosen, and the wavefields are non-correlated. Addressing fluctuating partially coherent light emitters with a tailored spectral energy distribution in the optical range, this establishes the link between stochastic electrodynamics and coherence theory; henceforth putting forward a way to optically create and control both such zero-point fields as well as Lifshitz forces of thermal fluctuations. In addition, this opens the path to build many more stochastic forces on employing narrow-band light sources for bodies with frequency-dependent responses.
Scope of SED
SED has been used in attempts to provide a classical explanation for effects previously considered to require quantum mechanics (here restricted to the Schrödinger equation and the Dirac equation and QED) for their explanation. It has also motivated a classical ZPF-based underpinning for gravity and inertia. There is no universal agreement on the successes and failures of SED, either in its congruence with standard theories of quantum mechanics, QED, and gravity or in its compliance with observation. The following SED-based explanations are relatively uncontroversial and are free of criticism at the time of writing:
The Van der Waals force
Diamagnetism
The Unruh effect
The following SED-based calculations and SED-related claims are more controversial, and some have been subject to published criticism:
The ground state of the harmonic oscillator
The ground state of the hydrogen atom
De Broglie waves
Inertia
Gravitation
See also
References
Fringe physics
Quantum field theory
Emergence | Stochastic electrodynamics | [
"Physics"
] | 875 | [
"Quantum field theory",
"Quantum mechanics"
] |
1,569,600 | https://en.wikipedia.org/wiki/Thermal%20expansion | Thermal expansion is the tendency of matter to increase in length, area, or volume, changing its size and density, in response to an increase in temperature (usually excluding phase transitions).
Substances usually contract with decreasing temperature (thermal contraction), with rare exceptions within limited temperature ranges (negative thermal expansion).
Temperature is a monotonic function of the average molecular kinetic energy of a substance. As the energy of the particles increases, they move faster and faster, which weakens the intermolecular forces between them and thereby expands the substance.
When a substance is heated, molecules begin to vibrate and move more, usually creating more distance between themselves.
The relative expansion (also called strain) divided by the change in temperature is called the material's coefficient of linear thermal expansion and generally varies with temperature.
Prediction
If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures, along with many other state functions.
Contraction effects (negative expansion)
A number of materials contract on heating within certain temperature ranges; this is usually called negative thermal expansion, rather than "thermal contraction". For example, the coefficient of thermal expansion of water drops to zero as it is cooled to about 4 °C and then becomes negative below this temperature; this means that water has a maximum density at this temperature, and this leads to bodies of water maintaining this temperature at their lower depths during extended periods of sub-zero weather.
Other materials are also known to exhibit negative thermal expansion. Fairly pure silicon has a negative coefficient of thermal expansion for temperatures between about . ALLVAR Alloy 30, a titanium alloy, exhibits anisotropic negative thermal expansion across a wide range of temperatures.
Factors
Unlike gases or liquids, solid materials tend to keep their shape when undergoing thermal expansion.
Thermal expansion generally decreases with increasing bond energy, which also has an effect on the melting point of solids, so high melting point materials are more likely to have lower thermal expansion. In general, liquids expand slightly more than solids. The thermal expansion of glasses is slightly higher compared to that of crystals. At the glass transition temperature, rearrangements that occur in an amorphous material lead to characteristic discontinuities of coefficient of thermal expansion and specific heat. These discontinuities allow detection of the glass transition temperature where a supercooled liquid transforms to a glass.
Absorption or desorption of water (or other solvents) can change the size of many common materials; many organic materials change size much more due to this effect than due to thermal expansion. Common plastics exposed to water can, in the long term, expand by many percent.
Effect on density
Thermal expansion changes the space between particles of a substance, which changes the volume of the substance while negligibly changing its mass (the negligible amount comes from mass–energy equivalence), thus changing its density, which has an effect on any buoyant forces acting on it. This plays a crucial role in convection of unevenly heated fluid masses, notably making thermal expansion partly responsible for wind and ocean currents.
Coefficients
The coefficient of thermal expansion describes how the size of an object changes with a change in temperature. Specifically, it measures the fractional change in size per degree change in temperature at a constant pressure, such that lower coefficients describe lower propensity for change in size. Several types of coefficients have been developed: volumetric, area, and linear. The choice of coefficient depends on the particular application and which dimensions are considered important. For solids, one might only be concerned with the change along a length, or over some area.
The volumetric thermal expansion coefficient is the most basic thermal expansion coefficient, and the most relevant for fluids. In general, substances expand or contract when their temperature changes, with expansion or contraction occurring in all directions. Substances that expand at the same rate in every direction are called isotropic. For isotropic materials, the area and volumetric thermal expansion coefficient are, respectively, approximately twice and three times larger than the linear thermal expansion coefficient.
In the general case of a gas, liquid, or solid, the volumetric coefficient of thermal expansion is given by
αV = (1/V)(∂V/∂T)_p
The subscript "p" to the derivative indicates that the pressure is held constant during the expansion, and the subscript V stresses that it is the volumetric (not linear) expansion that enters this general definition. In the case of a gas, the fact that the pressure is held constant is important, because the volume of a gas will vary appreciably with pressure as well as temperature. For a gas of low density this can be seen from the ideal gas law.
For various materials
This section summarizes the coefficients for some common materials.
For isotropic materials the coefficients of linear thermal expansion α and volumetric thermal expansion αV are related by αV = 3α.
For liquids usually the coefficient of volumetric expansion is listed and linear expansion is calculated here for comparison.
For common materials like many metals and compounds, the thermal expansion coefficient is inversely proportional to the melting point.
In particular, for metals the relation is:
for halides and oxides
In the table below, the range for α is from 10⁻⁷ K⁻¹ for hard solids to 10⁻³ K⁻¹ for organic liquids. The coefficient α varies with the temperature and some materials have a very high variation; see for example the variation vs. temperature of the volumetric coefficient for a semicrystalline polypropylene (PP) at different pressure, and the variation of the linear coefficient vs. temperature for some steel grades (from bottom to top: ferritic stainless steel, martensitic stainless steel, carbon steel, duplex stainless steel, austenitic steel). The highest linear coefficient in a solid has been reported for a Ti-Nb alloy.
(The formula is usually used for solids.)
In solids
When calculating thermal expansion it is necessary to consider whether the body is free to expand or is constrained. If the body is free to expand, the expansion or strain resulting from an increase in temperature can be simply calculated by using the applicable coefficient of thermal expansion.
If the body is constrained so that it cannot expand, then internal stress will be caused (or changed) by a change in temperature. This stress can be calculated by considering the strain that would occur if the body were free to expand and the stress required to reduce that strain to zero, through the stress/strain relationship characterised by the elastic or Young's modulus. In the special case of solid materials, external ambient pressure does not usually appreciably affect the size of an object and so it is not usually necessary to consider the effect of pressure changes.
Common engineering solids usually have coefficients of thermal expansion that do not vary significantly over the range of temperatures where they are designed to be used, so where extremely high accuracy is not required, practical calculations can be based on a constant, average, value of the coefficient of expansion.
Length
Linear expansion means change in one dimension (length) as opposed to change in volume (volumetric expansion).
To a first approximation, the change in length measurements of an object due to thermal expansion is related to temperature change by a coefficient of linear thermal expansion (CLTE). It is the fractional change in length per degree of temperature change. Assuming negligible effect of pressure, one may write:
αL = (1/L)(dL/dT)
where L is a particular length measurement and dL/dT is the rate of change of that linear dimension per unit change in temperature.
The change in the linear dimension can be estimated to be:
ΔL/L = αL ΔT
This estimation works well as long as the linear-expansion coefficient does not change much over the change in temperature ΔT, and the fractional change in length is small (ΔL/L ≪ 1). If either of these conditions does not hold, the exact differential equation (using dL/dT) must be integrated.
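A minimal numerical illustration of the estimate ΔL = αL·L·ΔT, using a typical handbook coefficient for carbon steel (about 12 × 10⁻⁶ K⁻¹) and arbitrary example dimensions:

```python
# Illustrative estimate of linear thermal expansion, dL = alpha_L * L0 * dT.
alpha_L = 12e-6      # linear expansion coefficient of carbon steel, 1/K (typical value)
L0 = 25.0            # original length of a steel rail, m (example)
dT = 40.0            # temperature rise, K (example)

dL = alpha_L * L0 * dT
print(f"Length change: {dL*1000:.1f} mm")   # ~12.0 mm
print(f"Fractional change: {dL/L0:.2e}")    # ~4.8e-4, small, so the estimate is valid
```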
Effects on strain
For solid materials with a significant length, like rods or cables, an estimate of the amount of thermal expansion can be described by the material strain, given by ε_thermal and defined as:
ε_thermal = (L_final − L_initial)/L_initial
where L_initial is the length before the change of temperature and L_final is the length after the change of temperature.
For most solids, thermal expansion is proportional to the change in temperature:
ε_thermal ∝ ΔT
Thus, the change in either the strain or temperature can be estimated by:
ε_thermal = αL ΔT
where
ΔT = T_final − T_initial is the difference of the temperature between the two recorded strains, measured in degrees Fahrenheit, degrees Rankine, degrees Celsius, or kelvin, and αL is the linear coefficient of thermal expansion in "per degree Fahrenheit", "per degree Rankine", "per degree Celsius", or "per kelvin", denoted by °F⁻¹, °R⁻¹, °C⁻¹, or K⁻¹, respectively. In the field of continuum mechanics, thermal expansion and its effects are treated as eigenstrain and eigenstress.
Area
The area thermal expansion coefficient relates the change in a material's area dimensions to a change in temperature. It is the fractional change in area per degree of temperature change. Ignoring pressure, one may write:
αA = (1/A)(dA/dT)
where A is some area of interest on the object, and dA/dT is the rate of change of that area per unit change in temperature.
The change in the area can be estimated as:
ΔA/A = αA ΔT
This equation works well as long as the area expansion coefficient does not change much over the change in temperature ΔT, and the fractional change in area is small (ΔA/A ≪ 1). If either of these conditions does not hold, the equation must be integrated.
Volume
For a solid, one can ignore the effects of pressure on the material, and the volumetric (or cubical) thermal expansion coefficient can be written:
αV = (1/V)(dV/dT)
where V is the volume of the material, and dV/dT is the rate of change of that volume with temperature.
This means that the volume of a material changes by some fixed fractional amount. For example, a steel block with a volume of 1 cubic meter might expand to 1.002 cubic meters when the temperature is raised by 50 K. This is an expansion of 0.2%. If a block of steel has a volume of 2 cubic meters, then under the same conditions, it would expand to 2.004 cubic meters, again an expansion of 0.2%. The volumetric expansion coefficient would be 0.2% for 50 K, or 0.004% K−1.
If the expansion coefficient is known, the change in volume can be calculated from
ΔV/V = αV ΔT
where ΔV/V is the fractional change in volume (e.g., 0.002) and ΔT is the change in temperature (50 °C).
The above example assumes that the expansion coefficient did not change as the temperature changed and the increase in volume is small compared to the original volume. This is not always true, but for small changes in temperature, it is a good approximation. If the volumetric expansion coefficient does change appreciably with temperature, or the increase in volume is significant, then the above equation will have to be integrated:
ΔV/V = ∫ αV(T) dT, integrated from T_i to T_f
where αV(T) is the volumetric expansion coefficient as a function of temperature T, and T_i and T_f are the initial and final temperatures respectively.
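Where the coefficient varies with temperature, the integral above can be evaluated numerically. The sketch below assumes a purely hypothetical linear temperature dependence for αV, chosen only to demonstrate the method:

```python
import numpy as np

# Numerical version of dV/V = integral of alpha_V(T) dT for a temperature-dependent coefficient.
def alpha_V(T):                       # volumetric coefficient, 1/K, for T in kelvin (hypothetical form)
    return 2.0e-4 + 1.0e-7 * (T - 293.15)

T_i, T_f = 293.15, 373.15             # initial and final temperatures, K (example values)
T = np.linspace(T_i, T_f, 1001)
a = alpha_V(T)
frac_dV = np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(T))   # trapezoidal rule for the integral

print(f"Fractional volume change: {frac_dV:.4%}")       # ≈ 1.63 %
```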
Isotropic materials
For isotropic materials the volumetric thermal expansion coefficient is three times the linear coefficient:
αV = 3αL
This ratio arises because volume is composed of three mutually orthogonal directions. Thus, in an isotropic material, for small differential changes, one-third of the volumetric expansion is in a single axis. As an example, take a cube of steel that has sides of length L. The original volume will be V = L³ and the new volume, after a temperature increase, will be
V + ΔV = (L + ΔL)³ = L³ + 3L²ΔL + 3LΔL² + ΔL³ ≈ L³ + 3L²ΔL
We can easily ignore the 3LΔL² and ΔL³ terms, as ΔL is a small quantity which on squaring gets much smaller and on cubing gets smaller still.
So
ΔV/V = 3ΔL/L = 3αL ΔT.
The above approximation holds for small temperature and dimensional changes (that is, when and are small), but it does not hold if trying to go back and forth between volumetric and linear coefficients using larger values of . In this case, the third term (and sometimes even the fourth term) in the expression above must be taken into account.
Similarly, the area thermal expansion coefficient is two times the linear coefficient:
αA = 2αL
This ratio can be found in a way similar to that in the linear example above, noting that the area of a face on the cube is just L². Also, the same considerations must be made when dealing with large values of ΔT.
Put more simply, if the length of a cubic solid expands from 1.00 m to 1.01 m, then the area of one of its sides expands from 1.00 m² to 1.02 m² and its volume expands from 1.00 m³ to 1.03 m³.
Anisotropic materials
Materials with anisotropic structures, such as crystals (with less than cubic symmetry, for example martensitic phases) and many composites, will generally have different linear expansion coefficients in different directions. As a result, the total volumetric expansion is distributed unequally among the three axes. If the crystal symmetry is monoclinic or triclinic, even the angles between these axes are subject to thermal changes. In such cases it is necessary to treat the coefficient of thermal expansion as a tensor with up to six independent elements. A good way to determine the elements of the tensor is to study the expansion by x-ray powder diffraction. The thermal expansion coefficient tensor for the materials possessing cubic symmetry (for e.g. FCC, BCC) is isotropic.
Temperature dependence
Thermal expansion coefficients of solids usually show little dependence on temperature (except at very low temperatures) whereas liquids can expand at different rates at different temperatures. There are some exceptions: for example, cubic boron nitride exhibits significant variation of its thermal expansion coefficient over a broad range of temperatures. Another example is paraffin which in its solid form has a thermal expansion coefficient that is dependent on temperature.
In gases
Since gases fill the entirety of the container which they occupy, the volumetric thermal expansion coefficient at constant pressure, , is the only one of interest.
For an ideal gas, a formula can be readily obtained by differentiation of the ideal gas law, pV_m = RT. This yields
p dV_m + V_m dp = R dT
where p is the pressure, V_m is the molar volume (V_m = V/n, with n the total number of moles of gas), T is the absolute temperature and R is equal to the gas constant.
For an isobaric thermal expansion, dp = 0, so that p dV_m = R dT and the isobaric thermal expansion coefficient is:
αV = (1/V_m)(dV_m/dT)_p = (1/V_m)(R/p) = 1/T
which is a strong function of temperature; doubling the temperature will halve the thermal expansion coefficient.
Absolute zero computation
From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton, and Joseph Louis Gay-Lussac that, at constant pressure, ideal gases expanded or contracted their volume linearly (Charles's law) by about 1/273 parts per degree Celsius of temperature's change up or down, between 0° and 100 °C. This suggested that the volume of a gas cooled at about −273 °C would reach zero.
In October 1848, William Thomson, a 24-year-old professor of Natural Philosophy at the University of Glasgow, published the paper On an Absolute Thermometric Scale.
In a footnote Thomson calculated that "infinite cold" (absolute zero) was equivalent to −273 °C (he called the temperature in °C as the "temperature of the air thermometers" of the time). This value of "−273" was considered to be the temperature at which the ideal gas volume reaches zero. By considering a thermal expansion linear with temperature (i.e. a constant coefficient of thermal expansion), the value of absolute zero was linearly extrapolated as the negative reciprocal of 0.366/100 °C – the accepted average coefficient of thermal expansion of an ideal gas in the temperature interval 0–100 °C, giving a remarkable consistency to the currently accepted value of −273.15 °C.
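To make the extrapolation explicit (a simple worked check, not part of the historical account): an expansion of 0.366 per 100 °C corresponds to a coefficient of about 0.00366 °C⁻¹, and linearly extrapolating the gas volume to zero gives T₀ ≈ −1/0.00366 ≈ −273 °C, in close agreement with the modern value of −273.15 °C.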
In liquids
The thermal expansion of liquids is usually higher than that of solids because the intermolecular forces present in liquids are relatively weak and their constituent molecules are more mobile. Unlike solids, liquids have no definite shape and they take the shape of the container. Consequently, liquids have no definite length and area, so linear and areal expansions of liquids only have significance in that they may be applied to topics such as thermometry and estimates of sea level rising due to global climate change. Sometimes, αL is still calculated from the experimental value of αV.
In general, liquids expand on heating, except cold water; below 4 °C it contracts, leading to a negative thermal expansion coefficient. At higher temperatures it shows more typical behavior, with a positive thermal expansion coefficient.
Apparent and absolute
The expansion of liquids is usually measured in a container. When a liquid expands in a vessel, the vessel expands along with the liquid. Hence the observed increase in volume (as measured by the liquid level) is not the actual increase in its volume. The expansion of the liquid relative to the container is called its apparent expansion, while the actual expansion of the liquid is called real expansion or absolute expansion. The ratio of apparent increase in volume of the liquid per unit rise of temperature to the original volume is called its coefficient of apparent expansion. The absolute expansion can be measured by a variety of techniques, including ultrasonic methods.
Historically, this phenomenon complicated the experimental determination of thermal expansion coefficients of liquids, since a direct measurement of the change in height of a liquid column generated by thermal expansion is a measurement of the apparent expansion of the liquid. Thus the experiment simultaneously measures two coefficients of expansion and measurement of the expansion of a liquid must account for the expansion of the container as well. For example, when a flask with a long narrow stem, containing enough liquid to partially fill the stem itself, is placed in a heat bath, the height of the liquid column in the stem will initially drop, followed immediately by a rise of that height until the whole system of flask, liquid and heat bath has warmed through. The initial drop in the height of the liquid column is not due to an initial contraction of the liquid, but rather to the expansion of the flask as it contacts the heat bath first.
Soon after, the liquid in the flask is heated by the flask itself and begins to expand. Since liquids typically have a greater percent expansion than solids for the same temperature change, the expansion of the liquid in the flask eventually exceeds that of the flask, causing the level of liquid in the flask to rise. For small and equal rises in temperature, the increase in volume (real expansion) of a liquid is equal to the sum of the apparent increase in volume (apparent expansion) of the liquid and the increase in volume of the containing vessel. The absolute expansion of the liquid is the apparent expansion corrected for the expansion of the containing vessel.
Examples and applications
The expansion and contraction of the materials must be considered when designing large structures, when using tape or chain to measure distances for land surveys, when designing molds for casting hot material, and in other engineering applications when large changes in dimension due to temperature are expected.
Thermal expansion is also used in mechanical applications to fit parts over one another, e.g. a bushing can be fitted over a shaft by making its inner diameter slightly smaller than the diameter of the shaft, then heating it until it fits over the shaft, and allowing it to cool after it has been pushed over the shaft, thus achieving a 'shrink fit'. Induction shrink fitting is a common industrial method to pre-heat metal components between 150 °C and 300 °C thereby causing them to expand and allow for the insertion or removal of another component.
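As a rough illustration of sizing such a shrink fit, the required temperature rise follows from ΔT ≈ δ/(α·d), where δ is the total diametral interference plus assembly clearance. The figures below (a generic steel coefficient and example diameters) are assumptions for illustration only:

```python
# Rough estimate of the temperature rise needed to shrink-fit a bushing over a shaft.
alpha = 12e-6        # linear expansion coefficient of the bushing, 1/K (generic steel)
d_bore = 99.95e-3    # bushing inner diameter at room temperature, m (example)
d_shaft = 100.00e-3  # shaft diameter, m (example)
clearance = 0.05e-3  # extra clearance wanted for assembly, m (example)

dT = (d_shaft - d_bore + clearance) / (alpha * d_bore)
print(f"Required temperature rise: {dT:.0f} K")   # ≈ 83 K above ambient
```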
There exist some alloys with a very small linear expansion coefficient, used in applications that demand very small changes in physical dimension over a range of temperatures. One of these is Invar 36, with expansion approximately equal to 0.6 × 10⁻⁶ K⁻¹. These alloys are useful in aerospace applications where wide temperature swings may occur.
Pullinger's apparatus is used to determine the linear expansion of a metallic rod in the laboratory. The apparatus consists of a metal cylinder closed at both ends (called a steam jacket). It is provided with an inlet and outlet for the steam. The steam for heating the rod is supplied by a boiler which is connected by a rubber tube to the inlet. The center of the cylinder contains a hole to insert a thermometer. The rod under investigation is enclosed in a steam jacket. One of its ends is free, but the other end is pressed against a fixed screw. The position of the rod is determined by a micrometer screw gauge or spherometer.
To determine the coefficient of linear thermal expansion of a metal, a pipe made of that metal is heated by passing steam through it. One end of the pipe is fixed securely and the other rests on a rotating shaft, the motion of which is indicated by a pointer. A suitable thermometer records the pipe's temperature. This enables calculation of the relative change in length per degree temperature change.
The control of thermal expansion in brittle materials is a key concern for a wide range of reasons. For example, both glass and ceramics are brittle, and uneven temperature causes uneven expansion, which in turn causes thermal stress that might lead to fracture. Ceramics need to be joined or work in concert with a wide range of materials and therefore their expansion must be matched to the application. Because glazes need to be firmly attached to the underlying porcelain (or other body type), their thermal expansion must be tuned to 'fit' the body so that crazing or shivering do not occur. Good examples of products whose thermal expansion is the key to their success are CorningWare and the spark plug. The thermal expansion of ceramic bodies can be controlled by firing to create crystalline species that will influence the overall expansion of the material in the desired direction. In addition, or instead, the formulation of the body can employ materials delivering particles of the desired expansion to the matrix. The thermal expansion of glazes is controlled by their chemical composition and the firing schedule to which they were subjected. In most cases there are complex issues involved in controlling body and glaze expansion, so that adjusting for thermal expansion must be done with an eye to other properties that will be affected, and generally trade-offs are necessary.
Thermal expansion can have a noticeable effect on gasoline stored in above-ground storage tanks, which can cause gasoline pumps to dispense gasoline which may be more compressed than gasoline held in underground storage tanks in winter, or less compressed than gasoline held in underground storage tanks in summer.
Heat-induced expansion has to be taken into account in most areas of engineering. A few examples are:
Metal-framed windows need rubber spacers.
Rubber tires need to perform well over a range of temperatures, being passively heated or cooled by road surfaces and weather, and actively heated by mechanical flexing and friction.
Metal hot water heating pipes should not be used in long straight lengths.
Large structures such as railways and bridges need expansion joints in the structures to avoid sun kink.
A gridiron pendulum uses an arrangement of different metals to maintain a more temperature stable pendulum length.
A power line sags on a hot day but is taut on a cold day, because the metal expands when heated and contracts when cooled.
Expansion joints absorb the thermal expansion in a piping system.
Precision engineering nearly always requires the engineer to pay attention to the thermal expansion of the product. For example, when using a scanning electron microscope small changes in temperature such as 1 degree can cause a sample to change its position relative to the focus point.
Liquid thermometers contain a liquid (usually mercury or alcohol) in a tube, which constrains it to flow in only one direction when its volume expands due to changes in temperature.
A bi-metal mechanical thermometer uses a bimetallic strip and bends due to the differing thermal expansion of the two metals.
See also
References
External links
Glass Thermal Expansion Thermal expansion measurement, definitions, thermal expansion calculation from the glass composition
Water thermal expansion calculator
DoITPoMS Teaching and Learning Package on Thermal Expansion and the Bi-material Strip
Engineering Toolbox – List of coefficients of Linear Expansion for some common materials
Article on how αV is determined
MatWeb: Free database of engineering properties for over 79,000 materials
USA NIST Website – Temperature and Dimensional Measurement workshop
Hyperphysics: Thermal expansion
Understanding Thermal Expansion in Ceramic Glazes
Thermal Expansion Calculators
Thermal expansion via density calculator
Thermodynamics
Heat transfer
Physical properties
Building defects | Thermal expansion | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 4,905 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics",
"Building defects",
"Mechanical failure",
"Physical properties",
"Dynamical systems"
] |
1,570,072 | https://en.wikipedia.org/wiki/Mathematical%20chemistry | Mathematical chemistry is the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena. Mathematical chemistry has also sometimes been called computer chemistry, but should not be confused with computational chemistry.
Major areas of research in mathematical chemistry include chemical graph theory, which deals with topology such as the mathematical study of isomerism and the development of topological descriptors or indices which find application in quantitative structure-property relationships; and chemical aspects of group theory, which finds applications in stereochemistry and quantum chemistry. Another important area is molecular knot theory and circuit topology that describe the topology of folded linear molecules such as proteins and nucleic acids.
The history of the approach may be traced back to the 19th century. Georg Helm published a treatise titled "The Principles of Mathematical Chemistry: The Energetics of Chemical Phenomena" in 1894. Some of the more contemporary periodical publications specializing in the field are MATCH Communications in Mathematical and in Computer Chemistry, first published in 1975, and the Journal of Mathematical Chemistry, first published in 1987. In 1986 a series of annual conferences MATH/CHEM/COMP taking place in Dubrovnik was initiated by the late Ante Graovac.
The basic models for mathematical chemistry are molecular graph and topological index.
In 2005 the International Academy of Mathematical Chemistry (IAMC) was founded in Dubrovnik (Croatia) by Milan Randić. The Academy has 82 members (2009) from all over the world, including six scientists awarded with a Nobel Prize.
See also
Bibliography
Molecular Descriptors for Chemoinformatics, by R. Todeschini and V. Consonni, Wiley-VCH, Weinheim, 2009.
Mathematical Chemistry Series, by D. Bonchev, D. H. Rouvray (Eds.), Gordon and Breach Science Publisher, Amsterdam, 2000.
Chemical Graph Theory, by N. Trinajstic, CRC Press, Boca Raton, 1992.
Mathematical Concepts in Organic Chemistry, by I. Gutman, O. E. Polansky, Springer-Verlag, Berlin, 1986.
Chemical Applications of Topology and Graph Theory, ed. by R. B. King, Elsevier, 1983.
"Topological approach to the chemistry of conjugated molecules", by A. Graovac, I. Gutman, and N. Trinajstic, Lecture Notes in Chemistry, no.4, Springer-Verlag, Berlin, 1977.
Notes
References
N. Trinajstić, I. Gutman, Mathematical Chemistry, Croatica Chemica Acta, 75(2002), pp. 329–356.
A. T. Balaban, Reflections about Mathematical Chemistry, Foundations of Chemistry, 7(2005), pp. 289–306.
G. Restrepo, J. L. Villaveces, Mathematical Thinking in Chemistry, HYLE, 18(2012), pp. 3–22.
Advances in Mathematical Chemistry and Applications. Volume 2. Basak S. C., Restrepo G., Villaveces J. L. (Bentham Science eBooks, 2015)
External links
Journal of Mathematical Chemistry
MATCH Communications in Mathematical and in Computer Chemistry
International Academy of Mathematical Chemistry
Chemistry
Theoretical chemistry
Application-specific graphs
Cheminformatics | Mathematical chemistry | [
"Chemistry",
"Mathematics"
] | 666 | [
"Drug discovery",
"Applied mathematics",
"Theoretical chemistry",
"Mathematical chemistry",
"Molecular modelling",
"Computational chemistry",
"nan",
"Cheminformatics"
] |
1,570,968 | https://en.wikipedia.org/wiki/Isopropyl%20%CE%B2-D-1-thiogalactopyranoside | {{DISPLAYTITLE:Isopropyl β-D-1-thiogalactopyranoside}}
Isopropyl β--1-thiogalactopyranoside (IPTG) is a molecular biology reagent. This compound is a molecular mimic of allolactose, a lactose metabolite that triggers transcription of the lac operon, and it is therefore used to induce protein expression where the gene is under the control of the lac operator.
Mechanism of action
Like allolactose, IPTG binds to the lac repressor and releases the tetrameric repressor from the lac operator in an allosteric manner, thereby allowing the transcription of genes in the lac operon, such as the gene coding for beta-galactosidase, a hydrolase enzyme that catalyzes the hydrolysis of β-galactosides into monosaccharides. But unlike allolactose, the sulfur (S) atom creates a chemical bond which is non-hydrolyzable by the cell, preventing the cell from metabolizing or degrading the inducer. Therefore, its concentration remains constant during an experiment.
IPTG uptake by E. coli can be independent of the action of lactose permease, since other transport pathways are also involved. At low concentration, IPTG enters cells through lactose permease, but at high concentrations (typically used for protein induction), IPTG can enter the cells independently of lactose permease.
Use in laboratory
When stored as a powder at 4 °C or below, IPTG is stable for 5 years. It is significantly less stable in solution; Sigma recommends storage for no more than a month at room temperature. IPTG is an effective inducer of protein expression in the concentration range of 100 μmol/L to 3.0 mmol/L. Typically, a sterile, filtered 1 mol/L solution of IPTG is added 1:1000 to an exponentially growing bacterial culture, to give a final concentration of 1 mmol/L. The concentration used depends on the strength of induction required, as well as the genotype of cells or plasmid used. If lacIq, a mutant that over-produces the lac repressor, is present, then a higher concentration of IPTG may be necessary.
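As a worked illustration of these figures (the quantities below are example values; the molecular weight of IPTG, about 238.3 g/mol, is the standard value):

```python
# Back-of-the-envelope numbers for a typical IPTG induction (illustrative only).
mw = 238.3            # g/mol, molecular weight of IPTG
stock_conc = 1.0      # mol/L stock solution
stock_vol = 0.010     # L (10 mL of stock), example

grams_needed = mw * stock_conc * stock_vol
print(f"IPTG for 10 mL of 1 M stock: {grams_needed:.2f} g")   # ≈ 2.38 g

culture_vol = 0.5     # L of exponentially growing culture, example
added_vol = culture_vol / 1000.0   # 1:1000 addition of the stock
final_conc = stock_conc * added_vol / (culture_vol + added_vol)
print(f"Final IPTG concentration: {final_conc*1000:.2f} mmol/L")  # ≈ 1.0 mmol/L
```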
In blue-white screen, IPTG is used together with X-gal. Blue-white screen allows colonies that have been transformed with the recombinant plasmid rather than a non-recombinant one to be identified in cloning experiments.
References
External links
IPTG bound to proteins in the PDB
Carbohydrates
Molecular biology
Isopropyl compounds
Organosulfur compounds | Isopropyl β-D-1-thiogalactopyranoside | [
"Chemistry",
"Biology"
] | 581 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Organosulfur compounds",
"Organic compounds",
"Carbohydrate chemistry",
"Molecular biology",
"Biochemistry"
] |
166,689 | https://en.wikipedia.org/wiki/Interferometry | Interferometry is a technique which uses the interference of superimposed waves to extract information. Interferometry typically uses electromagnetic waves and is an important investigative technique in the fields of astronomy, fiber optics, engineering metrology, optical metrology, oceanography, seismology, spectroscopy (and its applications to chemistry), quantum mechanics, nuclear and particle physics, plasma physics, biomolecular interactions, surface profiling, microfluidics, mechanical stress/strain measurement, velocimetry, optometry, and making holograms.
Interferometers are devices that extract information from interference. They are widely used in science and industry for the measurement of microscopic displacements, refractive index changes and surface irregularities. In the case with most interferometers, light from a single source is split into two beams that travel in different optical paths, which are then combined again to produce interference; two incoherent sources can also be made to interfere under some circumstances. The resulting interference fringes give information about the difference in optical path lengths. In analytical science, interferometers are used to measure lengths and the shape of optical components with nanometer precision; they are the highest-precision length measuring instruments in existence. In Fourier transform spectroscopy they are used to analyze light containing features of absorption or emission associated with a substance or mixture. An astronomical interferometer consists of two or more separate telescopes that combine their signals, offering a resolution equivalent to that of a telescope of diameter equal to the largest separation between its individual elements.
Basic principles
Interferometry makes use of the principle of superposition to combine waves in a way that will cause the result of their combination to have some meaningful property that is diagnostic of the original state of the waves. This works because when two waves with the same frequency combine, the resulting intensity pattern is determined by the phase difference between the two waves—waves that are in phase will undergo constructive interference while waves that are out of phase will undergo destructive interference. Waves which are not completely in phase nor completely out of phase will have an intermediate intensity pattern, which can be used to determine their relative phase difference. Most interferometers use light or some other form of electromagnetic wave.
Typically (see Fig. 1, the well-known Michelson configuration) a single incoming beam of coherent light will be split into two identical beams by a beam splitter (a partially reflecting mirror). Each of these beams travels a different route, called a path, and they are recombined before arriving at a detector. The path difference, the difference in the distance traveled by each beam, creates a phase difference between them. It is this introduced phase difference that creates the interference pattern between the initially identical waves. If a single beam has been split along two paths, then the phase difference is diagnostic of anything that changes the phase along the paths. This could be a physical change in the path length itself or a change in the refractive index along the path.
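The dependence of the detected intensity on the introduced path difference can be illustrated with the standard two-beam interference formula I = 2I₀(1 + cos(2π·OPD/λ)) for two equal-intensity beams; the wavelength and path differences below are arbitrary example values:

```python
import numpy as np

# Two-beam interference: intensity vs. optical path difference (OPD) for equal-intensity beams.
wavelength = 633e-9                      # m, He-Ne laser wavelength (example)
I0 = 1.0                                 # intensity of each beam (arbitrary units)
opd = np.linspace(0, 2e-6, 9)            # optical path differences, m (example values)

phase = 2 * np.pi * opd / wavelength
intensity = 2 * I0 * (1 + np.cos(phase))
for d, i in zip(opd, intensity):
    print(f"OPD = {d*1e9:7.1f} nm -> I = {i:.2f}")
```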
As seen in Fig. 2a and 2b, the observer has a direct view of mirror M1 seen through the beam splitter, and sees a reflected image M'2 of mirror M2. The fringes can be interpreted as the result of interference between light coming from the two virtual images S'1 and S'2 of the original source S. The characteristics of the interference pattern depend on the nature of the light source and the precise orientation of the mirrors and beam splitter. In Fig. 2a, the optical elements are oriented so that S'1 and S'2 are in line with the observer, and the resulting interference pattern consists of circles centered on the normal to M1 and M'2. If, as in Fig. 2b, M1 and M'2 are tilted with respect to each other, the interference fringes will generally take the shape of conic sections (hyperbolas), but if S'1 and S'2 overlap, the fringes near the axis will be straight, parallel, and equally spaced. If S is an extended source rather than a point source as illustrated, the fringes of Fig. 2a must be observed with a telescope set at infinity, while the fringes of Fig. 2b will be localized on the mirrors.
Use of white light will result in a pattern of colored fringes (see Fig. 3). The central fringe representing equal path length may be light or dark depending on the number of phase inversions experienced by the two beams as they traverse the optical system. (See Michelson interferometer for a discussion of this.)
History
The law of interference of light was described by Thomas Young in his 1803 Bakerian Lecture to the Royal Society of London. In preparation for the lecture, Young performed a double-aperture experiment that demonstrated interference fringes. His interpretation in terms of the interference of waves was rejected by most scientists at the time because of the dominance of Isaac Newton's corpuscular theory of light proposed a century before.
The French engineer Augustin-Jean Fresnel, unaware of Young's results, began working on a wave theory of light and interference and was introduced to François Arago. Between 1816 and 1818, Fresnel and Arago performed interference experiments at the Paris Observatory. During this time, Arago designed and built the first interferometer, using it to measure the refractive index of moist air relative to dry air, which posed a potential problem for astronomical observations of star positions. The success of Fresnel's wave theory of light was established in his prize-winning memoire of 1819 that predicted and measured diffraction patterns. The Arago interferometer was later employed in 1850 by Leon Foucault to measure the speed of light in air relative to water, and it was used again in 1851 by Hippolyte Fizeau to measure the effect of Fresnel drag on the speed of light in moving water.
Jules Jamin developed the first single-beam interferometer (not requiring a splitting aperture as the Arago interferometer did) in 1856. In 1881, the American physicist Albert A. Michelson, while visiting Hermann von Helmholtz in Berlin, invented the interferometer that is named after him, the Michelson Interferometer, to search for effects of the motion of the Earth on the speed of light. Michelson's null results performed in the basement of the Potsdam Observatory outside of Berlin (the horse traffic in the center of Berlin created too many vibrations), and his later more-accurate null results observed with Edward W. Morley at Case College in Cleveland, Ohio, contributed to the growing crisis of the luminiferous ether. Einstein stated that it was Fizeau's measurement of the speed of light in moving water using the Arago interferometer that inspired his theory of the relativistic addition of velocities.
Categories
Interferometers and interferometric techniques may be categorized by a variety of criteria:
Homodyne versus heterodyne detection
In homodyne detection, the interference occurs between two beams at the same wavelength (or carrier frequency). The phase difference between the two beams results in a change in the intensity of the light on the detector. The resulting intensity of the light after mixing of these two beams is measured, or the pattern of interference fringes is viewed or recorded. Most of the interferometers discussed in this article fall into this category.
The heterodyne technique is used for (1) shifting an input signal into a new frequency range as well as (2) amplifying a weak input signal (assuming use of an active mixer). A weak input signal of frequency f1 is mixed with a strong reference frequency f2 from a local oscillator (LO). The nonlinear combination of the input signals creates two new signals, one at the sum f1 + f2 of the two frequencies, and the other at the difference f1 − f2. These new frequencies are called heterodynes. Typically only one of the new frequencies is desired, and the other signal is filtered out of the output of the mixer. The output signal will have an intensity proportional to the product of the amplitudes of the input signals.
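The generation of sum and difference frequencies by a nonlinear mixer can be sketched numerically; the following toy example multiplies two tones (an idealized multiplying mixer, with arbitrary example frequencies) and inspects the resulting spectrum:

```python
import numpy as np

# Heterodyne mixing sketch: multiplying two tones at f1 and f2 produces
# components at f1 - f2 (difference) and f1 + f2 (sum).
f1, f2 = 1.000e6, 1.010e6        # Hz, example input and local-oscillator frequencies
fs = 20e6                        # Hz, sampling rate
t = np.arange(0, 5e-3, 1/fs)

mixed = np.cos(2*np.pi*f1*t) * np.cos(2*np.pi*f2*t)   # ideal multiplying mixer
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(t.size, 1/fs)

peaks = freqs[np.argsort(spectrum)[-2:]]              # two strongest components
print(np.sort(peaks))   # ≈ [1.0e4, 2.01e6] Hz: the difference and sum frequencies
```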
The most important and widely used application of the heterodyne technique is in the superheterodyne receiver (superhet), invented in 1917-18 by U.S. engineer Edwin Howard Armstrong and French engineer Lucien Lévy. In this circuit, the incoming radio frequency signal from the antenna is mixed with a signal from a local oscillator (LO) and converted by the heterodyne technique to a lower fixed frequency signal called the intermediate frequency (IF). This IF is amplified and filtered, before being applied to a detector which extracts the audio signal, which is sent to the loudspeaker.
Optical heterodyne detection is an extension of the heterodyne technique to higher (visible) frequencies. While optical heterodyne interferometry is usually done at a single point it is also possible to perform this widefield.
Double path versus common path
A double-path interferometer is one in which the reference beam and sample beam travel along divergent paths. Examples include the Michelson interferometer, the Twyman–Green interferometer, and the Mach–Zehnder interferometer. After being perturbed by interaction with the sample under test, the sample beam is recombined with the reference beam to create an interference pattern which can then be interpreted.
A common-path interferometer is a class of interferometer in which the reference beam and sample beam travel along the same path. Fig. 4 illustrates the Sagnac interferometer, the fibre optic gyroscope, the point diffraction interferometer, and the lateral shearing interferometer. Other examples of common path interferometer include the Zernike phase-contrast microscope, Fresnel's biprism, the zero-area Sagnac, and the scatterplate interferometer.
Wavefront splitting versus amplitude splitting
Wavefront splitting interferometers
A wavefront splitting interferometer divides a light wavefront emerging from a point or a narrow slit (i.e. spatially coherent light) and, after allowing the two parts of the wavefront to travel through different paths, allows them to recombine. Fig. 5 illustrates Young's interference experiment and Lloyd's mirror. Other examples of wavefront splitting interferometer include the Fresnel biprism, the Billet Bi-Lens, diffraction-grating Michelson interferometer, and the Rayleigh interferometer.
In 1803, Young's interference experiment played a major role in the general acceptance of the wave theory of light. If white light is used in Young's experiment, the result is a white central band of constructive interference corresponding to equal path length from the two slits, surrounded by a symmetrical pattern of colored fringes of diminishing intensity. In addition to continuous electromagnetic radiation, Young's experiment has been performed with individual photons, with electrons, and with buckyball molecules large enough to be seen under an electron microscope.
Lloyd's mirror generates interference fringes by combining direct light from a source (blue lines) and light from the source's reflected image (red lines) from a mirror held at grazing incidence. The result is an asymmetrical pattern of fringes. The band of equal path length, nearest the mirror, is dark rather than bright. In 1834, Humphrey Lloyd interpreted this effect as proof that the phase of a front-surface reflected beam is inverted.
Amplitude-splitting interferometers
An amplitude splitting interferometer uses a partial reflector to divide the amplitude of the incident wave into separate beams which are separated and recombined.
The Fizeau interferometer is shown as it might be set up to test an optical flat. A precisely figured reference flat is placed on top of the flat being tested, separated by narrow spacers. The reference flat is slightly beveled (only a fraction of a degree of beveling is necessary) to prevent the rear surface of the flat from producing interference fringes. Separating the test and reference flats allows the two flats to be tilted with respect to each other. By adjusting the tilt, which adds a controlled phase gradient to the fringe pattern, one can control the spacing and direction of the fringes, so that one may obtain an easily interpreted series of nearly parallel fringes rather than a complex swirl of contour lines. Separating the plates, however, necessitates that the illuminating light be collimated. Fig 6 shows a collimated beam of monochromatic light illuminating the two flats and a beam splitter allowing the fringes to be viewed on-axis.
The Mach–Zehnder interferometer is a more versatile instrument than the Michelson interferometer. Each of the well separated light paths is traversed only once, and the fringes can be adjusted so that they are localized in any desired plane. Typically, the fringes would be adjusted to lie in the same plane as the test object, so that fringes and test object can be photographed together. If it is decided to produce fringes in white light, then, since white light has a limited coherence length, on the order of micrometers, great care must be taken to equalize the optical paths or no fringes will be visible. As illustrated in Fig. 6, a compensating cell would be placed in the path of the reference beam to match the test cell. Note also the precise orientation of the beam splitters. The reflecting surfaces of the beam splitters would be oriented so that the test and reference beams pass through an equal amount of glass. In this orientation, the test and reference beams each experience two front-surface reflections, resulting in the same number of phase inversions. The result is that light traveling an equal optical path length in the test and reference beams produces a white light fringe of constructive interference.
The heart of the Fabry–Pérot interferometer is a pair of partially silvered glass optical flats spaced several millimeters to centimeters apart with the silvered surfaces facing each other. (Alternatively, a Fabry–Pérot etalon uses a transparent plate with two parallel reflecting surfaces.) As with the Fizeau interferometer, the flats are slightly beveled. In a typical system, illumination is provided by a diffuse source set at the focal plane of a collimating lens. A focusing lens produces what would be an inverted image of the source if the paired flats were not present, i.e., in the absence of the paired flats, all light emitted from point A passing through the optical system would be focused at point A'. In Fig. 6, only one ray emitted from point A on the source is traced. As the ray passes through the paired flats, it is multiply reflected to produce multiple transmitted rays which are collected by the focusing lens and brought to point A' on the screen. The complete interference pattern takes the appearance of a set of concentric rings. The sharpness of the rings depends on the reflectivity of the flats. If the reflectivity is high, resulting in a high Q factor (i.e., high finesse), monochromatic light produces a set of narrow bright rings against a dark background. In Fig. 6, the low-finesse image corresponds to a reflectivity of 0.04 (i.e., unsilvered surfaces) versus a reflectivity of 0.95 for the high-finesse image.
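The sharpness of the rings can be quantified by the coefficient finesse of an ideal lossless cavity, F = π√R/(1 − R); the short calculation below compares the two reflectivities quoted above:

```python
import math

# Finesse of an ideal lossless Fabry-Perot cavity as a function of mirror reflectivity R.
def finesse(R):
    return math.pi * math.sqrt(R) / (1 - R)

for R in (0.04, 0.95):
    print(f"mirror reflectivity R = {R:.2f}: finesse ≈ {finesse(R):.1f}")
# R = 0.04 -> finesse ≈ 0.7  (broad, washed-out rings)
# R = 0.95 -> finesse ≈ 61   (narrow, sharp rings)
```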
Fig. 6 illustrates the Fizeau, Mach–Zehnder, and Fabry–Pérot interferometers. Other examples of amplitude splitting interferometer include the Michelson, Twyman–Green, Laser Unequal Path, and Linnik interferometer.
Michelson-Morley
Michelson and Morley (1887) and other early experimentalists using interferometric techniques in an attempt to measure the properties of the luminiferous aether, used monochromatic light only for initially setting up their equipment, always switching to white light for the actual measurements. The reason is that measurements were recorded visually. Monochromatic light would result in a uniform fringe pattern. Lacking modern means of environmental temperature control, experimentalists struggled with continual fringe drift even though the interferometer might be set up in a basement. Since the fringes would occasionally disappear due to vibrations by passing horse traffic, distant thunderstorms and the like, it would be easy for an observer to "get lost" when the fringes returned to visibility. The advantages of white light, which produced a distinctive colored fringe pattern, far outweighed the difficulties of aligning the apparatus due to its low coherence length. This was an early example of the use of white light to resolve the "2 pi ambiguity".
Applications
Physics and astronomy
In physics, one of the most important experiments of the late 19th century was the famous "failed experiment" of Michelson and Morley which provided evidence for special relativity. Recent repetitions of the Michelson–Morley experiment perform heterodyne measurements of beat frequencies of crossed cryogenic optical resonators. Fig 7 illustrates a resonator experiment performed by Müller et al. in 2003. Two optical resonators constructed from crystalline sapphire, controlling the frequencies of two lasers, were set at right angles within a helium cryostat. A frequency comparator measured the beat frequency of the combined outputs of the two resonators. The precision by which anisotropy of the speed of light can be excluded in resonator experiments is at the 10⁻¹⁷ level.
Michelson interferometers are used in tunable narrow band optical filters and as the core hardware component of Fourier transform spectrometers.
When used as a tunable narrow band filter, Michelson interferometers exhibit a number of advantages and disadvantages when compared with competing technologies such as Fabry–Pérot interferometers or Lyot filters. Michelson interferometers have the largest field of view for a specified wavelength, and are relatively simple in operation, since tuning is via mechanical rotation of waveplates rather than via high voltage control of piezoelectric crystals or lithium niobate optical modulators as used in a Fabry–Pérot system. Compared with Lyot filters, which use birefringent elements, Michelson interferometers have a relatively low temperature sensitivity. On the negative side, Michelson interferometers have a relatively restricted wavelength range and require use of prefilters which restrict transmittance.
Fig. 8 illustrates the operation of a Fourier transform spectrometer, which is essentially a Michelson interferometer with one mirror movable. (A practical Fourier transform spectrometer would substitute corner cube reflectors for the flat mirrors of the conventional Michelson interferometer, but for simplicity, the illustration does not show this.) An interferogram is generated by making measurements of the signal at many discrete positions of the moving mirror. A Fourier transform converts the interferogram into an actual spectrum.
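The processing step, from interferogram to spectrum via a Fourier transform, can be sketched with synthetic data; the two-line source below is invented purely to illustrate the method:

```python
import numpy as np

# Toy Fourier-transform spectrometer: the detector signal recorded versus optical
# path difference (the interferogram) is Fourier transformed to recover the spectrum.
n = 4096
opd = np.linspace(0.0, 4e-4, n, endpoint=False)     # optical path difference, m
lines = np.array([1.45e6, 1.55e6])                  # invented source wavenumbers, 1/m

interferogram = sum(np.cos(2*np.pi*k*opd) for k in lines)
spectrum = np.abs(np.fft.rfft(interferogram))
k_axis = np.fft.rfftfreq(n, d=opd[1] - opd[0])      # wavenumber axis, 1/m

recovered = np.sort(k_axis[np.argsort(spectrum)[-2:]])
print(recovered)    # ≈ [1.45e6, 1.55e6] 1/m, i.e. wavelengths near 690 nm and 645 nm
```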
Fig. 9 shows a doppler image of the solar corona made using a tunable Fabry-Pérot interferometer to recover scans of the solar corona at a number of wavelengths near the FeXIV green line. The picture is a color-coded image of the doppler shift of the line, which may be associated with the coronal plasma velocity towards or away from the satellite camera.
Fabry–Pérot thin-film etalons are used in narrow bandpass filters capable of selecting a single spectral line for imaging; for example, the H-alpha line or the Ca-K line of the Sun or stars. Fig. 10 shows an Extreme ultraviolet Imaging Telescope (EIT) image of the Sun at 195 Ångströms (19.5 nm), corresponding to a spectral line of multiply-ionized iron atoms. EIT used multilayer coated reflective mirrors that were coated with alternate layers of a light "spacer" element (such as silicon), and a heavy "scatterer" element (such as molybdenum). Approximately 100 layers of each type were placed on each mirror, with a thickness of around 10 nm each. The layer thicknesses were tightly controlled so that at the desired wavelength, reflected photons from each layer interfered constructively.
The Laser Interferometer Gravitational-Wave Observatory (LIGO) uses two 4-km Michelson–Fabry–Pérot interferometers for the detection of gravitational waves. In this application, the Fabry–Pérot cavity is used to store photons for almost a millisecond while they bounce up and down between the mirrors. This increases the time a gravitational wave can interact with the light, which results in a better sensitivity at low frequencies. Smaller cavities, usually called mode cleaners, are used for spatial filtering and frequency stabilization of the main laser. The first observation of gravitational waves occurred on September 14, 2015.
The Mach–Zehnder interferometer's relatively large and freely accessible working space, and its flexibility in locating the fringes has made it the interferometer of choice for visualizing flow in wind tunnels, and for flow visualization studies in general. It is frequently used in the fields of aerodynamics, plasma physics and heat transfer to measure pressure, density, and temperature changes in gases.
Mach–Zehnder interferometers are also used to study one of the most counterintuitive predictions of quantum mechanics, the phenomenon known as quantum entanglement.
An astronomical interferometer achieves high-resolution observations using the technique of aperture synthesis, mixing signals from a cluster of comparatively small telescopes rather than a single very expensive monolithic telescope.
Early radio telescope interferometers used a single baseline for measurement. Later astronomical interferometers, such as the Very Large Array illustrated in Fig 11, used arrays of telescopes arranged in a pattern on the ground. A limited number of baselines will result in insufficient coverage. This was alleviated by using the rotation of the Earth to rotate the array relative to the sky. Thus, a single baseline could measure information in multiple orientations by taking repeated measurements, a technique called Earth-rotation synthesis. Baselines thousands of kilometers long were achieved using very long baseline interferometry.
Astronomical optical interferometry has had to overcome a number of technical issues not shared by radio telescope interferometry. The short wavelengths of light necessitate extreme precision and stability of construction. For example, spatial resolution of 1 milliarcsecond requires 0.5 μm stability in a 100 m baseline. Optical interferometric measurements require high sensitivity, low noise detectors that did not become available until the late 1990s. Astronomical "seeing", the turbulence that causes stars to twinkle, introduces rapid, random phase changes in the incoming light, requiring data collection rates to be faster than the rate of turbulence. Despite these technical difficulties, three major facilities are now in operation offering resolutions down to the fractional milliarcsecond range. This linked video shows a movie assembled from aperture synthesis images of the Beta Lyrae system, a binary star system approximately 960 light-years (290 parsecs) away in the constellation Lyra, as observed by the CHARA array with the MIRC instrument. The brighter component is the primary star, or the mass donor. The fainter component is the thick disk surrounding the secondary star, or the mass gainer. The two components are separated by 1 milli-arcsecond. Tidal distortions of the mass donor and the mass gainer are both clearly visible.
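The quoted scale follows from the usual estimate of interferometric angular resolution, θ ≈ λ/B; the wavelength below is an assumed visible-light value:

```python
import math

# Diffraction-limited angular resolution of an interferometer baseline, theta ≈ lambda / B,
# expressed in milliarcseconds.
wavelength = 500e-9     # m, assumed visible-light wavelength
baseline = 100.0        # m, baseline length from the example above

theta_rad = wavelength / baseline
theta_mas = math.degrees(theta_rad) * 3600 * 1000
print(f"theta ≈ {theta_mas:.2f} mas")   # ≈ 1 milliarcsecond
```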
The wave character of matter can be exploited to build interferometers. The first examples of matter interferometers were electron interferometers, later followed by neutron interferometers. Around 1990 the first atom interferometers were demonstrated, later followed by interferometers employing molecules.
Electron holography is an imaging technique that photographically records the electron interference pattern of an object, which is then reconstructed to yield a greatly magnified image of the original object. This technique was developed to enable greater resolution in electron microscopy than is possible using conventional imaging techniques. The resolution of conventional electron microscopy is not limited by electron wavelength, but by the large aberrations of electron lenses.
Neutron interferometry has been used to investigate the Aharonov–Bohm effect, to examine the effects of gravity acting on an elementary particle, and to demonstrate a strange behavior of fermions that is at the basis of the Pauli exclusion principle: Unlike macroscopic objects, when fermions are rotated by 360° about any axis, they do not return to their original state, but develop a minus sign in their wave function. In other words, a fermion needs to be rotated 720° before returning to its original state.
Atom interferometry techniques are reaching sufficient precision to allow laboratory-scale tests of general relativity.
Interferometers are used in atmospheric physics for high-precision measurements of trace gases via remote sounding of the atmosphere. There are several examples of interferometers that utilize either absorption or emission features of trace gases. A typical use would be in continual monitoring of the column concentration of trace gases such as ozone and carbon monoxide above the instrument.
Engineering and applied science
Newton (test plate) interferometry is frequently used in the optical industry for testing the quality of surfaces as they are being shaped and figured. Fig. 13 shows photos of reference flats being used to check two test flats at different stages of completion, showing the different patterns of interference fringes. The reference flats are resting with their bottom surfaces in contact with the test flats, and they are illuminated by a monochromatic light source. The light waves reflected from both surfaces interfere, resulting in a pattern of bright and dark bands. The surface in the left photo is nearly flat, indicated by a pattern of straight parallel interference fringes at equal intervals. The surface in the right photo is uneven, resulting in a pattern of curved fringes. Each pair of adjacent fringes represents a difference in surface elevation of half a wavelength of the light used, so differences in elevation can be measured by counting the fringes. The flatness of the surfaces can be measured to millionths of an inch by this method. To determine whether the surface being tested is concave or convex with respect to the reference optical flat, any of several procedures may be adopted. One can observe how the fringes are displaced when one presses gently on the top flat. If one observes the fringes in white light, the sequence of colors becomes familiar with experience and aids in interpretation. Finally, one may compare the appearance of the fringes as one moves one's head from a normal to an oblique viewing position. These sorts of maneuvers, while common in the optical shop, are not suitable in a formal testing environment. When the flats are ready for sale, they will typically be mounted in a Fizeau interferometer for formal testing and certification.
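As a minimal illustration of the fringe-counting relation described above, the following sketch converts a counted number of fringes into a height difference; the wavelength and fringe count are assumed values, not taken from the figure:

```python
# Each pair of adjacent fringes corresponds to lambda/2 of height difference
# between the test surface and the reference flat.
wavelength_nm = 546.1      # assumed: mercury green line, a common test source
fringes_counted = 4        # assumed number of fringes across the feature

height_difference_nm = fringes_counted * wavelength_nm / 2
print(f"height difference: {height_difference_nm:.0f} nm "
      f"(~{height_difference_nm / 25.4e6 * 1e6:.1f} millionths of an inch)")
```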
Fabry–Pérot etalons are widely used in telecommunications, lasers and spectroscopy to control and measure the wavelengths of light. Dichroic filters are multiple layer thin-film etalons. In telecommunications, wavelength-division multiplexing, the technology that enables the use of multiple wavelengths of light through a single optical fiber, depends on filtering devices that are thin-film etalons. Single-mode lasers employ etalons to suppress all optical cavity modes except the single one of interest.
The Twyman–Green interferometer, invented by Twyman and Green in 1916, is a variant of the Michelson interferometer widely used to test optical components. The basic characteristics distinguishing it from the Michelson configuration are the use of a monochromatic point light source and a collimator. Michelson (1918) criticized the Twyman–Green configuration as being unsuitable for the testing of large optical components, since the light sources available at the time had limited coherence length. Michelson pointed out that constraints on geometry forced by limited coherence length required the use of a reference mirror of equal size to the test mirror, making the Twyman–Green impractical for many purposes. Decades later, the advent of laser light sources answered Michelson's objections. (A Twyman–Green interferometer using a laser light source and unequal path length is known as a Laser Unequal Path Interferometer, or LUPI.) Fig. 14 illustrates a Twyman–Green interferometer set up to test a lens. Light from a monochromatic point source is expanded by a diverging lens (not shown), then is collimated into a parallel beam. A convex spherical mirror is positioned so that its center of curvature coincides with the focus of the lens being tested. The emergent beam is recorded by an imaging system for analysis.
Mach–Zehnder interferometers are being used in integrated optical circuits, in which light interferes between two branches of a waveguide that are externally modulated to vary their relative phase. A slight tilt of one of the beam splitters will result in a path difference and a change in the interference pattern. Mach–Zehnder interferometers are the basis of a wide variety of devices, from RF modulators to sensors to optical switches.
The latest proposed extremely large astronomical telescopes, such as the Thirty Meter Telescope and the Extremely Large Telescope, will be of segmented design. Their primary mirrors will be built from hundreds of hexagonal mirror segments. Polishing and figuring these highly aspheric and non-rotationally symmetric mirror segments presents a major challenge. Traditional means of optical testing compares a surface against a spherical reference with the aid of a null corrector. In recent years, computer-generated holograms (CGHs) have begun to supplement null correctors in test setups for complex aspheric surfaces. Fig. 15 illustrates how this is done. Unlike the figure, actual CGHs have line spacing on the order of 1 to 10 μm. When laser light is passed through the CGH, the zero-order diffracted beam experiences no wavefront modification. The wavefront of the first-order diffracted beam, however, is modified to match the desired shape of the test surface. In the illustrated Fizeau interferometer test setup, the zero-order diffracted beam is directed towards the spherical reference surface, and the first-order diffracted beam is directed towards the test surface in such a way that the two reflected beams combine to form interference fringes. The same test setup can be used for the innermost mirrors as for the outermost, with only the CGH needing to be exchanged.
Ring laser gyroscopes (RLGs) and fibre optic gyroscopes (FOGs) are interferometers used in navigation systems. They operate on the principle of the Sagnac effect. The distinction between RLGs and FOGs is that in a RLG, the entire ring is part of the laser while in a FOG, an external laser injects counter-propagating beams into an optical fiber ring, and rotation of the system then causes a relative phase shift between those beams. In a RLG, the observed phase shift is proportional to the accumulated rotation, while in a FOG, the observed phase shift is proportional to the angular velocity.
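A rough sketch of the underlying relation is the standard Sagnac phase-shift formula; the coil geometry, wavelength and rotation rate below are assumed example values, not parameters of any particular gyroscope:

```python
import math

def sagnac_phase_shift(area_m2, omega_rad_s, wavelength_m, n_turns=1):
    """Standard Sagnac phase shift between counter-propagating beams:
    delta_phi = 8 * pi * N * A * Omega / (lambda * c)."""
    c = 2.998e8
    return 8 * math.pi * n_turns * area_m2 * omega_rad_s / (wavelength_m * c)

# Assumed example: a fibre coil of 1000 turns and 0.1 m diameter sensing the
# Earth's rotation rate (~7.29e-5 rad/s) at a wavelength of 1550 nm.
area = math.pi * 0.05 ** 2
print(sagnac_phase_shift(area, 7.29e-5, 1550e-9, n_turns=1000))  # phase shift in radians
```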
In telecommunication networks, heterodyning is used to move frequencies of individual signals to different channels which may share a single physical transmission line. This is called frequency division multiplexing (FDM). For example, a coaxial cable used by a cable television system can carry 500 television channels at the same time because each one is given a different frequency, so they don't interfere with one another. Continuous wave (CW) doppler radar detectors are basically heterodyne detection devices that compare transmitted and reflected beams.
Optical heterodyne detection is used for coherent Doppler lidar measurements capable of detecting very weak light scattered in the atmosphere and monitoring wind speeds with high accuracy. It has application in optical fiber communications, in various high resolution spectroscopic techniques, and the self-heterodyne method can be used to measure the linewidth of a laser.
Optical heterodyne detection is an essential technique used in high-accuracy measurements of the frequencies of optical sources, as well as in the stabilization of their frequencies. Until a relatively few years ago, lengthy frequency chains were needed to connect the microwave frequency of a cesium or other atomic time source to optical frequencies. At each step of the chain, a frequency multiplier would be used to produce a harmonic of the frequency of that step, which would be compared by heterodyne detection with the next step (the output of a microwave source, far infrared laser, infrared laser, or visible laser). Each measurement of a single spectral line required several years of effort in the construction of a custom frequency chain. Currently, optical frequency combs have provided a much simpler method of measuring optical frequencies. If a mode-locked laser is modulated to form a train of pulses, its spectrum is seen to consist of the carrier frequency surrounded by a closely spaced comb of optical sideband frequencies with a spacing equal to the pulse repetition frequency (Fig. 16). The pulse repetition frequency is locked to that of the frequency standard, and the frequencies of the comb elements at the red end of the spectrum are doubled and heterodyned with the frequencies of the comb elements at the blue end of the spectrum, thus allowing the comb to serve as its own reference. In this manner, locking of the frequency comb output to an atomic standard can be performed in a single step. To measure an unknown frequency, the frequency comb output is dispersed into a spectrum. The unknown frequency is overlapped with the appropriate spectral segment of the comb and the frequency of the resultant heterodyne beats is measured.
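A minimal numerical sketch of the comb relations described above (the repetition rate, offset frequency, line index and beat frequency are all assumed values): each comb line sits at f_n = f_ceo + n·f_rep, the f–2f self-referencing beat returns f_ceo, and an unknown optical frequency is obtained from its heterodyne beat against the nearest comb line:

```python
# Comb model: the n-th comb line sits at f_n = f_ceo + n * f_rep.
f_rep = 250e6          # assumed pulse repetition rate, Hz
f_ceo = 35e6           # assumed carrier-envelope offset frequency, Hz

def comb_line(n):
    return f_ceo + n * f_rep

# f-2f self-referencing: doubling a line at the red end of the spectrum and
# beating it against the line with twice the index recovers f_ceo directly.
n = 1_000_000
beat = 2 * comb_line(n) - comb_line(2 * n)
print(beat == f_ceo)   # True

# Measuring an unknown optical frequency from its beat with the nearest line
# (the sign and the line index must be determined separately in practice):
f_beat = 12.3e6        # measured heterodyne beat, Hz (assumed)
f_unknown = comb_line(n) + f_beat
```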
One of the most common industrial applications of optical interferometry is as a versatile measurement tool for the high precision examination of surface topography. Popular interferometric measurement techniques include Phase Shifting Interferometry (PSI) and Vertical Scanning Interferometry (VSI), also known as scanning white light interferometry (SWLI) or by the ISO term coherence scanning interferometry (CSI). CSI exploits coherence to extend the range of capabilities for interference microscopy. These techniques are widely used in micro-electronic and micro-optic fabrication. PSI uses monochromatic light and provides very precise measurements; however, it is only usable for surfaces that are very smooth. CSI often uses white light and high numerical apertures, and rather than looking at the phase of the fringes, as does PSI, looks for the best position of maximum fringe contrast or some other feature of the overall fringe pattern. In its simplest form, CSI provides less precise measurements than PSI but can be used on rough surfaces. Some configurations of CSI, variously known as Enhanced VSI (EVSI), high-resolution SWLI or Frequency Domain Analysis (FDA), use coherence effects in combination with interference phase to enhance precision.
Phase Shifting Interferometry addresses several issues associated with the classical analysis of static interferograms. Classically, one measures the positions of the fringe centers. As seen in Fig. 13, fringe deviations from straightness and equal spacing provide a measure of the aberration. Errors in determining the location of the fringe centers provide the inherent limit to precision of the classical analysis, and any intensity variations across the interferogram will also introduce error. There is a trade-off between precision and number of data points: closely spaced fringes provide many data points of low precision, while widely spaced fringes provide a low number of high precision data points. Since fringe center data is all that one uses in the classical analysis, all of the other information that might theoretically be obtained by detailed analysis of the intensity variations in an interferogram is thrown away. Finally, with static interferograms, additional information is needed to determine the polarity of the wavefront: In Fig. 13, one can see that the tested surface on the right deviates from flatness, but one cannot tell from this single image whether this deviation from flatness is concave or convex. Traditionally, this information would be obtained using non-automated means, such as by observing the direction that the fringes move when the reference surface is pushed.
Phase shifting interferometry overcomes these limitations by not relying on finding fringe centers, but rather by collecting intensity data from every point of the CCD image sensor. As seen in Fig. 17, multiple interferograms (at least three) are analyzed with the reference optical surface shifted by a precise fraction of a wavelength between each exposure using a piezoelectric transducer (PZT). Alternatively, precise phase shifts can be introduced by modulating the laser frequency. The captured images are processed by a computer to calculate the optical wavefront errors. The precision and reproducibility of PSI is far greater than possible in static interferogram analysis, with measurement repeatabilities of a hundredth of a wavelength being routine. Phase shifting technology has been adapted to a variety of interferometer types such as Twyman–Green, Mach–Zehnder, laser Fizeau, and even common path configurations such as point diffraction and lateral shearing interferometers. More generally, phase shifting techniques can be adapted to almost any system that uses fringes for measurement, such as holographic and speckle interferometry.
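As an illustration, the widely used four-step algorithm is one simple way to recover the wrapped phase from phase-shifted frames; the sketch below uses synthetic data and an assumed 632.8 nm double-pass geometry, and is not the specific algorithm of any particular instrument:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Standard four-step phase-shifting algorithm: frames recorded with
    reference phase shifts of 0, pi/2, pi and 3*pi/2.  Returns the wrapped
    wavefront phase at each pixel."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic demonstration: a tilted wavefront sampled by four shifted frames.
x = np.linspace(0, 4 * np.pi, 256)
true_phase = np.tile(x, (256, 1))
frames = [1 + 0.8 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

wrapped = four_step_phase(*frames)            # wrapped into (-pi, pi]
height_um = wrapped * 0.6328 / (4 * np.pi)    # assuming 632.8 nm, double pass
```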
In coherence scanning interferometry, interference is only achieved when the path length delays of the interferometer are matched within the coherence time of the light source. CSI monitors the fringe contrast rather than the phase of the fringes. Fig. 17 illustrates a CSI microscope using a Mirau interferometer in the objective; other forms of interferometer used with white light include the Michelson interferometer (for low magnification objectives, where the reference mirror in a Mirau objective would interrupt too much of the aperture) and the Linnik interferometer (for high magnification objectives with limited working distance). The sample (or alternatively, the objective) is moved vertically over the full height range of the sample, and the position of maximum fringe contrast is found for each pixel. The chief benefit of coherence scanning interferometry is that systems can be designed that do not suffer from the 2π ambiguity of coherent interferometry, and as seen in Fig. 18, which scans a 180 μm × 140 μm × 10 μm volume, it is well suited to profiling steps and rough surfaces. The axial resolution of the system is determined in part by the coherence length of the light source. Industrial applications include in-process surface metrology, roughness measurement, 3D surface metrology in hard-to-reach spaces and in hostile environments, profilometry of surfaces with high aspect ratio features (grooves, channels, holes), and film thickness measurement (semi-conductor and optical industries, etc.).
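A minimal sketch of the per-pixel "position of maximum fringe contrast" search is shown below; it uses a deliberately coarse contrast estimate and nearest-sample peak picking, whereas real CSI instruments use more refined envelope detection and interpolation:

```python
import numpy as np

def csi_height_map(stack, z_positions):
    """stack: array of shape (n_z, ny, nx) of intensities recorded while the
    sample (or objective) is scanned in z.  For each pixel, estimate the
    fringe contrast as the deviation from the mean signal along z and return
    the scan position where that contrast is largest (coarse estimate)."""
    envelope = np.abs(stack - stack.mean(axis=0, keepdims=True))
    best_index = envelope.argmax(axis=0)      # nearest z sample per pixel
    return z_positions[best_index]
```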
Fig. 19 illustrates a Twyman–Green interferometer set up for white light scanning of a macroscopic object.
Holographic interferometry is a technique which uses holography to monitor small deformations in single wavelength implementations. In multi-wavelength implementations, it is used to perform dimensional metrology of large parts and assemblies and to detect larger surface defects.
Holographic interferometry was discovered by accident as a result of mistakes committed during the making of holograms. Early lasers were relatively weak and photographic plates were insensitive, necessitating long exposures during which vibrations or minute shifts might occur in the optical system. The resultant holograms, which showed the holographic subject covered with fringes, were considered ruined.
Eventually, several independent groups of experimenters in the mid-1960s realized that the fringes encoded important information about dimensional changes occurring in the subject, and began intentionally producing holographic double exposures. The main Holographic interferometry article covers the disputes over priority of discovery that occurred during the issuance of the patent for this method.
Double- and multi-exposure holography is one of three methods used to create holographic interferograms. A first exposure records the object in an unstressed state. Subsequent exposures on the same photographic plate are made while the object is subjected to some stress. The composite image depicts the difference between the stressed and unstressed states.
Real-time holography is a second method of creating holographic interferograms. A hologram of the unstressed object is created. This hologram is illuminated with a reference beam to generate a hologram image of the object directly superimposed over the original object itself while the object is being subjected to some stress. The object waves from this hologram image will interfere with new waves coming from the object. This technique allows real-time monitoring of shape changes.
The third method, time-average holography, involves creating a hologram while the object is subjected to a periodic stress or vibration. This yields a visual image of the vibration pattern.
Interferometric synthetic aperture radar (InSAR) is a radar technique used in geodesy and remote sensing. Satellite synthetic aperture radar images of a geographic feature are taken on separate days, and changes that have taken place between radar images taken on the separate days are recorded as fringes similar to those obtained in holographic interferometry. The technique can monitor centimeter- to millimeter-scale deformation resulting from earthquakes, volcanoes and landslides, and also has uses in structural engineering, in particular for the monitoring of subsidence and structural stability. Fig 20 shows Kilauea, an active volcano in Hawaii. Data acquired using the space shuttle Endeavour's X-band Synthetic Aperture Radar on April 13, 1994 and October 4, 1994 were used to generate interferometric fringes, which were overlaid on the X-SAR image of Kilauea.
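For repeat-pass InSAR, each full interferometric fringe corresponds to half a radar wavelength of displacement along the line of sight (two-way path). A small sketch with assumed values:

```python
# Repeat-pass InSAR: each 2*pi interferometric fringe corresponds to lambda/2
# of displacement along the radar line of sight (two-way path).
wavelength_cm = 3.1      # assumed X-band wavelength (~9.6 GHz)
fringes = 3              # assumed number of fringes counted across a feature

los_displacement_cm = fringes * wavelength_cm / 2
print(f"{los_displacement_cm:.1f} cm of line-of-sight deformation")
```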
Electronic speckle pattern interferometry (ESPI), also known as TV holography, uses video detection and recording to produce an image of the object upon which is superimposed a fringe pattern which represents the displacement of the object between recordings. (see Fig. 21) The fringes are similar to those obtained in holographic interferometry.
When lasers were first invented, laser speckle was considered to be a severe drawback in using lasers to illuminate objects, particularly in holographic imaging because of the grainy image produced. It was later realized that speckle patterns could carry information about the object's surface deformations. Butters and Leendertz developed the technique of speckle pattern interferometry in 1970, and since then, speckle has been exploited in a variety of other applications. A photograph is made of the speckle pattern before deformation, and a second photograph is made of the speckle pattern after deformation. Digital subtraction of the two images results in a correlation fringe pattern, where the fringes represent lines of equal deformation. Short laser pulses in the nanosecond range can be used to capture very fast transient events. A phase problem exists: In the absence of other information, one cannot tell the difference between contour lines indicating a peak versus contour lines indicating a trough. To resolve the issue of phase ambiguity, ESPI may be combined with phase shifting methods.
A method of establishing precise geodetic baselines, invented by Yrjö Väisälä, exploited the low coherence length of white light. Initially, white light was split in two, with the reference beam "folded", bouncing back-and-forth six times between a mirror pair spaced precisely 1 m apart. Only if the test path was precisely 6 times the reference path would fringes be seen. Repeated applications of this procedure allowed precise measurement of distances up to 864 meters. Baselines thus established were used to calibrate geodetic distance measurement equipment, leading to a metrologically traceable scale for geodetic networks measured by these instruments. (This method has been superseded by GPS.)
Other uses of interferometers have been to study dispersion of materials, measurement of complex indices of refraction, and thermal properties. They are also used for three-dimensional motion mapping including mapping vibrational patterns of structures.
Biology and medicine
Optical interferometry, applied to biology and medicine, provides sensitive metrology capabilities for the measurement of biomolecules, subcellular components, cells and tissues. Many forms of label-free biosensors rely on interferometry because the direct interaction of electromagnetic fields with local molecular polarizability eliminates the need for fluorescent tags or nanoparticle markers. At a larger scale, cellular interferometry shares aspects with phase-contrast microscopy, but comprises a much larger class of phase-sensitive optical configurations that rely on optical interference among cellular constituents through refraction and diffraction. At the tissue scale, partially-coherent forward-scattered light propagation through the micro aberrations and heterogeneity of tissue structure provides opportunities to use phase-sensitive gating (optical coherence tomography) as well as phase-sensitive fluctuation spectroscopy to image subtle structural and dynamical properties.
Optical coherence tomography (OCT) is a medical imaging technique using low-coherence interferometry to provide tomographic visualization of internal tissue microstructures. As seen in Fig. 22, the core of a typical OCT system is a Michelson interferometer. One interferometer arm is focused onto the tissue sample and scans the sample in an X-Y longitudinal raster pattern. The other interferometer arm is bounced off a reference mirror. Reflected light from the tissue sample is combined with reflected light from the reference. Because of the low coherence of the light source, interferometric signal is observed only over a limited depth of sample. X-Y scanning therefore records one thin optical slice of the sample at a time. By performing multiple scans, moving the reference mirror between each scan, an entire three-dimensional image of the tissue can be reconstructed. Recent advances have striven to combine the nanometer phase retrieval of coherent interferometry with the ranging capability of low-coherence interferometry.
Phase contrast and differential interference contrast (DIC) microscopy are important tools in biology and medicine. Most animal cells and single-celled organisms have very little color, and their intracellular organelles are almost totally invisible under simple bright field illumination. These structures can be made visible by staining the specimens, but staining procedures are time-consuming and kill the cells. As seen in Figs. 24 and 25, phase contrast and DIC microscopes allow unstained, living cells to be studied. DIC also has non-biological applications, for example in the analysis of planar silicon semiconductor processing.
Angle-resolved low-coherence interferometry (a/LCI) uses scattered light to measure the sizes of subcellular objects, including cell nuclei. This allows interferometry depth measurements to be combined with density measurements. Various correlations have been found between the state of tissue health and the measurements of subcellular objects. For example, it has been found that as tissue changes from normal to cancerous, the average cell nuclei size increases.
Phase-contrast X-ray imaging (Fig. 26) refers to a variety of techniques that use phase information of a coherent x-ray beam to image soft tissues. (For an elementary discussion, see Phase-contrast x-ray imaging (introduction). For a more in-depth review, see Phase-contrast X-ray imaging.) It has become an important method for visualizing cellular and histological structures in a wide range of biological and medical studies. There are several technologies being used for x-ray phase-contrast imaging, all utilizing different principles to convert phase variations in the x-rays emerging from an object into intensity variations. These include propagation-based phase contrast, Talbot interferometry, Moiré-based far-field interferometry, refraction-enhanced imaging, and x-ray interferometry. These methods provide higher contrast compared to normal absorption-contrast x-ray imaging, making it possible to see smaller details. A disadvantage is that these methods require more sophisticated equipment, such as synchrotron or microfocus x-ray sources, x-ray optics, or high resolution x-ray detectors.
See also
Coherence
Coherence scanning interferometry
Fine Guidance Sensor (HST) (HST FGS are interferometers)
Holography
Interferometric visibility
Interference lithography
List of types of interferometers
Ramsey interferometry
Seismic interferometry
Superposition principle
Very-long-baseline interferometry
Zero spacing flux
References
Optical instruments
Plasma diagnostics
Articles containing video clips | Interferometry | ["Physics", "Technology", "Engineering"] | 10,098 | ["Plasma diagnostics", "Measuring instruments", "Plasma physics"] |
166,796 | https://en.wikipedia.org/wiki/Fissile%20material | In nuclear engineering, fissile material is material that can undergo nuclear fission when struck by a neutron of low energy. A self-sustaining thermal chain reaction can only be achieved with fissile material. The predominant neutron energy in a system may be typified by either slow neutrons (i.e., a thermal system) or fast neutrons. Fissile material can be used to fuel thermal-neutron reactors, fast-neutron reactors and nuclear explosives.
Fissile vs fissionable
The term fissile is distinct from fissionable. A nuclide that can undergo nuclear fission (even with a low probability) after capturing a neutron of high or low energy is referred to as fissionable. A fissionable nuclide that can undergo fission with a high probability after capturing a low-energy thermal neutron is referred to as fissile. Fissionable materials include those (such as uranium-238) for which fission can be induced only by high-energy neutrons. As a result, fissile materials (such as uranium-235) are a subset of fissionable materials.
Uranium-235 fissions with low-energy thermal neutrons because the binding energy resulting from the absorption of a neutron is greater than the critical energy required for fission; therefore uranium-235 is fissile. By contrast, the binding energy released by uranium-238 absorbing a thermal neutron is less than the critical energy, so the neutron must possess additional energy for fission to be possible. Consequently, uranium-238 is fissionable but not fissile.
An alternative definition defines fissile nuclides as those nuclides that can be made to undergo nuclear fission (i.e., are fissionable) and also produce neutrons from such fission that can sustain a nuclear chain reaction in the correct setting. Under this definition, the only nuclides that are fissionable but not fissile are those nuclides that can be made to undergo nuclear fission but produce insufficient neutrons, in either energy or number, to sustain a nuclear chain reaction. As such, while all fissile isotopes are fissionable, not all fissionable isotopes are fissile. In the arms control context, particularly in proposals for a Fissile Material Cutoff Treaty, the term fissile is often used to describe materials that can be used in the fission primary of a nuclear weapon. These are materials that sustain an explosive fast neutron nuclear fission chain reaction.
Under all definitions above, uranium-238 is fissionable but not fissile. Neutrons produced by fission of uranium-238 have lower energies than the original neutron (they behave as in inelastic scattering), usually below 1 MeV (i.e., a speed of about 14,000 km/s), the fission threshold to cause subsequent fission of uranium-238, so fission of uranium-238 does not sustain a nuclear chain reaction.
Fast fission of uranium-238 in the secondary stage of a thermonuclear weapon, due to the production of high-energy neutrons from nuclear fusion, contributes greatly to the yield and to the fallout of such weapons. Fast fission of uranium-238 tampers has also been evident in pure fission weapons. The fast fission of uranium-238 also makes a significant contribution to the power output of some fast-neutron reactors.
Fissile nuclides
In general, most actinide isotopes with an odd neutron number are fissile. Most nuclear fuels have an odd atomic mass number (A = the total number of nucleons), and an even atomic number Z. This implies an odd number of neutrons. Isotopes with an odd number of neutrons gain an extra 1 to 2 MeV of energy from absorbing an extra neutron, from the pairing effect which favors even numbers of both neutrons and protons. This energy is enough to supply the needed extra energy for fission by slower neutrons, which is important for making fissionable isotopes also fissile.
More generally, nuclides with an even number of protons and an even number of neutrons, and located near a well-known curve in nuclear physics of atomic number vs. atomic mass number are more stable than others; hence, they are less likely to undergo fission. They are more likely to "ignore" the neutron and let it go on its way, or else to absorb the neutron but without gaining enough energy from the process to deform the nucleus enough for it to fission. These "even-even" isotopes are also less likely to undergo spontaneous fission, and they also have relatively much longer partial half-lives for alpha or beta decay. Examples of these isotopes are uranium-238 and thorium-232. On the other hand, other than the lightest nuclides, nuclides with an odd number of protons and an odd number of neutrons (odd Z, odd N) are usually short-lived (a notable exception is neptunium-236 with a half-life of 154,000 years) because they readily decay by beta-particle emission to their isobars with an even number of protons and an even number of neutrons (even Z, even N) becoming much more stable. The physical basis for this phenomenon also comes from the pairing effect in nuclear binding energy, but this time from both proton–proton and neutron–neutron pairing. The relatively short half-life of such odd-odd heavy isotopes means that they are not available in quantity and are highly radioactive.
According to the fissility rule proposed by Yigal Ronen, for a heavy element with Z between 90 and 100, an isotope is fissile if and only if 2 × Z − N = 43 ± 2 (where N = number of neutrons and Z = number of protons), with a few exceptions. This rule holds for all but fourteen nuclides – seven that satisfy the criterion but are nonfissile, and seven that are fissile but do not satisfy the criterion.
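A small sketch of the rule as reconstructed above (2 × Z − N within 43 ± 2, exceptions ignored), checked against a few well-known nuclides:

```python
def ronen_fissile_candidate(z, n):
    """Fissility rule (as reconstructed above, exceptions ignored):
    for 90 <= Z <= 100, an isotope is a fissile candidate when
    2*Z - N lies within 43 +/- 2."""
    return 90 <= z <= 100 and 41 <= 2 * z - n <= 45

for name, z, a in [("U-233", 92, 233), ("U-235", 92, 235),
                   ("U-238", 92, 238), ("Pu-239", 94, 239),
                   ("Pu-241", 94, 241)]:
    print(name, ronen_fissile_candidate(z, a - z))
# U-233, U-235, Pu-239 and Pu-241 satisfy the rule; U-238 does not.
```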
Nuclear fuel
To be a useful fuel for nuclear fission chain reactions, the material must:
Be in the region of the binding energy curve where a fission chain reaction is possible (i.e., above radium)
Have a high probability of fission on neutron capture
Release more than one neutron on average per neutron capture. (Enough of them on each fission, to compensate for non-fissions and absorptions in non-fuel material)
Have a reasonably long half-life
Be available in suitable quantities.
Fissile nuclides in nuclear fuels include:
Uranium-233, bred from thorium-232 by neutron capture with intermediate decay steps omitted.
Uranium-235, which occurs in natural uranium and enriched uranium
Plutonium-239, bred from uranium-238 by neutron capture with intermediate decay steps omitted.
Plutonium-241, bred from plutonium-240 directly by neutron capture.
Fissile nuclides do not have a 100% chance of undergoing fission on absorption of a neutron. The chance is dependent on the nuclide as well as neutron energy. For low and medium-energy neutrons, the neutron capture cross sections for fission (σF), the cross section for neutron capture with emission of a gamma ray (σγ), and the percentage of non-fissions are in the table at right.
Fertile nuclides in nuclear fuels include:
Thorium-232, which breeds uranium-233 by neutron capture with intermediate decay steps omitted.
Uranium-238, which breeds plutonium-239 by neutron capture with intermediate decay steps omitted.
Plutonium-240, which breeds plutonium-241 directly by neutron capture.
See also
Fertile material
Fission product
Special nuclear material
Notes
References
Nuclear physics
Nuclear fission
Nuclear weapon design | Fissile material | ["Physics", "Chemistry"] | 1,558 | ["Explosive chemicals", "Nuclear fission", "Fissile materials", "Nuclear physics"] |
166,890 | https://en.wikipedia.org/wiki/Langevin%20equation | In physics, a Langevin equation (named after Paul Langevin) is a stochastic differential equation describing how a system evolves when subjected to a combination of deterministic and fluctuating ("random") forces. The dependent variables in a Langevin equation typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. One application is to Brownian motion, which models the fluctuating motion of a small particle in a fluid.
Brownian motion as a prototype
The original Langevin equation describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid,
Here, is the velocity of the particle, is its damping coefficient, and is its mass. The force acting on the particle is written as a sum of a viscous force proportional to the particle's velocity (Stokes' law), and a noise term representing the effect of the collisions with the molecules of the fluid. The force has a Gaussian probability distribution with correlation function
where is the Boltzmann constant, is the temperature and is the i-th component of the vector . The δ-function form of the time correlation means that the force at a time is uncorrelated with the force at any other time. This is an approximation: the actual random force has a nonzero correlation time corresponding to the collision time of the molecules. However, the Langevin equation is used to describe the motion of a "macroscopic" particle at a much longer time scale, and in this limit the δ-correlation and the Langevin equation become virtually exact.
Another common feature of the Langevin equation is the occurrence of the damping coefficient in the correlation function of the random force, which in an equilibrium system is an expression of the Einstein relation.
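A minimal Euler–Maruyama sketch of this prototype equation is shown below; the symbols m, λ, kB and T and all parameter values are illustrative assumptions, and the check is that the stationary velocity variance approaches kB·T/m, as required by equipartition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Euler–Maruyama integration of the Brownian-motion Langevin
# equation  m dv/dt = -lambda_ * v + eta(t),  with noise correlation
# <eta(t) eta(t')> = 2 * lambda_ * kB * T * delta(t - t').
m, lambda_, kB, T = 1.0, 2.0, 1.0, 0.5          # arbitrary illustrative units
dt, n_steps, n_particles = 1e-3, 50_000, 1_000

v = np.zeros(n_particles)
for _ in range(n_steps):
    noise = rng.standard_normal(n_particles)
    v += (-lambda_ / m) * v * dt + np.sqrt(2 * lambda_ * kB * T * dt) / m * noise

print(v.var(), kB * T / m)   # both close to kB*T/m (equipartition)
```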
Mathematical aspects
A strictly -correlated fluctuating force is not a function in the usual mathematical sense and even the derivative is not defined in this limit. This problem disappears when the Langevin equation is written in integral form
Therefore, the differential form is only an abbreviation for its time integral. The general mathematical term for equations of this type is "stochastic differential equation".
Another mathematical ambiguity occurs for Langevin equations with multiplicative noise, which refers to noise terms that are multiplied by a non-constant function of the dependent variables, e.g., . If a multiplicative noise is intrinsic to the system, its definition is ambiguous, as it is equally valid to interpret it according to the Stratonovich or Itô scheme (see Itô calculus). Nevertheless, physical observables are independent of the interpretation, provided the latter is applied consistently when manipulating the equation. This is necessary because the symbolic rules of calculus differ depending on the interpretation scheme. If the noise is external to the system, the appropriate interpretation is the Stratonovich one.
Generic Langevin equation
There is a formal derivation of a generic Langevin equation from classical mechanics. This generic equation plays a central role in the theory of critical dynamics, and other areas of nonequilibrium statistical mechanics. The equation for Brownian motion above is a special case.
An essential step in the derivation is the division of the degrees of freedom into the categories slow and fast. For example, local thermodynamic equilibrium in a liquid is reached within a few collision times, but it takes much longer for densities of conserved quantities like mass and energy to relax to equilibrium. Thus, densities of conserved quantities, and in particular their long wavelength components, are slow variable candidates. This division can be expressed formally with the Zwanzig projection operator. Nevertheless, the derivation is not completely rigorous from a mathematical physics perspective because it relies on assumptions that lack rigorous proof, and instead are justified only as plausible approximations of physical systems.
Let denote the slow variables. The generic Langevin equation then reads
The fluctuating force obeys a Gaussian probability distribution with correlation function
This implies the Onsager reciprocity relation for the damping coefficients . The dependence of on is negligible in most cases. The symbol denotes the Hamiltonian of the system, where is the equilibrium probability distribution of the variables . Finally, is the projection of the Poisson bracket of the slow variables and onto the space of slow variables.
In the Brownian motion case one would have , or and . The equation of motion for is exact: there is no fluctuating force and no damping coefficient .
Examples
Thermal noise in an electrical resistor
There is a close analogy between the paradigmatic Brownian particle discussed above and Johnson noise, the electric voltage generated by thermal fluctuations in a resistor. The diagram at the right shows an electric circuit consisting of a resistance R and a capacitance C. The slow variable is the voltage U between the ends of the resistor. The Hamiltonian reads , and the Langevin equation becomes
This equation may be used to determine the correlation function
which becomes white noise (Johnson noise) when the capacitance becomes negligibly small.
Critical dynamics
The dynamics of the order parameter of a second order phase transition slows down near the critical point and can be described with a Langevin equation. The simplest case is the universality class "model A" with a non-conserved scalar order parameter, realized for instance in axial ferromagnets,
Other universality classes (the nomenclature is "model A",..., "model J") contain a diffusing order parameter, order parameters with several components, other critical variables and/or contributions from Poisson brackets.
Harmonic oscillator in a fluid
A particle in a fluid is described by a Langevin equation with a potential energy function, a damping force, and thermal fluctuations given by the fluctuation–dissipation theorem. If the potential is quadratic then the constant energy curves are ellipses, as shown in the figure. If there is dissipation but no thermal noise, a particle continually loses energy to the environment, and its time-dependent phase portrait (velocity vs position) corresponds to an inward spiral toward 0 velocity. By contrast, thermal fluctuations continually add energy to the particle and prevent it from reaching exactly 0 velocity. Rather, the initial ensemble of stochastic oscillators approaches a steady state in which the velocity and position are distributed according to the Maxwell–Boltzmann distribution. In the plot below (figure 2), the long-time velocity distribution (blue) and position distribution (orange) in a harmonic potential are plotted with the Boltzmann probabilities for velocity (green) and position (red). In particular, the late time behavior depicts thermal equilibrium.
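A sketch of such a simulation, assuming illustrative parameter values and a simple Euler–Maruyama update, verifies the Maxwell–Boltzmann variances kB·T/k for position and kB·T/m for velocity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative underdamped Langevin dynamics in a harmonic potential:
#   m dv = (-k_spring * x - gamma * v) dt + sqrt(2 * gamma * kB * T * dt) * xi
#   dx = v dt
m, k_spring, gamma, kB, T = 1.0, 4.0, 0.5, 1.0, 1.0   # illustrative values
dt, n_steps, n_particles = 1e-3, 50_000, 2_000

x = np.zeros(n_particles)
v = np.zeros(n_particles)
for _ in range(n_steps):
    xi = rng.standard_normal(n_particles)
    v += (-k_spring * x - gamma * v) / m * dt + np.sqrt(2 * gamma * kB * T * dt) / m * xi
    x += v * dt

# Boltzmann (Maxwell–Boltzmann) predictions for the stationary variances:
print(x.var(), kB * T / k_spring)   # position variance:  kB*T / k_spring
print(v.var(), kB * T / m)          # velocity variance:  kB*T / m
```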
Trajectories of free Brownian particles
Consider a free particle of mass with equation of motion described by
where is the particle velocity, is the particle mobility, and is a rapidly fluctuating force whose time-average vanishes over a characteristic timescale of particle collisions, i.e. . The general solution to the equation of motion is
where is the correlation time of the noise term. It can also be shown that the autocorrelation function of the particle velocity is given by
where we have used the property that the variables and become uncorrelated for time separations . Besides, the value of is set to be equal to such that it obeys the equipartition theorem. If the system is initially at thermal equilibrium already with , then for all , meaning that the system remains at equilibrium at all times.
The velocity of the Brownian particle can be integrated to yield its trajectory . If it is initially located at the origin with probability 1, then the result is
Hence, the average displacement asymptotes to as the system relaxes. The mean squared displacement can be determined similarly:
This expression implies that , indicating that the motion of Brownian particles at timescales much shorter than the relaxation time of the system is (approximately) time-reversal invariant. On the other hand, , which indicates an irreversible, dissipative process.
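The two limits can be checked numerically. The sketch below (illustrative parameters, Euler–Maruyama integration, friction coefficient λ so that the diffusion constant is D = kB·T/λ) compares the simulated mean squared displacement with the ballistic short-time form ⟨v²⟩t² and the diffusive long-time form 2Dt:

```python
import numpy as np

rng = np.random.default_rng(2)

# Mean squared displacement of free Brownian particles.
m, lambda_, kB, T = 1.0, 1.0, 1.0, 1.0
dt, n_steps, n_particles = 1e-3, 20_000, 5_000

x = np.zeros(n_particles)
v = rng.standard_normal(n_particles) * np.sqrt(kB * T / m)  # start in equilibrium
msd = np.empty(n_steps)
for i in range(n_steps):
    xi = rng.standard_normal(n_particles)
    v += (-lambda_ / m) * v * dt + np.sqrt(2 * lambda_ * kB * T * dt) / m * xi
    x += v * dt
    msd[i] = (x ** 2).mean()

t = dt * np.arange(1, n_steps + 1)
D = kB * T / lambda_
print(msd[10] / ((kB * T / m) * t[10] ** 2))   # ~1 in the ballistic regime
print(msd[-1] / (2 * D * t[-1]))               # approaches 1 in the diffusive regime
```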
Recovering Boltzmann statistics
If the external potential is conservative and the noise term derives from a reservoir in thermal equilibrium, then the long-time solution to the Langevin equation must reduce to the Boltzmann distribution, which is the probability distribution function for particles in thermal equilibrium. In the special case of overdamped dynamics, the inertia of the particle is negligible in comparison to the damping force, and the trajectory is described by the overdamped Langevin equation
where is the damping constant. The term is white noise, characterized by (formally, the Wiener process). One way to solve this equation is to introduce a test function and calculate its average. The average of should be time-independent for finite , leading to
Itô's lemma for the Itô drift-diffusion process says that the differential of a twice-differentiable function is given by
Applying this to the calculation of gives
This average can be written using the probability density function ;
where the second term was integrated by parts (hence the negative sign). Since this is true for arbitrary functions , it follows that
thus recovering the Boltzmann distribution
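The same conclusion can be checked numerically: the sketch below runs overdamped Langevin dynamics in an assumed double-well potential and compares the sampled histogram with the normalized Boltzmann weight exp(−U/kB·T); all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Overdamped Langevin dynamics in a double-well potential, compared with the
# Boltzmann distribution p(x) ~ exp(-U(x) / (kB*T)).  Illustrative sketch.
gamma, kB, T = 1.0, 1.0, 0.4
U = lambda x: (x ** 2 - 1.0) ** 2            # assumed double-well potential
dU = lambda x: 4.0 * x * (x ** 2 - 1.0)      # its derivative
dt, n_steps, n_walkers = 1e-3, 200_000, 500

x = rng.uniform(-1.5, 1.5, n_walkers)
samples = []
for i in range(n_steps):
    xi = rng.standard_normal(n_walkers)
    x += -dU(x) / gamma * dt + np.sqrt(2 * kB * T * dt / gamma) * xi
    if i % 100 == 0:
        samples.append(x.copy())
samples = np.concatenate(samples)

# Compare the sampled histogram with the normalized Boltzmann weight.
hist, edges = np.histogram(samples, bins=60, range=(-2, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
boltz = np.exp(-U(centers) / (kB * T))
boltz /= boltz.sum() * (edges[1] - edges[0])
print(np.max(np.abs(hist - boltz)))   # small once the sampler has equilibrated
```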
Equivalent techniques
In some situations, one is primarily interested in the noise-averaged behavior of the Langevin equation, as opposed to the solution for particular realizations of the noise. This section describes techniques for obtaining this averaged behavior that are distinct from—but also equivalent to—the stochastic calculus inherent in the Langevin equation.
Fokker–Planck equation
A Fokker–Planck equation is a deterministic equation for the time dependent probability density of stochastic variables . The Fokker–Planck equation corresponding to the generic Langevin equation described in this article is the following:
The equilibrium distribution is a stationary solution.
Klein–Kramers equation
The Fokker–Planck equation for an underdamped Brownian particle is called the Klein–Kramers equation. If the Langevin equations are written as
where is the momentum, then the corresponding Fokker–Planck equation is
Here and are the gradient operator with respect to and , and is the Laplacian with respect to .
In -dimensional free space, corresponding to on , this equation can be solved using Fourier transforms. If the particle is initialized at with position and momentum , corresponding to initial condition , then the solution is
where
In three spatial dimensions, the mean squared displacement is
Path integral
A path integral equivalent to a Langevin equation can be obtained from the corresponding Fokker–Planck equation or by transforming the Gaussian probability distribution of the fluctuating force to a probability distribution of the slow variables, schematically .
The functional determinant and associated mathematical subtleties drop out if the Langevin equation is discretized in the natural (causal) way, where depends on but not on . It turns out to be convenient to introduce auxiliary response variables . The path integral equivalent to the generic Langevin equation then reads
where is a normalization factor and
The path integral formulation allows for the use of tools from quantum field theory, such as perturbation and renormalization group methods. This formulation is typically referred to as either the Martin-Siggia-Rose formalism or the Janssen-De Dominicis formalism after its developers. The mathematical formalism for this representation can be developed on abstract Wiener space.
See also
Grote–Hynes theory
Langevin dynamics
Stochastic thermodynamics
References
Further reading
W. T. Coffey (Trinity College, Dublin, Ireland) and Yu P. Kalmykov (Université de Perpignan, France), The Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry and Electrical Engineering (Third edition), World Scientific Series in Contemporary Chemical Physics – Vol. 27.
Reif, F. Fundamentals of Statistical and Thermal Physics, McGraw Hill New York, 1965. See section 15.5 Langevin Equation
R. Friedrich, J. Peinke and Ch. Renner. How to Quantify Deterministic and Random Influences on the Statistics of the Foreign Exchange Market, Phys. Rev. Lett. 84, 5224–5227 (2000)
L.C.G. Rogers and D. Williams. Diffusions, Markov Processes, and Martingales, Cambridge Mathematical Library, Cambridge University Press, Cambridge, reprint of 2nd (1994) edition, 2000.
Statistical mechanics
Stochastic differential equations | Langevin equation | ["Physics"] | 2,558 | ["Statistical mechanics"] |
166,896 | https://en.wikipedia.org/wiki/Fokker%E2%80%93Planck%20equation | In statistical mechanics and information theory, the Fokker–Planck equation is a partial differential equation that describes the time evolution of the probability density function of the velocity of a particle under the influence of drag forces and random forces, as in Brownian motion. The equation can be generalized to other observables as well. The Fokker–Planck equation has multiple applications in information theory, graph theory, data science, finance, economics etc.
It is named after Adriaan Fokker and Max Planck, who described it in 1914 and 1917. It is also known as the Kolmogorov forward equation, after Andrey Kolmogorov, who independently discovered it in 1931. When applied to particle position distributions, it is better known as the Smoluchowski equation (after Marian Smoluchowski), and in this context it is equivalent to the convection–diffusion equation. When applied to particle position and momentum distributions, it is known as the Klein–Kramers equation. The case with zero diffusion is the continuity equation. The Fokker–Planck equation is obtained from the master equation through Kramers–Moyal expansion.
The first consistent microscopic derivation of the Fokker–Planck equation in the single scheme of classical and quantum mechanics was performed by Nikolay Bogoliubov and Nikolay Krylov.
One dimension
In one spatial dimension x, for an Itô process driven by the standard Wiener process and described by the stochastic differential equation (SDE)
with drift and diffusion coefficient , the Fokker–Planck equation for the probability density of the random variable is
In the following, use .
Define the infinitesimal generator (the following can be found in Ref.):
The transition probability , the probability of going from to , is introduced here; the expectation can be written as
Now we replace in the definition of , multiply by and integrate over . The limit is taken on
Note now that
which is the Chapman–Kolmogorov theorem. Changing the dummy variable to , one gets
which is a time derivative. Finally we arrive to
From here, the Kolmogorov backward equation can be deduced. If we instead use the adjoint operator of , , defined such that
then we arrive to the Kolmogorov forward equation, or Fokker–Planck equation, which, simplifying the notation , in its differential form reads
Remains the issue of defining explicitly . This can be done taking the expectation from the integral form of the Itô's lemma:
The part that depends on vanished because of the martingale property.
Then, for a particle subject to an Itô equation, using
it can be easily calculated, using integration by parts, that
which bring us to the Fokker–Planck equation:
While the Fokker–Planck equation is used with problems where the initial distribution is known, if the problem is to know the distribution at previous times, the Feynman–Kac formula can be used, which is a consequence of the Kolmogorov backward equation.
The stochastic process defined above in the Itô sense can be rewritten within the Stratonovich convention as a Stratonovich SDE:
It includes an added noise-induced drift term due to diffusion gradient effects if the noise is state-dependent. This convention is more often used in physical applications. Indeed, it is well known that any solution to the Stratonovich SDE is also a solution to a corresponding Itô SDE with a suitably modified drift.
The zero-drift equation with constant diffusion can be considered as a model of classical Brownian motion:
This model has discrete spectrum of solutions if the condition of fixed boundaries is added for :
It has been shown that in this case an analytical spectrum of solutions allows deriving a local uncertainty relation for the coordinate-velocity phase volume:
Here is a minimal value of a corresponding diffusion spectrum , while and represent the uncertainty of coordinate–velocity definition.
Higher dimensions
More generally, if
where and are -dimensional vectors, is an matrix and is an M-dimensional standard Wiener process, the probability density for satisfies the Fokker–Planck equation with drift vector and diffusion tensor , i.e.
If instead of an Itô SDE, a Stratonovich SDE is considered,
the Fokker–Planck equation will read:
Generalization
In general, the Fokker–Planck equations are a special case to the general Kolmogorov forward equation
where the linear operator is the Hermitian adjoint to the infinitesimal generator for the Markov process.
Examples
Wiener process
A standard scalar Wiener process is generated by the stochastic differential equation
Here the drift term is zero and the diffusion coefficient is 1/2. Thus the corresponding Fokker–Planck equation is
which is the simplest form of a diffusion equation. If the initial condition is , the solution is
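A quick numerical check of this solution (a Gaussian of variance t spreading from the origin), using simulated Wiener paths with an assumed step size and path count:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate standard Wiener paths started at the origin and compare the
# empirical variance of X_t with t, as predicted by the Fokker–Planck
# (diffusion) equation above.  Illustrative sketch.
dt, n_steps, n_paths = 1e-3, 2_000, 50_000

x = np.zeros(n_paths)
for _ in range(n_steps):
    x += np.sqrt(dt) * rng.standard_normal(n_paths)

t = n_steps * dt
print(x.var(), t)     # both close to t = 2.0
print(x.mean())       # close to 0 (no drift)
```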
Boltzmann distribution at the thermodynamic equilibrium
The overdamped Langevin equation gives . The Boltzmann distribution is an equilibrium distribution, and assuming grows sufficiently rapidly (that is, the potential well is deep enough to confine the particle), the Boltzmann distribution is the unique equilibrium.
Ornstein–Uhlenbeck process
The Ornstein–Uhlenbeck process is a process defined as
with . Physically, this equation can be motivated as follows: a particle of mass with velocity moving in a medium, e.g., a fluid, will experience a friction force which resists motion whose magnitude can be approximated as being proportional to particle's velocity with . Other particles in the medium will randomly kick the particle as they collide with it and this effect can be approximated by a white noise term; . Newton's second law is written as
Taking for simplicity and changing the notation as leads to the familiar form .
The corresponding Fokker–Planck equation is
The stationary solution () is
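For the generic one-dimensional form dX = −θX dt + σ dW (symbols chosen here only for illustration), the mean decays as x0·e^(−θt), the variance approaches the stationary value σ²/(2θ), and the stationary solution of the Fokker–Planck equation is the corresponding Gaussian. A short simulation check with assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ornstein–Uhlenbeck process  dX = -theta * X dt + sigma dW, compared with its
# exact moments:
#   E[X_t]   = x0 * exp(-theta * t)
#   Var[X_t] = sigma**2 / (2*theta) * (1 - exp(-2*theta*t))
theta, sigma, x0 = 1.5, 0.8, 2.0
dt, n_steps, n_paths = 1e-3, 3_000, 50_000

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

t = n_steps * dt
print(x.mean(), x0 * np.exp(-theta * t))
print(x.var(), sigma ** 2 / (2 * theta) * (1 - np.exp(-2 * theta * t)))
```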
Plasma physics
In plasma physics, the distribution function for a particle species , , takes the place of the probability density function. The corresponding Boltzmann equation is given by
where the third term includes the particle acceleration due to the Lorentz force and the Fokker–Planck term at the right-hand side represents the effects of particle collisions. The quantities and are the average change in velocity a particle of type experiences due to collisions with all other particle species in unit time. Expressions for these quantities are given elsewhere. If collisions are ignored, the Boltzmann equation reduces to the Vlasov equation.
Smoluchowski diffusion equation
Consider an overdamped Brownian particle under an external force, where the inertial term is negligible (the meaning of "overdamped"). Thus, it is just . The Fokker–Planck equation for this particle is the Smoluchowski diffusion equation:
Where is the diffusion constant and . The importance of this equation is it allows for both the inclusion of the effect of temperature on the system of particles and a spatially dependent diffusion constant.
Starting with the Langevin equation of a Brownian particle in an external field , where is the friction term, is a fluctuating force on the particle, and is the amplitude of the fluctuation.
At equilibrium the frictional force is much greater than the inertial force, . Therefore, the Langevin equation becomes,
Which generates the following Fokker–Planck equation,
Rearranging the Fokker–Planck equation,
Where . Note that the diffusion coefficient may not necessarily be spatially independent if or are spatially dependent.
Next, the total number of particles in any particular volume is given by,
Therefore, the flux of particles can be determined by taking the time derivative of the number of particles in a given volume, plugging in the Fokker–Planck equation, and then applying Gauss's Theorem.
In equilibrium, it is assumed that the flux goes to zero. Therefore, Boltzmann statistics can be applied for the probability of a particle's location at equilibrium, where is a conservative force and the probability of a particle being in a state is given as .
This relation is a realization of the fluctuation–dissipation theorem. Now applying to and using the Fluctuation-dissipation theorem,
Rearranging,
Therefore, the Fokker–Planck equation becomes the Smoluchowski equation,
for an arbitrary force .
Computational considerations
Brownian motion follows the Langevin equation, which can be solved for many different stochastic forcings with results being averaged (canonical ensemble in molecular dynamics). However, instead of this computationally intensive approach, one can use the Fokker–Planck equation and consider the probability of the particle having a velocity in the interval when it starts its motion with at time 0.
1-D linear potential example
Brownian dynamics in one dimension is simple.
Theory
Starting with a linear potential of the form the corresponding Smoluchowski equation becomes,
Where the diffusion constant, , is constant over space and time. The boundary conditions are such that the probability vanishes at with an initial condition of the ensemble of particles starting in the same place, .
Defining and and applying the coordinate transformation,
With the Smoluchowski equation becomes,
Which is the free diffusion equation with solution,
And after transforming back to the original coordinates,
Simulation
The simulation on the right was completed using a Brownian dynamics simulation. Starting with a Langevin equation for the system,
where is the friction term, is a fluctuating force on the particle, and is the amplitude of the fluctuation. At equilibrium the frictional force is much greater than the inertial force, . Therefore, the Langevin equation becomes,
For the Brownian dynamics simulation the fluctuation force is assumed to be Gaussian with the amplitude being dependent on the temperature of the system . Rewriting the Langevin equation,
where is the Einstein relation. The integration of this equation was done using the Euler–Maruyama method to numerically approximate the path of this Brownian particle.
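A sketch of such a Brownian dynamics run for an assumed linear potential U(x) = c·x, using the Euler–Maruyama update: the packet of walkers should drift at speed −c/γ and spread diffusively with variance 2Dt, matching the drifting-Gaussian solution derived above:

```python
import numpy as np

rng = np.random.default_rng(6)

# Brownian dynamics in a linear potential U(x) = c*x (an assumed form), using
# the Euler–Maruyama scheme.  The Smoluchowski equation predicts a Gaussian
# packet drifting at -c/gamma and spreading as 2*D*t, with D = kB*T/gamma.
c, gamma, kB, T = 1.0, 1.0, 1.0, 1.0
D = kB * T / gamma
dt, n_steps, n_walkers = 1e-3, 5_000, 50_000

x = np.zeros(n_walkers)                 # all walkers start at the origin
for _ in range(n_steps):
    xi = rng.standard_normal(n_walkers)
    x += -(c / gamma) * dt + np.sqrt(2 * D * dt) * xi

t = n_steps * dt
print(x.mean(), -c * t / gamma)         # drift of the packet
print(x.var(), 2 * D * t)               # diffusive spreading
```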
Solution
Being a partial differential equation, the Fokker–Planck equation can be solved analytically only in special cases. A formal analogy of the Fokker–Planck equation with the Schrödinger equation allows the use of advanced operator techniques known from quantum mechanics for its solution in a number of cases. Furthermore, in the case of overdamped dynamics when the Fokker–Planck equation contains second partial derivatives with respect to all spatial variables, the equation can be written in the form of a master equation that can easily be solved numerically.
In many applications, one is only interested in the steady-state probability distribution , which can be found from .
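As an illustration of this, and of the master-equation approach mentioned above, the following sketch discretises a one-dimensional overdamped Fokker–Planck operator as a nearest-neighbour master equation and extracts its steady state, which should match the Boltzmann form. The double-well potential and all parameter values are assumptions made for the example.

```python
# Hedged sketch: steady state of a 1-D Smoluchowski equation
# dp/dt = d/dx [ U'(x) p / gamma + D dp/dx ]  obtained by writing the
# space-discretised equation as a master equation and solving L p = 0.
# The double-well potential and all parameter values are assumptions.
import numpy as np

gamma, kBT = 1.0, 0.4
D = kBT / gamma                      # Einstein relation
U = lambda x: (x**2 - 1.0) ** 2      # assumed double-well potential
x = np.linspace(-2.5, 2.5, 501)
dx = x[1] - x[0]

# Nearest-neighbour hopping rates chosen to satisfy detailed balance
# (a standard discretisation; other consistent choices are possible).
w_right = D / dx**2 * np.exp(-(U(x[1:]) - U(x[:-1])) / (2 * kBT))   # i -> i+1
w_left  = D / dx**2 * np.exp(-(U(x[:-1]) - U(x[1:])) / (2 * kBT))   # i+1 -> i

n = len(x)
L = np.zeros((n, n))
for i in range(n - 1):
    L[i + 1, i] += w_right[i]; L[i, i] -= w_right[i]
    L[i, i + 1] += w_left[i];  L[i + 1, i + 1] -= w_left[i]

# Steady state: the null vector of the rate matrix.
eigval, eigvec = np.linalg.eig(L)
p_num = np.real(eigvec[:, np.argmin(np.abs(eigval))])
p_num = np.abs(p_num) / (np.abs(p_num).sum() * dx)

p_boltz = np.exp(-U(x) / kBT)
p_boltz /= p_boltz.sum() * dx
# Should be near machine precision for this detailed-balance discretisation.
print("max deviation from Boltzmann form:", np.max(np.abs(p_num - p_boltz)))
```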
The computation of mean first passage times and splitting probabilities can be reduced to the solution of an ordinary differential equation which is intimately related to the Fokker–Planck equation.
Particular cases with known solution and inversion
In mathematical finance for volatility smile modeling of options via local volatility, one has the problem of deriving a diffusion coefficient consistent with a probability density obtained from market option quotes. The problem is therefore an inversion of the Fokker–Planck equation: Given the density f(x,t) of the option underlying X deduced from the option market, one aims at finding the local volatility consistent with f. This is an inverse problem that has been solved in general by Dupire (1994, 1997) with a non-parametric solution. Brigo and Mercurio (2002, 2003) propose a solution in parametric form via a particular local volatility consistent with a solution of the Fokker–Planck equation given by a mixture model. More information is available also in Fengler (2008), Gatheral (2008), and Musiela and Rutkowski (2008).
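As a hedged illustration of this kind of inversion, rather than Dupire's or Brigo and Mercurio's actual procedures, the snippet below applies the Dupire-style formula sigma_loc^2(K,T) = 2 (dC/dT) / (K^2 d2C/dK2), assuming zero interest rates and dividends, to synthetic call prices generated from a flat Black–Scholes volatility, which the inversion should recover.

```python
# Hedged sketch (not Dupire's published implementation): finite-difference
# Dupire-style inversion  sigma_loc^2(K,T) = 2 (dC/dT) / (K^2 d2C/dK2),
# assuming zero rates and dividends. Synthetic call prices from a flat
# Black-Scholes volatility should be inverted back to that value.
import math

S0, sigma_true = 100.0, 0.25            # assumed spot price and flat volatility

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(K, T, sigma=sigma_true):
    # Black-Scholes call price with zero interest rate and dividend yield
    d1 = (math.log(S0 / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * norm_cdf(d1) - K * norm_cdf(d2)

K, T, dK, dT = 95.0, 1.0, 0.5, 1e-3
dC_dT   = (bs_call(K, T + dT) - bs_call(K, T - dT)) / (2 * dT)
d2C_dK2 = (bs_call(K + dK, T) - 2 * bs_call(K, T) + bs_call(K - dK, T)) / dK**2
sigma_loc = math.sqrt(2 * dC_dT / (K**2 * d2C_dK2))
print(f"recovered local volatility: {sigma_loc:.4f} (input: {sigma_true})")
```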
Fokker–Planck equation and path integral
Every Fokker–Planck equation is equivalent to a path integral. The path integral formulation is an excellent starting point for the application of field theory methods. This is used, for instance, in critical dynamics.
A derivation of the path integral is possible in a similar way as in quantum mechanics. The derivation for a Fokker–Planck equation with one variable is as follows. Start by inserting a delta function and then integrating by parts:
The -derivatives here only act on the -function, not on . Integrate over a time interval ,
Insert the Fourier integral
for the -function,
This equation expresses as functional of . Iterating times and performing the limit gives a path integral with action
The variables conjugate to are called "response variables".
Although formally equivalent, different problems may be solved more easily in the Fokker–Planck equation or the path integral formulation. The equilibrium distribution for instance may be obtained more directly from the Fokker–Planck equation.
See also
Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy of equations
Boltzmann equation
Convection–diffusion equation
Klein–Kramers equation
Kolmogorov backward equation
Kolmogorov equation
Langevin equation
Master equation
Mean-field game theory
Ornstein–Uhlenbeck process
Vlasov equation
Notes and references
Further reading
Stochastic processes
Equations
Parabolic partial differential equations
Max Planck
Stochastic calculus
Mathematical finance
Transport phenomena | Fokker–Planck equation | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 2,671 | [
"Transport phenomena",
"Physical phenomena",
"Applied mathematics",
"Chemical engineering",
"Mathematical objects",
"Equations",
"Mathematical finance"
] |
167,632 | https://en.wikipedia.org/wiki/Chaperone%20%28protein%29 | In molecular biology, molecular chaperones are proteins that assist the conformational folding or unfolding of large proteins or macromolecular protein complexes. There are a number of classes of molecular chaperones, all of which function to assist large proteins in proper protein folding during or after synthesis, and after partial denaturation. Chaperones are also involved in the translocation of proteins for proteolysis.
The first molecular chaperones discovered were a type of assembly chaperones which assist in the assembly of nucleosomes from folded histones and DNA. One major function of molecular chaperones is to prevent the aggregation of misfolded proteins, thus many chaperone proteins are classified as heat shock proteins, as the tendency for protein aggregation is increased by heat stress.
The majority of molecular chaperones do not convey any steric information for protein folding, and instead assist in protein folding by binding to and stabilizing folding intermediates until the polypeptide chain is fully translated. The specific mode of function of chaperones differs based on their target proteins and location. Various approaches have been applied to study the structure, dynamics and functioning of chaperones. Bulk biochemical measurements have informed us on the protein folding efficiency, and prevention of aggregation when chaperones are present during protein folding. Recent advances in single-molecule analysis have brought insights into structural heterogeneity of chaperones, folding intermediates and affinity of chaperones for unstructured and structured protein chains.
Functions of molecular chaperones
Many chaperones are heat shock proteins, that is, proteins expressed in response to elevated temperatures or other cellular stresses. Heat shock protein chaperones are classified based on their observed molecular weights into Hsp60, Hsp70, Hsp90, Hsp104, and small Hsps. The Hsp60 family of protein chaperones are termed chaperonins, and are characterized by a stacked double-ring structure and are found in prokaryotes, in the cytosol of eukaryotes, and in mitochondria.
Some chaperone systems work as foldases: they support the folding of proteins in an ATP-dependent manner (for example, the GroEL/GroES or the DnaK/DnaJ/GrpE system). Although most newly synthesized proteins can fold in the absence of chaperones, a minority strictly requires them. Other chaperones work as holdases: they bind folding intermediates to prevent their aggregation, for example DnaJ or Hsp33.
Chaperones can also work as disaggregases, which interact with aberrant protein assemblies and revert them to monomers. Some chaperones can assist in protein degradation, leading proteins to protease systems, such as the ubiquitin-proteasome system in eukaryotes. Chaperone proteins participate in the folding of over half of all mammalian proteins.
Macromolecular crowding may be important in chaperone function. The crowded environment of the cytosol can accelerate the folding process, since a compact folded protein will occupy less volume than an unfolded protein chain. However, crowding can reduce the yield of correctly folded protein by increasing protein aggregation. Crowding may also increase the effectiveness of the chaperone proteins such as GroEL, which could counteract this reduction in folding efficiency. Some highly specific 'steric chaperones' convey unique structural information onto proteins, which cannot be folded spontaneously. Such proteins violate Anfinsen's dogma, requiring protein dynamics to fold correctly.
Other types of chaperones are involved in transport across membranes, for example membranes of the mitochondria and endoplasmic reticulum (ER) in eukaryotes. A bacterial translocation-specific chaperone SecB maintains newly synthesized precursor polypeptide chains in a translocation-competent (generally unfolded) state and guides them to the translocon.
New functions for chaperones continue to be discovered, such as bacterial adhesin activity, induction of aggregation towards non-amyloid aggregates, suppression of toxic protein oligomers via their clustering, and in responding to diseases linked to protein aggregation and cancer maintenance.
Human chaperone proteins
In human cell lines, chaperone proteins were found to compose ~10% of the gross proteome mass, and are ubiquitously and highly expressed across human tissues.
Chaperones are found extensively in the endoplasmic reticulum (ER), since protein synthesis often occurs in this area.
Endoplasmic reticulum
In the endoplasmic reticulum (ER) there are general, lectin- and non-classical molecular chaperones that moderate protein folding.
General chaperones: GRP78/BiP, GRP94, GRP170.
Lectin chaperones: calnexin and calreticulin
Non-classical molecular chaperones: HSP47 and ERp29
Folding chaperones:
Protein disulfide isomerase (PDI),
Peptidyl prolyl cis-trans isomerase (PPI), Prolyl isomerase
ERp57
Nomenclature and examples of chaperone families
There are many different families of chaperones; each family acts to aid protein folding in a different way. In bacteria like E. coli, many of these proteins are highly expressed under conditions of high stress, for example, when the bacterium is placed in high temperatures, thus heat shock protein chaperones are the most extensive.
A variety of nomenclatures are in use for chaperones. As heat shock proteins, the names are classically formed by "Hsp" followed by the approximate molecular mass in kilodaltons; such names are commonly used for eukaryotes such as yeast. The bacterial names have more varied forms, and refer directly to their apparent function at discovery. For example, "GroEL" originally stands for "phage growth defect, overcome by mutation in phage gene E, large subunit".
Hsp10 and Hsp60
Hsp10/60 (GroEL/GroES complex in E. coli) is the best characterized large (~ 1 MDa) chaperone complex. GroEL (Hsp60) is a double-ring 14mer with a hydrophobic patch at its opening; it is so large it can accommodate native folding of 54-kDa GFP in its lumen. GroES (Hsp10) is a single-ring heptamer that binds to GroEL in the presence of ATP or ADP. GroEL/GroES may not be able to undo previous aggregation, but it does compete in the pathway of misfolding and aggregation. It also acts in the mitochondrial matrix as a molecular chaperone.
Hsp70 and Hsp40
Hsp70 (DnaK in E. coli) is perhaps the best characterized small (~ 70 kDa) chaperone. The Hsp70 proteins are aided by Hsp40 proteins (DnaJ in E. coli), which increase the ATP consumption rate and activity of the Hsp70s. The two proteins are named "Dna" in bacteria because they were initially identified as being required for E. coli DNA replication.
It has been noted that increased expression of Hsp70 proteins in the cell results in a decreased tendency toward apoptosis. Although a precise mechanistic understanding has yet to be determined, it is known that Hsp70s have a high-affinity bound state to unfolded proteins when bound to ADP, and a low-affinity state when bound to ATP.
It is thought that many Hsp70s crowd around an unfolded substrate, stabilizing it and preventing aggregation until the unfolded molecule folds properly, at which time the Hsp70s lose affinity for the molecule and diffuse away. Hsp70 also acts as a mitochondrial and chloroplastic molecular chaperone in eukaryotes.
Hsp90
Hsp90 (HtpG in E. coli) may be the least understood chaperone. Its molecular weight is about 90 kDa, and it is necessary for viability in eukaryotes (possibly for prokaryotes as well). Heat shock protein 90 (Hsp90) is a molecular chaperone essential for activating many signaling proteins in the eukaryotic cell.
Each Hsp90 has an ATP-binding domain, a middle domain, and a dimerization domain. Originally thought to clamp onto their substrate protein (also known as a client protein) upon binding ATP, the recently published structures by Vaughan et al. and Ali et al. indicate that client proteins may bind externally to both the N-terminal and middle domains of Hsp90.
Hsp90 may also require co-chaperones such as immunophilins, Sti1, p50 (Cdc37), and Aha1, and it also cooperates with the Hsp70 chaperone system.
Hsp100
Hsp100 (Clp family in E. coli) proteins have been studied in vivo and in vitro for their ability to target and unfold tagged and misfolded proteins.
Proteins in the Hsp100/Clp family form large hexameric structures with unfoldase activity in the presence of ATP. These proteins are thought to function as chaperones by processively threading client proteins through a small 20 Å (2 nm) pore, thereby giving each client protein a second chance to fold.
Some of these Hsp100 chaperones, like ClpA and ClpX, associate with the double-ringed tetradecameric serine protease ClpP; instead of catalyzing the refolding of client proteins, these complexes are responsible for the targeted destruction of tagged and misfolded proteins.
Hsp104, the Hsp100 of Saccharomyces cerevisiae, is essential for the propagation of many yeast prions. Deletion of the HSP104 gene results in cells that are unable to propagate certain prions.
Bacteriophage
The genes of bacteriophage (phage) T4 that encode proteins with a role in determining phage T4 structure were identified using conditional lethal mutants. Most of these proteins proved to be either major or minor structural components of the completed phage particle. However among the gene products (gps) necessary for phage assembly, Snustad identified a group of gps that act catalytically rather than being incorporated themselves into the phage structure. These gps were gp26, gp31, gp38, gp51, gp28, and gp4 [gene 4 is synonymous with genes 50 and 65, and thus the gp can be designated gp4(50)(65)]. The first four of these six gene products have since been recognized as being chaperone proteins. Additionally, gp40, gp57A, gp63 and gpwac have also now been identified as chaperones.
Phage T4 morphogenesis is divided into three independent pathways: the head, the tail and the long tail fiber pathways as detailed by Yap and Rossman. With regard to head morphogenesis, chaperone gp31 interacts with the bacterial host chaperone GroEL to promote proper folding of the major head capsid protein gp23. Chaperone gp40 participates in the assembly of gp20, thus aiding in the formation of the connector complex that initiates head procapsid assembly. Gp4(50)(65), although not specifically listed as a chaperone, acts catalytically as a nuclease that appears to be essential for morphogenesis by cleaving packaged DNA to enable the joining of heads to tails.
During overall tail assembly, chaperone proteins gp26 and gp51 are necessary for baseplate hub assembly. Gp57A is required for correct folding of gp12, a structural component of the baseplate short tail fibers.
Synthesis of the long tail fibers depends on the chaperone protein gp57A that is needed for the trimerization of gp34 and gp37, the major structural proteins of the tail fibers. The chaperone protein gp38 is also required for the proper folding of gp37. Chaperone proteins gp63 and gpwac are employed in attachment of the long tail fibers to the tail baseplate.
History
The investigation of chaperones has a long history. The term "molecular chaperone" appeared first in the literature in 1978, and was invented by Ron Laskey to describe the ability of a nuclear protein called nucleoplasmin to prevent the aggregation of folded histone proteins with DNA during the assembly of nucleosomes. The term was later extended by R. John Ellis in 1987 to describe proteins that mediated the post-translational assembly of protein complexes. In 1988, it was realised that similar proteins mediated this process in both prokaryotes and eukaryotes. The details of this process were determined in 1989, when the ATP-dependent protein folding was demonstrated in vitro.
Clinical significance
There are many disorders associated with mutations in genes encoding chaperones (i.e. multisystem proteinopathy) that can affect muscle, bone and/or the central nervous system.
See also
Biological machines
Chaperome
Chaperonin
Chemical chaperones
Heat shock protein
Heat shock factor 1
Molecular chaperone therapy
Pharmacoperone
Proteasome
Protein dynamics
Notes
References
Protein biosynthesis | Chaperone (protein) | [
"Chemistry"
] | 2,804 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
167,740 | https://en.wikipedia.org/wiki/Insult | An insult is an expression, statement, or behavior that is often deliberately disrespectful, offensive, scornful, or derogatory towards an individual or a group.
Insults can be intentional or unintentional, and they often aim to belittle, offend, or humiliate the target. While intentional insults can sometimes include factual information, they are typically presented in a pejorative manner, intended to provoke a negative emotional response or to cause harm. Insults made unintentionally or playfully can also have negative effects, even when no offence was intended.
The impact and meaning of an insult vary with the speaker's intent, the way it is used, the recipient's understanding of it, and the social setting and social norms, including cultural references and meanings.
History
In ancient Rome, political speeches and debates were known for their harshness and personal attacks. Historians suggest that insults and verbal attacks were common in the political discourse of the time, a practice that reflected the highly confrontational nature of Roman political engagement.
Many religious texts and beliefs have also contributed to views on insults and the implications of insulting in anger. Buddhism teaches 'Right Speech' as part of the Noble Eightfold Path.
In Christianity, the Sermon on the Mount delivered by Jesus includes teachings on the significance of anger; in it, Jesus emphasized the importance of managing one's emotions and of refraining from judgment.
In addition to political contexts, history also reveals unusual instances of insults. The Cadaver Synod was an event in 897 AD in which Pope Stephen VI held a posthumous trial of Pope Formosus. Stephen, who became pope after Formosus, had the body dug up, dressed, and placed on a throne to stand trial.
Unintentional insults
An example of an unintentional insult may be not tasting a dessert made by a host.
Comments made carelessly can also become unintentional insults. Careless remarks about facial features, personality traits, or personal taste (e.g. in music), underestimating someone's abilities or interests, questions that imply stereotypes, jokes, or even walking away from someone are among the things that may cause offence accidentally.
Jocular exchange
Lacan considered insults a primary form of social interaction, central to the imaginary order – "a situation that is symbolized in the 'Yah-boo, so are you' of the transitivist quarrel, the original form of aggressive communication".
Erving Goffman points out that every "crack or remark set up the possibility of a counter-riposte, topper, or squelch, that is, a comeback". He cites the example of possible interchanges at a dance in a school gym:
Backhanded compliments
A backhanded (or left-handed) compliment, or asteism, is an insult that is disguised as, or accompanied by, a compliment, especially in situations where the belittling or condescension is intentional.
Examples of backhanded compliments include, but are not limited to:
"I did not expect you to ace that exam. Good for you.", which could impugn the target's success as a fluke.
"That skirt makes you look far thinner.", insinuating hidden fat, with the implication that fat is something to be ashamed of.
"I wish I could be as straightforward as you, but I always try to get along with everyone.", insinuating an overbearing attitude.
"I like you. You have the boldness of a much younger person.", insinuating decline with age.
Negging is a type of backhanded compliment used for emotional manipulation or as a seduction method. The term was coined and prescribed by pickup artists. Negging is often viewed as a straightforward insult rather than as a pick-up line, in spite of the fact that proponents of the technique traditionally stress it is not an insult.
Personal attacks
A personal attack is an insult which is directed at some attribute of the person.
The Federal Communications Commission's personal attack rule defined a personal attack as one made upon the honesty, character, integrity, or like personal qualities in the Communications Act of 1934.
Personal attacks are generally considered a fallacy when used in arguments, since they do not attempt to rebut the opposing side's argument but instead attack the qualities of the person.
Sexuality
Verbal insults often take a phallic or pudendal form. This includes profanity, and may also include insults to one's sexuality. There are also insults pertaining to the extent of one's sexual activity. For example, according to James Bloodworth, "incel" “has gradually crept into the vocabulary of every internet troll, sometimes being used against men who blame and harass women for not wanting to sleep with them.”
Entertainment
Insult in poetic form has been practiced throughout history, more often as entertainment than out of malice. Flyting is a contest consisting of the exchange of insults between two parties, often conducted in verse; it became public entertainment in Scotland in the 15th and 16th centuries. Senna is a form of Old Norse Eddic poetry consisting of an exchange of insults between participants.
O du eselhafter Peierl (Oh, you asinine Peierl), composed by Wolfgang Amadeus Mozart, was meant for fun, mocking, scatological humor directed at a friend of Mozart's.
More modern versions include poetry slam, dozens, diss song and battle rap. In the 1980s Masters of the Universe franchise, the character of Skeletor became known for insulting those around him with comedic putdowns. There is also now a comedy genre of insult comedy.
Anatomies
Various typologies of insults have been proposed over the years. Ethologist Desmond Morris, noting that "almost any action can operate as an Insult Signal if it is performed out of its appropriate context – at the wrong time or in the wrong place", classes such signals in ten "basic categories":
Uninterest signals
Boredom signals
Impatience signals
Superiority signals
Deformed-compliment signals
Mock-discomfort signals
Rejection signals
Mockery signals
Symbolic insults
Dirt signals
Elizabethans took great interest in such analyses, distinguishing out, for example, the "fleering frump ... when we give a mock with a scornful countenance as in some smiling sort looking aside or by drawing the lip awry, or shrinking up the nose". Shakespeare humorously set up an insult-hierarchy of seven-fold "degrees. The first, the Retort Courteous; the second, the Quip Modest; the third, the Reply Churlish; the fourth, the Reproof Valiant; the fifth, the Countercheck Quarrelsome; the sixth, the Lie with Circumstance; the seventh, the Lie Direct".
Perceptions
What qualifies as an insult is also determined both by the individual social situation and by changing social mores. Thus on one hand the insulting "obscene invitations of a man to a strange girl can be the spicy endearments of a husband to his wife".
See also
References
Further reading
Thomas Conley: Toward a rhetoric of insult. University of Chicago Press, 2010.
External links
Abuse
Harassment and bullying
Emotions
Pejorative terms | Insult | [
"Biology"
] | 1,520 | [
"Behavior",
"Abuse",
"Harassment and bullying",
"Aggression",
"Human behavior"
] |
168,340 | https://en.wikipedia.org/wiki/Sewage%20sludge | Sewage sludge is the residual, semi-solid material that is produced as a by-product during sewage treatment of industrial or municipal wastewater. The term "septage" also refers to sludge from simple wastewater treatment but is connected to simple on-site sanitation systems, such as septic tanks.
After treatment, and dependent upon the quality of sludge produced (for example with regards to heavy metal content), sewage sludge is most commonly either disposed of in landfills, dumped in the ocean or applied to land for its fertilizing properties, as pioneered by the product Milorganite.
The term "Biosolids" is often used as an alternative to the term sewage sludge in the United States, particularly in conjunction with reuse of sewage sludge as fertilizer after sewage sludge treatment. Biosolids can be defined as organic wastewater solids that can be reused after stabilization processes such as anaerobic digestion and composting. Opponents of sewage sludge reuse reject this term as a public relations term.
Treatment process
Sewage sludge treatment is the process of removing contaminants from wastewater. Sewage sludge is produced from the treatment of wastewater in sewage treatment plants and consists of two basic forms — raw primary sludge and secondary sludge, also known as activated sludge in the case of the activated sludge process.
Sewage sludge is usually treated by one or several of the following treatment steps: lime stabilization, thickening, dewatering, drying, anaerobic digestion or composting. Some treatment processes, such as composting and alkaline stabilization, that involve significant amendments may affect contaminant strength and concentration: depending on the process and the contaminant in question, treatment may decrease or in some cases increase the bioavailability and/or solubility of contaminants. Regarding sludge stabilization processes, anaerobic and aerobic digestion seem to be the most commonly used methods in EU-27.
When fresh sewage or wastewater enters a primary settling tank, approximately 50% of the suspended solid matter will settle out in an hour and a half. This collection of solids is known as raw sludge or primary solids and is said to be "fresh" before anaerobic processes become active. The sludge will become putrescent in a short time once anaerobic bacteria take over, and must be removed from the sedimentation tank before this happens.
This is accomplished in one of two ways. Most commonly, the fresh sludge is continuously extracted from the bottom of a hopper-shaped tank by mechanical scrapers and passed to separate sludge-digestion tanks. In some treatment plants an Imhoff tank is used: sludge settles through a slot into the lower story or digestion chamber, where it is decomposed by anaerobic bacteria, resulting in liquefaction and reduced volume of the sludge. The secondary treatment process also generates a sludge largely composed of bacteria and protozoa with entrained fine solids, and this is removed by settlement in secondary settlement tanks. Both sludge streams are typically combined and are processed by anaerobic or aerobic treatment process at either elevated or ambient temperatures. After digesting for an extended period, the result is called "digested" sludge and may be disposed of by drying and then landfilling.
Following treatment, sewage sludge is either landfilled, dumped in the ocean, incinerated, applied on agricultural land or, in some cases, retailed or given away for free to the general public. According to a review article published in 2012, sludge reuse (including direct agricultural application and composting) was the predominant choice for sludge management in EU-15 (53% of produced sludge), followed by incineration (21% of produced sludge). On the other hand, the most common disposal method in EU-12 countries was landfilling.
Quantities produced
The amount of sewage sludge produced is proportional to the amount and concentration of wastewater treated, and it also depends on the type of wastewater treatment process used. It can be expressed as kg dry solids per cubic metre of wastewater treated. The total sludge production from a wastewater treatment process is the sum of sludge from primary settling tanks (if they are part of the process configuration) plus excess sludge from the biological treatment step. For example, primary sedimentation produces about 110–170 kg/ML of so-called primary sludge, with a value of 150 kg/ML regarded as being typical for municipal wastewater in the U.S. or Europe. The sludge production is expressed as kg of dry solids produced per ML of wastewater treated; one megalitre (ML) is 10³ m³. Of the biological treatment processes, the activated sludge process produces about 70–100 kg/ML of waste activated sludge, and a trickling filter process produces slightly less sludge from the biological part of the process: 60–100 kg/ML. This means that the total sludge production of an activated sludge process that uses primary sedimentation tanks is in the range of 180–270 kg/ML, being the sum of primary sludge and waste activated sludge.
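The figures above combine by simple addition; the short sketch below illustrates the arithmetic. The daily plant flow of 50 ML is an assumed example value, not from the text.

```python
# Simple arithmetic sketch using the per-megalitre figures quoted above.
# The plant flow of 50 ML/day is an assumed example value.
primary_sludge = (110, 170)          # kg dry solids per ML treated
waste_activated_sludge = (70, 100)   # kg dry solids per ML treated
flow_ml_per_day = 50                 # assumed daily flow

low = (primary_sludge[0] + waste_activated_sludge[0]) * flow_ml_per_day
high = (primary_sludge[1] + waste_activated_sludge[1]) * flow_ml_per_day
print(f"total sludge production: {low}-{high} kg dry solids per day")
# -> 9000-13500 kg/day, i.e. the 180-270 kg/ML range stated in the text
```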
United States municipal wastewater treatment plants in 1997 produced about 7.7 million dry tons of sewage sludge, and about 6.8 million dry tons in 1998 according to EPA estimates. As of 2004, about 60% of all sewage sludge was applied to land as a soil amendment and fertilizer for growing crops. In a review article published in 2012, it was reported that a total amount of 10.1 million tn DS/year were produced in EU-27 countries. As of 2023, the EU produced 2 to 3 million tons of sludge each year. Worldwide, it is estimated that as much as 75 million Mg (tonnes) of dry sewage sludge is produced per year.
Production of sewage sludge can be reduced by conversion from flush toilets to dry toilets such as urine-diverting dry toilets and composting toilets.
Disposal
Landfill
Sewage sludge deposition in landfills can circulate human-virulent species of Cryptosporidium and Giardia pathogens. Sonication and quicklime stabilization are most effective in inactivation of these pathogens; microwave energy disintegration and top-soil stabilization were less effective.
A Texas county has launched a first-of-its-kind criminal investigation into waste management giant Synagro over PFAS-contaminated sewage sludge it is selling to Texas farmers as a cheap alternative to fertilizer.
As of 2023, 11% of sludge produced in the EU was disposed of in landfills. The EU is attempting to phase out the disposal of sludge in landfills.
Ocean dumping
It used to be common practice to dump sewage sludge into the ocean; however, this practice has stopped in many nations due to environmental concerns as well as to domestic and international laws and treaties. Ronald Reagan signed the law that prohibited ocean dumping as a means of disposal of sewage sludge in the US in 1988.
Incineration
Sludge can also be incinerated in sludge incineration plants which comes with its own set of environmental concerns (air pollution, disposal of the ash). Pyrolysis of the sludge to create syngas and potentially biochar is possible, as is combustion of biofuel produced from drying sewage sludge or incineration in a waste-to-energy facility for direct production of electricity and steam for district heating or industrial uses.
Thermal processes can greatly reduce the volume of the sludge, as well as achieve remediation of all or some of the biological concerns. Direct waste-to-energy incineration and complete combustion systems (such as the Gate 5 Energy System) will require multi-step cleaning of the exhaust gas, to ensure no hazardous substances are released. In addition, the ash produced by incineration or incomplete combustion processes (such as fluidized-bed dryers) may be difficult to use without subsequent treatment due to its high heavy metal content; solutions include leaching the ashes to remove heavy metals, or, in the case of ash produced in a complete-combustion process or biochar produced from a pyrolytic process, fixing the heavy metals in place so that the ash material is readily usable as a LEED-preferred additive to concrete or asphalt.
Examples of other ways to use dried sewage sludge as an energy resource include the Gate 5 Energy System, an innovative process to power a steam turbine using heat from burning milled and dried sewage sludge, or combining dried sewage sludge with coal in coal-fired power stations. In both cases this allows for production of electricity with less carbon-dioxide emissions than conventional coal-fired power stations.
As of 2023, 27% of sludge produced in the EU was incinerated.
Use
Land application
Biosolids is a term widely used to denote the byproduct of domestic and commercial sewage and wastewater treatment that is to be used in agriculture. National regulations that dictate the practice of land application of treated sewage sludge differ widely; in the US, for example, there are widespread disputes about this practice.
Depending on their level of treatment and resultant pollutant content, biosolids can be used in regulated applications for non-food agriculture, food agriculture, or distribution for unlimited use. Treated biosolids can be produced in cake, granular, pellet, or liquid form and are spread over land before being incorporated into the soil or injected directly into the soil by specialist contractors. Such use was pioneered by the production of Milorganite in 1926.
Use of sewage sludge has shown an increase in level of soil available phosphorus and soil salinity.
A 20-year field study of air, land, and water in Arizona concluded that use of biosolids is sustainable and improves the soil and crops. Other studies report that plants take up large quantities of heavy metals and toxic pollutants, which are retained by produce that is then consumed by humans.
A PhD thesis studying the addition of sludge to neutralize soil acidity concluded that the practice was not recommended if large amounts are used because the sludge produces acids when it oxidizes.
Studies have indicated that pharmaceuticals and personal care products, which often adsorb to sludge during wastewater treatment, can persist in agricultural soils following biosolid application. Some of these chemicals, including potential endocrine disruptor triclosan, can also travel through the soil column and leach into agricultural tile drainage at detectable levels. Other studies, however, have shown that these chemicals remain adsorbed to surface soil particles, making them more susceptible to surface erosion than infiltration. These studies are also mixed in their findings regarding the persistence of chemicals such as triclosan, triclocarban, and other pharmaceuticals. The impact of this persistence in soils is unknown, but the link to human and land animal health is likely tied to the capacity for plants to absorb and accumulate these chemicals in their consumed tissues. Studies of this kind are in early stages, but evidence of root uptake and translocation to leaves did occur for both triclosan and triclocarban in soybeans. This effect was not present in corn when tested in a different study.
A cautionary approach to land application of biosolids has been advocated by some for regions where soils have lower capacities for toxics absorption or due to the presence of unknowns in sewage biosolids. In 2007 the Northeast Regional Multi-State Research Committee (NEC 1001) issued conservative guidelines tailored to the soils and conditions typical of the northeastern US.
Use of sewage sludge is prohibited for produce to be labeled USDA-certified organic. In 2014 the United States grocery chain Whole Foods banned produce grown in sewage sludge.
Treated sewage sludge has been used in the UK, Europe and China agriculturally for more than 80 years, though there is increasing pressure in some countries to stop the practice of land application due to farm land contamination and negative public opinion. In the 1990s, there was pressure in some European countries to ban the use of sewage sludge as a fertilizer. Switzerland, Sweden, Austria, and others introduced a ban. Still, the dominant method for disposal of sewage sludge in the EU is via application to agricultural lands. As of 2023, 40% of sludge produced in the EU was used on agricultural land. Since the 1960s there has been cooperative activity with industry to reduce the inputs of persistent substances from factories. This has been very successful and, for example, the content of cadmium in sewage sludge in major European cities is now only 1% of what it was in 1970.
Transformation into products
Sewage sludge is an agglomeration of concentrated wastes, and therefore it contains many potentially extractable and useable components. These can include using sludge to produce energy, create carbon-based components, extract phosphorus and nitrogen, or make bricks or other construction materials.
Recycling of phosphate is regarded as especially important because the phosphate industry predicts that at the current rate of extraction the economic reserves will be exhausted in 100 or at most 250 years. Phosphate can be recovered with minimal capital expenditure as technology currently exists, but municipalities have little political will to attempt nutrient extraction, instead opting for a "take all the other stuff" mentality.
One potential drawback of extracting products from sludge — as opposed to land application — is that only some of the sludge is used and the rest still needs disposal. It can also be very expensive to develop and use appropriate technologies for extracting resources.
Contaminants
The specific content of sewage sludge is affected by what enters the sewage stream, and how the sewage is treated and processed. As wastewater treatment policies are passed or amended to allow or regulate potential contaminants into the sewage stream, the content of the sewage sludge reflects those changes. For example, the EU's Urban Waste Water Treatment Directive shapes the types of contaminants that enter the EU's sewage treatment stream.
Pathogens
Bacteria in treated sludge products can actually regrow under certain environmental conditions. Pathogens could easily remain undetected in untreated sewage sludge. Pathogens are not a significant health issue if sewage sludge is properly treated and site-specific management practices are followed.
Heavy metals
One of the main concerns in the treated sludge is the concentrated metals content (lead, arsenic, cadmium, thallium, etc.); certain metals are regulated while others are not. Leaching methods can be used to reduce the metal content and meet the regulatory limit.
In 2009, the EPA released the Targeted National Sewage Sludge Study, which reports on the level of metals, chemicals, hormones, and other materials present in a statistical sample of sewage sludges. Some highlights include:
Lead, arsenic, chromium, and cadmium are estimated by the EPA to be present in detectable quantities in 100% of national sewage sludges in the US, while thallium is only estimated to be present in 94.1% of sludges.
Silver is present to the degree of 20 mg/kg of sludge, on average, while some sludges have up to 200 milligrams of silver per kilogram of sludge; one outlier demonstrated a silver lode of 800–900 mg per kg of sludge.
Barium is present at the rate of 500 mg/kg, while manganese is present at the rate of 1 g/kg sludge.
Micro-pollutants
Micro-pollutants are compounds which are normally found at concentrations up to microgram per liter and milligram per kilogram in the aquatic and terrestrial environment, respectively, and they are considered to be potential threats to environmental ecosystems. They can become concentrated in sewage sludge. Each of these disposal options comes with myriad potential—and in some cases proven—human health and environment impacts.
Several organic micro-pollutants such as endocrine disrupting compounds, pharmaceuticals and per-fluorinated compounds have been detected in sewage sludge samples around the world at concentrations ranging up to some hundreds mg/kg of dried sludge.
Sterols and other hormones have also been detected.
Other hazardous substances
Sewage treatment plants receive various forms of hazardous waste from hospitals, nursing homes, industry and households. Low levels of constituents such as PCBs, dioxin, and brominated flame retardants, may remain in treated sludge. There are potentially thousands of other components of sludge that remain untested/undetected disposed of from modern society that also end up in sludge (pharmaceuticals, nano particles, etc.) which have been proven to be hazardous to both human and ecological health.
In 2013, in South Carolina PCBs were discovered in very high levels in wastewater sludge. The problem was not discovered until thousands of acres of farm land in South Carolina were discovered to be contaminated by this hazardous material. SCDHEC issued emergency regulatory order banning all PCB laden sewage sludge from being land applied on farm fields or deposited into landfills in South Carolina.
Also in 2013, after DHEC request, the city of Charlotte decided to stop land applying sewage sludge in South Carolina while authorities investigated the source of PCB contamination. In February 2014, the city of Charlotte admitted PCBs have entered their sewage treatment centers as well.
Contaminants of concern in sewage sludge are plasticizers, PBDEs, PFASs ("forever chemicals"), and others generated by human activities, including personal care products and medicines. Synthetic fibers from fabrics persist in treated sewage sludge as well as in biosolids-treated soils and may thus serve as an indicator of past biosolids application.
Pollutant ceiling concentration
The term "pollutant" is defined as part of the EPA 503 rule. The components of sludge have pollutant limits defined by the EPA. "A Pollutant is an organic substance, an inorganic substance, a combination of organic and inorganic substances, or a pathogenic organism that, after discharge and upon exposure, ingestion, inhalation, or assimilation into an organism either directly from the environment or indirectly by ingestion through the food chain, could, on the basis of information available to the Administrator of EPA, cause death, disease, behavioral abnormalities, cancer, genetic mutations, physiological malfunctions (including malfunction in reproduction), or physical deformations in either organisms or offspring of the organisms."
The maximum component pollutant limits by the US EPA are:
Health risks
In 2011, the EPA commissioned a study at the United States National Research Council (NRC) to determine the health risks of sludge. In this document the NRC pointed out that many of the dangers of sludge are unknown and unassessed.
The NRC published "Biosolids Applied to Land: Advancing Standards and Practices" in July 2002. The NRC concluded that while there is no documented scientific evidence that sewage sludge regulations have failed to protect public health, there is persistent uncertainty on possible adverse health effects. The NRC noted that further research is needed and made about 60 recommendations for addressing public health concerns, scientific uncertainties, and data gaps in the science underlying the sewage sludge standards. The EPA responded with a commitment to conduct research addressing the NRC recommendations.
Residents living near Class B sludge processing sites may experience asthma or pulmonary distress due to bioaerosols released from sludge fields.
A 2004 survey of 48 individuals near affected sites found that most reported irritation symptoms, about half reported an infection within a month of the application, and about a fourth were affected by Staphylococcus aureus, including two deaths. The number of reported S. aureus infections was 25 times as high as in hospitalized patients, a high-risk group. The authors point out that regulations call for protective gear when handling Class B biosolids and that similar protections could be considered for residents in nearby areas given the wind conditions.
In 2007, a health survey of persons living in close proximity to Class B sludged land was conducted, comparing a sample of 437 people exposed to Class B sludge (living within of sludged land) with a control group of 176 people not exposed to sludge (not living within of sludged land). Although correlation does not imply causation, such extensive correlations may lead reasonable people to conclude that precaution is necessary in dealing with sludge and sludged farmlands.
Harrison and Oakes suggest that, in particular, "until investigations are carried out that answer these questions (...about the safety of Class B sludge...), land application of Class B sludges should be viewed as a practice that subjects neighbors and workers to substantial risk of disease." They further suggest that even Class A treated sludge may have chemical contaminants (including heavy metals, such as lead) or endotoxins present, and a precautionary approach may be justified on this basis, though the vast majority of incidents reported by Lewis, et al. have been correlated with exposure to Class B untreated sludge and not Class A treated sludge.
A 2005 report by the state of North Carolina concluded that "a surveillance program of humans living near application sites should be developed to determine if there are adverse health effects in humans and animals as a result of biosolids application."
Studies of the potential uses of sewage sludge around homes, such as covering lead-contaminated soil in Baltimore, have created debates over whether participants should have been informed about potential risks, when there remains uncertainty about those risks.
The chain from sewage sludge to biosolids to fertilizers has resulted in PFAS ("forever chemicals") contamination of farm produce in Maine in 2021 and of beef raised in Michigan in 2022. The EPA PFAS Strategic Roadmap initiative, running from 2021 to 2024, will consider the full lifecycle of PFAS including health risks of PFAS in wastewater sludge.
Regulation and guidelines
European Union
The EC encourages the use of sewage sludge in agriculture because it conserves organic matter and completes nutrient cycles.
European countries that joined the EU after 2004 favor landfills as a means of disposal for sewage sludge. In 2006, the predicted sewage sludge growth rate was 10 million tons of sewage sludge per year. This increase in the amount of sewage sludge accumulation in the EU may be due to the increase in the number of households that are connected to the sewage system. The EU has directives in place to encourage the use of sewage sludge in agriculture, in a way that the soil, humans, and the environment are not harmed. A guideline the EU has put into place is that sewage sludge should not be added to fruit and vegetable crops that are in season. In Austria, in order to dispose of the sewage sludge in a landfill, it must first be treated in a way that reduces its biological reactivity. Sweden no longer allows sewage sludge to be disposed of in landfills. In the EU, regulations regarding sewage sludge disposal differ because legislation regarding landfill disposal is not in the national regulations for the EU.
Sewage Sludge Directive
The EU's Sewage Sludge Directive (86/278/EEC) sets out regulations to pursue the dual purpose of promoting the use of sewage sludge as an agricultural fertilizer, while ensuring environmental protections and human health. These rules include sludge treatment requirements, as well as limits on the time and place of sewage sludge applications, depending on the type of food crop. This is intended to protect human health while maintaining the ecological health of the soil and water. The directive explicitly regulates the allowable levels of seven heavy metals (cadmium, copper, nickel, lead, zinc, mercury, and chromium) in soil and sludge, and regulates any application of sewage sludge that would cause levels of these heavy metals in soil to exceed those limits.
EU member states are tasked with implementing and enforcing the Directive within their borders, as well as monitoring and reporting on sludge production, treatment, characteristics, and use. Member states are allowed to set more stringent limits for heavy metals than set out in the Sewage Sludge Directive, and can set limits for other pollutants. As of 2021, more than half of the EU member states had stricter limits for mercury and cadmium than required under the Directive.
Member states are also allowed to limit or promote the use of sewage sludge for agriculture as they choose, meaning that some countries prohibit the use of sludge in agriculture, while some use up to 50% of the sludge they generate in agriculture. Spain, France, Italy, and the United Kingdom (while it was still part of the EU) have particularly promoted the use of sludge in agriculture. Each of Austria's federal states has its own regulations for the use of sewage sludge in agriculture, including different limits for heavy metals. For example, Tyrol has banned the use of sludge on agricultural lands, while in Salzburg it is only allowed under certain conditions.
Since the Directive's passage, there has been the substantial decrease in heavy metal residues in agricultural soils over time (well below the limits set), though it is not possible to determine what proportion of the decrease is due to the Directive itself, as opposed to other national and EU legislation.
The Sewage Sludge Directive has been evaluated several times under EU proposals to build a circular economy through the reduction and reuse of wastes. In 2014, a European Commission evaluation of the Sewage Sludge Directive suggested it was appropriate for its goals, and did not need revision. In 2023, as part of the European Green Deal and Circular Economy Action Plan, the EU re-evaluated the Sewage Sludge Directive, and found that it should be maintained – as the use of sewage sludge as fertilizer aligns with circular economy goals and potentially reduces the EU carbon emissions – but that the potential pollutants and contaminants regulated under the Directive should be reviewed and potentially revised. This evaluation noted that, as of 2023, the original Directive had not been seriously updated since its original passage in 1986, even though in the intervening decades there had been many developments in both environmental policy, expectations, and research, as well as member states' national policies around sewage sludge. The evaluation particularly emphasized concerns about methane emissions, microplastic contamination, and antibiotic resistances.
The Sewage Sludge Directive has not yet set limits for other contaminants, such as organic pollutants, pathogens, microplastics, pharmaceutical residues, and personal care product residues. With the identification of these new contaminants in sludge since the Sewage Sludge Directive originally passed, several researchers have suggested that the EU should consider revising the Directive to address their potential risks to health and environment.
United States
After the 1991 Congressional ban on ocean dumping, the U.S. Environmental Protection Agency (EPA) instituted a policy of digested sludge reuse on agricultural land. The US EPA promulgated regulations – 40 CFR Part 503 – that continued to allow the use of biosolids on land as fertilizers and soil amendments which had been previously allowed under Part 257. The EPA promoted biosolids recycling throughout the 1990s. The EPA's Part 503 regulations were developed with input from university, EPA, and USDA researchers from around the country and involved an extensive review of the scientific literature and the largest risk assessment the agency had conducted to that time. The Part 503 regulations became effective in 1993.
According to the EPA, biosolids that meet treatment and pollutant content criteria of Part 503.13 "can be safely recycled and applied as fertilizer to sustainably improve and maintain productive soils and stimulate plant growth." However, they can not be disposed of in a sludge only landfill under Part 503.23 because of high chromium levels and boundary restrictions.
Under the Obama Administration, the Biosolids Center of Excellence (headquartered in EPA Region 7) was created to monitor and enforce compliance with biosolids regulation. The Center receives and reviews annual reports from the major producers of biosolids.
Eight U.S. states oversee their own biosolids programs: Arizona, Michigan, Ohio, Oklahoma, South Dakota, Texas, Utah, and Wisconsin; other states' programs are overseen by the EPA.
Classes of sewage sludge in the United States
In the United States, two classes of sewage sludge are defined by the amount of pathogens (i.e. bacteria, viruses) remaining in the sludge, and therefore the types of uses allowed by law. Both classes of sludge may still contain radioactive or pharmaceutical wastes.
Class A sludge must be treated so that specific pathogens (like Salmonella) are no longer detected. This class of sludge can be used for all land applications, including where the public may come into contact with it (i.e. agricultural land, home use, for public sale). Biosolids that meet Class A pathogen reduction requirements or equivalent treatment by a "Process to Further Reduce Pathogens" (PFRP) have the least restrictions on use. PFRPs include pasteurization, heat drying, thermophilic composting (aerobic digestion, most common method), and beta or gamma ray irradiation.
Class B sludge also requires treatment to reduce pathogens, but pathogens are still detectable in the sludge (such as some parasitic worm eggs). This class of sludge has much stricter restrictions on its use. Biosolids that meet the Class B pathogen treatment and pollutant criteria, in accordance with the EPA "Standards for the use or disposal of sewage sludge" (40 CFR Part 503), can be land applied with formal site restrictions and strict record keeping.
Evaluation of the U.S. sewage sludge program
The EPA Office of the Inspector General (OIG) completed two assessments in 2000 and 2002 of the EPA sewage sludge program. The follow-up report in 2002 documented that "the EPA cannot assure the public that current land application practices are protective of human health and the environment." The report also documented that there had been an almost 100% reduction in EPA enforcement resources since the earlier assessment. This is probably the greatest issue with the practice: under both the federal program operated by the EPA and those of the several states, there is limited inspection and oversight by agencies charged with regulating these practices. To some degree, this lack of oversight is a function of the perceived (by the regulatory agencies) benign nature of the practice. However, a greater underlying issue is funding. Few states and the US EPA have the discretionary funds necessary to establish and implement a full enforcement program for biosolids.
As detailed in the 1995 Plain English Guide to the Part 503 Risk Assessment, the EPA's most comprehensive risk assessment was completed for biosolids.
Court cases in the United States
In 2009, James Rosendall of Grand Rapids, MI was sentenced by United States District Judge Avern Cohn to 11 months in prison followed by three years of supervised release for conspiring to commit bribery. Rosendall was the former president of Synagro of Michigan, a subsidiary of Synagro Technologies. His duties included obtaining the approval of the City of Detroit to process and dispose of the city's wastewater.
In 2011, Travis County Commissioners declared that Synagro's solid waste disposal activities would be inappropriate and prohibited land use according to the town's already established ordinances.
A battle between the home rule of local government and states' rights/commerce rights has been waged between the small town of Kern County, California, and Los Angeles, California. Kern County passed the "Keep Kern Clean" ballot initiative, an ordinance which banned sludge from being applied in Kern County. Los Angeles sued and, after a protracted legal battle, won the case in 2016.
In 2012, two families won a $225,000 tort lawsuit against a sludge company that contaminated their properties.
In 2013 in Pennsylvania, the case Gilbert vs. Synagro, a judge barred a nuisance, negligence and trespass lawsuit under Pennsylvania's Right to Farm Act.
History of sewage sludge disposal in New York City
Since 1884 when sewage was first treated the amount of sludge has increased along with population and more advanced treatment technology (secondary treatment in addition to primary treatment). In the case of New York City, at first the sludge was discharged directly along the banks of rivers surrounding the city, then later piped further into the rivers, and then further still out into the harbor. In 1924, to relieve a dismal condition in New York Harbor, New York City began dumping sludge at sea at a location in the New York Bight called the 12-Mile Site. This was deemed a successful public health measure and not until the late 1960s was there any examination of its consequences to marine life or to humans. There was accumulation of sludge particles on the seafloor and consequent changes in the numbers and types of benthic organisms. In 1970 a large area around the site was closed to shellfishing. From then until 1986, the practice of dumping at the 12-Mile Site came under increasing pressure stemming from a series of untoward environmental crises in the New York Bight that were attributed partly to sludge dumping. In 1986, sludge dumping was moved still further seaward to a site over the deep ocean called the 106-Mile Site. Then, again in response to political pressure arising from events unrelated to ocean dumping, the practice ended entirely in 1992. Since 1992, New York City sludge has been applied to land (outside of New York state). The wider question is whether or not changes on the sea floor caused by the portion of sludge that settles are severe enough to justify the added operational cost and human health concerns of applying sludge to land.
See also
Milorganite
References
Further reading
"Biosolids Applied to Land: Advancing Standards and Practices", National Research Council, July 2002
Biogas substrates
Sewerage
Sanitation | Sewage sludge | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 6,983 | [
"Sewerage",
"Environmental engineering",
"Water pollution"
] |
168,632 | https://en.wikipedia.org/wiki/Siemens | Siemens AG is a German multinational technology conglomerate. It is focused on industrial automation, distributed energy resources, rail transport and health technology. Siemens is the largest industrial manufacturing company in Europe, and holds the position of global market leader in industrial automation and industrial software.
The origins of the conglomerate can be traced back to 1847 to the Telegraphen Bau-Anstalt von Siemens & Halske established in Berlin by Werner von Siemens and Johann Georg Halske. In 1966, the present-day corporation emerged from the merger of three companies: Siemens & Halske, Siemens-Schuckert, and Siemens-Reiniger-Werke. Today headquartered in Munich and Berlin, Siemens and its subsidiaries employ approximately 320,000 people worldwide and reported a global revenue of around €78 billion in 2023. The company is a component of the DAX and Euro Stoxx 50 stock market indices. As of December 2023, Siemens is the second largest German company by market capitalization.
As of 2023, the principal divisions of Siemens are Digital Industries, Smart Infrastructure, Mobility, and Financial Services, with Siemens Mobility operating as an independent entity. Major business divisions that were once part of Siemens before being spun off include semiconductor manufacturer Infineon Technologies (1999), Siemens Mobile (2005), Gigaset Communications (2008), the photonics business Osram (2013), Siemens Healthineers (2017), and Siemens Energy (2020).
History
1847 to 1901
Siemens & Halske was founded by Werner von Siemens and Johann Georg Halske on 1 October 1847. Based on the telegraph, their invention used a needle to point to the sequence of letters, instead of using Morse code. The company, then called Telegraphen-Bauanstalt von Siemens & Halske, opened its first workshop on 12 October.
In 1848, the company built the first long-distance telegraph line in Europe: 500 km (300 miles) from Berlin to Frankfurt am Main. In 1850, the founder's younger brother, Carl Wilhelm Siemens, later Sir William Siemens, started to represent the company in London. The London agency became a branch office in 1858. In the 1850s, the company was involved in building long-distance telegraph networks in Russia. In 1855, a company branch headed by another brother, Carl Heinrich von Siemens, opened in St Petersburg, Russia. In 1867, Siemens completed the monumental Indo-European telegraph line stretching over 11,000 km (6800 miles) from London to Calcutta.
In 1867, Werner von Siemens described a dynamo without permanent magnets.
A similar system was also independently invented by Ányos Jedlik and Charles Wheatstone, but Siemens became the first company to build such devices. In 1881, a Siemens AC Alternator driven by a watermill was used to power the world's first electric street lighting in the town of Godalming, United Kingdom. The company continued to grow and diversified into electric trains and light bulbs. In 1885, Siemens sold one of its generators to George Westinghouse, thereby enabling Westinghouse to begin experimenting with AC networks in Pittsburgh, Pennsylvania.
In 1887, Siemens opened its first office in Japan. In 1890, the founder retired and left the running of the company to his brother Carl and sons Arnold and Wilhelm. In 1892, Siemens was contracted to construct the Hobart electric tramway in Tasmania, Australia, as it increased its markets. The system opened in 1893 and became the first complete electric tram network in the Southern Hemisphere.
1901 to 1933
Siemens & Halske (S & H) was incorporated in 1897 and then merged parts of its activities with Schuckert & Co., Nuremberg, in 1903 to become Siemens-Schuckert. In 1907, Siemens (Siemens & Halske and Siemens-Schuckert) had 34,324 employees and was the seventh-largest company in the German empire by number of employees. (see List of German companies by employees in 1907)
In 1919, S & H and two other companies jointly formed the Osram lightbulb company.
During the 1920s and 1930s, S & H started to manufacture radios, television sets, and electron microscopes.
In 1932, Reiniger, Gebbert & Schall (Erlangen), Phönix AG (Rudolstadt) and Siemens-Reiniger-Veifa mbH (Berlin) merged to form the Siemens-Reiniger-Werke AG (SRW), the third of the so-called parent companies that merged in 1966 to form the present-day Siemens AG.
In the 1920s, Siemens constructed the Ardnacrusha Hydro Power station on the River Shannon in the then Irish Free State, and it was a world first for its design. The company is remembered for its desire to raise the wages of its underpaid workers, only to be overruled by the Cumann na nGaedheal government.
1933 to 1945
Siemens (at the time: Siemens-Schuckert) exploited the forced labour of deported people in extermination camps. The company owned a plant in Auschwitz concentration camp.
Siemens exploited the forced labour of women deported to the Ravensbrück concentration camp; a Siemens factory was located in front of the camp.
During the final years of World War II, numerous plants and factories in Berlin and other major cities were destroyed by Allied air raids. To prevent further losses, manufacturing was therefore moved to alternative places and regions not affected by the air war. The goal was to secure continued production of important war-related and everyday goods. According to records, Siemens was operating almost 400 alternative or relocated manufacturing plants at the end of 1944 and in early 1945.
In 1972, Siemens sued German satirist F.C. Delius over his satirical history of the company, Unsere Siemens-Welt; it was determined that much of the book contained false claims, although the trial itself publicized Siemens's history in Nazi Germany. The company supplied electrical parts to Nazi concentration camps and death camps. The camp factories had poor working conditions, where malnutrition and death were common. Scholarship has also shown that the camp factories were created, run, and supplied by the SS, in conjunction with company officials, sometimes high-level officials.
1945 to 2001
In the 1950s, and from their new base in Bavaria, S&H started to manufacture computers, semiconductor devices, washing machines, and pacemakers. In 1966, Siemens & Halske (S&H, founded in 1847), Siemens-Schuckertwerke (SSW, founded in 1903) and Siemens-Reiniger-Werke (SRW, founded in 1932) merged to form Siemens AG. In 1969, Siemens formed Kraftwerk Union with AEG by pooling their nuclear power businesses.
The company's first digital telephone exchange was produced in 1980, and in 1988, Siemens and GEC acquired the UK defence and technology company Plessey. Plessey's holdings were split, and Siemens took over the avionics, radar and traffic control businesses, which traded as Siemens Plessey.
In 1977, Advanced Micro Devices (AMD) entered into a joint venture with Siemens, which wanted to enhance its technology expertise and enter the American market. Siemens purchased 20% of AMD's stock, giving the company an infusion of cash to increase its product lines. The two companies also jointly established Advanced Micro Computers (AMC), located in Silicon Valley and in Germany, allowing AMD to enter the microcomputer development and manufacturing field, in particular based on AMD's second-source Zilog Z8000 microprocessors. When the two companies' vision for Advanced Micro Computers diverged, AMD bought out Siemens's stake in the American division in 1979. AMD closed Advanced Micro Computers in late 1981 after switching focus to manufacturing second-source Intel x86 microprocessors.
In 1985, Siemens bought Allis-Chalmers' interest in the partnership company Siemens-Allis (formed 1978) which supplied electrical control equipment. It was incorporated into Siemens's Energy and Automation division.
In 1987, Siemens reintegrated Kraftwerk Union, the unit overseeing nuclear power business.
In 1987, Siemens acquired Kongsberg Offshore from the Norwegian Government, selling it on to FMC Technologies in 1993.
In 1989, Siemens bought the solar photovoltaic business, including 3 solar module manufacturing plants, from industry pioneer ARCO Solar, owned by oil firm ARCO.
In 1991, Siemens acquired Nixdorf Computer and renamed it Siemens Nixdorf Informationssysteme, in order to produce personal computers.
In October 1991, Siemens acquired the Industrial Systems Division of Texas Instruments, based in Johnson City, Tennessee. This division was organized as Siemens Industrial Automation, and was later absorbed by Siemens Energy and Automation, Inc.
In 1992, Siemens bought out IBM's half of ROLM (Siemens had bought into ROLM five years earlier), thus creating Siemens ROLM Communications; ROLM was dropped from the name later in the 1990s.
In 1993–1994, Siemens C651 electric trains for Singapore's Mass Rapid Transit (MRT) system were built in Austria.
In 1997, Siemens agreed to sell the defence arm of Siemens Plessey to British Aerospace (BAe) and a German aerospace company, DaimlerChrysler Aerospace. BAe and DASA acquired the British and German divisions of the operation respectively.
In October 1997, Siemens Financial Services (SFS) was founded to act as a competence center for financing issues and as a manager of financial risks within Siemens.
In 1998, Siemens acquired Westinghouse Power Generation from the CBS Corporation for more than $1.5 billion, moving Siemens from third to second in the world power generation market.
In 1999, Siemens's semiconductor operations were spun off into a new company called Infineon Technologies. Its Electromechanical Components operations were converted into a legally independent company, Siemens Electromechanical Components GmbH & Co. KG, which, later that year, was sold to Tyco International Ltd for approximately $1.1 billion.
In the same year, Siemens Nixdorf Informationssysteme AG became part of Fujitsu Siemens Computers, with its retail banking technology group becoming Wincor Nixdorf.
In 2000, Shared Medical Systems Corporation was acquired by the Siemens's Medical Engineering Group, eventually becoming part of Siemens Medical Solutions.
Also in 2000, Atecs-Mannesmann was acquired by Siemens. The sale was finalised in April 2001 with 50% of the shares acquired. Following the acquisition, Mannesmann VDO AG was merged into Siemens Automotive to form Siemens VDO Automotive AG, Atecs Mannesmann Dematic Systems was merged into Siemens Production and Logistics to form Siemens Dematic AG, and Mannesmann Demag Delaval was merged into the Power Generation division of Siemens AG. Other parts of the company were acquired by Robert Bosch GmbH at the same time. Also in 2000, Moore Products Co. of Spring House, Pennsylvania, USA was acquired by Siemens Energy & Automation, Inc.
2001 to 2005
In 2001, Chemtech Group of Brazil was incorporated into the Siemens Group; it provides industrial process optimisation, consultancy and other engineering services.
Also in 2001, Siemens formed joint venture Framatome with Areva SA of France by merging much of the companies' nuclear businesses.
In 2002, Siemens sold some of its business activities to Kohlberg Kravis Roberts & Co. L.P. (KKR), with its metering business included in the sale package.
In 2002, Siemens abandoned the solar photovoltaic industry by selling its participation in a joint-venture company, established in 2001 with Shell and E.ON, to Shell.
In 2003, Siemens acquired the flow division of Danfoss and incorporated it into the Automation and Drives division. Also in 2003, Siemens acquired IndX software (real-time data organisation and presentation). The same year, in an unrelated development, Siemens reopened its office in Kabul. Also in 2003, Siemens agreed to buy Alstom Industrial Turbines, a manufacturer of small, medium and industrial gas turbines, for €1.1 billion.
On 11 February 2003, Siemens planned to shorten phones' shelf life by bringing out annual Xelibri lines, with new devices launched as spring–summer and autumn–winter collections. On 6 March 2003, the company opened an office in San Jose. On 7 March 2003, the company announced that it planned to gain 10 per cent of the mainland China market for handsets. On 18 March 2003, the company unveiled the latest in its series of Xelibri fashion phones.
In 2004, the wind energy company Bonus Energy in Brande, Denmark was acquired, forming the Siemens Wind Power division. Also in 2004, Siemens invested in Dasan Networks (South Korea, broadband network equipment), acquiring about 40% of the shares; Nokia Siemens divested itself of the shares in 2008. The same year Siemens acquired Photo-Scan (UK, CCTV systems), US Filter Corporation (water and wastewater treatment technologies and solutions, acquired from Veolia), Huntsville Electronics Corporation (automobile electronics, acquired from Chrysler), and Chantry Networks (WLAN equipment).
In 2005, Siemens sold the Siemens mobile manufacturing business to BenQ, forming the BenQ-Siemens division. Also in 2005 Siemens acquired Flender Holding GmbH (Bocholt, Germany, gears/industrial drives), Bewator AB (building security systems), Wheelabrator Air Pollution Control, Inc. (Industrial and power station dust control systems), AN Windenergie GmbH. (Wind energy), Power Technologies Inc. (Schenectady, USA, energy industry software and training), CTI Molecular Imaging (Positron emission tomography and molecular imaging systems), Myrio (IPTV systems), Shaw Power Technologies International Ltd (UK/USA, electrical engineering consulting, acquired from Shaw Group), and Transmitton (Ashby de la Zouch UK, rail and other industry control and asset management).
2005 and continuing: worldwide bribery scandal
Beginning in 2005, Siemens became embroiled in a multi-national bribery scandal. Among the various incidents was the Siemens Greek bribery scandal, where the company was accused of deals with Greek government officials during the 2004 Summer Olympics. This case, along with others, triggered legal investigations in Germany, initiated by prosecutors in Italy, Liechtenstein, and Switzerland, and later followed by an American investigation in 2006 due to the company's activities while listed on US stock exchanges.
Investigations found that Siemens had a pattern of bribing officials to secure contracts, with the company spending approximately $1.3 billion on bribes across several countries, and maintaining separate accounting records to conceal this. Following the investigations, Siemens settled in December 2008, paying a combined total of approximately $1.6 billion to the US and Germany in what was, at the time, the largest bribery fine in history. In addition, the company was required to invest $1 billion in developing and maintaining new internal compliance procedures. Siemens admitted to violating the accounting provisions of the Foreign Corrupt Practices Act, while its Bangladesh and Venezuela subsidiaries pleaded guilty to paying bribes.
Despite initial expectations of a fine as high as $5 billion, the final amount was significantly less, in part due to Siemens's cooperation with the investigators, the upcoming change in the US administration, and Siemens's role as a US military contractor. The payments included $450 million in fines and penalties and a forfeiture of $350 million in profits in the US. Siemens also revamped its compliance systems, appointing Peter Y. Solmssen, a US lawyer, as an independent director in charge of compliance and accepting oversight from Theo Waigel, a former German finance minister. Siemens implemented new anti-corruption policies, including a comprehensive anti-corruption handbook, online tools for due diligence and compliance, a confidential communications channel for employees, and a corporate disciplinary committee. This process involved hiring approximately 500 full-time compliance personnel worldwide.
Siemens's bribery culture was not new; it was highlighted as far back as 1914 when both Siemens and Vickers were involved in a scandal over bribes paid to Japanese naval authorities. The company resorted to bribery as it sought to expand its business in the developing world after World War II. Up until 1999, bribes were a tax-deductible business expense in Germany, with no penalties for bribing foreign officials. However, with the implementation of the 1999 OECD Anti-Bribery Convention, Siemens started using off-shore accounts to hide its bribery.
During the investigation, key player Reinhard Siekaczek, a mid-level executive in the telecommunications unit, provided critical evidence. He disclosed that he had managed an annual global bribery budget of $40 to $50 million and provided information about the company's 2,700 worldwide contractors, who were typically used to channel money to government officials. Notable instances of bribery included substantial payments in Argentina, Israel, Venezuela, China, Nigeria, and Russia to secure large contracts.
The investigation resulted in multiple prosecutions and settlements with various governments, as well as legal action against Siemens employees and those who received bribes. Noteworthy cases include the conviction of two former executives in 2007 for bribing Italian energy company Enel, a settlement with the Greek government in 2012 for 330 million euros over the Greek bribery scandal, and a guilty plea in 2014 from former Siemens executive Andres Truppel for channeling nearly $100 million in bribes to Argentine government officials. Siemens also faced repercussions from the World Bank due to fraudulent practices by its Russian affiliate. In 2009, Siemens agreed not to bid on World Bank projects for two years and to establish a $100 million fund at the World Bank to support anti-corruption activities over 15 years, known as the "Siemens Integrity Initiative." Other substantial fines include a payment of ₦7 billion (US$ million) to the Nigerian government in 2010, and a US$42.7 million penalty in Israel in 2014 to avoid charges of securities fraud.
2006 to 2011
In 2006, Siemens purchased Bayer Diagnostics which was incorporated into the Medical Solutions Diagnostics division on 1 January 2007, also in 2006 Siemens acquired Controlotron (New York) (ultrasonic flow meters), and also in 2006 Siemens acquired Diagnostic Products Corp., Kadon Electro Mechanical Services Ltd. (now TurboCare Canada Ltd.), Kühnle, Kopp, & Kausch AG, Opto Control, and VistaScape Security Systems.
In January 2007, Siemens was fined €396 million by the European Commission for price fixing in EU electricity markets through a cartel involving 11 companies, including ABB, Alstom, Fuji Electric, Hitachi Japan, AE Power Systems, Mitsubishi Electric Corp, Schneider, Areva, Toshiba and VA Tech. According to the commission, "between 1988 and 2004, the companies rigged bids for procurement contracts, fixed prices, allocated projects to each other, shared markets and exchanged commercially important and confidential information." Siemens was given the highest fine of €396 million, more than half of the total, for its alleged leadership role in the activity.
In March 2007, a Siemens board member was temporarily arrested and accused of illegally financing AUB, a business-friendly labour association which competes against the trade union IG Metall. He was released on bail. Offices of AUB and Siemens were searched. Siemens denied any wrongdoing.
In April 2007, the Fixed Networks, Mobile Networks and Carrier Services divisions of Siemens merged with Nokia's Network Business Group in a 50/50 joint venture, creating a fixed and mobile network company called Nokia Siemens Networks. Nokia had delayed the merger due to bribery investigations against Siemens. In October 2007, a court in Munich found that the company had bribed public officials in Libya, Russia, and Nigeria in return for the awarding of contracts; four former Nigerian Ministers of Communications were among those named as recipients of the payments. The company admitted to having paid the bribes and agreed to pay a fine of 201 million euros. In December 2007, the Nigerian government cancelled a contract with Siemens due to the bribery findings.
Also in 2007, Siemens acquired Vai Ingdesi Automation (Argentina, Industrial Automation), UGS Corp., Dade Behring, Sidelco (Quebec, Canada), S/D Engineers Inc., and Gesellschaft für Systemforschung und Dienstleistungen im Gesundheitswesen mbH (GSD) (Germany).
In July 2008, Siemens AG formed a joint venture of its Enterprise Communications business with the Gores Group, renamed Unify in 2013. The Gores Group held a majority interest with a 51% stake, while Siemens AG held a minority interest of 49%.
In August 2008, Siemens Project Ventures invested $15 million in the Arava Power Company. In a press release published that month, Peter Löscher, president and CEO of Siemens AG said: "This investment is another consequential step in further strengthening our green and sustainable technologies". Siemens now holds a 40% stake in the company.
In January 2009, Siemens sold its 34% stake in Framatome, citing limited managerial influence. In March, it formed an alliance with Rosatom of Russia to engage in nuclear-power activities.
In April 2009, Fujitsu Siemens Computers became Fujitsu Technology Solutions as a result of Fujitsu buying out Siemens's share of the company.
In June 2009 news broke that Nokia Siemens had supplied telecommunications equipment to the Iranian telecom company that included the ability to intercept and monitor telecommunications, a facility known as "lawful intercept". The equipment was believed to have been used in the suppression of the 2009 Iranian election protests, leading to criticism of the company, including by the European Parliament. Nokia Siemens later divested its call monitoring business, and reduced its activities in Iran.
In October 2009, Siemens signed a $418 million contract to buy Solel Solar Systems, an Israeli company in the solar thermal power business.
In December 2010, Siemens agreed to sell its IT Solutions and Services subsidiary for €850 million to Atos. As part of the deal, Siemens agreed to take a 15% stake in the enlarged Atos, to be held for a minimum of five years. In addition, Siemens concluded a seven-year outsourcing contract worth around €5.5 billion, under which Atos will provide managed services and systems integration to Siemens. At the same time, Germany’s Wegmann Group acquired Siemens's 49-percent stake in armored vehicle manufacturer Krauss-Maffei Wegmann GmbH, establishing Wegmann as the sole shareholder of KMW, pending approval by government authorities.
2011 to present
In March 2011, Siemens decided to list Osram on the stock market in the autumn, but CEO Peter Löscher said Siemens intended to retain a long-term interest in the company, which was already independent in technological and managerial terms.
In September 2011, Siemens, which had been responsible for constructing all 17 of Germany's existing nuclear power plants, announced that it would exit the nuclear sector following the Fukushima disaster and the subsequent changes to German energy policy. Chief executive Peter Löscher has supported the German government's planned Energiewende, its transition to renewable energy technologies, calling it a "project of the century" and saying Berlin's target of reaching 35% renewable energy sources by 2020 was feasible.
In November 2012, Siemens acquired the Rail division of Invensys for £1.7 billion. In the same month, Siemens acquired a privately held company, LMS International NV.
In August 2013, Nokia acquired 100% of Nokia Siemens Networks by buying out Siemens AG's stake, ending Siemens's role in telecommunications.
In August 2013, Siemens won a $966.8 million order for power plant components from oil firm Saudi Aramco, the largest bid it has ever received from the Saudi company.
In 2014, Siemens announced plans to build a $264 million facility for making offshore wind turbines in Paull, England, as Britain's wind power rapidly expands. Siemens chose the Hull area on the east coast of England because it is close to other large offshore projects planned in coming years. The new plant is expected to begin producing turbine rotor blades in 2016. The plant and the associated service center, in Green Port Hull nearby, will employ about 1,000 workers. The facilities will serve the UK market, where the electricity that major power producers generate from wind grew by about 38 percent in 2013, representing about 6 percent of total electricity, according to government figures. There are also plans to increase Britain's wind-generating capacity at least threefold by 2020, to 14 gigawatts.
In May 2014, Rolls-Royce agreed to sell its gas turbine and compressor energy business to Siemens for £1 billion.
In June 2014, Siemens and Mitsubishi Heavy Industries announced their formation of joint ventures to bid for Alstom's troubled energy and transportation businesses (in locomotives, steam turbines, and aircraft engines). A rival bid by General Electric (GE) has been criticized by French government sources, who consider Alstom's operations as a "vital national interest" at a moment when the French unemployment level stands above 10% and some voters are turning towards the far-right.
In 2015, Siemens acquired U.S. oilfield equipment maker Dresser-Rand Group Inc for $7.6 billion.
In November 2016, Siemens acquired EDA company Mentor Graphics for $4.5 billion.
In November 2017, the U.S. Department of Justice charged three Chinese employees of Guangzhou Bo Yu Information Technology Company Limited with hacking into corporate entities, including Siemens AG.
In December 2017, Siemens acquired the medical technology company Fast Track Diagnostics for an undisclosed amount.
In August 2018, Siemens acquired rapid application development company Mendix for €0.6 billion in cash.
In May 2018, Siemens acquired J2 Innovations for an undisclosed amount.
In May 2018, Siemens acquired Enlighted, Inc. for an undisclosed amount.
In September 2019, Siemens and Orascom Construction signed an agreement with the Iraqi government to rebuild two power plants, which is believed to set up the company for future deals in the country.
In 2019–2020, Siemens was identified as a key engineering company supporting the controversial Adani Carmichael coal mine in Queensland (Australia).
In January 2020, Siemens signed an agreement to acquire 99% equity share capital of Indian switchgear manufacturer C&S Electric at €267 million (₹2,100 crore). The takeover was approved by the Competition Commission of India in August 2020.
In April 2020, Siemens acquired a 77% majority stake in Indian building solution provider iMetrex Technologies for an undisclosed sum.
In April 2020, Siemens Energy was created as an independent company out of the energy division of Siemens.
In August 2020, Siemens Healthineers AG announced that it plans to acquire U.S. cancer device and software company Varian Medical Systems in an all-stock deal valued at $16.4 billion.
In February 2021, Roland Busch replaced Joe Kaeser as CEO.
In October 2021, Siemens acquired the building IoT software and hardware company Wattsense for an undisclosed sum.
In May 2022, Siemens made the decision to cease its operations in Russia after 170 years and disassociate itself from any involvement with the Russian government due to the ongoing war of aggression against Ukraine. This decision affected the approximately 3,000 employees working for the company in the country. The announcement came with a financial statement in which Siemens disclosed a second-quarter loss of approximately US$625 million as a direct consequence of the imposed sanctions on Russia.
In July 2022, Siemens acquired ZONA Technology, an aerospace simulation firm.
In October 2022, Siemens announced a strategic partnership with Swedish electric commercial vehicle manufacturer Volta Trucks to deliver and scale eMobility charging infrastructure to simplify the transition to fleet electrification.
In October 2022, Siemens became a target of the Boycott, Divestment and Sanctions movement due to its award of a contract for the EuroAsia Interconnector, which is planned to connect the electricity grids of Greece and Cyprus with both Israel and its illegal settlements in the West Bank.
In June 2023, Siemens announced a global investment plan of €2 billion to expand its manufacturing capacity, including specific commitments of €200 million for a new high-tech plant in Singapore and €140 million to enlarge a facility in Chengdu, China. The strategy aims to foster diversification across Asia, enhance growth in the Chinese market, and decrease dependency on a single country by utilizing Singapore as a primary export hub to Southeast Asia. Simultaneously, Siemens will allocate €1 billion for the development of new facilities and factories in Germany, including €500 million for the expansion and modernization of a factory in Erlangen, expected to enhance production capacity by 60% by 2029. This coincides with the German government's concerns about the economic and security risks associated with investing in China. Additional German investments will finance a new semiconductor factory in Forchheim and a training center for Siemens Healthineers in Erlangen.
In August 2023, it was announced Siemens had signed an agreement to acquire the Veldhoven-headquartered eBus, eTruck and passenger vehicle fast charging technology company, Heliox.
In March 2024, Siemens announced the creation of a new £100m digital engineering facility in Wiltshire, UK, aimed at replacing its existing rail infrastructure factory in Chippenham with a new research and development centre, expected to open by 2026. The move was endorsed by Chancellor Jeremy Hunt as "a big boost" for UK manufacturing.
In March 2024, it was announced Siemens had agreed to acquire ebm-papst's industrial drive technology (IDT) division for an undisclosed amount.
Operations
As of 2023, the principal divisions of Siemens are Digital Industries, Smart Infrastructure, Siemens Mobility, Siemens Healthineers and Siemens Financial Services, with Siemens Healthineers and Siemens Mobility operating as independent entities. Siemens also operates a number of "Portfolio Companies" with market-specific offerings. In 2020, the energy business was spun off into the separate Siemens Energy AG, with Siemens retaining a stake of 17.1% as of December 2023. Other business units of the company include Siemens Technology (T) for research and development, Siemens Real Estate (SRE) for corporate real estate management, Siemens Advanta for consulting services (including the management consulting division Siemens Advanta Consulting), next47 as a venture capital fund, and Siemens Global Business Services (GBS) as a shared services unit.
Digital Industries
The Digital Industries division focuses on the automation needs of discrete and process industries. This includes factory automation infrastructure, numerical control systems, engines, drives, inverters, integrated automation systems for machine tools and production machines, and machine to machine communication products. The division also develops industrial control systems, various types of sensors, and radio-frequency identification systems.
In industrial automation and industrial software, Siemens is the global market leader.
In addition to hardware, Digital Industries supplies software for product lifecycle management (PLM), simulation and testing of mechatronic systems, and the MindSphere cloud-based IoT operating system that connects physical infrastructure to the digital world. The software portfolio is supplemented by the Mendix platform for low-code application development and digital marketplaces like Supplyframe and Pixeom. Key customer markets span automotive, machine building, pharmaceuticals, chemicals, food and beverage, electronics, and semiconductors.
In 2023, CEO Roland Busch announced the aim to raise software businesses sales share to 20% in the long term. In June 2023, Siemens launched a new open digital platform called "Siemens Xcelerator", which houses a curated portfolio of IoT-enabled hardware, software, and digital services from both Siemens and third parties. Siemens also announced a partnership with Nvidia, aiming to leverage its Omniverse platform with its 3D design capabilities. Xcelerator is part of a broader industry trend towards digital environments ("metaverses"), and is delivered through a software as a service (SaaS) subscription model, targeting accessibility for a range of businesses including small and medium-sized enterprises.
Smart Infrastructure
Siemens Smart Infrastructure offerings are categorized into buildings, electrification, and electrical products. Its buildings portfolio includes building automation systems, heating, ventilation, and air conditioning (HVAC) controls, and fire safety and security systems, and energy performance services. The electrification portfolio is dedicated to grid resilience and efficiency, encompassing grid simulation, operation control software, power-system automation and protection, and medium to low voltage switchgear. Moreover, it includes charging infrastructure for electric vehicles. In the realm of electrical products, the division offers low-voltage switching, measuring and control equipment, distribution systems, and medium voltage switchgear.
In the renewable energy industry, the company provides a portfolio of products and services to help build and operate microgrids of any size. It provides generation and distribution of electrical energy as well as monitoring and controlling of microgrids. By using primarily renewable energy, microgrids reduce carbon-dioxide emissions, which is often required by government regulations. It supplied a sustainable storage product and microgrids to Enel Produzione SPA for the island of Ventotene in Italy.
Siemens Mobility
Siemens Mobility is a division involved in passenger and freight transportation. This includes providing rolling stock, which covers a range of vehicles for urban, regional, and long-distance travel. The division also offers rail infrastructure products and services such as rail automation, digital station solutions, railway communication systems, and yard and depot solutions.
In 2019, the European Commission blocked a merger between Alstom and Siemens Mobility, citing anti-trust regulations. The plan would have seen the creation of a "European champion" to compete with China's CRRC.
Siemens Healthineers
Siemens Healthineers AG is a publicly listed company that was spun off from Siemens in 2017. As of 2022, Siemens retains a 75% majority stake in Siemens Healthineers.
As a global provider of healthcare solutions and services, its range of offerings includes the manufacture and sale of diagnostic and therapeutic products, clinical consulting, and a variety of training services. Its operations are divided into four main sectors: imaging, diagnostics, Varian Medical Systems, and advanced therapies. Imaging includes magnetic resonance, computed tomography, X-ray, molecular imaging, and ultrasound devices. The diagnostics segment offers in-vitro diagnostic products for laboratory and point-of-care settings. Varian, an American company acquired by Siemens Healthineers in 2021, covers technologies related to cancer care, and advanced therapies focus on image-guided minimally invasive procedures.
Siemens Financial Services
Siemens Financial Services (SFS) is a division that delivers a range of financing solutions. These services target both Siemens's customers and external companies, including debt and equity investments. It provides leasing, lending, working capital, structured financing, and equipment and project financing solutions. SFS is also involved in providing financial advisory services and risk management expertise to Siemens's industrial businesses, helping assess risk profiles of projects and business models.
Former operations
Siemens is known for actively refining its core business through strategic divestitures, pursuing a strategy referred to as "Corporate Clarity" that focuses on selling non-core aspects of the business. Major business divisions that were once part of Siemens before being spun off include:
Deutsche Grammophon/Polydor Records (1987)
Infineon Technologies (1999)
Siemens Mobile (2005)
Gigaset Communications (2008)
Osram (2013)
Siemens Energy (2020)
Joint ventures
Siemens's current joint ventures include:
Siemens Traction Equipment Ltd. (STEZ), Zhuzhou China, is a joint venture between Siemens, Zhuzhou CSR Times Electric Co., Ltd. (TEC) and CSR Zhuzhou Electric Locomotive Co., Ltd. (ZELC), which produces AC drive electric locomotives and AC locomotive traction components.
OMNETRIC Group, A Siemens & Accenture company formed in 2014.
Former joint ventures in which Siemens no longer holds any equity include:
Fujitsu Siemens Computers (sold to Fujitsu in 2009)
Nokia Siemens Networks (sold to Nokia in 2013)
BSH Hausgeräte (sold to Bosch in 2014)
Primetals Technologies (sold to Mitsubishi Heavy Industries in 2019).
Silcar was a joint venture between Siemens Ltd and Thiess Services Pty Ltd until 2013. Silcar was a 3,000-person Australian organisation providing productivity and reliability services for large-scale and technically complex plant assets. Its services included asset management, design, construction, operations and maintenance. Silcar operated across a range of industries and essential services including power generation, electrical distribution, manufacturing, mining and telecommunications. In July 2013, Thiess took full control.
Corporate affairs
Siemens is incorporated in Germany and has its corporate headquarters at the Wittelsbacherplatz in central Munich.
Business trends
For the fiscal year 2023, Siemens reported a revenue of €77.7 billion, an increase of 8% over the previous fiscal cycle. In December 2023, Siemens's shares traded at over US$93 per share, and its market capitalization was valued at US$147 billion. According to an Ernst & Young study published in December 2023, Siemens and SAP were the only German companies of the top 100 most valuable companies by market capitalization worldwide.
The key trends for Siemens (as at the financial year ending September 30) include the spin-off of Siemens Energy, which became an independent company in 2020.
Locations
As of 2011, Siemens has operations in around 190 countries and approximately 285 production and manufacturing facilities.
Research and development
In 2023, Siemens invested a total of €6.1 billion in research and development. As of 30 September 2022, Siemens had approximately 46,900 employees engaged in research and development and held approximately 43,600 patents worldwide.
Leadership
Chairmen of the Siemens-Schuckertwerke Managing Board (1903 to 1966)
Alfred Berliner (1903 to 1912)
Carl Friedrich von Siemens (1912 to 1919)
(1919 to 1920)
(1920 to 1939)
(1939 to 1945)
(1945 to 1949)
(1949 to 1951)
Friedrich Bauer (1951 to 1962)
Bernhard Plettner (1962 to 1966)
Chairmen of the Siemens & Halske / Siemens-Schuckertwerke Supervisory Board (1918 to 1966)
Wilhelm von Siemens (1918 to 1919)
Carl Friedrich von Siemens (1919 to 1941)
Hermann von Siemens (1941 to 1946)
Friedrich Carl Siemens (1946 to 1948)
Hermann von Siemens (1948 to 1956)
Ernst von Siemens (1956 to 1966)
Chairmen of Siemens AG's managing board (1966 to present)
, , Bernhard Plettner (presidency of the managing board) (1966 to 1967)
Erwin Hachmann, Bernhard Plettner, Gerd Tacke (presidency of the managing board) (1967 to 1968)
Gerd Tacke (1968 to 1971)
Bernhard Plettner (1971 to 1981)
Karlheinz Kaske (1981 to 1992)
Heinrich von Pierer (1992 to 2005)
Klaus Kleinfeld (2005 to 2007)
Peter Löscher (2007 to 2013)
Joe Kaeser (2013 to 2021)
Roland Busch (2021 to present)
Chairmen of the Siemens AG Supervisory Board (1966 to present)
Ernst von Siemens (1966 to 1971)
Peter von Siemens (1971 to 1981)
Bernhard Plettner (1981 to 1988)
Heribald Närger (1988 to 1993)
Hermann Franz (1993 to 1998)
Karl-Hermann Baumann (1998 to 2005)
Heinrich von Pierer (2005 to 2007)
(2007 to 2018)
Jim Hagemann Snabe (2018 to present)
Managing Board (present day)
Roland Busch (CEO Siemens AG)
Klaus Helmrich
Cedrik Neike (CEO Digital Industries)
Matthias Rebellius (CEO Smart Infrastructure)
Ralf P. Thomas (CFO)
Judith Wiese
Shareholders
The company has issued 881,000,000 shares of common stock. The largest single shareholder remains the founding Siemens family, with a stake of 6.9%, while 62% is held by institutional asset managers, the largest being two divisions of the world's largest asset manager, BlackRock. Moreover, 83.97% of the shares are considered public float, although this includes strategic investors such as the State of Qatar (DIC Company Ltd.) with 3.04%, the Government Pension Fund of Norway with 2.5%, and Siemens AG itself with 3.04%; 19% of the shares are held by private investors and 13% by investors that cannot be identified. In terms of nationality, 26% are owned by German investors and 21% by US investors, followed by the UK (11%), France (8%), Switzerland (8%) and others (26%).
References
Further reading
Bundesarchiv Berlin, NS 19, No. 968, Communication on the creation of the barracks for the Siemens & Halske, the planned production and the planned expansion for 2,500 prisoners "after direct discussions with this company": Economic and Administrative Main Office of the SS (WVHA), Oswald Pohl, secretly, to Reichsführer SS (RFSS), Heinrich Himmler, dated 20 October 1942.
Margarete Buber (1993). 303f: As prisoners of Stalin and Hitler, Frankfurt am Main; Berlin.
Wilfried Feldenkirchen: 1918–1945 Siemens, Munich 1995, Ulrike fire, Claus Füllberg-Stolberg, Sylvia Kempe: work at Ravensbrück concentration camp, in: Women in concentration camps. Bergen-Belsen. Ravensbrück, Bremen, 1994, pp. 55–69
Feldenkirchen, Wilfried (2000). Siemens: From Workshop to Global Player, Munich.
Feldenkirchen, Wilfried, and Eberhard Posner (2005). The Siemens Entrepreneurs: Continuity and Change, 1847–2005. Ten Portraits, Munich.
Greider, William (1997). One World, Ready or Not. Penguin Press. .
Sigrid Jacobeit: working at Siemens in Ravensbrück, in: Dietrich Eichholz (eds) War and economy. Studies on German economic history 1939–1945, Berlin 1999.
Ursula Krause-Schmitt: The path to the Siemens stock led past the crematorium, in: Information. German Resistance Study Group, Frankfurt / Main, 18 Jg, No. 37/38, Nov. 1993, pp. 38–46
MSS in the estate include Wanda Kiedrzy'nska, in: National Library of Poland, Warsaw, Manuscript Division, Sygn. akc 12013/1 and archive the memorial I/6-7-139 RA. * Woman Ravensbruck concentration camp. An overall presentation, State Justice Administration in Ludwigsburg, IV ART 409-Z 39/59, April 1972, pp. 129ff.
Karl-Heinz Roth: "Forced labor in the Siemens Group (1938-1945): Facts, controversies, problems". In: Hermann Kaienburg (ed.): concentration camps and the German Economy 1939–1945 (Social studies, H. 34), Opladen 1996, pp. 149–168
Karl-Heinz Roth: forced labor in the Siemens Group, with a summary table, page 157 See also Ursula Krause-Schmitt: "The road to Siemens stock led to the crematorium past over," pp. 36f, where, according to the catalogs of the International Tracing Service Arolsen and Martin Weinmann (eds.). The Nazi camp system, Frankfurt / Main 1990 and Feldkirchen: Siemens 1918–1945, pp. 198–214, and in particular the associated annotations 91–187.
Carola Sachse: "Jewish forced labor and non-Jewish women and men at Siemens from 1940 to 1945", in: International Scientific Correspondence, No. 1/1991, pp. 12–24
Shaping the Future: The Siemens Entrepreneurs 1847–2018. Ed. Siemens Historical Institute, Hamburg 2018, .
Weiher, Siegfried von /Herbert Goetzeler (1984). The Siemens Company, Its Historical Role in the Progress of Electrical Engineering 1847–1980, 2nd ed. Berlin and Munich.
External links
Siemens Historical Institute
1847 establishments in Prussia
Auschwitz concentration camp
Companies in the Euro Stoxx 50
Companies in the Dow Jones Global Titans 50
Companies involved in the Holocaust
Companies listed on the Frankfurt Stock Exchange
Companies in the DAX index
Conglomerate companies established in 1847
Conglomerate companies of Germany
Consumer electronics brands
Electrical engineering companies of Germany
Electrical wiring and construction supplies manufacturers
Electric transformer manufacturers
Electronics companies of Germany
German brands
Guitar amplification tubes
Instrument-making corporations
Locomotive manufacturers of Germany
Home appliance manufacturers of Germany
Manufacturers of industrial automation
Manufacturing companies established in 1847
Mobile phone manufacturers
Networking companies
Nuclear technology companies of Germany
Price fixing convictions
Rolling stock manufacturers of Germany
Telecommunications equipment vendors
Werner von Siemens
Wind turbine manufacturers
Diesel engine manufacturers
Marine engine manufacturers
Electrical generation engine manufacturers
Gas engine manufacturers
Pump manufacturers
Electric motor manufacturers
Gas turbine manufacturers
Steam turbine manufacturers
Industrial machine manufacturers
Radio manufacturers
Companies formerly listed on the New York Stock Exchange | Siemens | [
"Engineering"
] | 9,187 | [
"Industrial machine manufacturers",
"Radio electronics",
"Radio manufacturers",
"Industrial machinery"
] |
168,651 | https://en.wikipedia.org/wiki/High-performance%20liquid%20chromatography | High-performance liquid chromatography (HPLC), formerly referred to as high-pressure liquid chromatography, is a technique in analytical chemistry used to separate, identify, and quantify specific components in mixtures. The mixtures can originate from food, chemicals, pharmaceuticals, biological, environmental and agriculture, etc., which have been dissolved into liquid solutions.
It relies on high-pressure pumps that deliver a mixture of solvents, called the mobile phase, which flows through the system, collects the sample mixture on the way, and delivers it into a cylinder called the column, which is packed with solid adsorbent particles called the stationary phase.
Each component in the sample interacts differently with the adsorbent material, causing different migration rates for each component. These different rates lead to separation as the species flow out of the column into a specific detector, such as a UV detector. The output of the detector is a graph, called a chromatogram. Chromatograms are graphical representations of the signal intensity versus time or volume, showing peaks, which represent components of the sample. Each component appears at its respective time, called its retention time, with a peak area proportional to its amount.
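To make the relationship between peaks, retention times, and amounts concrete, the following is a minimal, illustrative sketch (not taken from any particular instrument's software) of how a chromatogram signal could be processed: the position of each peak maximum gives a retention time, and numerical integration of each peak gives an area proportional to the injected amount. The simulated signal, the numpy/scipy calls, and all numbers are assumptions chosen for illustration only.

```python
# Minimal sketch: extracting retention times and peak areas from a chromatogram.
# The signal is simulated; a real detector trace would come from the instrument's
# data system. Names and numbers are illustrative, not from any specific method.
import numpy as np
from scipy.signal import find_peaks

# Simulated detector signal: two Gaussian peaks on a flat baseline.
time = np.linspace(0, 10, 2000)            # minutes
signal = (1.0 * np.exp(-((time - 3.2) / 0.08) ** 2) +
          0.4 * np.exp(-((time - 5.7) / 0.10) ** 2))

# Retention time = position of each peak maximum above a small threshold.
peak_idx, _ = find_peaks(signal, height=0.05)
retention_times = time[peak_idx]

# Peak area (proportional to amount) by numerical integration around each peak.
def peak_area(t, y, center, half_width=0.5):
    mask = (t > center - half_width) & (t < center + half_width)
    return np.trapz(y[mask], t[mask])

for rt in retention_times:
    print(f"retention time = {rt:.2f} min, area = {peak_area(time, signal, rt):.3f}")
```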
HPLC is widely used for manufacturing (e.g., during the production process of pharmaceutical and biological products), legal (e.g., detecting performance enhancement drugs in urine), research (e.g., separating the components of a complex biological sample, or of similar synthetic chemicals from each other), and medical (e.g., detecting vitamin D levels in blood serum) purposes.
Chromatography can be described as a mass transfer process involving adsorption and/or partition. As mentioned, HPLC relies on pumps to pass a pressurized liquid and a sample mixture through a column filled with adsorbent, leading to the separation of the sample components. The active component of the column, the adsorbent, is typically a granular material made of solid particles (e.g., silica, polymers, etc.), 1.5–50 μm in size, on which various reagents can be bonded. The components of the sample mixture are separated from each other due to their different degrees of interaction with the adsorbent particles. The pressurized liquid is typically a mixture of solvents (e.g., water, buffers, acetonitrile and/or methanol) and is referred to as a "mobile phase". Its composition and temperature play a major role in the separation process by influencing the interactions taking place between sample components and adsorbent. These interactions are physical in nature, such as hydrophobic (dispersive), dipole–dipole and ionic, most often a combination.
Operation
The liquid chromatograph is a complex instrument built on sophisticated and delicate technology. To operate the system properly, the user should have at least a basic understanding of how the device acquires and processes data, in order to avoid incorrect data and distorted results.
HPLC is distinguished from traditional ("low pressure") liquid chromatography because operational pressures are significantly higher (around 50–1400 bar), while ordinary liquid chromatography typically relies on the force of gravity to pass the mobile phase through the packed column. Due to the small sample amount separated in analytical HPLC, typical column dimensions are 2.1–4.6 mm diameter, and 30–250 mm length. Also HPLC columns are made with smaller adsorbent particles (1.5–50 μm in average particle size). This gives HPLC superior resolving power (the ability to distinguish between compounds) when separating mixtures, which makes it a popular chromatographic technique.
The schematic of an HPLC instrument typically includes solvents' reservoirs, one or more pumps, a solvent-degasser, a sampler, a column, and a detector. The solvents are prepared in advance according to the needs of the separation, they pass through the degasser to remove dissolved gasses, mixed to become the mobile phase, then flow through the sampler, which brings the sample mixture into the mobile phase stream, which then carries it into the column. The pumps deliver the desired flow and composition of the mobile phase through the stationary phase inside the column, then directly into a flow-cell inside the detector. The detector generates a signal proportional to the amount of sample component emerging from the column, hence allowing for quantitative analysis of the sample components. The detector also marks the time of emergence, the retention time, which serves for initial identification of the component. More advanced detectors, provide also additional information, specific to the analyte's characteristics, such as UV-VIS spectrum or mass spectrum, which can provide insight on its structural features. These detectors are in common use, such as UV/Vis, photodiode array (PDA) / diode array detector and mass spectrometry detector.
A digital microprocessor and user software control the HPLC instrument and provide data analysis. Some models of mechanical pumps in an HPLC instrument can mix multiple solvents together at ratios changing in time, generating a composition gradient in the mobile phase. Most HPLC instruments also have a column oven that allows for adjusting the temperature at which the separation is performed.
The sample mixture to be separated and analyzed is introduced, in a discrete small volume (typically microliters), into the stream of mobile phase percolating through the column. The components of the sample move through the column, each at a different velocity, which are a function of specific physical interactions with the adsorbent, the stationary phase. The velocity of each component depends on its chemical nature, on the nature of the stationary phase (inside the column) and on the composition of the mobile phase. The time at which a specific analyte elutes (emerges from the column) is called its retention time. The retention time, measured under particular conditions, is an identifying characteristic of a given analyte.
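Because absolute retention times depend on flow rate and column dimensions, chromatographers commonly also report the dimensionless retention factor k = (tR - t0)/t0, where t0 (the "dead time") is the elution time of an unretained compound. This is a standard chromatographic definition rather than something stated in the text above; the short sketch below, with invented numbers, simply shows the calculation.

```python
# Minimal sketch: the retention factor k, a standard way to express retention
# independently of column dimensions and flow rate. t0 is the time an unretained
# compound takes to pass through the column; tR is the analyte's retention time.
# The numbers in the example call are made up for illustration.
def retention_factor(t_r: float, t_0: float) -> float:
    """k = (tR - t0) / t0 -- higher k means stronger retention on the column."""
    if t_0 <= 0:
        raise ValueError("dead time t0 must be positive")
    return (t_r - t_0) / t_0

print(retention_factor(t_r=6.4, t_0=1.2))   # ~4.3, a typical well-retained peak
```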
Many different types of columns are available, filled with adsorbents varying in particle size, porosity, and surface chemistry. The use of smaller particle size packing materials requires the use of higher operational pressure ("backpressure") and typically improves chromatographic resolution (the degree of peak separation between consecutive analytes emerging from the column). Sorbent particles may be ionic, hydrophobic or polar in nature.
The most common mode of liquid chromatography is reversed phase, whereby the mobile phases used include any miscible combination of water or buffers with various organic solvents (the most common are acetonitrile and methanol). Some HPLC techniques use water-free mobile phases (see normal-phase chromatography below). The aqueous component of the mobile phase may contain acids (such as formic, phosphoric or trifluoroacetic acid) or salts to assist in the separation of the sample components. The composition of the mobile phase may be kept constant ("isocratic elution mode") or varied ("gradient elution mode") during the chromatographic analysis. Isocratic elution is typically effective in the separation of simple mixtures. Gradient elution is required for complex mixtures, with varying interactions with the stationary and mobile phases. This is why, in gradient elution, the composition of the mobile phase is typically varied from low to high eluting strength. The eluting strength of the mobile phase is reflected by analyte retention times, as a high eluting strength speeds up the elution (resulting in shorter retention times). For example, a typical gradient profile in reversed phase chromatography might start at 5% acetonitrile (in water or aqueous buffer) and progress linearly to 95% acetonitrile over 5–25 minutes. Periods of constant mobile phase composition (plateau) may also be part of a gradient profile. For example, the mobile phase composition may be kept constant at 5% acetonitrile for 1–3 min, followed by a linear change up to 95% acetonitrile.
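As a rough illustration of the kind of gradient program just described (an initial hold at low organic content followed by a linear ramp), the sketch below computes the acetonitrile percentage as a function of time. The hold and ramp durations are invented example values, not parameters prescribed by any particular method.

```python
# Minimal sketch of a simple gradient program: hold at 5% acetonitrile for 2 min,
# then ramp linearly to 95% over the next 18 min. Numbers are illustrative; real
# programs are entered in the instrument's control software.
def percent_acetonitrile(t: float, hold: float = 2.0, ramp: float = 18.0,
                         start: float = 5.0, end: float = 95.0) -> float:
    """Mobile-phase organic fraction (%) at time t (minutes)."""
    if t <= hold:
        return start
    if t >= hold + ramp:
        return end
    return start + (end - start) * (t - hold) / ramp

for t in (0, 2, 11, 20, 25):
    print(f"t = {t:5.1f} min -> {percent_acetonitrile(t):5.1f}% acetonitrile")
```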
The chosen composition of the mobile phase depends on the intensity of interactions between various sample components ("analytes") and stationary phase (e.g., hydrophobic interactions in reversed-phase HPLC). Depending on their affinity for the stationary and mobile phases, analytes partition between the two during the separation process taking place in the column. This partitioning process is similar to that which occurs during a liquid–liquid extraction but is continuous, not step-wise.
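The partitioning picture can also be expressed quantitatively: in standard chromatographic theory, the distribution (partition) coefficient K, the ratio of analyte concentration in the stationary phase to that in the mobile phase, is related to the retention factor k introduced earlier through the phase ratio Vs/Vm. The sketch below applies this textbook relationship with invented numbers; it is an illustration under those assumptions, not a description of any specific HPLC system.

```python
# Minimal sketch relating the partition (distribution) coefficient to retention.
# K = Cs / Cm is the ratio of analyte concentration in the stationary vs. mobile
# phase; multiplying by the phase ratio Vs/Vm gives the retention factor k.
# All numbers below are invented for illustration.
def retention_factor_from_partition(K: float, v_stationary: float, v_mobile: float) -> float:
    """k = K * (Vs / Vm): analytes with larger K spend more time in the stationary phase."""
    return K * (v_stationary / v_mobile)

# Two hypothetical analytes with different partition coefficients on the same column:
for name, K in (("analyte A", 12.0), ("analyte B", 45.0)):
    k = retention_factor_from_partition(K, v_stationary=0.2, v_mobile=1.0)
    print(f"{name}: k = {k:.1f}")
```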
In the example using a water/acetonitrile gradient, the more hydrophobic components will elute (come off the column) later; then, once the mobile phase gets richer in acetonitrile (i.e., becomes a mobile phase of higher eluting strength), their elution speeds up.
The choice of mobile phase components, additives (such as salts or acids) and gradient conditions depends on the nature of the column and sample components. Often a series of trial runs is performed with the sample in order to find the HPLC method which gives adequate separation.
History and development
Prior to HPLC, scientists used benchtop column liquid chromatographic techniques. Liquid chromatographic systems were largely inefficient due to the flow rate of solvents being dependent on gravity. Separations took many hours, and sometimes days to complete. Gas chromatography (GC) at the time was more powerful than liquid chromatography (LC), however, it was obvious that gas phase separation and analysis of very polar high molecular weight biopolymers was impossible. GC was ineffective for many life science and health applications for biomolecules, because they are mostly non-volatile and thermally unstable at the high temperatures of GC. As a result, alternative methods were hypothesized which would soon result in the development of HPLC.
Following on the seminal work of Martin and Synge in 1941, it was predicted by Calvin Giddings, Josef Huber, and others in the 1960s that LC could be operated in the high-efficiency mode by reducing the packing-particle diameter substantially below the typical LC (and GC) level of 150 μm and using pressure to increase the mobile phase velocity. These predictions underwent extensive experimentation and refinement from the 1960s through the 1970s and continue to the present day. Early developmental research began to improve LC particles, for example the historic Zipax, a superficially porous particle.
The 1970s brought about many developments in hardware and instrumentation. Researchers began using pumps and injectors to make a rudimentary design of an HPLC system. Gas amplifier pumps were ideal because they operated at constant pressure and did not require leak-free seals or check valves for steady flow and good quantitation. Hardware milestones were made at Dupont IPD (Industrial Polymers Division) such as a low-dwell-volume gradient device being utilized as well as replacing the septum injector with a loop injection valve.
While instrumentation developments were important, the history of HPLC is primarily about the history and evolution of particle technology. After the introduction of porous layer particles, there has been a steady trend to reduced particle size to improve efficiency. However, by decreasing particle size, new problems arose. The practical disadvantages stem from the excessive pressure drop needed to force mobile fluid through the column and the difficulty of preparing a uniform packing of extremely fine materials. Every time particle size is reduced significantly, another round of instrument development usually must occur to handle the pressure.
Types
Partition chromatography
Partition chromatography was one of the first kinds of chromatography that chemists developed, and is rarely used today. The partition coefficient principle has been applied in paper chromatography, thin layer chromatography, gas phase and liquid–liquid separation applications. The 1952 Nobel Prize in Chemistry was awarded to Archer John Porter Martin and Richard Laurence Millington Synge for their development of the technique, which they used for their separation of amino acids. Partition chromatography uses a retained solvent, on the surface or within the grains or fibers of an "inert" solid supporting matrix as with paper chromatography, or takes advantage of some coulombic and/or hydrogen-donor interaction with the stationary phase. Analyte molecules partition between a liquid stationary phase and the eluent. Just as in hydrophilic interaction chromatography (HILIC; a sub-technique within HPLC), this method separates analytes based on differences in their polarity. HILIC most often uses a bonded polar stationary phase and a mobile phase made primarily of acetonitrile, with water as the strong component. Partition HPLC has been used historically on unbonded silica or alumina supports. Each works effectively for separating analytes by relative polar differences. HILIC bonded phases have the advantage of separating acidic, basic and neutral solutes in a single chromatographic run.
The polar analytes diffuse into a stationary water layer associated with the polar stationary phase and are thus retained. The stronger the interactions between the polar analyte and the polar stationary phase (relative to the mobile phase), the longer the elution time. The interaction strength depends on the functional groups in the analyte's molecular structure, with more polarized groups (e.g., hydroxyl-) and groups capable of hydrogen bonding inducing more retention. Coulombic (electrostatic) interactions can also increase retention. Use of more polar solvents in the mobile phase will decrease the retention time of the analytes, whereas more hydrophobic solvents tend to increase retention times.
Normal–phase chromatography
Normal–phase chromatography was one of the first kinds of HPLC that chemists developed, but has decreased in use over the last decades. Also known as normal-phase HPLC (NP-HPLC), this method separates analytes based on their affinity for a polar stationary surface such as silica; hence it is based on analyte ability to engage in polar interactions (such as hydrogen-bonding or dipole-dipole type of interactions) with the sorbent surface. NP-HPLC uses a non-polar, non-aqueous mobile phase (e.g., chloroform), and works effectively for separating analytes readily soluble in non-polar solvents. The analyte associates with and is retained by the polar stationary phase. Adsorption strengths increase with increased analyte polarity. The interaction strength depends not only on the functional groups present in the structure of the analyte molecule, but also on steric factors. The effect of steric hindrance on interaction strength allows this method to resolve (separate) structural isomers.
The use of more polar solvents in the mobile phase will decrease the retention time of analytes, whereas more hydrophobic solvents tend to induce slower elution (increased retention times). Very polar solvents such as traces of water in the mobile phase tend to adsorb to the solid surface of the stationary phase forming a stationary bound (water) layer which is considered to play an active role in retention. This behavior is somewhat peculiar to normal phase chromatography because it is governed almost exclusively by an adsorptive mechanism (i.e., analytes interact with a solid surface rather than with the solvated layer of a ligand attached to the sorbent surface; see also reversed-phase HPLC below). Adsorption chromatography is still somewhat used for structural isomer separations in both column and thin-layer chromatography formats on activated (dried) silica or alumina supports.
Partition- and NP-HPLC fell out of favor in the 1970s with the development of reversed-phase HPLC because of poor reproducibility of retention times due to the presence of a water or protic organic solvent layer on the surface of the silica or alumina chromatographic media. This layer changes with any changes in the composition of the mobile phase (e.g., moisture level) causing drifting retention times.
Recently, partition chromatography has become popular again with the development of HILIC bonded phases, which demonstrate improved reproducibility, and due to a better understanding of the range of usefulness of the technique.
Displacement chromatography
The use of displacement chromatography is rather limited, and is mostly used for preparative chromatography. The basic principle is based on a molecule with a high affinity for the chromatography matrix (the displacer) which is used to compete effectively for binding sites, and thus displace all molecules with lesser affinities.
There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired in order to achieve maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentration.
Reversed-phase liquid chromatography (RP-LC)
Reversed phase HPLC (RP-HPLC) is the most widespread mode of chromatography. It has a non-polar stationary phase and an aqueous, moderately polar mobile phase. In reversed phase methods, substances are retained in the system the more hydrophobic they are. For the retention of organic materials, the stationary phases packed inside the columns consist mainly of porous granules of silica gel in various shapes, mainly spherical, with different diameters (1.5, 2, 3, 5, 7, 10 μm) and varying pore diameters (60, 100, 150, 300 Å), on whose surface are chemically bound various hydrocarbon ligands such as C3, C4, C8 or C18. There are also polymeric hydrophobic particles that serve as stationary phases when solutions at extreme pH are needed, or hybrid silica particles polymerized with organic substances. The longer the hydrocarbon ligand on the stationary phase, the longer the sample components can be retained. Most of the current methods for separation of biomedical materials use C18-type columns, sometimes called by trade names such as ODS (octadecylsilane) or RP-18 (Reversed Phase 18).
The most common RP stationary phases are based on a silica support, which is surface-modified by bonding RMe2SiCl, where R is a straight chain alkyl group such as C18H37 or C8H17.
With such stationary phases, retention time is longer for lipophilic molecules, whereas polar molecules elute more readily (emerge early in the analysis). A chromatographer can increase retention times by adding more water to the mobile phase, thereby making the interactions of the hydrophobic analyte with the hydrophobic stationary phase relatively stronger. Similarly, an investigator can decrease retention time by adding more organic solvent to the mobile phase. RP-HPLC is so commonly used among biologists and life science users that it is often incorrectly referred to as just "HPLC" without further specification. The pharmaceutical industry also regularly employs RP-HPLC to qualify drugs before their release.
RP-HPLC operates on the principle of hydrophobic interactions, which originate from the highly ordered structure of dipolar water and play an important role in many life science processes. RP-HPLC allows the measurement of these interactive forces. The binding of the analyte to the stationary phase is proportional to the contact surface area around the non-polar segment of the analyte molecule upon association with the ligand on the stationary phase. This solvophobic effect is dominated by the tendency of water to reduce the cavity around the analyte and the C18 chain versus the complex of both. The energy released in this process is proportional to the surface tension of the eluent (water: about 72 mJ/m², methanol: about 22 mJ/m²) and to the hydrophobic surface areas of the analyte and the ligand, respectively. Retention can be decreased by adding a less polar solvent (methanol, acetonitrile) to the mobile phase to reduce the surface tension of water. Gradient elution uses this effect by automatically reducing the polarity and the surface tension of the aqueous mobile phase during the course of the analysis.
Structural properties of the analyte molecule can play an important role in its retention characteristics. In theory, an analyte with a larger hydrophobic surface area (C–H, C–C, and generally non-polar atomic bonds, such as S–S and others) can be retained longer, as it does not interact with the water structure. On the other hand, analytes with higher polar surface area (as a result of the presence of polar groups, such as -OH, -NH2, COO− or -NH3+ in their structure) are less retained, as they are better integrated into water. The interactions with the stationary phase can also be affected by steric or exclusion effects, whereby a very large molecule may have only restricted access to the pores of the stationary phase, where the interactions with the surface ligands (alkyl chains) take place. Such surface hindrance typically results in less retention.
Retention time increases with more hydrophobic (non-polar) surface area of the molecules. For example, branched chain compounds can elute more rapidly than their corresponding linear isomers because their overall surface area is lower. Similarly organic compounds with single C–C bonds frequently elute later than those with a C=C or even triple bond, as the double or triple bond makes the molecule more compact than a single C–C bond.
Another important factor is the mobile phase pH, since it can change the hydrophobic character of an ionizable analyte. For this reason most methods use a buffering agent, such as sodium phosphate, to control the pH. Buffers serve multiple purposes: they control the pH, which affects the ionization state of ionizable analytes; they affect the charge on the ionizable silica surface of the stationary phase between the bonded-phase ligands; and in some cases they even act as ion-pairing agents to neutralize analyte charge. Ammonium formate is commonly added in mass spectrometry to improve detection of certain analytes by the formation of analyte–ammonium adducts. A volatile organic acid such as acetic acid, or most commonly formic acid, is often added to the mobile phase if mass spectrometry is used to analyze the column effluents.
Trifluoroacetic acid (TFA) as an additive to the mobile phase is widely used for complex mixtures of biomedical samples, mostly peptides and proteins, analyzed mostly with UV-based detectors. It is rarely used in mass spectrometry methods, due to the residues it can leave in the detector and solvent delivery system, which interfere with the analysis and detection. However, TFA can be highly effective in improving the retention of analytes such as carboxylic acids in applications utilizing other detectors, such as UV-VIS, as it is a fairly strong organic acid. The effects of acids and buffers vary by application, but generally improve chromatographic resolution when dealing with ionizable components.
Reversed phase columns are quite difficult to damage compared to normal silica columns, thanks to the shielding effect of the bonded hydrophobic ligands; however, most reversed phase columns consist of alkyl-derivatized silica particles and are prone to hydrolysis of the silica at extreme pH conditions in the mobile phase. Most types of RP columns should not be used with aqueous bases, as these will hydrolyze and dissolve the underlying silica particles. Selected brands of hybrid or reinforced silica-based RP columns can be used at extreme pH conditions. The use of extremely acidic conditions is also not recommended, as such conditions may hydrolyze the particles and corrode the inner walls of the metallic parts of the HPLC equipment.
As a rule, in most cases RP-HPLC columns should be flushed with clean solvent after use to remove residual acids or buffers, and stored in an appropriate composition of solvent. Some biomedical applications require a non-metallic environment for optimal separation. For such sensitive cases, one test for the metal content of a column is to inject a sample which is a mixture of 2,2′- and 4,4′-bipyridine. Because the 2,2′-bipy can chelate metal, the shape of the peak for the 2,2′-bipy will be distorted (tailed) when metal ions are present on the surface of the silica.
Size-exclusion chromatography
Size-exclusion chromatography (SEC) separates polymer molecules and biomolecules based on differences in their molecular size (actually by a particle's Stokes radius). The separation process is based on the ability of sample molecules to permeate through the pores of gel spheres, packed inside the column, and is dependent on the relative size of the analyte molecules and the respective pore size of the packing. The process also relies on the absence of any interactions with the packing material surface.
Two types of SEC are usually distinguished:
Gel permeation chromatography (GPC)—separation of synthetic polymers (aqueous or organic soluble). GPC is a powerful technique for polymer characterization using primarily organic solvents.
Gel filtration chromatography (GFC)—separation of water-soluble biopolymers. GFC uses primarily aqueous solvents (typically for aqueous soluble biopolymers, such as proteins, etc.).
The separation principle in SEC is based on the full or partial penetration of the sample molecules into the porous stationary-phase particles during their transport through the column. The mobile-phase eluent is selected in such a way that it totally prevents interactions with the stationary phase's surface. Under these conditions, the smaller the molecule, the more it is able to penetrate into the pore space, and its movement through the column takes longer. On the other hand, the larger the molecule, the higher the probability that it will not fully penetrate the pores of the stationary phase, or will even travel around them, and thus it will be eluted earlier. The molecules are separated in order of decreasing molecular weight, with the largest molecules eluting from the column first and smaller molecules eluting later. Molecules larger than the pore size do not enter the pores at all and elute together as the first peak in the chromatogram; this is called the total exclusion volume, which defines the exclusion limit for a particular column. Small molecules permeate fully through the pores of the stationary phase particles and are eluted last, marking the end of the chromatogram, and may appear as a total penetration marker.
In the biomedical sciences SEC is generally considered a low-resolution chromatography technique and thus it is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary and quaternary structure of purified proteins. SEC is used primarily for the analysis of large molecules such as proteins or polymers. SEC also works in a preparative way by trapping the smaller molecules in the pores of the particles. The larger molecules simply pass by the pores as they are too large to enter them. Larger molecules therefore flow through the column more quickly than smaller molecules: that is, the smaller the molecule, the longer the retention time.
This technique is widely used for the molecular weight determination of polysaccharides. SEC is the official technique (suggested by European pharmacopeia) for the molecular weight comparison of different commercially available low-molecular weight heparins.
Ion-exchange chromatography
Ion-exchange chromatography (IEC) or ion chromatography (IC) is an analytical technique for the separation and determination of ionic solutes in aqueous samples of environmental and industrial origin, such as the metal industry, industrial waste water, biological systems, pharmaceutical samples, food, etc. Retention is based on the attraction between solute ions and charged sites bound to the stationary phase. Solute ions with the same charge as the ions on the column are repelled and elute without retention, while solute ions with the opposite charge to the charged sites of the column are retained on it. Solute ions that are retained on the column can be eluted from it by changing the mobile phase composition, for example by increasing its salt concentration or pH, or by increasing the column temperature.
Types of ion exchangers include polystyrene resins, cellulose and dextran ion exchangers (gels), and controlled-pore glass or porous silica gel. Polystyrene resins allow cross-linkage, which increases the stability of the chain. Higher cross-linkage reduces swelling, which increases the equilibration time and ultimately improves selectivity. Cellulose and dextran ion exchangers possess larger pore sizes and low charge densities, making them suitable for protein separation.
In general, ion exchangers favor the binding of ions of higher charge and smaller radius.
An increase in counter ion (with respect to the functional groups in resins) concentration reduces the retention time, as it creates a strong competition with the solute ions. A decrease in pH reduces the retention time in cation exchange while an increase in pH reduces the retention time in anion exchange. By lowering the pH of the solvent in a cation exchange column, for instance, more hydrogen ions are available to compete for positions on the anionic stationary phase, thereby eluting weakly bound cations.
This form of chromatography is widely used in the following applications: water purification, preconcentration of trace components, ligand-exchange chromatography, ion-exchange chromatography of proteins, high-pH anion-exchange chromatography of carbohydrates and oligosaccharides, and others.
Bioaffinity chromatography
High performance affinity chromatography (HPAC) works by passing a sample solution through a column packed with a stationary phase that contains an immobilized biologically active ligand. The ligand is in fact a substrate that has a specific binding affinity for the target molecule in the sample solution. The target molecule binds to the ligand, while the other molecules in the sample solution pass through the column, having little or no retention. The target molecule is then eluted from the column using a suitable elution buffer.
This chromatographic process relies on the capability of the bonded active substances to form stable, specific, and reversible complexes thanks to their biological recognition of certain specific sample components. The formation of these complexes involves the participation of common molecular forces such as the Van der Waals interaction, electrostatic interaction, dipole-dipole interaction, hydrophobic interaction, and the hydrogen bond. An efficient, biospecific bond is formed by a simultaneous and concerted action of several of these forces in the complementary binding sites.
Aqueous normal-phase chromatography
Aqueous normal-phase chromatography (ANP) is also called hydrophilic interaction liquid chromatography (HILIC). This is a chromatographic technique which encompasses the mobile phase region between reversed-phase chromatography (RP) and organic normal phase chromatography (ONP). HILIC is used to achieve unique selectivity for hydrophilic compounds, showing normal-phase elution order while using "reversed-phase solvents", i.e., relatively polar, mostly non-aqueous solvents in the mobile phase. Many biological molecules, especially those found in biological fluids, are small polar compounds that are not well retained by reversed-phase HPLC. This has made hydrophilic interaction LC (HILIC) an attractive alternative and a useful approach for the analysis of polar molecules. Additionally, because HILIC routinely uses aqueous mixtures with polar organic solvents such as acetonitrile and methanol, it can be easily coupled to MS.
Isocratic and gradient elution
A separation in which the mobile phase composition remains constant throughout the procedure is termed isocratic (meaning constant composition). The word was coined by Csaba Horvath who was one of the pioneers of HPLC.
The mobile phase composition does not have to remain constant. A separation in which the mobile phase composition is changed during the separation process is described as a gradient elution. For example, a gradient can start at 10% methanol in water, and end at 90% methanol in water after 20 minutes. The two components of the mobile phase are typically termed "A" and "B"; A is the "weak" solvent which allows the solute to elute only slowly, while B is the "strong" solvent which rapidly elutes the solutes from the column. In reversed-phase chromatography, solvent A is often water or an aqueous buffer, while B is an organic solvent miscible with water, such as acetonitrile, methanol, THF, or isopropanol.
In isocratic elution, peak width increases linearly with retention time according to the equation for N, the number of theoretical plates. This can be a major disadvantage when analyzing a sample that contains analytes with a wide range of retention factors. With a weaker mobile phase, the runtime is lengthened and the slowly eluting peaks become broad, leading to reduced sensitivity. A stronger mobile phase would improve runtime and the broadening of later peaks, but results in diminished peak separation, especially for quickly eluting analytes which may have insufficient time to fully resolve. This issue is addressed through the changing mobile phase composition of gradient elution.
By starting from a weaker mobile phase and strengthening it during the runtime, gradient elution decreases the retention of the later-eluting components so that they elute faster, giving narrower (and taller) peaks for most components, while also allowing for the adequate separation of earlier-eluting components. This also improves the peak shape for tailed peaks, as the increasing concentration of the organic eluent pushes the tailing part of a peak forward. This also increases the peak height (the peak looks "sharper"), which is important in trace analysis. The gradient program may include sudden "step" increases in the percentage of the organic component, or different slopes at different times – all according to the desire for optimum separation in minimum time.
In isocratic elution, the retention order does not change if the column dimensions (length and inner diameter) change – that is, the peaks elute in the same order. In gradient elution, however, the elution order may change as the dimensions or flow rate change, if the method is not scaled up or down accordingly.
The driving force in reversed phase chromatography originates in the high order of the water structure. The role of the organic component of the mobile phase is to reduce this high order and thus reduce the retarding strength of the aqueous component.
Parameters
Theoretical
The theory of high performance liquid chromatography (HPLC) is, at its core, the same as general chromatography theory. This theory has been used as the basis for system-suitability tests, as can be seen in the USP Pharmacopeia: a set of quantitative criteria which test the suitability of the HPLC system for the required analysis at any step of it.
Retention is also represented as a normalized, unitless factor known as the retention factor, or retention parameter, which is the experimental measurement of the capacity ratio, as shown in the Figure of Performance Criteria as well. tR is the retention time of the specific component and t0 is the time it takes for a non-retained substance to elute through the system without any retention; it is therefore called the void time.
The ratio between the retention factors, k', of every two adjacent peaks in the chromatogram is used in the evaluation of the degree of separation between them, and is called selectivity factor, α, as shown in the Performance Criteria graph.
The plate count N as a criterion for system efficiency was developed for isocratic conditions, i.e., a constant mobile phase composition throughout the run. In gradient conditions, where the mobile phase changes with time during the chromatographic run, it is more appropriate to use the parameter peak capacity Pc as a measure of system efficiency. The definition of peak capacity in chromatography is the number of peaks that can be separated within a retention window for a specific pre-defined resolution factor, usually ~1. It can also be envisioned as the runtime measured in units of the average peak width. The equation is shown in the Figure of the performance criteria. In this equation tg is the gradient time and w(ave) is the average peak width at the base.
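As a hedged numerical illustration of these quantities (the figure referenced in the text is not reproduced here; the formulas below are the standard textbook definitions and the sample values are invented), the retention factor, selectivity factor and peak capacity can be computed as follows:

def retention_factor(t_r, t_0):
    # k' = (tR - t0) / t0, with t0 the void (non-retained) time
    return (t_r - t_0) / t_0

def selectivity(k1, k2):
    # alpha = k'2 / k'1 for two adjacent peaks (k2 >= k1)
    return k2 / k1

def peak_capacity(t_gradient, w_average):
    # Pc ~ 1 + tg / w(ave): number of peaks that fit into the gradient window
    return 1 + t_gradient / w_average

t0 = 1.0                        # void time, min (illustrative)
k1 = retention_factor(4.0, t0)  # peak at 4.0 min
k2 = retention_factor(5.0, t0)  # peak at 5.0 min
print("k'1 =", k1, "k'2 =", k2, "alpha =", round(selectivity(k1, k2), 2))
print("Pc =", round(peak_capacity(20.0, 0.25), 1))  # 20 min gradient, 0.25 min peaks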
The parameters are largely derived from two sets of chromatographic theory: plate theory (as part of partition chromatography), and the rate theory of chromatography / Van Deemter equation. Of course, they can be put in practice through analysis of HPLC chromatograms, although rate theory is considered the more accurate theory.
These parameters are analogous to the retention factor calculated for a paper chromatography separation, but describe how well HPLC separates a mixture into two or more components that are detected as peaks (bands) on a chromatogram. The HPLC parameters are: the efficiency factor (N), the retention factor (kappa prime), and the separation factor (alpha). Together the factors are variables in a resolution equation, which describes how well two components' peaks are separated or overlap. These parameters are mostly used only for describing HPLC reversed phase and HPLC normal phase separations, since those separations tend to be more subtle than other HPLC modes (e.g., ion exchange and size exclusion).
Void volume is the amount of space in a column that is occupied by solvent. It is the space within the column that is outside of the column's internal packing material. Void volume is measured on a chromatogram as the first component peak detected, which is usually the solvent that was present in the sample mixture; ideally the sample solvent flows through the column without interacting with the column, but is still detectable as distinct from the HPLC solvent. The void volume is used as a correction factor.
Efficiency factor (N) practically measures how sharp the component peaks on the chromatogram are, as the ratio of the component's retention time to the width of its peak at its widest point (at the baseline). Peaks that are tall, sharp, and relatively narrow indicate that the separation method efficiently removed a component from a mixture: high efficiency. Efficiency is very dependent upon the HPLC column and the HPLC method used. The efficiency factor is synonymous with plate number and the "number of theoretical plates".
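For Gaussian peaks, two widely quoted forms of the plate number are N = 16 (tR / wb)^2 from the baseline width and N = 5.54 (tR / w_half)^2 from the width at half height; the short sketch below simply evaluates both with illustrative numbers:

def plates_from_base_width(t_r, w_base):
    # N = 16 * (tR / w_base)^2, Gaussian-peak approximation
    return 16.0 * (t_r / w_base) ** 2

def plates_from_half_height(t_r, w_half):
    # N = 5.54 * (tR / w_half)^2, Gaussian-peak approximation
    return 5.54 * (t_r / w_half) ** 2

print(round(plates_from_base_width(5.0, 0.2)))     # 10,000 plates
print(round(plates_from_half_height(5.0, 0.118)))  # similar order of magnitude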
Retention factor (kappa prime) measures how long a component of the mixture is retained by the column, determined from the retention time of its peak in a chromatogram (since HPLC chromatograms are a function of time). Each chromatogram peak has its own retention factor (e.g., kappa1 for the retention factor of the first peak). This factor may be corrected for by the void volume of the column.
Separation factor (alpha) is a relative comparison on how well two neighboring components of the mixture were separated (i.e., two neighboring bands on a chromatogram). This factor is defined in terms of a ratio of the retention factors of a pair of neighboring chromatogram peaks, and may also be corrected for by the void volume of the column. The greater the separation factor value is over 1.0, the better the separation, until about 2.0 beyond which an HPLC method is probably not needed for separation.
Resolution equations relate the three factors such that high efficiency and separation factors improve the resolution of component peaks in an HPLC separation.
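One common form of such a resolution equation is the Purnell equation, Rs = (sqrt(N)/4) * ((alpha - 1)/alpha) * (k'2/(1 + k'2)); it is given here as a standard textbook example rather than as the specific equation referenced by the article's figure, and the numbers below are illustrative only:

from math import sqrt

def resolution(n_plates, alpha, k2):
    # Rs = (sqrt(N)/4) * ((alpha - 1)/alpha) * (k2 / (1 + k2))
    return (sqrt(n_plates) / 4.0) * ((alpha - 1.0) / alpha) * (k2 / (1.0 + k2))

# Illustrative values: 10,000 plates, alpha = 1.1, k'2 = 4
print(round(resolution(10000, 1.1, 4.0), 2))  # about 1.8: roughly baseline-resolved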
Internal diameter
The internal diameter (ID) of an HPLC column is an important parameter. Reducing it can improve the detection response because of the reduced lateral diffusion of the solute band. It can also affect the separation selectivity when flow rate and injection volumes are not scaled down or up in proportion to the smaller or larger diameter used, both in isocratic and in gradient modes (a common scaling rule is sketched after the list of column formats below). It determines the quantity of analyte that can be loaded onto the column. Larger-diameter columns are usually seen in preparative applications, such as the purification of a drug product for later use. Low-ID columns have improved sensitivity and lower solvent consumption, as exploited in ultra-high performance liquid chromatography (UHPLC).
Larger ID columns (over 10 mm) are used to purify usable amounts of material because of their large loading capacity.
Analytical scale columns (4.6 mm) have been the most common type of columns, though narrower columns are rapidly gaining in popularity. They are used in traditional quantitative analysis of samples and often use a UV-Vis absorbance detector.
Narrow-bore columns (1–2 mm) are used for applications when more sensitivity is desired either with special UV-vis detectors, fluorescence detection or with other detection methods like liquid chromatography-mass spectrometry
Capillary columns (under 0.3 mm) are used almost exclusively with alternative detection means such as mass spectrometry. They are usually made from fused silica capillaries, rather than the stainless steel tubing that larger columns employ.
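A common rule of thumb, assumed here as typical practice rather than drawn from this article, is to scale flow rate and injection volume with the square of the internal-diameter ratio when transferring a method between column formats, so that the linear velocity stays comparable:

def scale_by_id(value, id_old_mm, id_new_mm):
    # Scale flow rate or injection volume with the cross-sectional area,
    # i.e. with (new ID / old ID)^2, to keep linear velocity comparable.
    return value * (id_new_mm / id_old_mm) ** 2

# Moving a 1.0 mL/min method from a 4.6 mm column to a 2.1 mm column:
print(round(scale_by_id(1.0, 4.6, 2.1), 3), "mL/min")  # about 0.21 mL/min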
Particle size
Most traditional HPLC is performed with the stationary phase attached to the outside of small spherical silica particles (very small beads). These particles come in a variety of sizes with 5 μm beads being the most common. Smaller particles generally provide more surface area and better separations, but the pressure required for optimum linear velocity increases by the inverse of the particle diameter squared.
According to the equations for column velocity, efficiency and backpressure, reducing the particle diameter by half while keeping the size of the column the same will double the column velocity and efficiency, but quadruple the backpressure. Smaller particles also reduce peak broadening. Larger particles are used in preparative HPLC (column diameters 5 cm up to >30 cm) and for non-HPLC applications such as solid-phase extraction.
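Expressed as a formula, the backpressure at fixed column length and linear velocity scales roughly as pressure ∝ 1/dp²; the sketch below (with illustrative particle sizes) shows the fourfold increase when the particle diameter is halved:

def pressure_scale(dp_old_um, dp_new_um):
    # Backpressure scales roughly with 1 / (particle diameter)^2
    # at fixed column length and linear velocity.
    return (dp_old_um / dp_new_um) ** 2

print(pressure_scale(5.0, 2.5))   # 4.0: halving particle size, roughly 4x the pressure
print(round(pressure_scale(5.0, 1.7), 1))   # about 8.7x for sub-2-um UHPLC particles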
Pore size
Many stationary phases are porous to provide greater surface area. Small pores provide greater surface area while larger pore size has better kinetics, especially for larger analytes. For example, a protein which is only slightly smaller than a pore might enter the pore but does not easily leave once inside.
Pump pressure
Pumps vary in pressure capacity, but their performance is measured on their ability to yield a consistent and reproducible volumetric flow rate. Pressure may reach as high as 60 MPa (6000 lbf/in2), or about 600 atmospheres. Modern HPLC systems have been improved to work at much higher pressures, and therefore are able to use much smaller particle sizes in the columns (<2 μm). These "ultra high performance liquid chromatography" systems or UHPLCs, which could also be known as ultra high pressure chromatography systems, can work at up to 120 MPa (17,405 lbf/in2), or about 1200 atmospheres. The term "UPLC" is a trademark of the Waters Corporation, but is sometimes used to refer to the more general technique of UHPLC.
Detectors
HPLC detectors fall into two main categories: universal or selective. Universal detectors typically measure a bulk property (e.g., refractive index) by measuring a difference of a physical property between the mobile phase alone and the mobile phase with solute, while selective detectors measure a solute property (e.g., UV-Vis absorbance) by responding to a physical or chemical property of the solute. HPLC most commonly uses a UV-Vis absorbance detector; however, a wide range of other chromatography detectors can be used. A universal detector that complements UV-Vis absorbance detection is the charged aerosol detector (CAD). Another commonly used detector is the refractive index detector, which provides readings by measuring the change in the refractive index of the eluent as it moves through the flow cell. In certain cases it is possible to use multiple detectors; for example, LC-MS normally combines UV-Vis with a mass spectrometer.
When used with an electrochemical detector (ECD) the HPLC-ECD selectively detects neurotransmitters such as: norepinephrine, dopamine, serotonin, glutamate, GABA, acetylcholine and others in neurochemical analysis research applications. The HPLC-ECD detects neurotransmitters to the femtomolar range. Other methods to detect neurotransmitters include liquid chromatography-mass spectrometry, ELISA, or radioimmunoassays.
Autosamplers
Large numbers of samples can be automatically injected onto an HPLC system by the use of HPLC autosamplers. In addition, HPLC autosamplers apply exactly the same injection volume and technique for each injection, and consequently they provide a high degree of injection-volume precision.
It is possible to enable sample stirring within the sampling-chamber, thus promoting homogeneity.
Applications
Manufacturing
HPLC has many applications in both laboratory and clinical science. It is a common technique used in pharmaceutical development, as it is a dependable way to obtain and ensure product purity. While HPLC can produce extremely high quality (pure) products, it is not always the primary method used in the production of bulk drug materials. According to the European pharmacopoeia, HPLC is used in only 15.5% of syntheses. However, it plays a role in 44% of syntheses in the United States pharmacopoeia. This could possibly be due to differences in monetary and time constraints, as HPLC on a large scale can be an expensive technique. An increase in specificity, precision, and accuracy that occurs with HPLC unfortunately corresponds to an increase in cost.
Legal
This technique is also used for the detection of illicit drugs in various samples. The most common method of drug detection has been an immunoassay, which is much more convenient. However, convenience comes at the cost of specificity and coverage of a wide range of drugs, so HPLC has been used as an alternative method. As HPLC is a method of determining (and possibly increasing) purity, using HPLC alone to evaluate concentrations of drugs was somewhat insufficient. Therefore, HPLC in this context is often performed in conjunction with mass spectrometry. Using liquid chromatography–mass spectrometry (LC-MS) instead of gas chromatography–mass spectrometry (GC-MS) circumvents the necessity of derivatizing with acetylating or alkylation agents, which can be a burdensome extra step. LC-MS has been used to detect a variety of agents such as doping agents, drug metabolites, glucuronide conjugates, amphetamines, opioids, cocaine, BZDs, ketamine, LSD, cannabis, and pesticides. Performing HPLC in conjunction with mass spectrometry reduces the absolute need for standardizing HPLC experimental runs.
Research
Similar assays can be performed for research purposes, detecting concentrations of potential clinical candidates like anti-fungal and asthma drugs. This technique is obviously useful in observing multiple species in collected samples, as well, but requires the use of standard solutions when information about species identity is sought out. It is used as a method to confirm results of synthesis reactions, as purity is essential in this type of research. However, mass spectrometry is still the more reliable way to identify species.
Medical and health sciences
Medical use of HPLC typically uses a mass spectrometer (MS) as the detector, so the technique is called LC-MS, or LC-MS/MS for tandem MS, where two types of MS are operated sequentially. When the HPLC instrument is connected to more than one detector, it is called a hyphenated LC system. Pharmaceutical applications are the major users of HPLC, LC-MS and LC-MS/MS. This includes drug development and pharmacology, which is the scientific study of the effects of drugs and chemicals on living organisms, personalized medicine, public health and diagnostics. While urine is the most common medium for analyzing drug concentrations, blood serum is the sample collected for most medical analyses with HPLC. One of the most important roles of LC-MS and LC-MS/MS in the clinical lab is newborn screening (NBS) for metabolic disorders and follow-up diagnostics. The infants' samples come in the form of dried blood spots (DBS), which are simple to prepare and transport, enabling safe and accessible diagnostics, both locally and globally.
Other methods of detection of molecules that are useful for clinical studies have been tested against HPLC, namely immunoassays. In one example of this, competitive protein binding assays (CPBA) and HPLC were compared for sensitivity in detection of vitamin D. Useful for diagnosing vitamin D deficiencies in children, it was found that sensitivity and specificity of this CPBA reached only 40% and 60%, respectively, of the capacity of HPLC. While an expensive tool, the accuracy of HPLC is nearly unparalleled.
See also
History of chromatography
Capillary electrochromatography
Column chromatography
Csaba Horváth
Ion chromatography
Micellar liquid chromatography
References
Further reading
L. R. Snyder, J.J. Kirkland, and J. W. Dolan, Introduction to Modern Liquid Chromatography, John Wiley & Sons, New York, 2009.
M.W. Dong, Modern HPLC for practicing scientists. Wiley, 2006.
L. R. Snyder, J.J. Kirkland, and J. L. Glajch, Practical HPLC Method Development, John Wiley & Sons, New York, 1997.
S. Ahuja and H. T. Rasmussen (ed), HPLC Method Development for Pharmaceuticals, Academic Press, 2007.
S. Ahuja and M.W. Dong (ed), Handbook of Pharmaceutical Analysis by HPLC, Elsevier/Academic Press, 2005.
Y. V. Kazakevich and R. LoBrutto (ed.), HPLC for Pharmaceutical Scientists, Wiley, 2007.
U. D. Neue, HPLC Columns: Theory, Technology, and Practice, Wiley-VCH, New York, 1997.
M. C. McMaster, HPLC, a practical user's guide, Wiley, 2007.
External links
HPLC Chromatography Principle, Application [Basic Note] – 2020. at Rxlalit.com
Hungarian inventions
Chromatography
Scientific techniques | High-performance liquid chromatography | [ "Chemistry" ] | 10,890 | [ "Chromatography", "Separation processes" ] |
168,701 | https://en.wikipedia.org/wiki/Open%20Database%20Connectivity | In computing, Open Database Connectivity (ODBC) is a standard application programming interface (API) for accessing database management systems (DBMS). The designers of ODBC aimed to make it independent of database systems and operating systems. An application written using ODBC can be ported to other platforms, both on the client and server side, with few changes to the data access code.
ODBC accomplishes DBMS independence by using an ODBC driver as a translation layer between the application and the DBMS. The application uses ODBC functions through an ODBC driver manager with which it is linked, and the driver passes the query to the DBMS. An ODBC driver can be thought of as analogous to a printer driver or other driver, providing a standard set of functions for the application to use, and implementing DBMS-specific functionality. An application that can use ODBC is referred to as "ODBC-compliant". Any ODBC-compliant application can access any DBMS for which a driver is installed. Drivers exist for all major DBMSs, many other data sources like address book systems and Microsoft Excel, and even for text or comma-separated values (CSV) files.
ODBC was originally developed by Microsoft and Simba Technologies during the early 1990s, and became the basis for the Call Level Interface (CLI) standardized by SQL Access Group in the Unix and mainframe field. ODBC retained several features that were removed as part of the CLI effort. Full ODBC was later ported back to those platforms, and became a de facto standard considerably better known than CLI. The CLI remains similar to ODBC, and applications can be ported from one platform to the other with few changes.
History
Before ODBC
The introduction of the mainframe-based relational database during the 1970s led to a proliferation of data access methods. Generally these systems operated together with a simple command processor that allowed users to type in English-like commands, and receive output. The best-known examples are SQL from IBM and QUEL from the Ingres project. These systems may or may not allow other applications to access the data directly, and those that did use a wide variety of methodologies. The introduction of SQL aimed to solve the problem of language standardization, although substantial differences in implementation remained.
Since the SQL language had only rudimentary programming features, users often wanted to use SQL within a program written in another language, say Fortran or C. This led to the concept of Embedded SQL, which allowed SQL code to be embedded within another language. For instance, a SQL statement like SELECT * FROM city could be inserted as text within C source code, and during compiling it would be converted into a custom format that directly called a function within a library that would pass the statement into the SQL system. Results returned from the statements would be interpreted back into C data formats like char * using similar library code.
There were several problems with the Embedded SQL approach. Like the different varieties of SQL, the Embedded SQLs that used them varied widely, not only from platform to platform, but even across languages on one platform – a system that allowed calls into IBM Db2 would look very different from one that called into their own SQL/DS. Another key problem to the Embedded SQL concept was that the SQL code could only be changed in the program's source code, so that even small changes to the query required considerable programmer effort to modify. The SQL market referred to this as static SQL, versus dynamic SQL which could be changed at any time, like the command-line interfaces that shipped with almost all SQL systems, or a programming interface that left the SQL as plain text until it was called. Dynamic SQL systems became a major focus for SQL vendors during the 1980s.
Older mainframe databases, and the newer microcomputer based systems that were based on them, generally did not have a SQL-like command processor between the user and the database engine. Instead, the data was accessed directly by the program – a programming library in the case of large mainframe systems, or a command line interface or interactive forms system in the case of dBASE and similar applications. Data from dBASE could not generally be accessed directly by other programs running on the machine. Those programs may be given a way to access this data, often through libraries, but it would not work with any other database engine, or even different databases in the same engine. In effect, all such systems were static, which presented considerable problems.
Early efforts
By the mid-1980s the rapid improvement in microcomputers, and especially the introduction of the graphical user interface and data-rich application programs like Lotus 1-2-3 led to an increasing interest in using personal computers as the client-side platform of choice in client–server computing. Under this model, large mainframes and minicomputers would be used primarily to serve up data over local area networks to microcomputers that would interpret, display and manipulate that data. For this model to work, a data access standard was a requirement – in the mainframe field it was highly likely that all of the computers in a shop were from one vendor and clients were computer terminals talking directly to them, but in the micro field there was no such standardization and any client might access any server using any networking system.
By the late 1980s there were several efforts underway to provide an abstraction layer for this purpose. Some of these were mainframe related, designed to allow programs running on those machines to translate between the variety of SQL's and provide a single common interface which could then be called by other mainframe or microcomputer programs. These solutions included IBM's Distributed Relational Database Architecture (DRDA) and Apple Computer's Data Access Language. Much more common, however, were systems that ran entirely on microcomputers, including a complete protocol stack that included any required networking or file translation support.
One of the early examples of such a system was Lotus Development's DataLens, initially known as Blueprint. Blueprint, developed for 1-2-3, supported a variety of data sources, including SQL/DS, DB2, FOCUS and a variety of similar mainframe systems, as well as microcomputer systems like dBase and the early Microsoft/Ashton-Tate efforts that would eventually develop into Microsoft SQL Server. Unlike the later ODBC, Blueprint was a purely code-based system, lacking anything approximating a command language like SQL. Instead, programmers used data structures to store the query information, constructing a query by linking many of these structures together. Lotus referred to these compound structures as query trees.
Around the same time, an industry team including members from Sybase (Tom Haggin), Tandem Computers (Jim Gray & Rao Yendluri) and Microsoft (Kyle Geiger) were working on a standardized dynamic SQL concept. Much of the system was based on Sybase's DB-Library system, with the Sybase-specific sections removed and several additions to support other platforms. DB-Library was aided by an industry-wide move from library systems that were tightly linked to a specific language, to library systems that were provided by the operating system and required the languages on that platform to conform to its standards. This meant that a single library could be used with (potentially) any programming language on a given platform.
The first draft of the Microsoft Data Access API was published in April 1989, about the same time as Lotus' announcement of Blueprint. In spite of Blueprint's great lead – it was running when MSDA was still a paper project – Lotus eventually joined the MSDA efforts as it became clear that SQL would become the de facto database standard. After considerable industry input, in the summer of 1989 the standard became SQL Connectivity (SQLC).
SAG and CLI
In 1988 several vendors, mostly from the Unix and database communities, formed the SQL Access Group (SAG) in an effort to produce a single basic standard for the SQL language. At the first meeting there was considerable debate over whether or not the effort should work solely on the SQL language itself, or attempt a wider standardization which included a dynamic SQL language-embedding system as well, what they called a Call Level Interface (CLI). While attending the meeting with an early draft of what was then still known as MS Data Access, Kyle Geiger of Microsoft invited Jeff Balboni and Larry Barnes of Digital Equipment Corporation (DEC) to join the SQLC meetings as well. SQLC was a potential solution to the call for the CLI, which was being led by DEC.
The new SQLC "gang of four", MS, Tandem, DEC and Sybase, brought an updated version of SQLC to the next SAG meeting in June 1990. The SAG responded by opening the standard effort to any competing design, but of the many proposals, only Oracle Corp had a system that presented serious competition. In the end, SQLC won the votes and became the draft standard, but only after large portions of the API were removed – the standards document was trimmed from 120 pages to 50 during this time. It was also during this period that the name Call Level Interface was formally adopted. In 1995 SQL/CLI became part of the international SQL standard, ISO/IEC 9075-3. The SAG itself was taken over by the X/Open group in 1996, and, over time, became part of The Open Group's Common Application Environment.
MS continued working with the original SQLC standard, retaining many of the advanced features that were removed from the CLI version. These included features like scrollable cursors, and metadata information queries. The commands in the API were split into groups; the Core group was identical to the CLI, the Level 1 extensions were commands that would be easy to implement in drivers, while Level 2 commands contained the more advanced features like cursors. A proposed standard was released in December 1991, and industry input was gathered and worked into the system through 1992, resulting in yet another name change to ODBC.
JET and ODBC
During this time, Microsoft was in the midst of developing their Jet database system. Jet combined three primary subsystems; an ISAM-based database engine (also named Jet, confusingly), a C-based interface allowing applications to access that data, and a selection of driver dynamic-link libraries (DLL) that allowed the same C interface to redirect input and output to other ISAM-based databases, like Paradox and xBase. Jet allowed using one set of calls to access common microcomputer databases in a fashion similar to Blueprint, by then renamed DataLens. However, Jet did not use SQL; like DataLens, the interface was in C and consisted of data structures and function calls.
The SAG standardization efforts presented an opportunity for Microsoft to adapt their Jet system to the new CLI standard. This would not only make Windows a premier platform for CLI development, but also allow users to use SQL to access both Jet and other databases as well. What was missing was the SQL parser that could convert those calls from their text form into the C-interface used in Jet. To solve this, MS partnered with PageAhead Software to use their existing query processor, SIMBA. SIMBA was used as a parser above Jet's C library, turning Jet into an SQL database. And because Jet could forward those C-based calls to other databases, this also allowed SIMBA to query other systems. Microsoft included drivers for Excel to turn its spreadsheet documents into SQL-accessible database tables.
Release and continued development
ODBC 1.0 was released in September 1992. At the time, there was little direct support for SQL databases (versus ISAM), and early drivers were noted for poor performance. Some of this was unavoidable due to the path that the calls took through the Jet-based stack; ODBC calls to SQL databases were first converted from Simba Technologies's SQL dialect to Jet's internal C-based format, then passed to a driver for conversion back into SQL calls for the database. Digital Equipment and Oracle both contracted Simba Technologies to develop drivers for their databases as well.
Circa 1993, OpenLink Software shipped one of the first independently developed third-party ODBC drivers, for the PROGRESS DBMS, and soon followed with their UDBC (a cross-platform API equivalent of ODBC and the SAG/CLI) SDK and associated drivers for PROGRESS, Sybase, Oracle, and other DBMS, for use on Unix-like OS (AIX, HP-UX, Solaris, Linux, etc.), VMS, Windows NT, OS/2, and other OS.
Meanwhile, the CLI standard effort dragged on, and it was not until March 1995 that the definitive version was finalized. By then, Microsoft had already granted Visigenic Software a source code license to develop ODBC on non-Windows platforms. Visigenic ported ODBC to the classic Mac OS, and a wide variety of Unix platforms, where ODBC quickly became the de facto standard. "Real" CLI is rare today. The two systems remain similar, and many applications can be ported from ODBC to CLI with few or no changes.
Over time, database vendors took over the driver interfaces and provided direct links to their products. Skipping the intermediate conversions to and from Jet or similar wrappers often resulted in higher performance. However, by then Microsoft had changed focus to their OLE DB concept (recently reinstated), which provided direct access to a wider variety of data sources from address books to text files. Several new systems followed which further turned their attention from ODBC, including ActiveX Data Objects (ADO) and ADO.net, which interacted more or less with ODBC over their lifetimes.
As Microsoft turned its attention away from working directly on ODBC, the Unix field was increasingly embracing it. This was propelled by two changes within the market: the introduction of graphical user interfaces (GUIs) like GNOME that created a need to access these sources in non-text form, and the emergence of open software database systems like PostgreSQL and MySQL, initially under Unix. The later adoption of ODBC by Apple, using the standard Unix-side iODBC package in Mac OS X 10.2 (Jaguar) (which OpenLink Software had been independently providing for Mac OS X 10.0 and even Mac OS 9 since 2001), further cemented ODBC as the standard for cross-platform data access.
Sun Microsystems used the ODBC system as the basis for their own open standard, Java Database Connectivity (JDBC). In most ways, JDBC can be considered a version of ODBC for the programming language Java instead of C. JDBC-to-ODBC bridges allow Java-based programs to access data sources through ODBC drivers on platforms lacking a native JDBC driver, although these are now relatively rare. Inversely, ODBC-to-JDBC bridges allow C-based programs to access data sources through JDBC drivers on platforms or from databases lacking suitable ODBC drivers.
ODBC today
ODBC remains in wide use today, with drivers available for most platforms and most databases. It is not uncommon to find ODBC drivers for database engines that are meant to be embedded, like SQLite, as a way to allow existing tools to act as front-ends to these engines for testing and debugging.
Version history
ODBC specifications
1.0: released in September 1992
2.0: 1994
2.5
3.0: 1995, John Goodson of Intersolv and Frank Pellow and Paul Cotton of IBM provided significant input to ODBC 3.0
3.5: 1997
3.8: 2009, with Windows 7
4.0: Development announced in June 2016; first implementation released with SQL Server 2017 in September 2017, additional desktop drivers in late 2018, and the final specification published on GitHub
Desktop Database Drivers
1.0 (1993–08): Used the SIMBA query processor produced by PageAhead Software.
2.0 (1994–12): Used with ODBC 2.0.
3.0 (1995–10): Supports Windows 95 and Windows NT Workstation or NT Server 3.51. Only 32-bit drivers were included in this release.
3.5 (1996–10): Supports double-byte character sets (DBCS), and accommodated the use of File data source names (DSNs). The Microsoft Access driver was released in a RISC version for use on Alpha platforms for Windows 95/98 and Windows NT 3.51 and later operating systems.
4.0 (late 1998): Supports the Microsoft Jet engine's Unicode format, along with compatibility with the ANSI format of earlier versions.
Drivers and Managers
Drivers
ODBC is based on the device driver model, where the driver encapsulates the logic needed to convert a standard set of commands and functions into the specific calls required by the underlying system. For instance, a printer driver presents a standard set of printing commands, the API, to applications using the printing system. Calls made to those APIs are converted by the driver into the format used by the actual hardware, say PostScript or PCL.
In the case of ODBC, the drivers encapsulate many functions that can be broken down into several broad categories. One set of functions is primarily concerned with finding, connecting to and disconnecting from the DBMS that the driver talks to. A second set is used to send SQL commands from the ODBC system to the DBMS, converting or interpreting any commands that are not supported internally. For instance, a DBMS that does not support cursors can emulate this functionality in the driver. Finally, another set of commands, mostly used internally, is used to convert data from the DBMS's internal formats to a set of standardized ODBC formats, which are based on the C language formats.
An ODBC driver enables an ODBC-compliant application to use a data source, normally a DBMS. Some non-DBMS drivers exist, for such data sources as CSV files, by implementing a small DBMS inside the driver itself. ODBC drivers exist for most DBMSs, including Oracle, PostgreSQL, MySQL, Microsoft SQL Server (but not for the Compact aka CE edition), Mimer SQL, Sybase ASE, SAP HANA and IBM Db2. Because different technologies have different capabilities, most ODBC drivers do not implement all functionality defined in the ODBC standard. Some drivers offer extra functionality not defined by the standard.
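As a rough illustration of how an application exercises a driver through this API — a minimal sketch rather than a definitive implementation, with the DSN name, user and password being placeholders — the following C fragment connects through the Driver Manager, runs a trivial query, and fetches the result:

```c
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void) {
    SQLHENV env;
    SQLHDBC dbc;
    SQLHSTMT stmt;
    SQLRETURN ret;

    /* Allocate an environment handle and request ODBC 3.x behaviour */
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);

    /* Allocate a connection handle; "SampleDSN" is a hypothetical DSN */
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    ret = SQLDriverConnect(dbc, NULL,
                           (SQLCHAR *)"DSN=SampleDSN;UID=user;PWD=secret;", SQL_NTS,
                           NULL, 0, NULL, SQL_DRIVER_NOPROMPT);

    if (SQL_SUCCEEDED(ret)) {
        /* Run a trivial query (the exact SQL accepted varies by DBMS) */
        SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
        if (SQL_SUCCEEDED(SQLExecDirect(stmt, (SQLCHAR *)"SELECT 1", SQL_NTS))) {
            while (SQL_SUCCEEDED(SQLFetch(stmt))) {
                SQLINTEGER value;
                SQLLEN ind;
                /* The driver converts the column to a standardized C integer type */
                SQLGetData(stmt, 1, SQL_C_SLONG, &value, 0, &ind);
                printf("%d\n", (int)value);
            }
        }
        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        SQLDisconnect(dbc);
    }
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}
```

The same sequence of calls works against any data source for which a driver is installed; only the connection string changes.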
Driver Manager
Device drivers are normally enumerated, set up and managed by a separate Manager layer, which may provide additional functionality. For instance, printing systems often include a spooling layer on top of the drivers, providing print spooling for any supported printer.
In ODBC the Driver Manager (DM) provides these features. The DM can enumerate the installed drivers and present this as a list, often in a GUI-based form.
But more important to the operation of the ODBC system is the DM's concept of a Data Source Name (DSN). DSNs collect additional information needed to connect to a specific data source, versus the DBMS itself. For instance, the same MySQL driver can be used to connect to any MySQL server, but the connection information to connect to a local private server is different from the information needed to connect to an internet-hosted public server. The DSN stores this information in a standardized format, and the DM provides this to the driver during connection requests. The DM also includes functionality to present a list of DSNs using human readable names, and to select them at run-time to connect to different resources.
The DM also includes the ability to save partially complete DSNs, with code and logic to ask the user for any missing information at runtime. For instance, a DSN can be created without a required password. When an ODBC application attempts to connect to the DBMS using this DSN, the system will pause and ask the user to provide the password before continuing. This frees the application developer from having to create this sort of code, as well as from having to know which questions to ask. All of this is included in the driver and the DSNs.
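For example, under a unixODBC-style Driver Manager a DSN is typically declared in an odbc.ini file. The entry below is a hypothetical sketch — the section name, driver name and connection details are placeholders that depend on the driver actually installed:

```ini
[LocalMySQL]
Description = Local development database
Driver      = MySQL ODBC 8.0 Unicode Driver
Server      = localhost
Port        = 3306
Database    = testdb
```

An application can then connect simply with "DSN=LocalMySQL", and the Driver Manager supplies the remaining details to the driver.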
Bridging configurations
A bridge is a special kind of driver: a driver that uses another driver-based technology.
ODBC-to-JDBC (ODBC-JDBC) bridges
An ODBC-JDBC bridge consists of an ODBC driver which uses the services of a JDBC driver to connect to a database. This driver translates ODBC function-calls into JDBC method-calls. Programmers usually use such a bridge when they lack an ODBC driver for some database but have access to a JDBC driver. Examples: OpenLink ODBC-JDBC Bridge, SequeLink ODBC-JDBC Bridge.
JDBC-to-ODBC (JDBC-ODBC) bridges
A JDBC-ODBC bridge consists of a JDBC driver which employs an ODBC driver to connect to a target database. This driver translates JDBC method calls into ODBC function calls. Programmers usually use such a bridge when a given database lacks a JDBC driver, but is accessible through an ODBC driver. Sun Microsystems included one such bridge in the JVM, but viewed it as a stop-gap measure while few JDBC drivers existed (the built-in JDBC-ODBC bridge was dropped from the JVM in Java 8). Sun never intended its bridge for production environments, and generally recommended against its use. Independent data-access vendors deliver JDBC-ODBC bridges which support current standards for both mechanisms, and which far outperform the JVM built-in. Examples: OpenLink JDBC-ODBC Bridge, SequeLink JDBC-ODBC Bridge, ZappySys JDBC-ODBC Bridge.
OLE DB-to-ODBC bridges
An OLE DB-ODBC bridge consists of an OLE DB Provider which uses the services of an ODBC driver to connect to a target database. This provider translates OLE DB method calls into ODBC function calls. Programmers usually use such a bridge when a given database lacks an OLE DB provider, but is accessible through an ODBC driver. Microsoft ships one, MSDASQL.DLL, as part of the MDAC system component bundle, together with other database drivers, to simplify development in COM-aware languages (e.g. Visual Basic). Third parties have also developed such bridges, notably OpenLink Software, whose 64-bit OLE DB Provider for ODBC Data Sources filled the gap when Microsoft initially deprecated this bridge for their 64-bit OS. (Microsoft later relented, and 64-bit Windows starting with Windows Server 2008 and Windows Vista SP1 has shipped with a 64-bit version of MSDASQL.) Examples: OpenLink OLEDB-ODBC Bridge, SequeLink OLEDB-ODBC Bridge.
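In practice an application usually selects this bridge through its connection string; a hypothetical example (the DSN name and credentials are placeholders) would be "Provider=MSDASQL;DSN=LocalMySQL;UID=user;PWD=secret;", after which the provider forwards all subsequent calls to the named ODBC data source.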
ADO.NET-to-ODBC bridges
An ADO.NET-ODBC bridge consists of an ADO.NET Provider which uses the services of an ODBC driver to connect to a target database. This provider translates ADO.NET method calls into ODBC function calls. Programmers usually use such a bridge when a given database lacks an ADO.NET provider, but is accessible through an ODBC driver. Microsoft ships one as part of the MDAC system component bundle, together with other database drivers, to simplify development in C#. Third parties have also developed such providers. Examples: OpenLink ADO.NET-ODBC Bridge, SequeLink ADO.NET-ODBC Bridge.
See also
GNU Data Access
Java Database Connectivity (JDBC)
Windows Open Services Architecture
ODBC Administrator
References
Bibliography
Citations
External links
Microsoft ODBC Overview
IBM i ODBC Administration
Presentation slides from www.roth.net
Microsoft ODBC & Data Access APIs History Article.
Github page: Microsoft ODBC 4.0 Specification
Computer programming
Microsoft application programming interfaces
Database APIs
SQL data access | Open Database Connectivity | [
"Technology",
"Engineering"
] | 5,032 | [
"Software engineering",
"Computer programming",
"Computers"
] |
168,848 | https://en.wikipedia.org/wiki/Human%20skeleton | The human skeleton is the internal framework of the human body. It is composed of around 270 bones at birth – this total decreases to around 206 bones by adulthood after some bones get fused together. The bone mass in the skeleton makes up about 14% of the total body weight (ca. 10–11 kg for an average person) and reaches maximum mass between the ages of 25 and 30. The human skeleton can be divided into the axial skeleton and the appendicular skeleton. The axial skeleton is formed by the vertebral column, the rib cage, the skull and other associated bones. The appendicular skeleton, which is attached to the axial skeleton, is formed by the shoulder girdle, the pelvic girdle and the bones of the upper and lower limbs.
The human skeleton performs six major functions: support, movement, protection, production of blood cells, storage of minerals, and endocrine regulation.
The human skeleton is not as sexually dimorphic as that of many other primate species, but subtle differences between sexes in the morphology of the skull, dentition, long bones, and pelvis exist. In general, female skeletal elements tend to be smaller and less robust than corresponding male elements within a given population. The human female pelvis is also different from that of males in order to facilitate childbirth. Unlike most primates, human males do not have penile bones.
Divisions
Axial
The axial skeleton (80 bones) is formed by the vertebral column (32–34 bones; the number of the vertebrae differs from human to human as the lower 2 parts, sacral and coccygeal bone may vary in length), a part of the rib cage (12 pairs of ribs and the sternum), and the skull (22 bones and 7 associated bones).
The upright posture of humans is maintained by the axial skeleton, which transmits the weight from the head, the trunk, and the upper extremities down to the lower extremities at the hip joints. The bones of the spine are supported by many ligaments. The erector spinae muscles also provide support and are useful for balance.
Appendicular
The appendicular skeleton (126 bones) is formed by the pectoral girdles, the upper limbs, the pelvic girdle or pelvis, and the lower limbs. Their functions are to make locomotion possible and to protect the major organs of digestion, excretion and reproduction.
Functions
The skeleton serves six major functions: support, movement, protection, production of blood cells, storage of minerals and endocrine regulation.
Support
The skeleton provides the framework which supports the body and maintains its shape. The pelvis, associated ligaments and muscles provide a floor for the pelvic structures. Without the rib cage, costal cartilages, and intercostal muscles, the lungs would collapse.
Movement
The joints between bones allow movement, some allowing a wider range of movement than others, e.g. the ball and socket joint allows a greater range of movement than the pivot joint at the neck. Movement is powered by skeletal muscles, which are attached to the skeleton at various sites on bones. Muscles, bones, and joints provide the principal mechanics for movement, all coordinated by the nervous system.
It is believed that the reduction of human bone density in prehistoric times reduced the agility and dexterity of human movement. The shift from hunting to agriculture has caused human bone density to decrease significantly.
Protection
The skeleton helps to protect many vital internal organs from being damaged.
The skull protects the brain.
The vertebrae protect the spinal cord.
The rib cage, spine, and sternum protect the lungs, heart and major blood vessels.
Blood cell production
The skeleton is the site of haematopoiesis, the development of blood cells that takes place in the bone marrow. In children, haematopoiesis occurs primarily in the marrow of the long bones such as the femur and tibia. In adults, it occurs mainly in the pelvis, cranium, vertebrae, and sternum.
Storage
The bone matrix can store calcium and is involved in calcium metabolism, and bone marrow can store iron in ferritin and is involved in iron metabolism. However, bones are not entirely made of calcium, but a mixture of chondroitin sulfate and hydroxyapatite, the latter making up 70% of a bone. Hydroxyapatite is in turn composed of 39.8% of calcium, 41.4% of oxygen, 18.5% of phosphorus, and 0.2% of hydrogen by mass. Chondroitin sulfate is a sugar made up primarily of oxygen and carbon.
Endocrine regulation
Bone cells release a hormone called osteocalcin, which contributes to the regulation of blood sugar (glucose) and fat deposition. Osteocalcin increases both insulin secretion and sensitivity, in addition to boosting the number of insulin-producing cells and reducing stores of fat.
Sex differences
Anatomical differences between human males and females are highly pronounced in some soft tissue areas, but tend to be limited in the skeleton. The human skeleton is not as sexually dimorphic as that of many other primate species, but subtle differences between sexes in the morphology of the skull, dentition, long bones, and pelvis are exhibited across human populations. In general, female skeletal elements tend to be smaller and less robust than corresponding male elements within a given population. It is not known whether or to what extent those differences are genetic or environmental.
Skull
A variety of gross morphological traits of the human skull demonstrate sexual dimorphism, such as the median nuchal line, mastoid processes, supraorbital margin, supraorbital ridge, and the chin.
Dentition
Human inter-sex dental dimorphism centers on the canine teeth, but it is not nearly as pronounced as in the other great apes.
Long bones
Long bones are generally larger in males than in females within a given population. Muscle attachment sites on long bones are often more robust in males than in females, reflecting a difference in overall muscle mass and development between sexes. Sexual dimorphism in the long bones is commonly characterized by morphometric or gross morphological analyses.
Pelvis
The human pelvis exhibits greater sexual dimorphism than other bones, specifically in the size and shape of the pelvic cavity, ilia, greater sciatic notches, and the sub-pubic angle. Anthropologists commonly use the Phenice method to determine the sex of an unidentified human skeleton, with 96% to 100% accuracy in some populations.
Women's pelvises are wider in the pelvic inlet and are wider throughout the pelvis to allow for childbirth. The sacrum in the female pelvis is curved inwards, providing a "funnel" to assist the child's passage from the uterus to the birth canal.
Clinical significance
There are many classified skeletal disorders. One of the most common is osteoporosis. Also common is scoliosis, a side-to-side curve in the back or spine, often creating a pronounced "C" or "S" shape when viewed on an x-ray of the spine. This condition is most apparent during adolescence, and is most common in females.
Arthritis
Arthritis is a disorder of the joints. It involves inflammation of one or more joints. When affected by arthritis, the joint or joints affected may be painful to move, may move in unusual directions or may be immobile completely. The symptoms of arthritis will vary differently between types of arthritis. The most common form of arthritis, osteoarthritis, can affect both the larger and smaller joints of the human skeleton. The cartilage in the affected joints will degrade, soften and wear away. This decreases the mobility of the joints and decreases the space between bones where cartilage should be.
Osteoporosis
Osteoporosis is a disease of bone where there is reduced bone mineral density, increasing the likelihood of fractures. Osteoporosis is defined by the World Health Organization in women as a bone mineral density 2.5 standard deviations below peak bone mass, relative to the age- and sex-matched average, as measured by dual energy X-ray absorptiometry, with the term "established osteoporosis" including the presence of a fragility fracture. Osteoporosis is most common in women after menopause, when it is called "postmenopausal osteoporosis", but may develop in men and premenopausal women in the presence of particular hormonal disorders and other chronic diseases or as a result of smoking and medications, specifically glucocorticoids. Osteoporosis usually has no symptoms until a fracture occurs. For this reason, DEXA scans are often done in people with one or more risk factors, who may have developed osteoporosis and be at risk of fracture.
Osteoporosis treatment includes advice to stop smoking, decrease alcohol consumption, exercise regularly, and have a healthy diet. Calcium supplements may also be advised, as may vitamin D. When medication is used, it may include bisphosphonates, strontium ranelate, and osteoporosis may be one factor considered when commencing hormone replacement therapy.
History
India
The Sushruta Samhita, composed between the 6th century BCE and the 5th century CE, speaks of 360 bones. Books on Salya-Shastra (surgical science) recognize only 300. The text then lists the total of 300 as follows: 120 in the extremities (e.g. hands, legs), 117 in the pelvic area, sides, back, abdomen and breast, and 63 in the neck and upwards. The text then explains how these subtotals were empirically verified. The discussion shows that the Indian tradition nurtured diversity of thought, with the Sushruta school reaching its own conclusions and differing from the Atreya-Caraka tradition. The differences in the count of bones in the two schools are partly because the Charaka Samhita includes 32 tooth sockets in its count, and because of their differences of opinion on how and when to count a cartilage as bone (which both sometimes do, unlike modern anatomy).
Hellenistic world
The study of bones in ancient Greece started under the Ptolemaic kings due to their link to Egypt. Herophilos, through his work studying dissected human corpses in Alexandria, is credited as the pioneer of the field. His works are lost but are often cited by notable persons in the field such as Galen and Rufus of Ephesus. Galen himself did little dissection, though, and relied on the work of others like Marinus of Alexandria, as well as his own observations of gladiator cadavers and animals. According to Katherine Park, in medieval Europe dissection continued to be practiced, contrary to the popular understanding that such practices were taboo and thus completely banned. The practice of holy autopsy, as in the case of Clare of Montefalco, further supports this claim. Alexandria continued as a center of anatomy under Islamic rule, with Ibn Zuhr a notable figure. Chinese understandings are divergent, as the closest corresponding concept in the medicinal system seems to be the meridians, although given that Hua Tuo regularly performed surgery, there may be some distance between medical theory and actual understanding.
Renaissance
Leonardo da Vinci made studies of the skeleton, albeit unpublished in his time. Many artists, Antonio del Pollaiuolo being the first, performed dissections for better understanding of the body, although they concentrated mostly on the muscles. Vesalius, regarded as the founder of modern anatomy, authored the book De humani corporis fabrica, which contained many illustrations of the skeleton and other body parts, correcting some theories dating from Galen, such as the lower jaw being a single bone instead of two. Various other figures like Alessandro Achillini also contributed to the further understanding of the skeleton.
18th century
As early as 1797, the death goddess or folk saint known as Santa Muerte has been represented as a skeleton.
See also
List of bones of the human skeleton
Distraction osteogenesis
References
Bibliography
Further reading
Endocrine system
Human anatomy | Human skeleton | [
"Biology"
] | 2,518 | [
"Organ systems",
"Endocrine system"
] |
168,864 | https://en.wikipedia.org/wiki/Well-ordering%20principle | In mathematics, the well-ordering principle states that every non-empty subset of nonnegative integers contains a least element. In other words, the set of nonnegative integers is well-ordered by its "natural" or "magnitude" order in which precedes if and only if is either or the sum of and some nonnegative integer (other orderings include the ordering ; and ).
The phrase "well-ordering principle" is sometimes taken to be synonymous with the "well-ordering theorem". On other occasions it is understood to be the proposition that the set of integers contains a well-ordered subset, called the natural numbers, in which every nonempty subset contains a least element.
Properties
Depending on the framework in which the natural numbers are introduced, this (second-order) property of the set of natural numbers is either an axiom or a provable theorem. For example:
In Peano arithmetic, second-order arithmetic and related systems, and indeed in most (not necessarily formal) mathematical treatments of the well-ordering principle, the principle is derived from the principle of mathematical induction, which is itself taken as basic.
Considering the natural numbers as a subset of the real numbers, and assuming that we know already that the real numbers are complete (again, either as an axiom or a theorem about the real number system), i.e., every bounded (from below) set has an infimum, then also every non-empty set of natural numbers has an infimum, say a*. We can now find an integer n* such that a* lies in the half-open interval (n* − 1, n*], and can then show that we must have a* = n*, with n* in the set.
In axiomatic set theory, the natural numbers are defined as the smallest inductive set (i.e., a set containing 0 and closed under the successor operation). One can (even without invoking the regularity axiom) show that the set of all natural numbers n such that "{0, …, n} is well-ordered" is inductive, and must therefore contain all natural numbers; from this property one can conclude that the set of all natural numbers is also well-ordered.
In the second sense, this phrase is used when that proposition is relied on for the purpose of justifying proofs that take the following form: to prove that every natural number belongs to a specified set , assume the contrary, which implies that the set of counterexamples is non-empty and thus contains a smallest counterexample. Then show that for any counterexample there is a still smaller counterexample, producing a contradiction. This mode of argument is the contrapositive of proof by complete induction. It is known light-heartedly as the "minimal criminal" method and is similar in its nature to Fermat's method of "infinite descent".
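Schematically, and only as an informal restatement of the argument just described (the notation here is not from the source), the pattern can be written as

\[
C = \{\, n \in \mathbb{N} : n \notin S \,\} \neq \emptyset
\;\Longrightarrow\;
\exists\, m = \min C ,
\]

where S is the set for which the claim is to be proved; exhibiting a counterexample m′ in C with m′ < m then contradicts the minimality of m, so C is empty and every natural number belongs to S.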
Garrett Birkhoff and Saunders Mac Lane wrote in A Survey of Modern Algebra that this property, like the least upper bound axiom for real numbers, is non-algebraic; i.e., it cannot be deduced from the algebraic properties of the integers (which form an ordered integral domain).
Example applications
The well-ordering principle can be used in the following proofs.
Prime factorization
Theorem: Every integer greater than one can be factored as a product of primes. This theorem constitutes part of the Prime Factorization Theorem.
Proof (by well-ordering principle). Let C be the set of all integers greater than one that cannot be factored as a product of primes. We show that C is empty.
Assume for the sake of contradiction that C is not empty. Then, by the well-ordering principle, there is a least element n in C; n cannot be prime, since a prime number itself is considered a length-one product of primes. By the definition of non-prime numbers, n has factors a and b, where a and b are integers greater than one and less than n. Since a and b are less than n, they are not in C, as n is the smallest element of C. So, a and b can each be factored as products of primes, say a = p1 p2 ⋯ pj and b = q1 q2 ⋯ qk, meaning that n = ab = p1 p2 ⋯ pj q1 q2 ⋯ qk, a product of primes. This contradicts the assumption that n is in C, so the assumption that C is nonempty must be false.
Integer summation
Theorem: 1 + 2 + 3 + ⋯ + n = n(n + 1)/2 for all positive integers n.
Proof. Suppose for the sake of contradiction that the above theorem is false. Then, there exists a non-empty set C of positive integers for which the equation fails. By the well-ordering principle, C has a minimum element c such that when n = c, the equation is false, but true for all positive integers less than c. The equation is true for n = 1, so c > 1; c − 1 is a positive integer less than c, so the equation holds for c − 1, as it is not in C. Therefore,

1 + 2 + ⋯ + (c − 1) + c = (c − 1)c/2 + c = (c² + c)/2 = c(c + 1)/2,

which shows that the equation holds for c, a contradiction. So, the equation must hold for all positive integers.
References
Wellfoundedness
Mathematical principles
cs:Princip dobrého uspořádání | Well-ordering principle | [
"Mathematics"
] | 981 | [
"Mathematical principles",
"Order theory",
"Wellfoundedness",
"Mathematical induction"
] |
168,927 | https://en.wikipedia.org/wiki/Somatic%20cell%20nuclear%20transfer | In genetics and developmental biology, somatic cell nuclear transfer (SCNT) is a laboratory strategy for creating a viable embryo from a body cell and an egg cell. The technique consists of taking a denucleated oocyte (egg cell) and implanting a donor nucleus from a somatic (body) cell. It is used in both therapeutic and reproductive cloning. In 1996, Dolly the sheep became famous for being the first successful case of the reproductive cloning of a mammal. In January 2018, a team of scientists in Shanghai announced the successful cloning of two female crab-eating macaques (named Zhong Zhong and Hua Hua) from foetal nuclei.
"Therapeutic cloning" refers to the potential use of SCNT in regenerative medicine; this approach has been championed as an answer to the many issues concerning embryonic stem cells (ESCs) and the destruction of viable embryos for medical use, though questions remain on how homologous the two cell types truly are.
Introduction
Somatic cell nuclear transfer is a technique for cloning in which the nucleus of a somatic cell is transferred to the cytoplasm of an enucleated egg. After the transfer, cytoplasmic factors act on the somatic nucleus so that the reconstructed cell behaves like a zygote. The egg then develops to the blastocyst stage, and embryonic stem cells can be created from the inner cell mass of the blastocyst. The first mammal to be developed by this technique was Dolly the sheep, in 1996.
Mid-20th century
Although Dolly is generally recognized as the first animal to be cloned using this technique, earlier instances of SCNT date back to the 1950s. In particular, the research of Sir John Gurdon in 1958 entailed the cloning of Xenopus laevis utilizing the principles of SCNT. In short, the experiment consisted of inducing a female specimen to ovulate, at which point her eggs were harvested. From here, the egg was enucleated using ultraviolet irradiation to disable the egg's pronucleus. At this point, the prepared egg cell and the nucleus from the donor cell were combined, and incubation and eventual development into a tadpole proceeded. Gurdon's application of SCNT differs from more modern applications, and even from applications used on other model systems of the time (i.e., Rana pipiens), due to his usage of UV irradiation to enucleate the egg instead of using a pipette to remove the nucleus from the egg.
Process
The process of somatic cell nuclear transfer involves two different cells. The first being a female gamete, known as the ovum (egg/oocyte). In human SCNT experiments, these eggs are obtained through consenting donors, utilizing ovarian stimulation. The second being a somatic cell, referring to the cells of the human body. Skin cells, fat cells, and liver cells are only a few examples. The genetic material of the donor egg cell is removed and discarded, leaving it 'deprogrammed.' What is left is a somatic cell and an enucleated egg cell. These are then fused by inserting the somatic cell into the 'empty' ovum. After being inserted into the egg, the somatic cell nucleus is reprogrammed by its host egg cell. The ovum, now containing the somatic cell's nucleus, is stimulated with a shock and will begin to divide. The egg is now viable and capable of producing an adult organism containing all necessary genetic information from just one parent. Development will ensue normally and after many mitotic divisions, the single cell forms a blastocyst (an early stage embryo with about 100 cells) with an identical genome to the original organism (i.e. a clone). Stem cells can then be obtained by the destruction of this clone embryo for use in therapeutic cloning or in the case of reproductive cloning the clone embryo is implanted into a host mother for further development and brought to term.
Conventional SCNT requires the use of micromanipulators, which are expensive machines used to accurately manipulate cells. Using the micromanipulator, a scientist makes an opening in the zona pellucida and sucks out the egg cell's original nucleus using a pipette. They then make another opening to a different pipette to inject the donor nucleus. Alternatively, electric energy can be applied to fuse the empty egg cell with a donor cell containing a nucleus.
An alternative technique called "handmade cloning" was described by Indian scientists in 2001. This technique requires no use of a micromanipulator and has been used for the cloning of several livestock species. Removal of the nucleus can be done chemically, by centrifuge, or with the use of a blade. The empty egg is glued to the donor cell with phytohaemagglutinin, then fused using electricity. (If a blade is used, two fusion steps would be required: the first fusion is between the donor and an empty half-egg, the second between the half-size "demi-embryo" and another empty half-egg.)
Applications
Stem cell research
Somatic cell nuclear transplantation has become a focus of study in stem cell research. The aim of carrying out this procedure is to obtain pluripotent cells from a cloned embryo. These cells genetically matched the donor organism from which they came. This gives them the ability to create patient specific pluripotent cells, which could then be used in therapies or disease research.
Embryonic stem cells are undifferentiated cells of an embryo. These cells are deemed to have a pluripotent potential because they have the ability to give rise to all of the tissues found in an adult organism. This ability allows stem cells to create any cell type, which could then be transplanted to replace damaged or destroyed cells. Controversy surrounds human ESC work due to the destruction of viable human embryos, leading scientists to seek alternative methods of obtaining pluripotent stem cells, SCNT is one such method.
A potential use of stem cells genetically matched to a patient would be to create cell lines that have genes linked to a patient's particular disease. By doing so, an in vitro model could be created that would be useful for studying that particular disease, potentially discovering its pathophysiology, and discovering therapies. For example, if a person with Parkinson's disease donated their somatic cells, the stem cells resulting from SCNT would have genes that contribute to Parkinson's disease. The disease-specific stem cell lines could then be studied in order to better understand the condition.
Another application of SCNT stem cell research is using the patient specific stem cell lines to generate tissues or even organs for transplant into the specific patient. The resulting cells would be genetically identical to the somatic cell donor, thus avoiding any complications from immune system rejection.
Only a handful of the labs in the world are currently using SCNT techniques in human stem cell research. In the United States, scientists at the Harvard Stem Cell Institute, the University of California San Francisco, the Oregon Health & Science University, Stemagen (La Jolla, CA) and possibly Advanced Cell Technology are currently researching a technique to use somatic cell nuclear transfer to produce embryonic stem cells. In the United Kingdom, the Human Fertilisation and Embryology Authority has granted permission to research groups at the Roslin Institute and the Newcastle Centre for Life. SCNT may also be occurring in China.
Though there have been numerous successes with cloning animals, questions remain concerning the mechanisms of reprogramming in the ovum. Despite many attempts, success in creating human nuclear transfer embryonic stem cells has been limited. There lies a problem in the human cell's ability to form a blastocyst; the cells fail to progress past the eight-cell stage of development. This is thought to be a result of the somatic cell nucleus being unable to turn on embryonic genes crucial for proper development. These earlier experiments used procedures developed in non-primate animals with little success.
A research group from the Oregon Health & Science University demonstrated SCNT procedures developed for primates successfully using skin cells. The key to their success was utilizing oocytes in metaphase II (MII) of the cell cycle. Egg cells in MII contain special factors in the cytoplasm that have a special ability to reprogram implanted somatic cell nuclei into cells with pluripotent states. When the ovum's nucleus is removed, the cell loses its genetic information. This has been blamed for why enucleated eggs are hampered in their reprogramming ability. It is theorized that the critical embryonic genes are physically linked to the oocyte chromosomes, so enucleation negatively affects these factors. Another possibility is that removing the egg nucleus or inserting the somatic nucleus causes damage to the cytoplast, affecting reprogramming ability.
Taking this into account, the research group applied their new technique in an attempt to produce human SCNT stem cells. In May 2013, the Oregon group reported the successful derivation of human embryonic stem cell lines through SCNT, using fetal and infant donor cells. Using MII oocytes from volunteers and their improved SCNT procedure, human clone embryos were successfully produced. These embryos were of poor quality, lacking a substantial inner cell mass and having a poorly constructed trophectoderm. The imperfect embryos prevented the acquisition of human ESC. The addition of caffeine during the removal of the ovum's nucleus and the fusion of the somatic cell and the egg improved blastocyst formation and ESC isolation. The ESCs obtained were found to be capable of producing teratomas, expressed pluripotent transcription factors, and exhibited a normal 46,XX karyotype, indicating that these SCNT-derived cells were in fact ESC-like. This was the first instance of successfully using SCNT to reprogram human somatic cells. This study used fetal and infantile somatic cells to produce their ESC.
In April 2014, an international research team expanded on this breakthrough. There remained the question of whether the same success could be accomplished using adult somatic cells. Epigenetic and age-related changes were thought to possibly hinder an adult somatic cell's ability to be reprogrammed. Implementing the procedure pioneered by the Oregon research group, they were indeed able to grow stem cells generated by SCNT using adult cells from two donors aged 35 and 75, indicating that age does not impede a cell's ability to be reprogrammed.
In late April 2014, the New York Stem Cell Foundation was successful in creating SCNT stem cells derived from adult somatic cells. One of these lines of stem cells was derived from the donor cells of a type 1 diabetic. The group was then able to successfully culture these stem cells and induce differentiation. When injected into mice, cells of all three germ layers successfully formed. The most significant of these cells were those that expressed insulin and were capable of secreting the hormone. These insulin-producing cells could be used for replacement therapy in diabetics, demonstrating real SCNT stem cell therapeutic potential.
The impetus for SCNT-based stem cell research has been decreased by the development and improvement of alternative methods of generating stem cells. Methods to reprogram normal body cells into pluripotent stem cells were developed in humans in 2007. The following year, this method achieved a key goal of SCNT-based stem cell research: the derivation of pluripotent stem cell lines that have all genes linked to various diseases. Some scientists working on SCNT-based stem cell research have recently moved to the new methods of induced pluripotent stem cells. However, recent studies have called into question how similar iPS cells are to embryonic stem cells. Epigenetic memory in iPS cells affects the cell lineage they can differentiate into. For instance, an iPS cell derived from a blood cell using only the Yamanaka factors will be more efficient at differentiating into blood cells, and less efficient at creating a neuron. Recent studies indicate, however, that changes to the epigenetic memory of iPSCs using small molecules can reset them to an almost naive state of pluripotency. Studies have even shown that, via tetraploid complementation, an entire viable organism can be created solely from iPSCs. SCNT stem cells have been found to face similar challenges. The cause of low yields in bovine SCNT cloning has, in recent years, been attributed to the previously hidden epigenetic memory of the somatic cells that were being introduced into the oocyte.
Reproductive cloning
This technique is currently the basis for cloning animals (such as the famous Dolly the sheep), and has been proposed as a possible way to clone humans. Using SCNT in reproductive cloning has proven difficult, with limited success. High fetal and neonatal death rates make the process very inefficient. Resulting cloned offspring are also plagued with developmental and imprinting disorders in non-human species. For these reasons, along with moral and ethical objections, reproductive cloning in humans is proscribed in more than 30 countries. Most researchers believe that in the foreseeable future it will not be possible to use the current cloning technique to produce a human clone that will develop to term. It remains a possibility, though critical adjustments will be required to overcome current limitations during early embryonic development in human SCNT.
There is also the potential for treating diseases associated with mutations in mitochondrial DNA. Recent studies show SCNT of the nucleus of a body cell afflicted with one of these diseases into a healthy oocyte prevents the inheritance of the mitochondrial disease. This treatment does not involve cloning but would produce a child with three genetic parents. A father providing a sperm cell, one mother providing the egg nucleus, and another mother providing the enucleated egg cell.
In 2018, the first successful cloning of primates using somatic cell nuclear transfer, the same method as Dolly the sheep, with the birth of two live female clones (crab-eating macaques named Zhong Zhong and Hua Hua) was reported.
Interspecies nuclear transfer
Interspecies nuclear transfer (iSCNT) is a means of somatic cell nuclear transfer used to facilitate the rescue of endangered species, or even to restore species after their extinction. The technique is similar to standard SCNT cloning, which is typically performed between domestic animals and rodents, or where there is a ready supply of oocytes and surrogate animals. However, the cloning of highly endangered or extinct species requires an alternative method of cloning. Interspecies nuclear transfer utilizes a host and a donor from two different, closely related species within the same genus. In 2000, Robert Lanza was able to produce a cloned fetus of a gaur, Bos gaurus, combining it successfully with a domestic cow, Bos taurus.
In 2017, the first cloned Bactrian camel was born through iSCNT, using oocytes of dromedary camel and skin fibroblast cells of an adult Bactrian camel as donor nuclei.
Limitations
Somatic cell nuclear transfer (SCNT) can be inefficient due to stresses placed on both the egg cell and the introduced nucleus. This can result in a low percentage of successfully reprogrammed cells. For example, in 1996 Dolly the sheep was born after 277 eggs were used for SCNT, which created 29 viable embryos, an efficiency of roughly 0.3%. Only three of these embryos survived until birth, and only one survived to adulthood. Millie, the offspring that survived, took 95 attempts to produce. Because the procedure was not automated and had to be performed manually under a microscope, SCNT was very resource intensive. Another reason for the high mortality rate among cloned offspring is that the fetus is often larger than other offspring, resulting in death soon after birth. The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from understood. Another limitation is trying to use one-cell embryos during SCNT. When using just one-cell cloned embryos, the experiment has a 65% chance of failing to form a morula or blastocyst. The biochemistry also has to be extremely precise, as most late-term cloned fetus deaths are the result of inadequate placentation. However, by 2014, researchers were reporting success rates of 70–80% with cloning pigs, and in 2016 a Korean company, Sooam Biotech, was reported to be producing 500 cloned embryos a day.
In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria, which contain their own mitochondrial DNA, are left behind. The resulting hybrid cells retain those mitochondrial structures which originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus. This fact may also hamper the potential benefits of SCNT-derived tissues and organs for therapy, as there may be an immune response to the non-self mtDNA after transplant. Additionally, the genes in the mitochondrial genome need to communicate with the cell's nuclear genome, and a failure of somatic cell nuclear reprogramming can disrupt this communication, causing SCNT to fail.
Epigenetic factors play an important role in the success or failure of SCNT attempts. The varying gene expression of a previously activated cell and its mRNAs may lead to overexpression, underexpression, or in some cases non-functional genes, which will affect the developing fetus. One example of an epigenetic limitation to SCNT is the regulation of histone methylation. Differing regulation of these histone methylation genes can directly affect the transcription of the developing genome, causing failure of the SCNT. Another contributing factor to the failure of SCNT is X-chromosome inactivation in the early development of the embryo. A non-coding gene called XIST is responsible for inactivating one X chromosome during development; however, in SCNT this gene can be abnormally regulated, causing death of the developing fetus.
Controversy
Nuclear transfer techniques present a different set of ethical considerations than those associated with the use of other stem cells like embryonic stem cells which are controversial for their requirement to destroy an embryo. These different considerations have led to some individuals and organizations who are not opposed to human embryonic stem cell research to be concerned about, or opposed to, SCNT research.
One concern is that blastula creation in SCNT-based human stem cell research will lead to the reproductive cloning of humans. Both processes use the same first step: the creation of a nuclear transferred embryo, most likely via SCNT. Those who hold this concern often advocate for strong regulation of SCNT to preclude implantation of any derived products for the intention of human reproduction, or its prohibition.
A second important concern is the appropriate source of the eggs that are needed. SCNT requires human egg cells, which can only be obtained from women. The most common source of these eggs today is eggs that are produced in excess of clinical need during IVF treatment. This is a minimally invasive procedure, but it does carry some health risks, such as ovarian hyperstimulation syndrome.
One vision for successful stem cell therapies is to create custom stem cell lines for patients. Each custom stem cell line would consist of a collection of identical stem cells each carrying the patient's own DNA, thus reducing or eliminating any problems with rejection when the stem cells were transplanted for treatment. For example, to treat a man with Parkinson's disease, a cell nucleus from one of his cells would be transplanted by SCNT into an egg cell from an egg donor, creating a unique lineage of stem cells almost identical to the patient's own cells. (There would be differences. For example, the mitochondrial DNA would be the same as that of the egg donor. In comparison, his own cells would carry the mitochondrial DNA of his mother.)
Potentially millions of patients could benefit from stem cell therapy, and each patient would require a large number of donated eggs in order to successfully create a single custom therapeutic stem cell line. Such large numbers of donated eggs would exceed the number of eggs currently left over and available from couples trying to have children through assisted reproductive technology. Therefore, healthy young women would need to be induced to sell eggs to be used in the creation of custom stem cell lines that could then be purchased by the medical industry and sold to patients. It is so far unclear where all these eggs would come from.
Stem cell experts consider it unlikely that such large numbers of human egg donations would occur in a developed country because of the unknown long-term public health effects of treating large numbers of healthy young women with heavy doses of hormones in order to induce hyper-ovulation (ovulating several eggs at once). Although such treatments have been performed for several decades now, the long-term effects have not been studied or declared safe to use on a large scale on otherwise healthy women. Longer-term treatments with much lower doses of hormones are known to increase the rate of cancer decades later. Whether hormone treatments to induce hyper-ovulation could have similar effects is unknown. There are also ethical questions surrounding paying for eggs. In general, marketing body parts is considered unethical and is banned in most countries. Human eggs have been a notable exception to this rule for some time.
To address the problem of creating a human egg market, some stem cell researchers are investigating the possibility of creating artificial eggs. If successful, human egg donations would not be needed to create custom stem cell lines. However, this technology may be a long way off.
Policies regarding human SCNT
SCNT involving human cells is currently legal for research purposes in the United Kingdom, having been incorporated into the Human Fertilisation and Embryology Act 1990. Permission must be obtained from the Human Fertilisation and Embryology Authority in order to perform or attempt SCNT.
In the United States, the practice remains legal, as it has not been addressed by federal law. However, in 2002, a moratorium on United States federal funding for SCNT prohibits funding the practice for the purposes of research. Thus, though legal, SCNT cannot be federally funded. American scholars have recently argued that because the product of SCNT is a clone embryo, rather than a human embryo, these policies are morally wrong and should be revised.
In 2003, the United Nations adopted a proposal submitted by Costa Rica, calling on member states to "prohibit all forms of human cloning in as much as they are incompatible with human dignity and the protection of human life." This phrase may include SCNT, depending on interpretation.
The Council of Europe's Convention on Human Rights and Biomedicine and its Additional Protocol to the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine, on the Prohibition of Cloning Human Being appear to ban SCNT of human beings. Of the Council's 45 member states, the Convention has been signed by 31 and ratified by 18. The Additional Protocol has been signed by 29 member nations and ratified by 14.
See also
Cloning
Embryogenesis
Handmade cloning
In vitro fertilisation
Induced stem cells
New Jersey legislation S1909/A2840
Rejuvenation
Stem cell controversy
Stem cell research
References
Further reading
External links
Research Cloning: Medical and scientific, legal and ethical aspects
The Basics: Stem Cells and Public Policy The Century Foundation, June 2005
Research Cloning Basic Science, Center for Genetics and Society, (last modified October 4, 2004, retrieved October 6, 2006)
Cloning: present uses and promises National Institutes of Health, Paper giving background information on cloning in general and SCNT from The Office of Science Policy Analysis.
Nuclear Transfer – Stem Cells or Somatic Cell Nuclear Transfer (SCNT) The International Society for Stem Cell Research
The Hinxton Group: An International Consortium on Stem Cells, Ethics & Law
Cell culture techniques
Cloning
Induced stem cells
Life extension
Stem cell research
1996 in biotechnology
Bioethics | Somatic cell nuclear transfer | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 5,010 | [
"Biochemistry methods",
"Bioethics",
"Stem cell research",
"Cloning",
"Cell culture techniques",
"Genetic engineering",
"Translational medicine",
"Tissue engineering",
"Ethics of science and technology",
"Induced stem cells"
] |
168,944 | https://en.wikipedia.org/wiki/Immunosuppression | Immunosuppression is a reduction of the activation or efficacy of the immune system. Some portions of the immune system itself have immunosuppressive effects on other parts of the immune system, and immunosuppression may occur as an adverse reaction to treatment of other conditions.
In general, deliberately induced immunosuppression is performed to prevent the body from rejecting an organ transplant. Additionally, it is used for treating graft-versus-host disease after a bone marrow transplant, or for the treatment of auto-immune diseases such as systemic lupus erythematosus, rheumatoid arthritis, Sjögren's syndrome, or Crohn's disease. This is typically done using medications, but may involve surgery (splenectomy), plasmapheresis, or radiation. A person who is undergoing immunosuppression, or whose immune system is weak for some other reasons (such as chemotherapy or HIV), is said to be immunocompromised.
Deliberately induced
Administration of immunosuppressive medications or immunosuppressants is the main method for deliberately inducing immunosuppression; in optimal circumstances, immunosuppressive drugs primarily target hyperactive components of the immune system. People in remission from cancer who require immunosuppression are not more likely to experience a recurrence. Throughout its history, radiation therapy has been used to decrease the strength of the immune system. Dr. Joseph Murray of Brigham and Women's Hospital was given the Nobel Prize in Physiology or Medicine in 1990 for work on immunosuppression.
Immunosuppressive drugs have the potential to cause immunodeficiency, which can increase susceptibility to opportunistic infection and decrease cancer immunosurveillance. Immunosuppressants may be prescribed when a normal immune response is undesirable, such as in autoimmune diseases.
Steroids were the first class of immunosuppressant drugs identified, though side-effects of early compounds limited their use. The more specific azathioprine was identified in 1960, but it was the discovery of ciclosporin in 1980 (together with azathioprine) that allowed significant expansion of transplantation to less well-matched donor-recipient pairs as well as broad application to lung transplantation, pancreas transplantation, and heart transplantation. After an organ transplantation, the body will nearly always reject the new organ(s) due to differences in human leukocyte antigen between the donor and recipient. As a result, the immune system detects the new tissue as "foreign", and attempts to remove it by attacking it with white blood cells, resulting in the death of the donated tissue. Immunosuppressants are administered in order to help prevent rejection; however, the body becomes more vulnerable to infections and malignancy during the course of such treatment.
Non-deliberate immunosuppression
Non-deliberate immunosuppression can occur in, for example, ataxia–telangiectasia, complement deficiencies, many types of cancer, and certain chronic infections such as human immunodeficiency virus (HIV). The unwanted effect in non-deliberate immunosuppression is immunodeficiency that results in increased susceptibility to pathogens, such as bacteria and viruses.
Immunodeficiency is also a potential adverse effect of many immunosuppressant drugs; in this sense, the scope of the term immunosuppression in general includes both the beneficial and the potential adverse effects of decreasing the function of the immune system.
B cell deficiency and T cell deficiency are immune impairments that individuals are either born with or acquire, which in turn can lead to immunodeficiency problems. Nezelof syndrome is an example of an immunodeficiency of T-cells.
See also
References
Further reading
External links
PubMed
Immune system
Immunology
Medical treatments
it:Immunodepressione | Immunosuppression | [
"Biology"
] | 845 | [
"Organ systems",
"Immunology",
"Immune system"
] |
168,986 | https://en.wikipedia.org/wiki/Glycogen | Glycogen is a multibranched polysaccharide of glucose that serves as a form of energy storage in animals, fungi, and bacteria. It is the main storage form of glucose in the human body.
Glycogen functions as one of three regularly used forms of energy reserves, creatine phosphate being for very short-term, glycogen being for short-term and the triglyceride stores in adipose tissue (i.e., body fat) being for long-term storage. Protein, broken down into amino acids, is seldom used as a main energy source except during starvation and glycolytic crisis (see bioenergetic systems).
In humans, glycogen is made and stored primarily in the cells of the liver and skeletal muscle. In the liver, glycogen can make up 5–6% of the organ's fresh weight: the liver of an adult, weighing 1.5 kg, can store roughly 100–120 grams of glycogen. In skeletal muscle, glycogen is found in a low concentration (1–2% of the muscle mass): the skeletal muscle of an adult weighing 70 kg stores roughly 400 grams of glycogen. Small amounts of glycogen are also found in other tissues and cells, including the kidneys, red blood cells, white blood cells, and glial cells in the brain. The uterus also stores glycogen during pregnancy to nourish the embryo.
The amount of glycogen stored in the body mostly depends on oxidative type 1 fibres, physical training, basal metabolic rate, and eating habits. Different levels of resting muscle glycogen are reached by changing the number of glycogen particles, rather than increasing the size of existing particles, though most glycogen particles at rest are smaller than their theoretical maximum.
Approximately 4 grams of glucose are present in the blood of humans at all times; in fasting individuals, blood glucose is maintained constant at this level at the expense of glycogen stores, primarily from the liver (glycogen in skeletal muscle is mainly used as an immediate source of energy for that muscle rather than being used to maintain physiological blood glucose levels). Glycogen stores in skeletal muscle serve as a form of energy storage for the muscle itself; however, the breakdown of muscle glycogen impedes muscle glucose uptake from the blood, thereby increasing the amount of blood glucose available for use in other tissues. Liver glycogen stores serve as a store of glucose for use throughout the body, particularly the central nervous system. The human brain consumes approximately 60% of blood glucose in fasted, sedentary individuals.
Glycogen is an analogue of starch, a glucose polymer that functions as energy storage in plants. It has a structure similar to amylopectin (a component of starch), but is more extensively branched and compact than starch. Both are white powders in their dry state. Glycogen is found in the form of granules in the cytosol/cytoplasm in many cell types, and plays an important role in the glucose cycle. Glycogen forms an energy reserve that can be quickly mobilized to meet a sudden need for glucose, but one that is less compact than the energy reserves of triglycerides (lipids). As such it is also found as storage reserve in many parasitic protozoa.
Structure
Glycogen is a branched biopolymer consisting of linear chains of glucose residues with an average chain length of approximately 8–12 glucose units and 2,000–60,000 residues per molecule of glycogen.
Like amylopectin, glucose units are linked together linearly by α(1→4) glycosidic bonds from one glucose to the next. Branches are linked to the chains from which they are branching off by α(1→6) glycosidic bonds between the first glucose of the new branch and a glucose on the stem chain.
Each glycogen molecule is essentially a ball of glucose trees, with around 12 layers, centered on a glycogenin protein, with three kinds of glucose chains: A, B, and C. There is only one C-chain, attached to the glycogenin. This C-chain is formed by the self-glucosylation of the glycogenin, forming a short primer chain. From the C-chain grow out B-chains, and from the B-chains branch out further B- and A-chains. The B-chains have on average 2 branch points, while the A-chains are terminal, and thus unbranched. On average, each chain has a length of 12 glucose units, tightly constrained between 11 and 15. All A-chains reach the spherical surface of the glycogen.
Glycogen in muscle, liver, and fat cells is stored in a hydrated form, composed of three or four parts of water per part of glycogen associated with 0.45 millimoles (18 mg) of potassium per gram of glycogen.
Glucose is an osmotic molecule, and can have profound effects on osmotic pressure in high concentrations possibly leading to cell damage or death if stored in the cell without being modified. Glycogen is a non-osmotic molecule, so it can be used as a solution to storing glucose in the cell without disrupting osmotic pressure.
Functions
Liver
As a meal containing carbohydrates or protein is eaten and digested, blood glucose levels rise, and the pancreas secretes insulin. Blood glucose from the portal vein enters liver cells (hepatocytes). Insulin acts on the hepatocytes to stimulate the action of several enzymes, including glycogen synthase. Glucose molecules are added to the chains of glycogen as long as both insulin and glucose remain plentiful. In this postprandial or "fed" state, the liver takes in more glucose from the blood than it releases.
After a meal has been digested and glucose levels begin to fall, insulin secretion is reduced, and glycogen synthesis stops. When it is needed for energy, glycogen is broken down and converted again to glucose. Glycogen phosphorylase is the primary enzyme of glycogen breakdown. For the next 8–12 hours, glucose derived from liver glycogen is the primary source of blood glucose used by the rest of the body for fuel.
Glucagon, another hormone produced by the pancreas, in many respects serves as a countersignal to insulin. In response to insulin levels being below normal (when blood levels of glucose begin to fall below the normal range), glucagon is secreted in increasing amounts and stimulates both glycogenolysis (the breakdown of glycogen) and gluconeogenesis (the production of glucose from other sources).
Muscle
Muscle glycogen appears to function as a reserve of quickly available phosphorylated glucose, in the form of glucose-1-phosphate, for muscle cells. The glycogen contained within skeletal muscle cells is primarily in the form of β particles. Other cells that contain small amounts use it locally as well. As muscle cells lack glucose-6-phosphatase, which is required to pass glucose into the blood, the glycogen they store is available solely for internal use and is not shared with other cells. This is in contrast to liver cells, which, on demand, readily break down their stored glycogen into glucose and send it through the blood stream as fuel for other organs.
Skeletal muscle needs ATP to provide the energy for muscle contraction and relaxation, in what is known as the sliding filament theory. Skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity. During anaerobic activity, such as weightlifting and isometric exercise, the phosphagen system (ATP–PCr) and muscle glycogen are the only substrates used, as they do not require oxygen or blood flow.
Different bioenergetic systems produce ATP at different speeds, with ATP produced from muscle glycogen being much faster than that from fatty acid oxidation. The level of exercise intensity also determines how much of which substrate (fuel) is used for ATP synthesis. Muscle glycogen can supply a much higher rate of substrate for ATP synthesis than blood glucose: during maximum-intensity exercise, muscle glycogen can supply 40 mmol glucose/kg wet weight/minute, whereas blood glucose can supply 4–5 mmol. Due to this high supply rate and quick ATP synthesis, during high-intensity aerobic activity (such as brisk walking, jogging, or running), the higher the exercise intensity, the more the muscle cell produces ATP from muscle glycogen. This reliance on muscle glycogen is not only to provide the muscle with enough ATP during high-intensity exercise, but also to maintain blood glucose homeostasis (that is, to avoid becoming hypoglycaemic through the muscles needing to extract far more glucose from the blood than the liver can provide). A deficit of muscle glycogen leads to muscle fatigue known as "hitting the wall" or "the bonk" (see below under glycogen depletion).
Structure Type
In 1999, Meléndez et al. claimed that the structure of glycogen is optimal under a particular metabolic constraint model, in which the structure was suggested to be "fractal" in nature. However, research by Besford et al. used small-angle X-ray scattering experiments, accompanied by branching-theory models, to show that glycogen is a randomly hyperbranched polymer nanoparticle and is not fractal in nature. This has been subsequently verified by others, who have performed Monte Carlo simulations of glycogen particle growth and shown that the molecular density reaches a maximum near the centre of the nanoparticle structure, not at the periphery (contradicting a fractal structure, which would have greater density at the periphery).
History
Glycogen was discovered by Claude Bernard. His experiments showed that the liver contained a substance that could give rise to reducing sugar by the action of a "ferment" in the liver. By 1857, he described the isolation of a substance he called "la matière glycogène", or "sugar-forming substance". Soon after the discovery of glycogen in the liver, M.A. Sanson found that muscular tissue also contains glycogen. The empirical formula for glycogen, (C6H10O5)n, was established by August Kekulé in 1858.
Sanson, M. A. "Note sur la formation physiologique du sucre dans l’economie animale." Comptes rendus des seances de l’Academie des Sciences 44 (1857): 1323-5.
Metabolism
Synthesis
Glycogen synthesis is, unlike its breakdown, endergonic—it requires the input of energy. Energy for glycogen synthesis comes from uridine triphosphate (UTP), which reacts with glucose-1-phosphate, forming UDP-glucose, in a reaction catalysed by UTP—glucose-1-phosphate uridylyltransferase. Glycogen is synthesized from monomers of UDP-glucose initially by the protein glycogenin, which has two tyrosine anchors for the reducing end of glycogen, since glycogenin is a homodimer. After about eight glucose molecules have been added to a tyrosine residue, the enzyme glycogen synthase progressively lengthens the glycogen chain using UDP-glucose, adding α(1→4)-bonded glucose to the nonreducing end of the glycogen chain.
The glycogen branching enzyme catalyzes the transfer of a terminal fragment of six or seven glucose residues from a nonreducing end to the C-6 hydroxyl group of a glucose residue deeper into the interior of the glycogen molecule. The branching enzyme can act upon only a branch having at least 11 residues, and the enzyme may transfer to the same glucose chain or adjacent glucose chains.
Breakdown
Glycogen is cleaved from the nonreducing ends of the chain by the enzyme glycogen phosphorylase to produce monomers of glucose-1-phosphate:

glycogen (n residues) + Pi ⇌ glycogen (n−1 residues) + glucose-1-phosphate
In vivo, phosphorolysis proceeds in the direction of glycogen breakdown because the ratio of phosphate to glucose-1-phosphate is usually greater than 100. Glucose-1-phosphate is then converted to glucose-6-phosphate (G6P) by phosphoglucomutase. A special debranching enzyme is needed to remove the α(1→6) branches in branched glycogen and reshape the chain into a linear polymer. The G6P monomers produced have three possible fates:
G6P can continue on the glycolysis pathway and be used as fuel.
G6P can enter the pentose phosphate pathway via the enzyme glucose-6-phosphate dehydrogenase to produce NADPH and 5 carbon sugars.
In the liver and kidney, G6P can be dephosphorylated back to glucose by the enzyme glucose 6-phosphatase. This is the final step in the gluconeogenesis pathway.
Clinical relevance
Disorders of glycogen metabolism
The most common disease in which glycogen metabolism becomes abnormal is diabetes, in which, because of abnormal amounts of insulin, liver glycogen can be abnormally accumulated or depleted. Restoration of normal glucose metabolism usually normalizes glycogen metabolism, as well.
In hypoglycemia caused by excessive insulin, liver glycogen levels are high, but the high insulin levels prevent the glycogenolysis necessary to maintain normal blood sugar levels. Glucagon is a common treatment for this type of hypoglycemia.
Various inborn errors of carbohydrate metabolism are caused by deficiencies of enzymes or transport proteins necessary for glycogen synthesis or breakdown. These are collectively referred to as glycogen storage diseases.
Glycogen depletion and endurance exercise
Long-distance athletes, such as marathon runners, cross-country skiers, and cyclists, often experience glycogen depletion, where almost all of the athlete's glycogen stores are depleted after long periods of exertion without sufficient carbohydrate consumption. This phenomenon is referred to as "hitting the wall" in running and "bonking" in cycling.
Glycogen depletion can be forestalled in three possible ways:
First, during exercise, carbohydrates with the highest possible rate of conversion to blood glucose (high glycemic index) are ingested continuously. The best possible outcome of this strategy replaces about 35% of glucose consumed at heart rates above about 80% of maximum.
Second, through endurance training adaptations and specialized regimens (e.g. fasting, low-intensity endurance training), the body can condition type I muscle fibers to improve both fuel use efficiency and workload capacity to increase the percentage of fatty acids used as fuel, sparing carbohydrate use from all sources.
Third, by consuming large quantities of carbohydrates after depleting glycogen stores as a result of exercise or diet, the body can increase storage capacity of intramuscular glycogen stores. This process is known as carbohydrate loading. In general, glycemic index of carbohydrate source does not matter since muscular insulin sensitivity is increased as a result of temporary glycogen depletion.
When athletes ingest both carbohydrate and caffeine following exhaustive exercise, their glycogen stores tend to be replenished more rapidly; however, the minimum dose of caffeine at which there is a clinically significant effect on glycogen repletion has not been established.
Nanomedicine
Glycogen nanoparticles have been investigated as potential drug delivery systems.
See also
Bioenergetic systems
Chitin
Peptidoglycan
References
External links
Exercise physiology
Glycobiology
Hepatology
Nutrition
Polysaccharides | Glycogen | [
"Chemistry",
"Biology"
] | 3,462 | [
"Biochemistry",
"Glycobiology",
"Carbohydrates",
"Polysaccharides"
] |
169,146 | https://en.wikipedia.org/wiki/Cold%20cathode | A cold cathode is a cathode that is not electrically heated by a filament. A cathode may be considered "cold" if it emits more electrons than can be supplied by thermionic emission alone. It is used in gas-discharge lamps, such as neon lamps, discharge tubes, and some types of vacuum tube. The other type of cathode is a hot cathode, which is heated by electric current passing through a filament. A cold cathode does not necessarily operate at a low temperature: it is often heated to its operating temperature by other methods, such as the current passing from the cathode into the gas.
Cold-cathode devices
A cold-cathode vacuum tube does not rely on external heating of an electrode to provide thermionic emission of electrons. Early cold-cathode devices included the Geissler tube and Plucker tube, and early cathode-ray tubes. Study of the phenomena in these devices led to the discovery of the electron.
Neon lamps are used both to produce light as indicators and for special-purpose illumination, and also as circuit elements displaying negative resistance. Addition of a trigger electrode to a device allowed the glow discharge to be initiated by an external control circuit; Bell Laboratories developed a "trigger tube" cold-cathode device in 1936.
Many types of cold-cathode switching tube were developed, including various types of thyratron, the krytron, cold-cathode displays (Nixie tube) and others. Voltage regulator tubes rely on the relatively constant voltage of a glow discharge over a range of current and were used to stabilize power-supply voltages in tube-based instruments. A Dekatron is a cold-cathode tube with multiple electrodes that is used for counting. Each time a pulse is applied to a control electrode, a glow discharge moves to a step electrode; by providing ten electrodes in each tube and cascading the tubes, a counter system can be developed and the count observed by the position of the glow discharges. Counter tubes were used widely before development of integrated circuit counter devices.
The flash tube is a cold-cathode device filled with xenon gas, used to produce an intense short pulse of light for photography or to act as a stroboscope to examine the motion of moving parts.
Lamps
Cold-cathode lamps include cold-cathode fluorescent lamps (CCFLs) and neon lamps. Neon lamps primarily rely on excitation of gas molecules to emit light; CCFLs use a discharge in mercury vapor to develop ultraviolet light, which in turn causes a fluorescent coating on the inside of the lamp to emit visible light.
Cold-cathode fluorescent lamps were used for backlighting of LCDs, for example computer monitors and television screens.
In the lighting industry, “cold cathode” historically refers to luminous tubing larger than 20 mm in diameter and operating on a current of 120 to 240 milliamperes. This larger-diameter tubing is often used for interior alcove and general lighting.
The term "neon lamp" refers to tubing that is smaller than 15 mm in diameter and typically operates at approximately 40 milliamperes. These lamps are commonly used for neon signs.
Details
The cathode is the negative electrode. Any gas-discharge lamp has a positive (anode) and a negative electrode. Both electrodes alternate between acting as an anode and a cathode when these devices run with alternating current.
A cold cathode is distinguished from a hot cathode that is heated to induce thermionic emission of electrons. Discharge tubes with hot cathodes have an envelope filled with low-pressure gas and containing two electrodes. Hot cathode devices include common vacuum tubes, fluorescent lamps, high-pressure discharge lamps and vacuum fluorescent displays.
The surface of cold cathodes can emit secondary electrons at a ratio greater than unity (breakdown). An electron that leaves the cathode will collide with neutral gas molecules. The collision may just excite the molecule, but sometimes it will knock an electron free to create a positive ion. The original electron and the freed electron continue toward the anode and may create more positive ions (see Townsend avalanche). The result is that for each electron that leaves the cathode, several positive ions are generated that eventually crash onto the cathode. Some of these impinging positive ions generate a secondary electron. The discharge is self-sustaining when, for each electron that leaves the cathode, enough positive ions hit the cathode to free, on average, another electron. External circuitry limits the discharge current. Cold-cathode discharge lamps use higher voltages than hot-cathode ones. The resulting strong electric field near the cathode accelerates ions to a sufficient velocity to create free electrons from the cathode material.
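In the standard Townsend description of such a discharge, this self-sustainment condition is commonly written using the first ionization coefficient α (ionizing collisions per electron per unit length) and the secondary-emission coefficient γ (electrons released per impinging ion); this is textbook notation rather than a formula given explicitly in the text above:

\gamma \left( e^{\alpha d} - 1 \right) \geq 1

where d is the electrode separation: each electron leaving the cathode produces e^{αd} − 1 positive ions on its way to the anode, and the discharge sustains itself once these ions, on average, liberate at least one further electron from the cathode.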
Another mechanism to generate free electrons from a cold metallic surface is field electron emission. It is used in some x-ray tubes, the field-electron microscope (FEM), and field-emission displays (FEDs).
Cold cathodes sometimes have a rare-earth coating to enhance electron emission. Some types contain a source of beta radiation to start ionization of the gas that fills the tube. In some tubes, glow discharge around the cathode is usually minimized; instead there is a so-called positive column, filling the tube. Examples are the neon lamp and nixie tubes. Nixie tubes too are cold-cathode neon displays that are in-line, but not in-plane, display devices.
Cold-cathode devices typically use a complex high-voltage power supply with some mechanism for limiting current. Although creating the initial space charge and the first arc of current through the tube may require a very high voltage, once the tube begins to heat up, the electrical resistance drops, thus increasing the electric current through the lamp. To offset this effect and maintain normal operation, the supply voltage is gradually lowered. In the case of tubes with an ionizing gas, the gas can become a very hot plasma, and electrical resistance is greatly reduced. If operated from a simple power supply without current limiting, this reduction in resistance would lead to damage to the power supply and overheating of the tube electrodes.
Applications
Cold cathodes are used in cold-cathode rectifiers, such as the crossatron and mercury-arc valves, and cold-cathode amplifiers, such as in automatic message accounting and other pseudospark switching applications. Other examples include the thyratron, krytron, sprytron, and ignitron tubes.
A common cold-cathode application is in neon signs and other locations where the ambient temperature is likely to drop well below freezing. The Clock Tower, Palace of Westminster (Big Ben) uses cold-cathode lighting behind the clock faces, where continual striking and failure to strike in cold weather would be undesirable. Large cold-cathode fluorescent lamps (CCFLs) have been produced in the past and are still used today when shaped, long-life linear light sources are required. Miniature CCFLs have been used extensively as backlights for computer and television liquid-crystal displays. CCFL lifespans in LCD televisions vary depending on transient voltage surges and temperature levels in usage environments.
Due to its efficiency, CCFL technology has expanded into room lighting. Costs are similar to those of traditional fluorescent lighting, but with several advantages: it has a long life, the light emitted is , bulbs turn on instantly to full output and are also dimmable.
Effects of internal heating
In systems using alternating current but without separate anode structures, the electrodes alternate as anodes and cathodes, and the impinging electrons can cause substantial localized heating, often to red heat. The electrode may take advantage of this heating to facilitate the thermionic emission of electrons when it is acting as a cathode. (Instant-start fluorescent lamps employ this aspect; they start as cold-cathode devices, but soon localized heating of the fine tungsten-wire cathodes causes them to operate in the same mode as hot-cathode lamps.)
This aspect is problematic in the case of backlights used for LCD TV displays. New energy-efficiency regulations being proposed in many countries will require variable backlighting; variable backlighting also improves the perceived contrast range, which is desirable for LCD TV sets. However, CCFLs are strictly limited in the degree to which they can be dimmed, both because a lower plasma current will lower the temperature of the cathode, causing erratic operation, and because running the cathode at too low a temperature drastically shortens the life of the lamps. Much research is being directed to this problem, but high-end manufacturers are now turning to high-efficiency white LEDs as a better solution.
References and notes
Notes
Citations
Electrodes
Gas discharge lamps
Types of lamp
Vacuum
Vacuum tubes | Cold cathode | [
"Physics",
"Chemistry"
] | 1,852 | [
"Vacuum tubes",
"Electrodes",
"Vacuum",
"Electrochemistry",
"Matter"
] |
169,169 | https://en.wikipedia.org/wiki/Cryopump | A cryopump or a "cryogenic pump" is a vacuum pump that traps gases and vapours by condensing them on a cold surface, but are only effective on some gases. The effectiveness depends on the freezing and boiling points of the gas relative to the cryopump's temperature. They are sometimes used to block particular contaminants, for example in front of a diffusion pump to trap backstreaming oil, or in front of a McLeod gauge to keep out water. In this function, they are called a cryotrap, waterpump or cold trap, even though the physical mechanism is the same as for a cryopump.
Cryotrapping can also refer to a somewhat different effect, where molecules will increase their residence time on a cold surface without actually freezing (supercooling). There is a delay between the molecule impinging on the surface and rebounding from it. Kinetic energy will have been lost as the molecules slow down. For example, hydrogen does not condense at 8 kelvins, but it can be cryotrapped. This effectively traps molecules for an extended period and thereby removes them from the vacuum environment just like cryopumping.
History
Early experiments into the cryotrapping of gases in activated charcoal were conducted as far back as 1874.
The first cryopumps mainly used liquid helium to cool the pump, either in a large liquid helium reservoir, or by continuous flow into the cryopump. However, over time most cryopumps were redesigned to use gaseous helium, enabled by the invention of better cryocoolers. The key refrigeration technology was discovered in the 1950s by two employees of the Massachusetts-based company Arthur D. Little Inc., William E. Gifford and Howard O. McMahon. This technology came to be known as the Gifford-McMahon cryocooler. In the 1970s, the Gifford-McMahon cryocooler was used to make a vacuum pump by Helix Technology Corporation and its subsidiary company Cryogenic Technology Inc. In 1976, cryopumps began to be used in IBM's manufacturing of integrated circuits. The use of cryopumps became common in semiconductor manufacturing worldwide, with expansions such as a cryogenics company founded jointly by Helix and ULVAC (jp:アルバック) in 1981.
Operation
Cryopumps are commonly cooled by compressed helium, though they may also use dry ice, liquid nitrogen, or stand-alone versions may include a built-in cryocooler. Baffles are often attached to the cold head to expand the surface area available for condensation, but these also increase the radiative heat uptake of the cryopump. Over time, the surface eventually saturates with condensate and thus the pumping speed gradually drops to zero. It will hold the trapped gases as long as it remains cold, but it will not condense fresh gases from leaks or backstreaming until it is regenerated. Saturation happens very quickly in low vacuums, so cryopumps are usually only used in high or ultrahigh vacuum systems.
The cryopump provides fast, clean pumping of all gases in the 10−3 to 10−9 Torr range. The cryopump operates on the principle that gases can be condensed and held at extremely low vapor pressures, achieving high speeds and throughputs. The cold head consists of a two-stage cold head cylinder (part of the vacuum vessel) and a drive unit displacer assembly. These together produce closed-cycle refrigeration at temperatures that typically range from 60 to 80 K for the first-stage cold station and from 10 to 20 K for the second-stage cold station.
Some cryopumps have multiple stages at various low temperatures, with the outer stages shielding the coldest inner stages. The outer stages condense high boiling point gases such as water and oil, thus saving the surface area and refrigeration capacity of the inner stages for lower boiling point gases such as nitrogen.
As cooling temperatures decrease when using dry ice, liquid nitrogen, then compressed helium, lower molecular-weight gases can be trapped. Trapping nitrogen, helium, and hydrogen requires extremely low temperatures (~10K) and large surface area as described below. Even at this temperature, the lighter gases helium and hydrogen have very low trapping efficiency and are the predominant molecules in ultra-high vacuum systems.
Cryopumps are often combined with sorption pumps by coating the cold head with highly adsorbing materials such as activated charcoal or a zeolite. As the sorbent saturates, the effectiveness of a sorption pump decreases, but can be recharged by heating the zeolite material (preferably under conditions of low pressure) to outgas it. The breakdown temperature of the zeolite material's porous structure may limit the maximum temperature that it may be heated to for regeneration.
Sorption pumps are a type of cryopump that is often used as roughing pumps to reduce pressures from the range of atmospheric to on the order of 0.1 Pa (10−3 Torr), while lower pressures are achieved using a finishing pump (see vacuum).
Regeneration
Regeneration of a cryopump is the process of evaporating the trapped gases. During a regeneration cycle, the cryopump is warmed to room temperature or higher, allowing trapped gases to change from a solid state to a gaseous state and thereby be released from the cryopump through a pressure relief valve into the atmosphere.
Most production equipment utilizing a cryopump has a means to isolate the cryopump from the vacuum chamber so regeneration takes place without exposing the vacuum system to released gases such as water vapor. Water vapor is the hardest natural substance to remove from vacuum chamber walls upon exposure to the atmosphere, due to monolayer formation and hydrogen bonding. Adding heat to the dry nitrogen purge gas will speed the warm-up and reduce the regeneration time.
When regeneration is complete, the cryopump will be roughed to 50 μm (50 millitorr, or about 6.7 Pa), isolated, and the rate-of-rise (ROR) will be monitored to test for complete regeneration. If the ROR exceeds 10 μm/min, the cryopump will require additional purge time.
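The rate-of-rise test described above is simple to express in code. The sketch below is a hypothetical helper, not part of any real controller software: the function name, threshold values, and pressure units are assumptions taken from the figures in the text (base pressure 50 millitorr, pass criterion 10 millitorr per minute).

```python
def regeneration_complete(p_start_mtorr, p_end_mtorr, interval_min,
                          max_ror_mtorr_per_min=10.0):
    """Return True if the rate of rise (ROR) after isolation is acceptable.

    p_start_mtorr: pressure right after roughing and isolating the cryopump
    p_end_mtorr:   pressure after waiting `interval_min` minutes
    """
    ror = (p_end_mtorr - p_start_mtorr) / interval_min
    return ror <= max_ror_mtorr_per_min

# Example: pump roughed to 50 mTorr, isolated, and read again 5 minutes later
print(regeneration_complete(50.0, 80.0, 5.0))   # ROR = 6 mTorr/min -> True
print(regeneration_complete(50.0, 120.0, 5.0))  # ROR = 14 mTorr/min -> False
```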
References
Vacuum pumps
Gases
Gas technologies | Cryopump | [
"Physics",
"Chemistry",
"Engineering"
] | 1,300 | [
"Matter",
"Vacuum pumps",
"Vacuum systems",
"Phases of matter",
"Vacuum",
"Statistical mechanics",
"Gases"
] |
169,188 | https://en.wikipedia.org/wiki/Color%20confinement | In quantum chromodynamics (QCD), color confinement, often simply called confinement, is the phenomenon that color-charged particles (such as quarks and gluons) cannot be isolated, and therefore cannot be directly observed in normal conditions below the Hagedorn temperature of approximately 2 terakelvin (corresponding to energies of approximately 130–140 MeV per particle). Quarks and gluons must clump together to form hadrons. The two main types of hadron are the mesons (one quark, one antiquark) and the baryons (three quarks). In addition, colorless glueballs formed only of gluons are also consistent with confinement, though difficult to identify experimentally. Quarks and gluons cannot be separated from their parent hadron without producing new hadrons.
Origin
There is not yet an analytic proof of color confinement in any non-abelian gauge theory. The phenomenon can be understood qualitatively by noting that the force-carrying gluons of QCD have color charge, unlike the photons of quantum electrodynamics (QED). Whereas the electric field between electrically charged particles decreases rapidly as those particles are separated, the gluon field between a pair of color charges forms a narrow flux tube (or string) between them. Because of this behavior of the gluon field, the strong force between the particles is constant regardless of their separation.
Therefore, as two color charges are separated, at some point it becomes energetically favorable for a new quark–antiquark pair to appear, rather than extending the tube further. As a result of this, when quarks are produced in particle accelerators, instead of seeing the individual quarks in detectors, scientists see "jets" of many color-neutral particles (mesons and baryons), clustered together. This process is called hadronization, fragmentation, or string breaking.
The confining phase is usually defined by the behavior of the action of the Wilson loop, which is simply the path in spacetime traced out by a quark–antiquark pair created at one point and annihilated at another point. In a non-confining theory, the action of such a loop is proportional to its perimeter. However, in a confining theory, the action of the loop is instead proportional to its area. Since the area is proportional to the separation of the quark–antiquark pair, free quarks are suppressed. Mesons are allowed in such a picture, since a loop containing another loop with the opposite orientation has only a small area between the two loops. At non-zero temperatures, the order operator for confinement are thermal versions of Wilson loops known as Polyakov loops.
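In standard notation (not spelled out in the text above), with P(C) the perimeter and A(C) the minimal area of the loop C, the two behaviours read

\langle W(C) \rangle \sim e^{-\mu\, P(C)} \quad \text{(non-confining)}, \qquad \langle W(C) \rangle \sim e^{-\sigma\, A(C)} \quad \text{(confining)},

and for a rectangular loop of spatial extent r and temporal extent T the area law is equivalent to a linearly rising static quark–antiquark potential V(r) ≈ σr, where σ is the string tension.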
Confinement scale
The confinement scale or QCD scale is the scale at which the perturbatively defined strong coupling constant diverges. This is known as the Landau pole. The definition and value of the confinement scale therefore depend on the renormalization scheme used. For example, in the MS-bar scheme and at 4-loop order in the running of α_s, the world average in the 3-flavour case is approximately 330 MeV.
When the renormalization group equation is solved exactly, the scale is not defined at all. It is therefore customary to quote the value of the strong coupling constant at a particular reference scale instead.
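A minimal illustration of how such a scale Λ arises is the one-loop solution of the renormalization group equation for the strong coupling (the world averages quoted above use the far more involved 4-loop running); with n_f active quark flavours,

\alpha_s(\mu^2) = \frac{1}{b_0 \ln(\mu^2/\Lambda^2)}, \qquad b_0 = \frac{33 - 2 n_f}{12\pi},

so that α_s formally diverges as μ → Λ, which is the Landau pole referred to above.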
It is sometimes believed that the sole origin of confinement is the very large value of the strong coupling near the Landau pole. This is sometimes referred to as infrared slavery (a term chosen to contrast with ultraviolet freedom). This is, however, incorrect, since in QCD the Landau pole is unphysical, which can be seen from the fact that its position largely depends on the chosen renormalization scheme, i.e., on a convention. Most evidence points to a moderately large coupling, typically of value 1–3 depending on the choice of renormalization scheme. In contrast to the simple but erroneous mechanism of infrared slavery, a large coupling is only one ingredient for color confinement; the other is that gluons are color-charged and can therefore collapse into gluon tubes.
Models exhibiting confinement
In addition to QCD in four spacetime dimensions, the two-dimensional Schwinger model also exhibits confinement. Compact Abelian gauge theories also exhibit confinement in 2 and 3 spacetime dimensions. Confinement has been found in elementary excitations of magnetic systems called spinons.
If the electroweak symmetry breaking scale were lowered, the unbroken SU(2) interaction would eventually become confining. Alternative models where SU(2) becomes confining above that scale are quantitatively similar to the Standard Model at lower energies, but dramatically different above symmetry breaking.
Models of fully screened quarks
Besides the quark confinement idea, there is a potential possibility that the color charge of quarks gets fully screened by the gluonic color surrounding the quark. Exact solutions of SU(3) classical Yang–Mills theory which provide full screening (by gluon fields) of the color charge of a quark have been found. However, such classical solutions do not take into account non-trivial properties of QCD vacuum. Therefore, the significance of such full gluonic screening solutions for a separated quark is not clear.
See also
Lund string model
Gluon field strength tensor
Asymptotic freedom
Beta function (physics)
Yang–Mills existence and mass gap
Lattice gauge theory
Dual superconductor model
Center vortex
References
Gluons
Quantum chromodynamics
Quark matter
Unsolved problems in physics | Color confinement | [
"Physics"
] | 1,149 | [
"Astrophysics",
"Unsolved problems in physics",
"Quark matter",
"Nuclear physics"
] |
169,283 | https://en.wikipedia.org/wiki/CHSH%20inequality | In physics, the Clauser–Horne–Shimony–Holt (CHSH) inequality can be used in the proof of Bell's theorem, which states that certain consequences of entanglement in quantum mechanics cannot be reproduced by local hidden-variable theories. Experimental verification of the inequality being violated is seen as confirmation that nature cannot be described by such theories. CHSH stands for John Clauser, Michael Horne, Abner Shimony, and Richard Holt, who described it in a much-cited paper published in 1969. They derived the CHSH inequality, which, as with John Stewart Bell's original inequality, is a constraint—on the statistical occurrence of "coincidences" in a Bell test—which is necessarily true if an underlying local hidden-variable theory exists. In practice, the inequality is routinely violated by modern experiments in quantum mechanics.
Statement
The usual form of the CHSH inequality is

|S| ≤ 2,

where

S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′).

Here a and a′ are detector settings on side A, and b and b′ on side B, the four combinations being tested in separate subexperiments. The terms E(a, b) etc. are the quantum correlations of the particle pairs, where the quantum correlation is defined to be the expectation value of the product of the "outcomes" of the experiment, i.e. the statistical average of A(a)·B(b), where A and B are the separate outcomes, using the coding +1 for the '+' channel and −1 for the '−' channel. Clauser et al.'s 1969 derivation was oriented towards the use of "two-channel" detectors, and indeed it is for these that it is generally used, but under their method the only possible outcomes were +1 and −1. In order to adapt to real situations, which at the time meant the use of polarised light and single-channel polarisers, they had to interpret '−' as meaning "non-detection in the '+' channel", i.e. either '−' or nothing. They did not in the original article discuss how the two-channel inequality could be applied in real experiments with real imperfect detectors, though it was later proved that the inequality itself was equally valid. The occurrence of zero outcomes, though, means it is no longer so obvious how the values of E are to be estimated from the experimental data.
The mathematical formalism of quantum mechanics predicts that the value of S exceeds 2 for systems prepared in suitable entangled states and the appropriate choice of measurement settings (see below). The maximum violation predicted by quantum mechanics is 2√2 (Tsirelson's bound) and can be obtained from a maximally entangled Bell state.
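The 2√2 prediction is easy to check numerically. The sketch below is an illustration, not code from any referenced experiment: it evaluates the four correlations for the Bell state (|00⟩ + |11⟩)/√2 with polarization-type observables A(θ) = cos(2θ)·Z + sin(2θ)·X at the conventional Bell-test angles; the operator parametrization and angle choice are assumptions made for the example.

```python
import numpy as np

# Pauli matrices and the Bell state (|00> + |11>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
psi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

def A(theta):
    """Dichotomic observable for a polarization-style measurement at angle theta."""
    return np.cos(2 * theta) * Z + np.sin(2 * theta) * X

def E(alpha, beta):
    """Quantum correlation <A(alpha) (x) A(beta)> in the Bell state."""
    op = np.kron(A(alpha), A(beta))
    return psi @ op @ psi

a, ap = 0.0, np.pi / 4            # 0 and 45 degrees
b, bp = np.pi / 8, 3 * np.pi / 8  # 22.5 and 67.5 degrees

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(S, 2 * np.sqrt(2))  # both approximately 2.828
```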
Experiments
Many Bell tests conducted subsequent to Alain Aspect's second experiment in 1982 have used the CHSH inequality, estimating the terms using (3) and assuming fair sampling. Some dramatic violations of the inequality have been reported.
In practice most actual experiments have used light rather than the electrons that Bell originally had in mind. The property of interest is, in the best known experiments, the polarisation direction, though other properties can be used. The diagram shows a typical optical experiment. Coincidences (simultaneous detections) are recorded, the results being categorised as '++', '+−', '−+' or '−−' and corresponding counts accumulated.
Four separate subexperiments are conducted, corresponding to the four terms E(a, b) in the test statistic S (above). The settings a = 0°, a′ = 45°, b = 22.5°, and b′ = 67.5° are generally chosen in practice—the "Bell test angles"—these being the ones for which the quantum mechanical formula gives the greatest violation of the inequality.
For each selected value of (a, b), the numbers of coincidences in each category (N++, N−−, N+−, N−+) are recorded. The experimental estimate for E(a, b) is then calculated as:

E(a, b) = (N++ − N+− − N−+ + N−−) / (N++ + N+− + N−+ + N−−)
Once all four E's have been estimated, an experimental estimate of S can be found. If it is numerically greater than 2 it has infringed the CHSH inequality and the experiment is declared to have supported the quantum mechanics prediction and ruled out all local hidden-variable theories.
The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment.
The CHSH inequality has been violated with photon pairs, beryllium ion pairs, ytterbium ion pairs, rubidium atom pairs, whole rubidium-atom cloud pairs, nitrogen vacancies in diamonds, and Josephson phase qubits.
Derivation
The original 1969 derivation will not be given here since it is not easy to follow and involves the assumption that the outcomes are all +1 or −1, never zero. Bell's 1971 derivation is more general. He effectively assumes the "Objective Local Theory" later used by Clauser and Horne. It is assumed that any hidden variables associated with the detectors themselves are independent on the two sides and can be averaged out from the start. Another derivation of interest is given in Clauser and Horne's 1974 paper, in which they start from the CH74 inequality.
Bell's 1971 derivation
The following is based on page 37 of Bell's Speakable and Unspeakable, the main change being to use the symbol ‘E’ instead of ‘P’ for the expected value of the quantum correlation. This avoids any suggestion that the quantum correlation is itself a probability.
We start with the standard assumption of independence of the two sides, enabling us to obtain the joint probabilities of pairs of outcomes by multiplying the separate probabilities, for any selected value of the "hidden variable" λ. λ is assumed to be drawn from a fixed distribution of possible states of the source, the probability of the source being in the state λ for any particular trial being given by the density function ρ(λ), the integral of which over the complete hidden variable space is 1. We thus assume we can write:

E(a, b) = ∫ A(a, λ) B(b, λ) ρ(λ) dλ
where A and B are the outcomes. Since the possible values of A and B are −1, 0 and +1, it follows that:

|A(a, λ)| ≤ 1,  |B(b, λ)| ≤ 1
Then, if a, a′, b and b′ are alternative settings for the detectors,

E(a, b) − E(a, b′)
 = ∫ [A(a, λ) B(b, λ) − A(a, λ) B(b′, λ)] ρ(λ) dλ
 = ∫ A(a, λ) B(b, λ) [1 ± A(a′, λ) B(b′, λ)] ρ(λ) dλ − ∫ A(a, λ) B(b′, λ) [1 ± A(a′, λ) B(b, λ)] ρ(λ) dλ

Taking absolute values of both sides, and applying the triangle inequality to the right-hand side, we obtain

|E(a, b) − E(a, b′)| ≤ |∫ A(a, λ) B(b, λ) [1 ± A(a′, λ) B(b′, λ)] ρ(λ) dλ| + |∫ A(a, λ) B(b′, λ) [1 ± A(a′, λ) B(b, λ)] ρ(λ) dλ|

We use the fact that [1 ± A(a′, λ) B(b′, λ)] ρ(λ) and [1 ± A(a′, λ) B(b, λ)] ρ(λ) are both non-negative to rewrite the right-hand side of this as

∫ |A(a, λ) B(b, λ)| [1 ± A(a′, λ) B(b′, λ)] ρ(λ) dλ + ∫ |A(a, λ) B(b′, λ)| [1 ± A(a′, λ) B(b, λ)] ρ(λ) dλ

By the bounds |A| ≤ 1 and |B| ≤ 1, this must be less than or equal to

∫ [1 ± A(a′, λ) B(b′, λ)] ρ(λ) dλ + ∫ [1 ± A(a′, λ) B(b, λ)] ρ(λ) dλ

which, using the fact that the integral of ρ(λ) is 1, is equal to

2 ± [∫ A(a′, λ) B(b′, λ) ρ(λ) dλ + ∫ A(a′, λ) B(b, λ) ρ(λ) dλ]

which is equal to 2 ± [E(a′, b′) + E(a′, b)].

Putting this together with the left-hand side, we have:

|E(a, b) − E(a, b′)| ≤ 2 ± [E(a′, b′) + E(a′, b)]

which means that the left-hand side is less than or equal to both 2 + [E(a′, b′) + E(a′, b)] and 2 − [E(a′, b′) + E(a′, b)]. That is:

|E(a, b) − E(a, b′)| ≤ 2 − |E(a′, b′) + E(a′, b)|

from which we obtain

2 ≥ |E(a, b) − E(a, b′)| + |E(a′, b′) + E(a′, b)| ≥ |E(a, b) − E(a, b′) + E(a′, b′) + E(a′, b)|

(by the triangle inequality again), which is the CHSH inequality.
Derivation from Clauser and Horne's 1974 inequality
In their 1974 paper, Clauser and Horne show that the CHSH inequality can be derived from the CH74 one. As they tell us, in a two-channel experiment the CH74 single-channel test is still applicable and provides four sets of inequalities governing the probabilities p of coincidences.
Working from the inhomogeneous version of the inequality, we can write:
where j and k are each '+' or '−', indicating which detectors are being considered.
To obtain the CHSH test statistic S (above), all that is needed is to multiply the inequalities for which j is different from k by −1 and add these to the inequalities for which j and k are the same.
Optimal violation by a general quantum state
In experimental practice, the two particles are not an ideal EPR pair. There is a necessary and sufficient condition for a two-qubit density matrix ρ to violate the CHSH inequality, expressed by the maximum attainable polynomial Smax defined below. This is important in entanglement-based quantum key distribution, where the secret key rate depends on the degree of measurement correlations.
Let us introduce a 3×3 real matrix T with elements T_ij = Tr[ρ (σ_i ⊗ σ_j)], where σ_1, σ_2, σ_3 are the Pauli matrices. Then we find the eigenvalues and eigenvectors of the real symmetric matrix T^T T,

λ_1 ≥ λ_2 ≥ λ_3,

where the indices are sorted in decreasing order. Then, the maximal CHSH polynomial is determined by the two greatest eigenvalues,

S_max(ρ) = 2 √(λ_1 + λ_2).
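A small numerical sketch of this criterion follows; it assumes the Horodecki-type formula just quoted (S_max = 2√(λ₁ + λ₂), with λ₁, λ₂ the two largest eigenvalues of TᵀT) and uses a Werner-state family chosen purely for illustration.

```python
import numpy as np

pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def s_max(rho):
    """Maximal CHSH value 2*sqrt(l1 + l2) from the correlation matrix T."""
    T = np.array([[np.real(np.trace(rho @ np.kron(si, sj)))
                   for sj in pauli] for si in pauli])
    eig = np.sort(np.linalg.eigvalsh(T.T @ T))[::-1]
    return 2 * np.sqrt(eig[0] + eig[1])

# Werner state: p * |Phi+><Phi+| + (1 - p)/4 * identity
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
bell = np.outer(phi, phi.conj())
for p in (1.0, 0.8, 0.5):
    rho = p * bell + (1 - p) * np.eye(4) / 4
    print(p, s_max(rho))   # equals 2*sqrt(2)*p: violates CHSH only when p > 1/sqrt(2)
```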
Optimal measurement bases
There exists an optimal configuration of the measurement bases a, a′, b, b′ for a given ρ that yields Smax with at least one free parameter.
The projective measurement that yields either +1 or −1 for two orthogonal states respectively, can be expressed by an operator . The choice of this measurement basis can be parametrized by a real unit vector and the Pauli vector by expressing . Then, the expected correlation in bases a, b is
The numerical values of the basis vectors, when found, can be directly translated to the configuration of the projective measurements.
The optimal set of bases for the state is found by taking the two greatest eigenvalues and the corresponding eigenvectors of , and finding the auxiliary unit vectors
where is a free parameter. We also calculate the acute angle
to obtain the bases that maximize ,
In entanglement-based quantum key distribution, there is another measurement basis used to communicate the secret key ( assuming Alice uses the side A). The bases then need to minimize the quantum bit error rate Q, which is the probability of obtaining different measurement outcomes (+1 on one particle and −1 on the other). The corresponding bases are
The CHSH polynomial S needs to be maximized as well, which together with the bases above creates the constraint .
CHSH game
The CHSH game is a thought experiment involving two parties separated at a great distance (far enough to preclude classical communication at the speed of light), each of whom has access to one half of an entangled two-qubit pair. Analysis of this game shows that no classical local hidden-variable theory can explain the correlations that can result from entanglement. Since this game is indeed physically realizable, this gives strong evidence that classical physics is fundamentally incapable of explaining certain quantum phenomena, at least in a "local" fashion.
In the CHSH game, there are two cooperating players, Alice and Bob, and a referee, Charlie. These agents will be abbreviated A, B, C respectively. At the start of the game, Charlie chooses bits x, y ∈ {0, 1} uniformly at random, and then sends x to Alice and y to Bob. Alice and Bob must then each respond to Charlie with bits a, b respectively. Now, once Alice and Bob send their responses back to Charlie, Charlie tests if a ⊕ b = x ∧ y, where ∧ denotes a logical AND operation and ⊕ denotes a logical XOR operation. If this equality holds, then Alice and Bob win, and if not then they lose.
It is also required that Alice and Bob's responses can only depend on the bits they see: so Alice's response depends only on x, and similarly for Bob. This means that Alice and Bob are forbidden from directly communicating with each other about the values of the bits sent to them by Charlie. However, Alice and Bob are allowed to decide on a common strategy before the game begins.
In the following sections, it is shown that if Alice and Bob use only classical strategies involving their local information (and potentially some random coin tosses), it is impossible for them to win with a probability higher than 75%. However, if Alice and Bob are allowed to share a single entangled qubit pair, then there exists a strategy which allows Alice and Bob to succeed with a probability of ~85%.
Optimal classical strategy
We first establish that any deterministic classical strategy has success probability at most 75% (where the probability is taken over Charlie's uniformly random choice of x, y). By a deterministic strategy, we mean a pair of functions f_A, f_B : {0, 1} → {0, 1}, where f_A determines Alice's response as a function of the message she receives from Charlie, and f_B determines Bob's response based on what he receives. To prove that any deterministic strategy fails at least 25% of the time, we can simply consider all possible pairs of strategies for Alice and Bob, of which there are only 16 (for each party, there are 4 functions from {0, 1} to {0, 1}). It can be verified that for each of those 16 strategy pairs there is always at least one out of the four possible input pairs (x, y) which makes the strategy fail. For example, in the strategy where both players always answer 0, we have that Alice and Bob win in all cases except when x = y = 1, so using this strategy their win probability is exactly 75%.
Now, consider the case of randomized classical strategies, where Alice and Bob have access to correlated random numbers. These can be produced by jointly flipping a coin several times before the game has started, while Alice and Bob are still allowed to communicate. The output they give at each round is then a function of both Charlie's message and the outcome of the corresponding coin flip. Such a strategy can be viewed as a probability distribution over deterministic strategies, and thus its success probability is a weighted sum over the success probabilities of the deterministic strategies. But since every deterministic strategy has a success probability of at most 75%, this weighted sum cannot exceed 75% either.
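The exhaustive check described above is short enough to write out. This sketch enumerates every pair of deterministic strategies (each player's answer is a function of the single bit they receive) and confirms that the best classical success probability over Charlie's uniformly random inputs is 3/4; it is an illustration, not code from any referenced source.

```python
from itertools import product

# All functions f: {0,1} -> {0,1}, represented as the pair (f(0), f(1))
functions = list(product([0, 1], repeat=2))

best = 0.0
for fa, fb in product(functions, repeat=2):        # 16 deterministic strategy pairs
    wins = sum((fa[x] ^ fb[y]) == (x & y)          # win condition: a XOR b == x AND y
               for x in (0, 1) for y in (0, 1))
    best = max(best, wins / 4)

print(best)  # 0.75
```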
Optimal quantum strategy
Now, imagine that Alice and Bob share the two-qubit entangled state |Φ⁺⟩ = (|00⟩ + |11⟩)/√2, commonly referred to as an EPR pair. Alice and Bob will use this entangled pair in their strategy as described below. The optimality of this strategy then follows from Tsirelson's bound.
Upon receiving the bit x from Charlie, Alice will measure her qubit in the basis {|0⟩, |1⟩} or in the basis {|+⟩, |−⟩}, conditionally on whether x = 0 or x = 1, respectively. She will then label the two possible outputs resulting from each measurement choice as a = 0 if the first state in the measurement basis is observed, and a = 1 otherwise.
Bob also uses the bit y received from Charlie to decide which measurement to perform: if y = 0 he measures in the basis {cos(π/8)|0⟩ + sin(π/8)|1⟩, −sin(π/8)|0⟩ + cos(π/8)|1⟩}, while if y = 1 he measures in the basis {cos(π/8)|0⟩ − sin(π/8)|1⟩, sin(π/8)|0⟩ + cos(π/8)|1⟩}, labelling his outcome b = 0 if the first state in the chosen basis is observed and b = 1 otherwise.
The following table shows how the game is played. The states are arranged in the order that puts each state between the two most similar. They could correspond, for example, to photons polarized at angles of 0°, 22.5°, 45°, ... 180° (with 180° and 0° being the same state).
To analyze the success probability, it suffices to analyze the probability that they output a winning value pair on each of the four possible inputs (x, y), and then take the average. We analyze the case where x = y = 0 here:
In this case the winning response pairs are a = b = 0 and a = b = 1. On input (0, 0), we know that Alice will measure in the basis {|0⟩, |1⟩}, and Bob will measure in the basis {cos(π/8)|0⟩ + sin(π/8)|1⟩, −sin(π/8)|0⟩ + cos(π/8)|1⟩}. Then the probability that they both output 0 is the same as the probability that their measurements yield |0⟩ and cos(π/8)|0⟩ + sin(π/8)|1⟩ respectively, so precisely (1/2)cos²(π/8). Similarly, the probability that they both output 1 is exactly (1/2)cos²(π/8). So the probability that either of these successful outcomes happens is cos²(π/8).
In the case of the 3 other possible input pairs, essentially identical analysis shows that Alice and Bob will have the same win probability of cos²(π/8), so overall the average win probability for a randomly chosen input is cos²(π/8) ≈ 0.854. Since cos²(π/8) > 3/4, this is strictly better than what was possible in the classical case.
Modeling general quantum strategies
An arbitrary quantum strategy S for the CHSH game can be modeled as a triple S = (|ψ⟩, (A_0, A_1), (B_0, B_1)) where
|ψ⟩ is a bipartite state in C^d ⊗ C^d for some dimension d,
A_0 and A_1 are Alice's observables, each corresponding to receiving x = 0 or x = 1 from the referee, and
B_0 and B_1 are Bob's observables, each corresponding to receiving y = 0 or y = 1 from the referee.
The optimal quantum strategy described above can be recast in this notation as follows: |ψ⟩ is the EPR pair (|00⟩ + |11⟩)/√2, the observable A_0 = Z (corresponding to Alice measuring in the {|0⟩, |1⟩} basis), the observable A_1 = X (corresponding to Alice measuring in the {|+⟩, |−⟩} basis), where Z and X are Pauli matrices. The observables are B_0 = (Z + X)/√2 and B_1 = (Z − X)/√2 (corresponding to each of Bob's choices of basis to measure in).
We will denote the success probability of a strategy S in the CHSH game by ω(S), and we define the bias of the strategy as ε(S), which is the difference between its winning and losing probabilities. In particular, we have

ε(S) = 2ω(S) − 1.
The bias of the quantum strategy described above is 1/√2.
Tsirelson's inequality and CHSH rigidity
Tsirelson's inequality, discovered by Boris Tsirelson in 1980, states that for any quantum strategy S for the CHSH game, the bias satisfies ε(S) ≤ 1/√2. Equivalently, it states that the success probability

ω(S) ≤ 1/2 + 1/(2√2) = cos²(π/8) ≈ 0.854

for any quantum strategy S for the CHSH game. In particular, this implies the optimality of the quantum strategy described above for the CHSH game.
Tsirelson's inequality establishes that the maximum success probability of any quantum strategy is cos²(π/8), and we saw that this maximum success probability is achieved by the quantum strategy described above. In fact, any quantum strategy that achieves this maximum success probability must be isomorphic (in a precise sense) to the canonical quantum strategy described above; this property is called the rigidity of the CHSH game, first attributed to Summers and Werner. More formally, we have the following result:
Informally, the above theorem states that given an arbitrary optimal strategy for the CHSH game, there exists a local change-of-basis (given by isometries V_A and V_B) for Alice and Bob such that their shared state factors into the tensor of an EPR pair and an additional auxiliary state. Furthermore, Alice and Bob's observables A_0, A_1 and B_0, B_1 behave, up to unitary transformations, like the Z and X observables on their respective qubits from the EPR pair. An approximate or quantitative version of CHSH rigidity was obtained by McKague et al., who proved that if you have a quantum strategy whose success probability is within ε of the optimum, for some ε > 0, then there exist isometries under which the strategy is close (with an error vanishing as ε → 0) to the canonical quantum strategy. Representation-theoretic proofs of approximate rigidity are also known.
Applications
Note that the CHSH game can be viewed as a test for quantum entanglement and quantum measurements, and that the rigidity of the CHSH game lets us test for a specific entanglement as well as specific quantum measurements. This in turn can be leveraged to test or even verify entire quantum computations—in particular, the rigidity of CHSH games has been harnessed to construct protocols for verifiable quantum delegation, certifiable randomness expansion, and device-independent cryptography.
See also
Correlation does not imply causation
Leggett–Garg inequality
Quantum game theory
References
External links
Bell inequality - Virtual Lab by Quantum Flytrap, an interactive simulation of the CHSH Bell inequality violation
Quantum measurement
Inequalities | CHSH inequality | [
"Physics",
"Mathematics"
] | 3,840 | [
"Mathematical theorems",
"Quantum game theory",
"Quantum mechanics",
"Binary relations",
"Game theory",
"Quantum measurement",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems"
] |
3,050,716 | https://en.wikipedia.org/wiki/Mesh%20generation | Mesh generation is the practice of creating a mesh, a subdivision of a continuous geometric space into discrete geometric and topological cells.
Often these cells form a simplicial complex.
Usually the cells partition the geometric input domain.
Mesh cells are used as discrete local approximations of the larger domain. Meshes are created by computer algorithms, often with human guidance through a GUI, depending on the complexity of the domain and the type of mesh desired.
A typical goal is to create a mesh that accurately captures the input domain geometry, with high-quality (well-shaped) cells, and without so many cells as to make subsequent calculations intractable.
The mesh should also be fine (have small elements) in areas that are important for the subsequent calculations.
Meshes are used for rendering to a computer screen and for physical simulation such as finite element analysis or computational fluid dynamics. Meshes are composed of simple cells like triangles because, e.g., we know how to perform operations such as finite element calculations (engineering) or ray tracing (computer graphics) on triangles, but we do not know how to perform these operations directly on complicated spaces and shapes such as a roadway bridge. We can simulate the strength of the bridge, or draw it on a computer screen, by performing calculations on each triangle and calculating the interactions between triangles.
A major distinction is between structured and unstructured meshing. In structured meshing the mesh is a regular lattice, such as an array, with implied connectivity between elements. In unstructured meshing, elements may be connected to each other in irregular patterns, and more complicated domains can be captured. This page is primarily about unstructured meshes.
While a mesh may be a triangulation, the process of meshing is distinguished from point set triangulation in that meshing includes the freedom to add vertices not present in the input. "Facetting" (triangulating) CAD models for drafting has the same freedom to add vertices, but the goal is to represent the shape accurately using as few triangles as possible and the shape of individual triangles is not important. Computer graphics renderings of textures and realistic lighting conditions use meshes instead.
Much mesh generation software is coupled to a CAD system defining its input, and to simulation software for taking its output. The input can vary greatly, but common forms are Solid modeling, Geometric modeling, NURBS, B-rep, STL or a point cloud.
Terminology
The terms "mesh generation," "grid generation," "meshing," " and "gridding," are often used interchangeably, although strictly speaking the latter two are broader and encompass mesh improvement: changing the mesh with the goal of increasing the speed or accuracy of the numerical calculations that will be performed over it. In computer graphics rendering, and mathematics, a mesh is sometimes referred to as a tessellation.
Mesh faces (cells, entities) have different names depending on their dimension and the context in which the mesh will be used. In finite elements, the highest-dimensional mesh entities are called "elements," "edges" are 1D and "nodes" are 0D. If the elements are 3D, then the 2D entities are "faces." In computational geometry, the 0D points are called vertices. Tetrahedra are often abbreviated as "tets"; triangles are "tris", quadrilaterals are "quads" and hexahedra (topological cubes) are "hexes."
Techniques
Many meshing techniques are built on the principles of the Delaunay triangulation, together with rules for adding vertices, such as Ruppert's algorithm.
A distinguishing feature is that an initial coarse mesh of the entire space is formed, then vertices and triangles are added.
In contrast, advancing front algorithms start from the domain boundary, and add elements incrementally filling up the interior.
Hybrid techniques do both. A special class of advancing front techniques creates thin boundary layers of elements for fluid flow.
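As a concrete illustration of the Delaunay-based approach (only the triangulation step, without the vertex-insertion rules of algorithms such as Ruppert's), the following sketch triangulates a small 2-D point set with SciPy; the point set is made up for the example.

```python
import numpy as np
from scipy.spatial import Delaunay

# A few boundary points of a unit square plus some interior points (illustrative only)
points = np.array([[0, 0], [1, 0], [1, 1], [0, 1],
                   [0.3, 0.4], [0.7, 0.2], [0.5, 0.8]])

tri = Delaunay(points)
print(tri.simplices)   # each row lists the three vertex indices of one triangle

# A refinement algorithm such as Ruppert's would now insert extra vertices
# (e.g. at circumcenters of poor-quality triangles) and re-triangulate.
```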
In structured mesh generation the entire mesh is a lattice graph, such as a regular grid of squares. In block-structured meshing, the domain is divided into large subregions, each of which is a structured mesh. Some direct methods start with a block-structured mesh and then move the mesh to conform to the input; see Automatic Hex-Mesh Generation based on polycube. Another direct method is to cut the structured cells by the domain boundary; see sculpt based on Marching cubes.
Some types of meshes are much more difficult to create than others. Simplicial meshes tend to be easier than cubical meshes. An important category is generating a hex mesh conforming to a fixed quad surface mesh; a research subarea is studying the existence and generation of meshes of specific small configurations, such as the tetragonal trapezohedron. Because of the difficulty of this problem, the existence of combinatorial hex meshes has been studied apart from the problem of generating good geometric realizations; see Combinatorial Techniques for Hexahedral Mesh Generation. While known algorithms generate simplicial meshes with guaranteed minimum quality, such guarantees are rare for cubical meshes, and many popular implementations generate inverted (inside-out) hexes from some inputs.
Meshes are often created in serial on workstations, even when subsequent calculations over the mesh will be done in parallel on super-computers. This is both because of the limitation that most mesh generators are interactive, and because mesh generation runtime is typically insignificant compared to solver time. However, if the mesh is too large to fit in the memory of a single serial machine, or the mesh must be changed (adapted) during the simulation, meshing is done in parallel.
Algebraic methods
Grid generation by algebraic methods is based on mathematical interpolation functions. It is done by using known functions in one, two or three dimensions, taking arbitrarily shaped regions. The computational domain might not be rectangular, but for the sake of simplicity, the domain is taken to be rectangular. The main advantage of these methods is that they provide explicit control of physical grid shape and spacing. The simplest procedure that may be used to produce a boundary-fitted computational mesh is the normalization transformation.
For a nozzle, with the describing function the grid can easily be generated using uniform division in y-direction with equally spaced increments in x-direction, which are described by
where denotes the y-coordinate of the nozzle wall. For given values of (, ), the values of (, ) can be easily recovered.
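A minimal sketch of this kind of algebraic generation is given below; the wall function used is an assumed, illustrative choice, not the article's describing function, and the resolution parameters are arbitrary.

```python
import numpy as np

# Illustrative algebraic grid generation for a nozzle-like domain.
def y_wall(x):
    return 1.0 + 0.5 * x**2        # hypothetical nozzle wall shape

nx, ny = 21, 11
x = np.linspace(0.0, 2.0, nx)                 # equally spaced increments in x
eta = np.linspace(0.0, 1.0, ny)               # normalized coordinate: 0 = axis, 1 = wall

# Normalizing transformation: each grid column is a uniform division between
# the axis y = 0 and the local wall height y_wall(x).
X, ETA = np.meshgrid(x, eta, indexing="ij")
Y = ETA * y_wall(X)                           # physical y recovered from (x, eta)
print(X.shape, Y.shape)                       # (21, 11) arrays of physical coordinates
```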
Differential equation methods
Like algebraic methods, differential equation methods are also used to generate grids. The advantage of using the partial differential equations (PDEs) is that the solution of grid generating equations can be exploited to generate the mesh. Grid construction can be done using all three classes of partial differential equations.
Elliptic schemes
Elliptic PDEs generally have very smooth solutions, leading to smooth grid-line contours. Using this smoothness as an advantage, Laplace's equation can preferably be used, because the Jacobian turns out to be positive as a consequence of the maximum principle for harmonic functions. After extensive work by Crowley (1962) and Winslow (1966) on PDEs that map the physical domain onto the computational plane using Poisson's equation, Thompson et al. (1974) worked extensively on elliptic PDEs to generate grids. In Poisson grid generators, the mapping is accomplished by marking the desired grid points on the boundary of the physical domain, with the interior point distribution determined through the solution of the equations written below:
$\xi_{xx} + \xi_{yy} = P(\xi, \eta), \qquad \eta_{xx} + \eta_{yy} = Q(\xi, \eta),$
where $(\xi, \eta)$ are the co-ordinates in the computational domain, while $P$ and $Q$ are responsible for the point spacing within the domain $D$. Transforming the above equations into computational space yields a set of two elliptic PDEs of the form
$\alpha\, x_{\xi\xi} - 2\beta\, x_{\xi\eta} + \gamma\, x_{\eta\eta} = -J^{2}\,(P\, x_{\xi} + Q\, x_{\eta}),$
$\alpha\, y_{\xi\xi} - 2\beta\, y_{\xi\eta} + \gamma\, y_{\eta\eta} = -J^{2}\,(P\, y_{\xi} + Q\, y_{\eta}),$
where
$\alpha = x_{\eta}^{2} + y_{\eta}^{2}, \qquad \beta = x_{\xi} x_{\eta} + y_{\xi} y_{\eta}, \qquad \gamma = x_{\xi}^{2} + y_{\xi}^{2}, \qquad J = x_{\xi} y_{\eta} - x_{\eta} y_{\xi}.$
This system of equations is solved in the computational plane on a uniformly spaced grid, which provides the co-ordinates of each point in physical space. The advantage of using elliptic PDEs is that the solution linked to them is smooth and the resulting grid is smooth. But specification of $P$ and $Q$ becomes a difficult task, which adds to the disadvantages. Moreover, the grid has to be computed after each time step, which adds to the computational time.
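The sketch below illustrates the flavour of elliptic generation in the simplest setting $P = Q = 0$: interior node coordinates of a logically rectangular grid are relaxed by a discrete Laplace iteration in computational space while the boundary distribution is held fixed. It is a simplification that omits the $\alpha$, $\beta$, $\gamma$ cross-derivative coefficients of the full transformed system above; all names and sizes are illustrative.

```python
import numpy as np

# Relax interior nodes toward the average of their four computational-space
# neighbours (a discrete Laplace equation for x and y); boundaries stay fixed.
def elliptic_smooth(X, Y, n_iter=500):
    X, Y = X.copy(), Y.copy()
    for _ in range(n_iter):
        X[1:-1, 1:-1] = 0.25 * (X[2:, 1:-1] + X[:-2, 1:-1] + X[1:-1, 2:] + X[1:-1, :-2])
        Y[1:-1, 1:-1] = 0.25 * (Y[2:, 1:-1] + Y[:-2, 1:-1] + Y[1:-1, 2:] + Y[1:-1, :-2])
    return X, Y

# Trivial usage; real use would start from a distorted or algebraically generated grid.
X0, Y0 = np.meshgrid(np.linspace(0, 1, 15), np.linspace(0, 1, 10), indexing="ij")
Xs, Ys = elliptic_smooth(X0, Y0)
```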
Hyperbolic schemes
This grid generation scheme is generally applicable to problems with open domains consistent with the type of PDE describing the physical problem. The advantage associated with hyperbolic PDEs is that the governing equations need to be solved only once to generate the grid. The initial point distribution, along with the approximate boundary conditions, forms the required input, and the solution is then marched outward. Steger and Sorenson (1980) proposed a volume orthogonality method that uses hyperbolic PDEs for mesh generation.
For a 2-D problem, considering the computational space to be given by $(\xi, \eta)$, the inverse of the Jacobian is given by
$x_{\xi}\, y_{\eta} - x_{\eta}\, y_{\xi} = V,$
where $V$ represents the area in physical space for a given area in computational space. The second equation expresses the orthogonality of grid lines at the boundary in physical space, which can be written as
$\left.\frac{dy}{dx}\right|_{\xi = \mathrm{const}} \cdot \left.\frac{dy}{dx}\right|_{\eta = \mathrm{const}} = -1.$
For the $\xi$ and $\eta$ surfaces to be perpendicular the equation becomes
$x_{\xi}\, x_{\eta} + y_{\xi}\, y_{\eta} = 0.$
The problem associated with such a system of equations is the specification of $V$. Poor selection of $V$ may lead to shocks and the discontinuous propagation of this information throughout the mesh. On the other hand, the mesh is orthogonal and is generated very rapidly, which is an advantage of this method.
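The sketch below is a highly simplified illustration of marching a grid outward from a boundary curve, in the spirit of hyperbolic/advancing-layer generation; it is plain normal extrusion, not the Steger–Sorenson volume-orthogonality scheme, and all names are illustrative.

```python
import numpy as np

# March layers outward from a boundary polyline by stepping along its normals.
def march_layers(boundary_xy, n_layers=10, step=0.05):
    layers = [boundary_xy]
    for _ in range(n_layers):
        cur = layers[-1]
        tang = np.gradient(cur, axis=0)                      # tangent along the curve
        nrm = np.column_stack([-tang[:, 1], tang[:, 0]])     # rotate tangent 90 degrees
        nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
        layers.append(cur + step * nrm)                      # advance one layer outward
    return np.stack(layers)                                  # (n_layers + 1, n_pts, 2)

theta = np.linspace(0.0, np.pi, 40)
wall = np.column_stack([np.cos(theta), np.sin(theta)])       # an example boundary curve
grid = march_layers(wall)
print(grid.shape)
```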
Parabolic schemes
The solution technique is similar to that of hyperbolic PDEs: the solution is advanced away from the initial data surface while satisfying the boundary conditions at the end. Nakamura (1982) and Edwards (1985) developed the basic ideas for parabolic grid generation. The idea uses either Laplace's or Poisson's equation, especially treating the parts which control elliptic behavior. The initial values are given as the coordinates of the points along the surface, and the solution is advanced to the outer surface of the object while satisfying the boundary conditions along the edges.
Up to this point, control of the grid spacing has not been addressed. In the work of Nakamura and Edwards, grid control was accomplished using non-uniform spacing. Parabolic grid generation has an advantage over hyperbolic grid generation in that no shocks or discontinuities occur and the grid is relatively smooth. The specification of initial values and the selection of step size to control the grid points are, however, time-consuming, but these techniques can be effective when familiarity and experience are gained.
Variational methods
This method involves minimizing a functional that combines measures of grid smoothness, orthogonality and volume variation. It forms a mathematical platform for solving grid generation problems. In this method an alternative grid is generated by a new mesh after each iteration, and the grid speed is computed using the backward difference method. This technique is powerful, with the disadvantage that effort is required to solve the equations related to the grid. Further work is needed to minimize the integrals, which would reduce the CPU time.
Unstructured grid generation
The main importance of this scheme is that it provides a method that will generate the grid automatically. Using this method, grids are segmented into blocks according to the surface of the element, and a structure is provided to ensure appropriate connectivity. To interpret the data, a flow solver is used. When an unstructured scheme is employed, the main interest is to fulfill the demands of the user, and a grid generator is used to accomplish this task. In the unstructured scheme the information is stored cell to cell instead of grid to grid, and hence more memory space is needed. Due to random cell location, solver efficiency with unstructured grids is lower than with structured schemes.
Some points need to be kept in mind at the time of grid construction. Regions requiring high grid-point resolution create difficulty for both structured and unstructured schemes. For example, in the case of a boundary layer, a structured scheme produces an elongated grid in the direction of flow. On the other hand, unstructured grids require a higher cell density in the boundary layer because the cells need to be as close to equilateral as possible to avoid errors.
We must identify what information is required to identify each cell and all of its neighbors in the computational mesh. For an unstructured grid we can choose to locate arbitrary points anywhere we want. A point insertion scheme is used to insert the points independently, and the cell connectivity is determined. This requires that points be identified as they are inserted.
Logic for establishing new connectivity is determined once the points are inserted. Data relating the grid points that define each grid cell are needed. As each cell is formed, it is numbered and its points are sorted. In addition, the neighboring-cell information is needed.
Adaptive grid
A problem in solving partial differential equations using the previous methods is that the grid is constructed, and the points are distributed in the physical domain, before details of the solution are known. So the grid may or may not be the best for the given problem.
Adaptive methods are used to improve the accuracy of the solutions. The adaptive method is referred to as the 'h' method if mesh refinement is used, the 'r' method if the number of grid points is fixed while the points are redistributed, and the 'p' method if the order of the solution scheme is increased in finite-element theory. Multi-dimensional problems using the equidistribution scheme can be handled in several ways. The simplest to understand are the Poisson grid generators with a control function based on the equidistribution of a weight function, with the diffusion set as a multiple of the desired cell volume. The equidistribution scheme can also be applied to the unstructured problem. The problem is that the connectivity is hampered if mesh-point movement is very large.
Steady flow and time-accurate flow calculations can be solved through this adaptive method. In a steady flow problem, the grid is refined after a predetermined number of iterations in order to adapt it. The grid will stop adjusting to the changes once the solution converges. In the time-accurate case, coupling of the partial differential equations of the physical problem and those describing the grid movement is required.
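As an illustration of the equidistribution principle in one dimension, the sketch below places grid points so that the integral of a weight function (a solution-gradient monitor, for example) is equal between consecutive points; the particular weight used here is an assumed, illustrative choice.

```python
import numpy as np

# Place n_points so that the cumulative integral of w(x) is equal between neighbours.
def equidistribute(w, x, n_points):
    W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, W[-1], n_points)
    return np.interp(targets, W, x)      # invert the cumulative weight function

x = np.linspace(0.0, 1.0, 400)
w = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)   # cluster points near x = 0.5
print(equidistribute(w, x, 11))
```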
Image-based meshing
Cell topology
Usually the cells are polygonal or polyhedral and form a mesh that partitions the domain.
Important classes of two-dimensional elements include triangles (simplices) and quadrilaterals (topological squares).
In three-dimensions the most-common cells are tetrahedra (simplices) and hexahedra (topological cubes).
Simplicial meshes may be of any dimension and include triangles (2D) and tetrahedra (3D) as important instances.
Cubical meshes are the pan-dimensional category that includes quads (2D) and hexes (3D). In 3D, 4-sided pyramids and 3-sided prisms appear in conformal meshes of mixed cell type.
Cell dimension
The mesh is embedded in a geometric space that is typically two or three dimensional, although sometimes the dimension is increased by one by adding the time-dimension. Higher dimensional meshes are used in niche contexts. One-dimensional meshes are useful as well. A significant category is surface meshes, which are 2D meshes embedded in 3D to represent a curved surface.
Duality
Dual graphs have several roles in meshing. One can make a polyhedral Voronoi diagram mesh by dualizing a Delaunay triangulation simplicial mesh. One can create a cubical mesh by generating an arrangement of surfaces and dualizing the intersection graph; see spatial twist continuum. Sometimes both the primal mesh and its dual mesh are used in the same simulation; see Hodge star operator. This arises from physics involving divergence and curl (mathematics) operators, such as flux & vorticity or electricity & magnetism, where one variable naturally lives on the primal faces and its counterpart on the dual faces.
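As a small illustration of primal–dual meshes (assuming SciPy is available), the Voronoi diagram of a point set is the geometric dual of its Delaunay triangulation, so dualizing the simplicial mesh yields polygonal cells; the point set below is arbitrary.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(1)
pts = rng.random((20, 2))
tri = Delaunay(pts)       # primal simplicial mesh
vor = Voronoi(pts)        # dual polygonal cells (vor.vertices, vor.regions)
print(len(tri.simplices), len(vor.vertices))
```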
Mesh type by use
Three-dimensional meshes created for finite element analysis need to consist of tetrahedra, pyramids, prisms or hexahedra. Those used for the finite volume method can consist of arbitrary polyhedra. Those used for finite difference methods consist of piecewise structured arrays of hexahedra known as multi-block structured meshes.
4-sided pyramids are useful to conformally connect hexes to tets. 3-sided prisms are used for boundary layers conforming to a tet mesh of the far-interior of the object.
Surface meshes are useful in computer graphics where the surfaces of objects reflect light (also subsurface scattering) and a full 3D mesh is not needed. Surface meshes are also used to model thin objects such as sheet metal in auto manufacturing and building exteriors in architecture. High (e.g., 17) dimensional cubical meshes are common in astrophysics and string theory.
Mathematical definition and variants
What is the precise definition of a mesh? There is not a universally-accepted mathematical description that applies in all contexts.
However, some mathematical objects are clearly meshes: a simplicial complex is a mesh composed of simplices.
Most polyhedral (e.g. cubical) meshes are conformal, meaning they have the cell structure of a CW complex, a generalization of a simplicial complex. A mesh need not be simplicial because an arbitrary subset of nodes of a cell is not necessarily a cell: e.g., three nodes of a quad do not define a cell.
However, in a conformal mesh two cells intersect only at whole cells: e.g., a quad does not have a node in its interior. The intersection of two cells may be several cells: e.g., two quads may share two edges. An intersection being more than one cell is sometimes forbidden and rarely desired; the goal of some mesh improvement techniques (e.g. pillowing) is to remove these configurations. In some contexts, a distinction is made between a topological mesh and a geometric mesh whose embedding satisfies certain quality criteria.
Important mesh variants that are not CW complexes include non-conformal meshes where cells do not meet strictly face-to-face, but the cells nonetheless partition the domain. An example of this is an octree, where an element face may be partitioned by the faces of adjacent elements. Such meshes are useful for flux-based simulations. In overset grids, there are multiple conformal meshes that overlap geometrically and do not partition the domain; see e.g., Overflow, the OVERset grid FLOW solver. So-called meshless or meshfree methods often make use of some mesh-like discretization of the domain, and have basis functions with overlapping support. Sometimes a local mesh is created near each simulation degree-of-freedom point, and these meshes may overlap and be non-conformal to one another.
Implicit triangulations are based on a delta complex: each triangle is specified by the lengths of its edges, together with a gluing map between face edges.
High-order elements
Many meshes use linear elements, where the mapping from the abstract to realized element is linear, and mesh edges are straight segments.
Higher order polynomial mappings are common, especially quadratic.
A primary goal for higher-order elements is to more accurately represent the domain boundary, although they have accuracy benefits in the interior of the mesh as well.
One of the motivations for cubical meshes is that linear cubical elements have some of the same numerical advantages as quadratic simplicial elements.
In the isogeometric analysis simulation technique, the mesh cells containing the domain boundary use the CAD representation directly instead of a linear or polynomial approximation.
Mesh improvement
Improving a mesh involves changing its discrete connectivity, the continuous geometric position of its cells, or both. For discrete changes, for simplicial elements one swaps edges and inserts/removes nodes. The same kinds of operations are done for cubical (quad/hex) meshes, although there are fewer possible operations and local changes have global consequences. E.g., for a hexahedral mesh, merging two nodes creates cells that are not hexes, but if diagonally-opposite nodes on a quadrilateral are merged and this is propagated into collapsing an entire face-connected column of hexes, then all remaining cells will still be hexes.
In adaptive mesh refinement, elements are split (h-refinement) in areas where the function being calculated has a high gradient.
Meshes are also coarsened, removing elements for efficiency. The multigrid method does something similar to refinement and coarsening to speed up the numerical solve, but without actually changing the mesh.
For continuous changes, nodes are moved, or the higher-dimensional faces are moved by changing the polynomial order of elements. Moving nodes to improve quality is called "smoothing" or "r-refinement" and increasing the order of elements is called "p-refinement." Nodes are also moved in simulations where the shape of objects change over time. This degrades the shape of the elements. If the object deforms enough, the entire object is remeshed and the current solution mapped from the old mesh to the new mesh.
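A minimal sketch of node smoothing ("r-refinement") is given below: each interior node is repeatedly moved to the centroid of its edge-connected neighbours (simple Laplacian smoothing; production smoothers often optimize element-quality metrics instead). All names and data structures here are illustrative.

```python
import numpy as np

# nodes: (n, dim) array of coordinates; edges: list of (i, j) node-index pairs;
# boundary: set of node indices that must not move.
def laplacian_smooth(nodes, edges, boundary, n_iter=20):
    nodes = nodes.copy()
    nbrs = {i: [] for i in range(len(nodes))}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(n_iter):
        for i, adj in nbrs.items():
            if i not in boundary and adj:
                nodes[i] = nodes[adj].mean(axis=0)   # move node to neighbour centroid
    return nodes
```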
Research community
Practitioners
The field is highly interdisciplinary, with contributions found in mathematics, computer science, and engineering. Meshing R&D is distinguished by an equal focus on discrete and continuous math and computation, as with computational geometry, but in contrast to graph theory (discrete) and numerical analysis (continuous). Mesh generation is deceptively difficult: it is easy for humans to see how to create a mesh of a given object, but difficult to program a computer to make good decisions for arbitrary input a priori. There is an infinite variety of geometry found in nature and man-made objects. Many mesh generation researchers were first users of meshes. Mesh generation continues to receive widespread attention, support and funding because the human-time to create a mesh dwarfs the time to set up and solve the calculation once the mesh is finished. This has always been the situation since numerical simulation and computer graphics were invented, because as computer hardware and simple equation-solving software have improved, people have been drawn to larger and more complex geometric models in a drive for greater fidelity, scientific insight, and artistic expression.
Journals
Meshing research is published in a broad range of journals. This is in keeping with the interdisciplinary nature of the research required to make progress, and also the wide variety of applications that make use of meshes. About 150 meshing publications appear each year across 20 journals, with at most 20 publications appearing in any one journal. There is no journal whose primary topic is meshing. The journals that publish at least 10 meshing papers per year are in bold.
Advances in Engineering Software
American Institute of Aeronautics and Astronautics Journal (AIAAJ)
Algorithmica
Applied Computational Electromagnetics Society Journal
Applied Numerical Mathematics
Astronomy and Computing
Computational Geometry: Theory and Applications
Computer-Aided Design, often including a special issue devoted to extended papers from the IMR (see conferences below)
Computer Aided Geometric Design (CAGD)
Computer Graphics Forum (Eurographics)
Computer Methods in Applied Mechanics and Engineering
Discrete and Computational Geometry
Engineering with Computers
Finite Elements in Analysis and Design
International Journal for Numerical Methods in Engineering (IJNME)
International Journal for Numerical Methods in Fluids
International Journal for Numerical Methods in Biomedical Engineering
International Journal of Computational Geometry & Applications
Journal of Computational Physics (JCP)
Journal on Numerical Analysis
Journal on Scientific Computing (SISC)
Transactions on Graphics (ACM TOG)
Transactions on Mathematical Software (ACM TOMS)
Transactions on Visualization and Computer Graphics (IEEE TVCG)
Lecture Notes in Computational Science and Engineering (LNCSE)
Computational Mathematics and Mathematical Physics (CMMP)
Conferences
(Conferences whose primary topic is meshing are in bold.)
Aerospace Sciences Meeting AIAA (15 meshing talks/papers)
Canadian Conference on Computational Geometry CCCG
CompIMAGE: International Symposium Computational Modeling of Objects Represented in Images
Computational Fluid Dynamics Conference AIAA
Computational Fluid Dynamics Conference ECCOMAS
Computational Science & Engineering CS&E
Conference on Numerical Grid Generation ISGG
Eurographics Annual Conference (Eurographics) (proceedings in Computer Graphics Forum)
Geometric & Physical Modeling SIAM
International Conference on Isogeometric Analysis IGA
International Symposium on Computational Geometry SoCG
Numerical Geometry, Grid Generation and Scientific Computing (NUMGRID) (proceedings in Lecture Notes in Computational Science and Engineering)
International Meshing Roundtable, SIAM IMR workshop. (Refereed proceedings and special journal issue.)
SIGGRAPH (proceedings in ACM Transactions on Graphics)
Symposium on Geometry Processing SGP (Eurographics) (proceedings in Computer Graphics Forum)
World Congress on Engineering
Workshops
Workshops whose primary topic is meshing are in bold.
Conference on Geometry: Theory and Applications CGTA
European Workshop on Computational Geometry EuroCG
Fall Workshop on Computational Geometry
Finite Elements in Fluids FEF
MeshTrends Symposium (in WCCM or USNCCM alternate years)
Polytopal Element Methods in Mathematics and Engineering
Tetrahedron workshop
See also
Grid classification
Mesh parameterization
Meshfree methods
Parallel mesh generation
Principles of grid generation
Polygon mesh
Regular grid
Stretched grid method
Tessellation (computer graphics)
Types of mesh
Unstructured grid
References
Bibliography
CGAL The Computational Geometry Algorithms Library
Jan Brandts, Sergey Korotov, Michal Krizek: "Simplicial Partitions with Applications to the Finite Element Method", Springer Monographs in Mathematics, (2020). url="https://www.springer.com/gp/book/9783030556761"
Grid Generation Methods - Liseikin, Vladimir D.
External links
Periodic Table of the Finite Elements
Literature on Mesh Generation
Conferences, Workshops, Summerschools
Mesh generators
Many commercial product descriptions emphasize simulation rather than the meshing technology that enables simulation.
Lists of mesh generators (external):
Free/open source mesh generators
Public domain and commercial mesh generators
ANSA Pre-processor
ANSYS
CD-adapco and Siemens DISW
Comet Solutions
CGAL Computational Geometry Algorithms Library
Mesh generation
2D Conforming Triangulations and Meshes
3D Mesh Generation
CUBIT
Ennova
Gmsh
Hextreme meshes
MeshLab
MSC Software
Omega_h Tri/Tet Adaptivity
Open FOAM Mesh generation and conversion
Salome Mesh module
TetGen
TetWild
TRIANGLE Mesh generation and Delaunay triangulation
Multi-domain partitioned mesh generators
These tools generate the partitioned meshes required for multi-material finite element modelling.
MDM (Multiple Domain Meshing) generates unstructured tetrahedral and hexahedral meshes for a composite domain made up of heterogeneous materials, automatically and efficiently
QMDM (Quality Multi-Domain Meshing) produces high-quality, mutually consistent triangular surface meshes for multiple domains
QMDMNG (Quality Multi-Domain Meshing with No Gap) produces quality meshes, each a two-dimensional manifold, with no gap between two adjacent meshes.
SOFA_mesh_partitioning_tools generates partitioned tetrahedral meshes for multi-material FEM, based on CGAL.
Articles
Another Fine Mesh, MeshTrends Blog, Pointwise
Mesh Generation & Grid Generation on the Web
Mesh Generation group on LinkedIn
Research groups and people
Mesh Generation people on Google Scholar
David Bommes, Computer Graphics Group, University of Bern
David Eppstein's Geometry in Action, Mesh Generation
Jonathan Shewchuk's Meshing and Triangulation in Graphics, Engineering, and Modeling
Scott A. Mitchell
Robert Schneiders
Models and meshes
Useful models (inputs) and meshes (outputs) for comparing meshing algorithms and meshes.
HexaLab has models and meshes that have been published in research papers, reconstructed or from the original paper.
Princeton Shape Benchmark
Shape Retrieval Contest SHREC has different models each year, e.g.,
Shape Retrieval Contest of Non-rigid 3D Watertight Meshes 2011
Thingi10k meshed models from the Thingiverse
CAD models
Modeling engines linked with mesh generation software to represent the domain geometry.
ACIS by Spatial
Open Cascade
Mesh file formats
Common (output) file formats for describing meshes.
NetCDF
Genesis/Exodus
XDMF
VTK/VTU
MEDIT
MED/Salome
Gmsh
ANSYS mesh
OFF
Wavefront OBJ
PLY
STL
meshio can convert between all of the above formats.
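For example, assuming meshio is installed, a conversion is a read followed by a write; the file names below are placeholders.

```python
import meshio

mesh = meshio.read("input.vtu")          # input format inferred from the extension
meshio.write("output.msh", mesh)         # write the same mesh in Gmsh format
print(len(mesh.points), [c.type for c in mesh.cells])
```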
Mesh visualizers
Blender
Mesh Viewer
Paraview
Tutorials
Cubit tutorials
Mesh generation people
Mesh generators
Geometric algorithms
Computer-aided design
Triangulation (geometry)
Numerical analysis
Numerical differential equations
Computational fluid dynamics
3D computer graphics | Mesh generation | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 5,806 | [
"Triangulation (geometry)",
"Mesh generation",
"Design engineering",
"Computer-aided design",
"Computational fluid dynamics",
"Tessellation",
"Computational mathematics",
"Planar graphs",
"Computational physics",
"Mathematical relations",
"Numerical analysis",
"Planes (geometry)",
"Approxima... |
3,050,976 | https://en.wikipedia.org/wiki/Pilot%20plant | A pilot plant is a pre-commercial production system that employs new production technology and/or produces small volumes of new technology-based products, mainly for the purpose of learning about the new technology. The knowledge obtained is then used for design of full-scale production systems and commercial products, as well as for identification of further research objectives and support of investment decisions. Other (non-technical) purposes include gaining public support for new technologies and questioning government regulations. Pilot plant is a relative term in the sense that pilot plants are typically smaller than full-scale production plants, but are built in a range of sizes. Also, as pilot plants are intended for learning, they typically are more flexible, possibly at the expense of economy. Some pilot plants are built in laboratories using stock lab equipment, while others require substantial engineering efforts, cost millions of dollars, and are custom-assembled and fabricated from process equipment, instrumentation and piping. They can also be used to train personnel for a full-scale plant. Pilot plants tend to be smaller compared to demonstration plants.
Terminology
A word similar to pilot plant is pilot line. Essentially, pilot plants and pilot lines perform the same functions, but 'pilot plant' is used in the context of (bio)chemical and advanced materials production systems, whereas 'pilot line' is used for new technology in general. The term 'kilo lab' is also used for small pilot plants referring to the expected output quantities.
Risk management
Pilot plants are used to reduce the risk associated with construction of large process plants. They do so in several ways:
Computer simulations and semi-empirical methods are used to determine the limitations of the pilot scale system. These mathematical models are then tested in a physical pilot-scale plant. Various modeling methods are used for scale-up. These methods include:
Chemical similitude studies
Mathematical modeling
Chemical process simulation
Finite Element Analysis (FEA)
Computational Fluid Dynamics (CFD)
These theoretical modeling methods return the following:
Finalized mass and energy balances
Optimized system design and capacity
Equipment requirements
System limitations
The basis for determining the cost to build the pilot module
They are substantially less expensive to build than full-scale plants. The business does not put as much capital at risk on a project that may be inefficient or unfeasible. Further, design changes can be made more cheaply at the pilot scale and kinks in the process can be worked out before the large plant is constructed.
They provide valuable data for design of the full-scale plant. Scientific data about reactions, material properties, corrosiveness, for instance, may be available, but it is difficult to predict the behavior of a process of any complexity. Engineering data from other processes may be available, but this data cannot always be clearly applied to the process of interest. Designers use data from the pilot plant to refine their design of the production-scale facility.
If a system is well defined and the engineering parameters are known, pilot plants are not used. For instance, a business that wants to expand production capacity by building a new plant that does the same thing as an existing plant may choose to not use a pilot plant.
Additionally, advances in process simulation on computers have increased the confidence of process designers and reduced the need for pilot plants. However, they are still used as even state-of-the-art simulation cannot accurately predict the behavior of complex systems.
Scale dependence of plant properties
As a system increases in size, system properties that depend on the quantity of matter (extensive properties) may change. The surface area to liquid ratio in a chemical plant is a good example of such a property. On a small chemical scale, in a flask, say, there is a relatively large surface area to liquid ratio. However, if the reaction in question is scaled up to fit in a 500-gallon tank, the surface area to liquid ratio becomes much smaller. As a result of this difference in surface area to liquid ratio, the exact nature of the thermodynamics and the reaction kinetics of the process change in a non-linear fashion. This is why a reaction in a beaker can behave vastly differently from the same reaction in a large-scale production process.
Other factors
Other factors that may change during the transformation to a production scale include:
Reaction kinetics
Chemical equilibrium
Material properties
Fluid dynamics
Thermodynamics
Equipment selection
Agitation
Uniformity / homogeneity
After data has been collected from operation of a pilot plant, a larger production-scale facility may be built. Alternatively, a demonstration plant, which is typically bigger than a pilot plant, but smaller than a full-scale production plant, may be built to demonstrate the commercial feasibility of the process. Businesses sometimes continue to operate the pilot plant in order to test ideas for new products, new feedstocks, or different operating conditions. Alternatively, they may be operated as production facilities, augmenting production from the main plant.
Bench scale vs pilot vs demonstration
The differences between bench scale, pilot scale and demonstration scale are strongly influenced by industry and application. Some industries use pilot plant and demonstration plant interchangeably. Some pilot plants are built as portable modules that can be easily transported as a contained unit.
For batch processes, in the pharmaceutical industry for example, bench scale is typically conducted on samples 1–20 kg or less, whereas pilot scale testing is performed with samples of 20–100 kg. Demonstration scale is essentially operating the equipment at full commercial feed rates over extended time periods to prove operational stability.
For continuous processes, in the petroleum industry for example, bench scale systems are typically microreactor or CSTR systems with less than 1000 mL of catalyst, studying reactions and/or separations on a once-through basis. Pilot plants will typically have reactors with catalyst volume between 1 and 100 litres, and will often incorporate product separation and gas/liquid recycle with the goal of closing the mass balance. Demonstration plants, also referred to as semi-works plants, will study the viability of the process on a pre-commercial scale, with typical catalyst volumes in the 100 - 1000 litre range. The design of a demonstration scale plant for a continuous process will closely resemble that of the anticipated future commercial plant, albeit at a much lower throughput, and its goal is to study catalyst performance and operating lifetime over an extended period, while generating significant quantities of product for market testing.
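As a rough illustration of the ranges quoted above for continuous processes, a classification by catalyst volume might look like the following sketch; the boundaries vary by industry and the function is illustrative only.

```python
# Bench < 1 L, pilot 1-100 L, demonstration (semi-works) 100-1000 L, per the text above.
def classify_continuous_unit(catalyst_volume_litres):
    if catalyst_volume_litres < 1:
        return "bench scale"
    if catalyst_volume_litres <= 100:
        return "pilot scale"
    if catalyst_volume_litres <= 1000:
        return "demonstration (semi-works) scale"
    return "commercial scale"

print(classify_continuous_unit(25))   # "pilot scale"
```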
In the development of new processes, the design and operation of the pilot and demonstration plant will often run in parallel with the design of the future commercial plant, and the results from pilot testing programs are key to optimizing the commercial plant flowsheet. It is common in cases where process technology has been successfully implemented that the savings at the commercial scale resulting from pilot testing will significantly outweigh the cost of the pilot plant itself.
Steps to creating a custom pilot plant
Custom pilot plants are commonly designed either for research or commercial purposes. They can range in size from a small system with no automation and low flow, to a highly automated system producing relatively large amounts of products in a day. No matter the size, the steps to designing and fabricating a working pilot plant are the same. They are:
Pre-engineering - completing a process flow diagram (PFD), basic piping and instrumentation diagrams (P&ID's) and initial equipment layouts.
Engineering modeling and optimization - 2D and 3D models are created, using simulation software to model the process parameters and scale the chemical processes. This modeling software helps determine system limitations, non-linear chemical and physical changes, and potential equipment sizing. Mass and energy balances, finalized P&ID's and general arrangement drawings are produced.
Automation strategies for the system are developed (if needed). Controls system programming begins and will continue through fabrication and assembly.
Fabrication and assembly - after an optimized design has been determined, the custom pilot is fabricated and assembled. Pilot plants can either be assembled on-site or off-site as modular skids that will be constructed and tested in a controlled environment.
Testing - testing of completed systems, including system controls, is conducted to ensure proper system function.
Installation and startup - if constructed offsite, pilot skids are installed onsite. After all equipment is in place, full system startup is completed by integrating the system with existing plant utilities and controls. Full operation is tested and affirmed.
Training - operator training is completed and full system documentation is handed over.
See also
Chemical engineering
Operations research
Process engineering
References
Bibliography
M. Levin (Editor), Pharmaceutical Process Scale-Up (Drugs and the Pharmaceutical), Informa Healthcare, 3rd edition, (2011)
M. Lackner (Editor), Scale-up in Combustion, ProcessEng Engineering GmbH, Wien, (2009).
M. Zlokarnik, Scale-up in Chemical Engineering, Wiley-VCH Verlag GmbH & Co. KGaA, 2nd edition, (2006).
Richard Palluzi, Pilot Plants: Design, Construction and Operation, McGraw-Hill, February, 1992.
Richard Palluzi, Pilot Plants, Chemical Engineering, March, 1990.
Industrial engineering
Industrial processes | Pilot plant | [
"Engineering"
] | 1,840 | [
"Industrial engineering"
] |
3,051,710 | https://en.wikipedia.org/wiki/Nano-thermite | Nano-thermite or super-thermite is a metastable intermolecular composite (MIC) characterized by a particle size of its main constituents, a metal fuel and oxidizer, under 100 nanometers. This allows for high and customizable reaction rates. Nano-thermites contain an oxidizer and a reducing agent, which are intimately mixed on the nanometer scale. MICs, including nano-thermitic materials, are a type of reactive materials investigated for military use, as well as for general applications involving propellants, explosives, and pyrotechnics.
What distinguishes MICs from traditional thermites is that the oxidizer and a reducing agent, normally iron oxide and aluminium, are in the form of extremely fine powders (nanoparticles). This dramatically increases the reactivity relative to micrometre-sized powder thermite. As the mass transport mechanisms that slow down the burning rates of traditional thermites are not so important at these scales, the reaction proceeds much more quickly.
Potential uses
Historically, pyrotechnic or explosive applications for traditional thermites have been limited due to their relatively slow energy release rates. Because nanothermites are created from reactant particles with proximities approaching the atomic scale, energy release rates are far greater.
MICs or super-thermites are generally developed for military use, propellants, explosives, incendiary devices, and pyrotechnics. Research into military applications of nano-sized materials began in the early 1990s. Because of their highly increased reaction rate, nano-thermitic materials are being studied by the U.S. military with the aim of developing new types of bombs several times more powerful than conventional explosives. Nanoenergetic materials can store more energy than conventional energetic materials and can be used in innovative ways to tailor the release of this energy. Thermobaric weapons are one potential application of nanoenergetic materials.
Types
There are many possible thermodynamically stable fuel-oxidizer combinations. Some of them are:
Aluminium-molybdenum(VI) oxide
Aluminium-copper(II) oxide
Aluminium-iron(II,III) oxide
Antimony-potassium permanganate
Aluminium-potassium permanganate
Aluminium-bismuth(III) oxide
Aluminium-tungsten(VI) oxide hydrate
Aluminium-fluoropolymer (typically Viton)
Titanium-boron (burns to titanium diboride, which belongs to a class of compounds called intermetallic composites).
In military research, aluminium-molybdenum oxide, aluminium-Teflon and aluminium-copper(II) oxide have received considerable attention. Other compositions tested were based on nanosized RDX and with thermoplastic elastomers. PTFE or other fluoropolymer can be used as a binder for the composition. Its reaction with the aluminium, similar to magnesium/teflon/viton thermite, adds energy to the reaction. Of the listed compositions, that with potassium permanganate has the highest pressurization rate.
The most common method of preparing nanoenergetic materials is ultrasonication, in quantities of less than 2 g. Some research has aimed at increasing production scales. Due to the very high electrostatic discharge (ESD) sensitivity of these materials, sub-gram scales are currently typical.
Production
Nanoaluminum, or ultra fine grain (UFG) aluminum, powders are a key component of most nano-thermitic materials. A method for producing this material is the dynamic gas-phase condensation method, pioneered by Wayne Danen and Steve Son at Los Alamos National Laboratory. A variant of the method is being used at the Indian Head Division of the Naval Surface Warfare Center. Another method for production is electrothermal synthesis, developed by NovaCentrix, which uses a pulsed plasma arc to vaporize the aluminum. The powders made by the dynamic gas-phase condensation and the electrothermal synthesis processes are indistinguishable. A critical aspect of the production is the ability to produce particles of sizes in the tens of nano-meter range, as well as with a limited distribution of particle sizes. In 2002, the production of nano-sized aluminum particles required considerable effort, and commercial sources for the material were limited.
An application of the sol-gel method, developed by Randall Simpson, Alexander Gash and others at the Lawrence Livermore National Laboratory, can be used to make the actual mixtures of nano-structured composite energetic materials. Depending on the process, MICs of different density can be produced. Highly porous and uniform products can be achieved by super-critical extraction.
The most common types of production are in liquids or via resonant acoustic mixing. However, more complicated methods like the ones previously mentioned are used.
Ignition
As with all explosives, research into control yet simplicity has been a goal of research into nanoscale explosives. Some can be ignited with laser pulses.
MICs have been investigated as a possible replacement for lead (e.g. lead styphnate, lead azide) in percussion caps and electric matches. Compositions based on Al-Bi2O3 tend to be used. PETN may be optionally added.
Aluminium powder can be added to nano explosives. Aluminium has a relatively low combustion rate and a high enthalpy of combustion.
The products of a thermite reaction, resulting from ignition of the nano-thermitic mixture, are usually metal oxides and elemental metals. At the temperatures prevailing during the reaction, the products can be solid, liquid or gaseous, depending on the components of the mixture.
Hazards
Like conventional thermite, super thermite reacts at very high temperature and is difficult to extinguish. The reaction produces dangerous ultra-violet (UV) light, requiring that the reaction not be viewed directly or that special eye protection (for example, a welder's mask) be worn.
In addition, super thermites are very sensitive to electrostatic discharge (ESD). Surrounding the metal oxide particles with carbon nanofibers may make nanothermites safer to handle.
See also
Thermate
Pyrotechnic composition
References
External links
Synthesis and Reactivity of a Super-Reactive Metastable Intermolecular Composite Formulation of Al/KMnO4
Metastable Intermolecular Composites for Small Caliber Cartridges and Cartridge Actuated Devices
Performance of Nanocomposite Energetic Materials Al-MoO3
Pyrotechnic compositions
Incendiary weapons
Explosives
Nanoparticles | Nano-thermite | [
"Chemistry"
] | 1,370 | [
"Pyrotechnic compositions",
"Explosives",
"Explosions"
] |
3,053,507 | https://en.wikipedia.org/wiki/Crystal%20growth | A crystal is a solid material whose constituent atoms, molecules, or ions are arranged in an orderly repeating pattern extending in all three spatial dimensions. Crystal growth is a major stage of a crystallization process, and consists of the addition of new atoms, ions, or polymer strings into the characteristic arrangement of the crystalline lattice. The growth typically follows an initial stage of either homogeneous or heterogeneous (surface catalyzed) nucleation, unless a "seed" crystal, purposely added to start the growth, was already present.
The action of crystal growth yields a crystalline solid whose atoms or molecules are close packed, with fixed positions in space relative to each other.
The crystalline state of matter is characterized by a distinct structural rigidity and very high resistance to deformation (i.e. changes of shape and/or volume). Most crystalline solids have high values both of Young's modulus and of the shear modulus of elasticity. This contrasts with most liquids or fluids, which have a low shear modulus, and typically exhibit the capacity for macroscopic viscous flow.
Overview
After successful formation of a stable nucleus, a growth stage ensues in which free particles (atoms or molecules) adsorb onto the nucleus and propagate its crystalline structure outwards from the nucleating site. This process is significantly faster than nucleation. The reason for such rapid growth is that real crystals contain dislocations and other defects, which act as a catalyst for the addition of particles to the existing crystalline structure. By contrast, perfect crystals (lacking defects) would grow exceedingly slowly. On the other hand, impurities can act as crystal growth inhibitors and can also modify crystal habit.
Nucleation
Nucleation can be either homogeneous, without the influence of foreign particles, or heterogeneous, with the influence of foreign particles. Generally, heterogeneous nucleation takes place more quickly since the foreign particles act as a scaffold for the crystal to grow on, thus eliminating the necessity of creating a new surface and the incipient surface energy requirements.
Heterogeneous nucleation can take place by several methods. Some of the most typical are small inclusions, or cuts, in the container the crystal is being grown on. This includes scratches on the sides and bottom of glassware. A common practice in crystal growing is to add a foreign substance, such as a string or a rock, to the solution, thereby providing nucleation sites for facilitating crystal growth and reducing the time to fully crystallize.
The number of nucleating sites can also be controlled in this manner. If a brand-new piece of glassware or a plastic container is used, crystals may not form because the container surface is too smooth to allow heterogeneous nucleation. On the other hand, a badly scratched container will result in many lines of small crystals. To achieve a moderate number of medium-sized crystals, a container which has a few scratches works best. Likewise, adding small previously made crystals, or seed crystals, to a crystal growing project will provide nucleating sites to the solution. The addition of only one seed crystal should result in a larger single crystal.
Mechanisms of growth
The interface between a crystal and its vapor can be molecularly sharp at temperatures well below the melting point. An ideal crystalline surface grows by the spreading of single layers, or equivalently, by the lateral advance of the growth steps bounding the layers. For perceptible growth rates, this mechanism requires a finite driving force (or degree of supercooling) in order to lower the nucleation barrier sufficiently for nucleation to occur by means of thermal fluctuations. In the theory of crystal growth from the melt, Burton and Cabrera have distinguished between two major mechanisms:
Non-uniform lateral growth
The surface advances by the lateral motion of steps which are one interplanar spacing in height (or some integral multiple thereof). An element of surface undergoes no change and does not advance normal to itself except during the passage of a step, and then it advances by the step height. It is useful to consider the step as the transition between two adjacent regions of a surface which are parallel to each other and thus identical in configuration—displaced from each other by an integral number of lattice planes. Note here the distinct possibility of a step in a diffuse surface, even though the step height would be much smaller than the thickness of the diffuse surface.
Uniform normal growth
The surface advances normal to itself without the necessity of a stepwise growth mechanism. This means that in the presence of a sufficient thermodynamic driving force, every element of surface is capable of a continuous change contributing to the advancement of the interface. For a sharp or discontinuous surface, this continuous change may be more or less uniform over large areas for each successive new layer. For a more diffuse surface, a continuous growth mechanism may require changes over several successive layers simultaneously.
Non-uniform lateral growth is a geometrical motion of steps—as opposed to motion of the entire surface normal to itself. Alternatively, uniform normal growth is based on the time sequence of an element of surface. In this mode, there is no motion or change except when a step passes via a continual change. The prediction of which mechanism will be operative under any set of given conditions is fundamental to the understanding of crystal growth. Two criteria have been used to make this prediction:
Whether or not the surface is diffuse: a diffuse surface is one in which the change from one phase to another is continuous, occurring over several atomic planes. This is in contrast to a sharp surface for which the major change in property (e.g. density or composition) is discontinuous, and is generally confined to a depth of one interplanar distance.
Whether or not the surface is singular: a singular surface is one in which the surface tension as a function of orientation has a pointed minimum. Growth of singular surfaces is known to require steps, whereas it is generally held that non-singular surfaces can continuously advance normal to themselves.
Driving force
Consider next the necessary requirements for the appearance of lateral growth. It is evident that the lateral growth mechanism will be found when any area in the surface can reach a metastable equilibrium in the presence of a driving force. It will then tend to remain in such an equilibrium configuration until the passage of a step. Afterward, the configuration will be identical except that each part of the step will have advanced by the step height. If the surface cannot reach equilibrium in the presence of a driving force, then it will continue to advance without waiting for the lateral motion of steps.
Thus, Cahn concluded that the distinguishing feature is the ability of the surface to reach an equilibrium state in the presence of the driving force. He also concluded that for every surface or interface in a crystalline medium, there exists a critical driving force, which, if exceeded, will enable the surface or interface to advance normal to itself, and, if not exceeded, will require the lateral growth mechanism.
Thus, for sufficiently large driving forces, the interface can move uniformly without the benefit of either a heterogeneous nucleation or screw dislocation mechanism. What constitutes a sufficiently large driving force depends upon the diffuseness of the interface, so that for extremely diffuse interfaces, this critical driving force will be so small that any measurable driving force will exceed it. Alternatively, for sharp interfaces, the critical driving force will be very large, and most growth will occur by the lateral step mechanism.
Note that in a typical solidification or crystallization process, the thermodynamic driving force is dictated by the degree of supercooling.
Morphology
It is generally believed that the mechanical and other properties of the crystal are also pertinent to the subject matter, and that crystal morphology provides the missing link between growth kinetics and physical properties. The necessary thermodynamic apparatus was provided by Josiah Willard Gibbs' study of heterogeneous equilibrium. He provided a clear definition of surface energy, by which the concept of surface tension is made applicable to solids as well as liquids. He also appreciated that an anisotropic surface free energy implied a non-spherical equilibrium shape, which should be thermodynamically defined as the shape which minimizes the total surface free energy.
It may be instructional to note that whisker growth provides the link between the mechanical phenomenon of high strength in whiskers and the various growth mechanisms which are responsible for their fibrous morphologies. (Prior to the discovery of carbon nanotubes, single-crystal whiskers had the highest tensile strength of any materials known). Some mechanisms produce defect-free whiskers, while others may have single screw dislocations along the main axis of growth—producing high strength whiskers.
The mechanism behind whisker growth is not well understood, but seems to be encouraged by compressive mechanical stresses including mechanically induced stresses, stresses induced by diffusion of different elements, and thermally induced stresses. Metal whiskers differ from metallic dendrites in several respects. Dendrites are fern-shaped like the branches of a tree, and grow across the surface of the metal. In contrast, whiskers are fibrous and project at a right angle to the surface of growth, or substrate.
Diffusion-control
Very commonly when the supersaturation (or degree of supercooling) is high, and sometimes even when it is not high, growth kinetics may be diffusion-controlled, which means the transport of atoms or molecules to the growing nucleus is limiting the velocity of crystal growth. Assuming the nucleus in such a diffusion-controlled system is a perfect sphere of radius $r$, the growth velocity, i.e. the change of the radius with time, $dr/dt$, can be determined with Fick's laws.
1. Fick's law: $J = -D\,\frac{\partial c}{\partial x}$,
where $J$ is the flux of atoms (number of atoms per unit area per unit time), $D$ is the diffusion coefficient and $\partial c/\partial x$ is the concentration gradient.
2. Fick's law: $\frac{\partial c}{\partial t} = D\,\nabla^{2} c$,
where $\partial c/\partial t$ is the change of the concentration with time.
The first law can be adjusted to the flux of matter onto a specific surface, in this case the surface of the spherical nucleus:
$J_{\mathrm{sph}} = A\, D \left. \frac{\partial c}{\partial r'} \right|_{r'=r} = 4 \pi r^{2} D \left. \frac{\partial c}{\partial r'} \right|_{r'=r},$
where $J_{\mathrm{sph}}$ is now the flux onto the spherical surface (number of atoms per unit time) and $A = 4\pi r^{2}$ is the area of the spherical nucleus. $J_{\mathrm{sph}}$ can also be expressed as the change of the number of atoms in the nucleus over time, with the number of atoms in the nucleus being
$N = \frac{V}{\Omega} = \frac{4 \pi r^{3}}{3\,\Omega},$
where $V$ is the volume of the spherical nucleus and $\Omega$ is the atomic volume. Therefore, the change in the number of atoms in the nucleus over time is
$\frac{dN}{dt} = \frac{4 \pi r^{2}}{\Omega}\,\frac{dr}{dt}.$
Combining both equations, the following expression for the growth velocity is obtained:
$\frac{dr}{dt} = \Omega\, D \left. \frac{\partial c}{\partial r'} \right|_{r'=r}.$
From Fick's second law for spherical symmetry the equation below can be obtained:
$\frac{\partial c}{\partial t} = D \left( \frac{\partial^{2} c}{\partial r'^{2}} + \frac{2}{r'}\,\frac{\partial c}{\partial r'} \right).$
Assuming that the diffusion profile does not change over time but is only shifted with the growing radius, it can be said that $\partial c/\partial t = 0$, which leads to $r'^{2}\,\partial c/\partial r'$ being constant. Denoting this constant by $B$ and integrating results in the following equation:
$c(r') = c_{\infty} + B \left( \frac{1}{\delta} - \frac{1}{r'} \right), \qquad B = \frac{c_{\infty} - c_{r}}{\frac{1}{r} - \frac{1}{\delta}},$
where $r$ is the radius of the nucleus, $\delta$ is the radial distance from the nucleus at which the equilibrium concentration $c_{\infty}$ is recovered and $c_{r}$ is the concentration right at the surface of the nucleus. Now the expression for the concentration gradient at the surface can be found:
$\left. \frac{\partial c}{\partial r'} \right|_{r'=r} = \frac{B}{r^{2}} = \frac{c_{\infty} - c_{r}}{r^{2} \left( \frac{1}{r} - \frac{1}{\delta} \right)}.$
Therefore, the growth velocity for a diffusion-controlled system can be described as
$\frac{dr}{dt} = \frac{\Omega\, D\,(c_{\infty} - c_{r})}{r^{2} \left( \frac{1}{r} - \frac{1}{\delta} \right)} \approx \frac{\Omega\, D\,(c_{\infty} - c_{r})}{r} \quad \text{for } \delta \gg r.$
Under such diffusion controlled conditions, the polyhedral crystal form will be unstable, it will sprout protrusions at its corners and edges where the degree of supersaturation is at its highest level. The tips of these protrusions will clearly be the points of highest supersaturation. It is generally believed that the protrusion will become longer (and thinner at the tip) until the effect of interfacial free energy in raising the chemical potential slows the tip growth and maintains a constant value for the tip thickness.
In the subsequent tip-thickening process, there should be a corresponding instability of shape. Minor bumps or "bulges" should be exaggerated—and develop into rapidly growing side branches. In such an unstable (or metastable) situation, minor degrees of anisotropy should be sufficient to determine directions of significant branching and growth. The most appealing aspect of this argument, of course, is that it yields the primary morphological features of dendritic growth.
See also
Abnormal grain growth
Chvorinov's rule
Cloud condensation nuclei
Crystal structure
Czochralski process
Dendrite (metal)
Diana's Tree
Fractional crystallization
Ice nucleus
Laser-heated pedestal growth
Manganese nodule
Micro-pulling-down
Monocrystalline whisker
Protocrystalline
Recrystallization (chemistry)
Seed crystal
Single crystal
Whisker (metallurgy)
Simulation
Kinetic Monte Carlo surface growth method
References
Crystallography
Crystals
Materials science
Mineralogy
Articles containing video clips | Crystal growth | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,594 | [
"Applied and interdisciplinary physics",
"Materials science",
"Crystallography",
"Crystals",
"Condensed matter physics",
"nan"
] |
3,054,008 | https://en.wikipedia.org/wiki/Sensitization | Sensitization is a non-associative learning process in which repeated administration of a stimulus results in the progressive amplification of a response. Sensitization often is characterized by an enhancement of response to a whole class of stimuli in addition to the one that is repeated. For example, repetition of a painful stimulus may make one more responsive to a loud noise.
History
Eric Kandel was one of the first to study the neural basis of sensitization, conducting experiments in the 1960s and 1970s on the gill withdrawal reflex of the seaslug Aplysia. Kandel and his colleagues first habituated the reflex, weakening the response by repeatedly touching the animal's siphon. They then paired noxious electrical stimulus to the tail with a touch to the siphon, causing the gill withdrawal response to reappear. After this sensitization, a light touch to the siphon alone produced a strong gill withdrawal response, and this sensitization effect lasted for several days. (After Squire and Kandel, 1999). In 2000, Eric Kandel was awarded the Nobel Prize in Physiology or Medicine for his research in neuronal learning processes.
Neural substrates
The neural basis of behavioral sensitization is often not known, but it typically seems to result from a cellular receptor becoming more likely to respond to a stimulus. Several examples of neural sensitization include:
Electrical or chemical stimulation of the rat hippocampus causes strengthening of synaptic signals, a process known as long-term potentiation or LTP. LTP of AMPA receptors is a potential mechanism underlying memory and learning in the brain.
In "kindling", repeated stimulation of hippocampal or amygdaloid neurons in the limbic system eventually leads to seizures in laboratory animals. After sensitization, very little stimulation may be required to produce seizures. Thus, kindling has been suggested as a model for temporal lobe epilepsy in humans, where stimulation of a repetitive type (flickering lights for instance) can cause epileptic seizures. Often, people suffering from temporal lobe epilepsy report symptoms of negative effects such as anxiety and depression that might result from limbic dysfunction.
In "central sensitization", nociceptive neurons in the dorsal horns of the spinal cord become sensitized by peripheral tissue damage or inflammation. This type of sensitization has been suggested as a possible causal mechanism for chronic pain conditions. The changes of central sensitization occur after repeated trials to pain. Research from animals has consistently shown that when a trial is repeatedly exposed to a painful stimulus, the animal’s pain threshold will change and result in a stronger pain response. Researchers believe that there are parallels that can be drawn between these animal trials and persistent pain in people. For example, after a back surgery that removed a herniated disc from causing a pinched nerve, the patient may still continue to feel pain. Also, newborns who are circumcised without anesthesia have shown tendencies to react more greatly to future injections, vaccinations, and other similar procedures. The responses of these children are an increase in crying and a greater hemodynamic response (tachycardia and tachypnea).
Drug sensitization occurs in drug addiction, and is defined as an increased effect of drug following repeated doses (the opposite of drug tolerance). Such sensitization involves changes in brain mesolimbic dopamine transmission, as well as a protein inside mesolimbic neurons called delta FosB. An associative process may contribute to addiction, for environmental stimuli associated with drug taking may increase craving. This process may increase the risk for relapse in addicts attempting to quit.
Cross-sensitization
Cross-sensitization is a phenomenon in which sensitization to a stimulus is generalized to a related stimulus, resulting in the amplification of a particular response to both the original stimulus and the related stimulus. For example, cross-sensitization to the neural and behavioral effects of addictive drugs is well characterized; for example, sensitization of the locomotor response to one stimulant results in cross-sensitization to the motor-activating effects of other stimulants. Similarly, reward sensitization to a particular addictive drug often results in reward cross-sensitization, which entails sensitization to the rewarding property of other addictive drugs in the same drug class or even certain natural rewards.
In animals, cross-sensitization has been established between the consumption of many different types of drugs of abuse – in line with the gateway drug theory – and also between sugar consumption and the self-administration of drugs of abuse.
As a causal factor in pathology
Sensitization has been implicated as a causal or maintaining mechanism in a wide range of apparently unrelated pathologies including addiction, allergies, asthma, overactive bladder and some medically unexplained syndromes such as fibromyalgia and multiple chemical sensitivity. Sensitization may also contribute to psychological disorders such as post-traumatic stress disorder, panic anxiety and mood disorders.
See also
Long-term potentiation
Multiple chemical sensitivity
Neuroplasticity
Synaptic plasticity
References
Behaviorism
Learning | Sensitization | [
"Biology"
] | 1,069 | [
"Behavior",
"Behaviorism"
] |
16,138,812 | https://en.wikipedia.org/wiki/PFA-100 | The PFA-100 (Platelet Function Assay or Platelet Function Analyser) is a platelet function analyser that aspirates blood in vitro from a blood specimen into disposable test cartridges through a microscopic aperture cut into a biologically active membrane at the end of a capillary. The membranes of the cartridges are coated with collagen and adenosine diphosphate (ADP) or collagen and epinephrine, inducing a platelet plug to form which closes the aperture.
The PFA test result is dependent on platelet function, plasma von Willebrand Factor level, platelet number, and (to some extent) the hematocrit (that is, the percent composition of red blood cells in the sample).
The PFA test is initially performed with the Collagen/Epinephrine membrane. A normal Col/Epi closure time (<180 seconds) excludes the presence of a significant platelet function defect.
If the Col/Epi closure time is prolonged (>180 seconds), the Col/ADP test is automatically performed. If the Col/ADP result is normal (<120 seconds), aspirin-induced platelet dysfunction is most likely.
Prolongation of both test results (Col/Epi >180 seconds, Col/ADP >120 seconds) may indicate the following:
Anemia (hematocrit <0.28)
Thrombocytopenia (platelet count < 100 × 10⁹/L)
A significant platelet function defect other than aspirin
Once anemia and thrombocytopenia have been excluded, further evaluation to exclude von Willebrand disease and inherited/acquired platelet dysfunction can be made.
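The interpretation just described is essentially a short decision procedure; the sketch below restates it in Python purely for illustration. The cut-off values come from the text above, while the function name and return strings are invented for the example and are not part of any real instrument software.

```python
def interpret_pfa(col_epi_seconds, col_adp_seconds=None):
    """Illustrative restatement of the PFA-100 interpretation flow described above.

    col_epi_seconds -- Collagen/Epinephrine closure time in seconds.
    col_adp_seconds -- Collagen/ADP closure time, measured only if Col/Epi is prolonged.
    """
    if col_epi_seconds < 180:
        # A normal Col/Epi closure time excludes a significant platelet function defect.
        return "no significant platelet function defect"
    if col_adp_seconds is None:
        # Prolonged Col/Epi: the Col/ADP cartridge is run next.
        return "Col/Epi prolonged - Col/ADP test required"
    if col_adp_seconds < 120:
        # Prolonged Col/Epi with a normal Col/ADP result points to aspirin-induced dysfunction.
        return "aspirin-induced platelet dysfunction most likely"
    # Both prolonged: exclude anemia and thrombocytopenia, then evaluate for
    # von Willebrand disease and inherited/acquired platelet dysfunction.
    return "both prolonged - exclude anemia/thrombocytopenia, then evaluate further"


print(interpret_pfa(150))        # normal pattern
print(interpret_pfa(200, 100))   # aspirin pattern
print(interpret_pfa(200, 150))   # both prolonged
```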
References
External links
Practical Haemostasis
Medical testing equipment | PFA-100 | [
"Chemistry",
"Biology"
] | 372 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
16,143,832 | https://en.wikipedia.org/wiki/Minimal%20prime%20ideal | In mathematics, especially in commutative algebra, certain prime ideals called minimal prime ideals play an important role in understanding rings and modules. The notion of height and Krull's principal ideal theorem use minimal prime ideals.
Definition
A prime ideal P is said to be a minimal prime ideal over an ideal I if it is minimal among all prime ideals containing I. (Note: if I is a prime ideal, then I is the only minimal prime over it.) A prime ideal is said to be a minimal prime ideal if it is a minimal prime ideal over the zero ideal.
A minimal prime ideal over an ideal I in a Noetherian ring R is precisely a minimal associated prime (also called isolated prime) of R/I; this follows for instance from the primary decomposition of I.
Examples
In a commutative Artinian ring, every maximal ideal is a minimal prime ideal.
In an integral domain, the only minimal prime ideal is the zero ideal.
In the ring Z of integers, the minimal prime ideals over a nonzero principal ideal (n) are the principal ideals (p), where p is a prime divisor of n. The only minimal prime ideal over the zero ideal is the zero ideal itself. Similar statements hold for any principal ideal domain.
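As a concrete instance of the preceding statement, the case n = 12 can be worked out explicitly (a standard illustration added here for clarity, not taken from this article):

```latex
% Minimal primes over (12) in Z: since 12 = 2^2 * 3, a prime ideal (p)
% contains (12) exactly when p divides 12, i.e. p = 2 or p = 3.
\[
  (12) \subseteq (2), \qquad (12) \subseteq (3), \qquad
  \sqrt{(12)} = (2) \cap (3) = (6).
\]
% Neither (2) nor (3) is contained in the other, so both are minimal over (12),
% and they are the only minimal primes over (12).
```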
If I is a p-primary ideal (for example, a symbolic power of p), then p is the unique minimal prime ideal over I.
The ideals and are the minimal prime ideals in since they are the extension of prime ideals for the morphism , contain the zero ideal (which is not prime since , but, neither nor are contained in the zero ideal) and are not contained in any other prime ideal.
In the minimal primes over the ideal are the ideals and .
Let and the images of x, y in A. Then and are the minimal prime ideals of A (and there are no others). Let be the set of zero-divisors in A. Then is in D (since it kills nonzero ) while neither in nor ; so .
Properties
All rings are assumed to be commutative and unital.
Every proper ideal I in a ring has at least one minimal prime ideal above it. The proof of this fact uses Zorn's lemma. Any maximal ideal containing I is prime, and such ideals exist, so the set of prime ideals containing I is non-empty. The intersection of a decreasing chain of prime ideals is prime. Therefore, the set of prime ideals containing I has a minimal element, which is a minimal prime over I.
Emmy Noether showed that in a Noetherian ring, there are only finitely many minimal prime ideals over any given ideal. The fact remains true if "Noetherian" is replaced by the ascending chain condition on radical ideals.
The radical of any proper ideal I coincides with the intersection of the minimal prime ideals over I. This follows from the fact that every prime ideal contains a minimal prime ideal.
The set of zero divisors of a given ring contains the union of the minimal prime ideals.
Krull's principal ideal theorem says that, in a Noetherian ring, each minimal prime over a principal ideal has height at most one.
Each proper ideal I of a Noetherian ring contains a product of the possibly repeated minimal prime ideals over it (Proof: is the intersection of the minimal prime ideals over I. For some n, and so I contains .)
A prime ideal in a ring R is a unique minimal prime over an ideal I if and only if , and such an I is -primary if is maximal. This gives a local criterion for a minimal prime: a prime ideal is a minimal prime over I if and only if is a -primary ideal. When R is a Noetherian ring, is a minimal prime over I if and only if is an Artinian ring (i.e., is nilpotent module I). The pre-image of under is a primary ideal of called the -primary component of I.
When is Noetherian local, with maximal ideal , is minimal over if and only if there exists a number such that .
Equidimensional ring
For a minimal prime ideal in a local ring , in general, it need not be the case that , the Krull dimension of .
A Noetherian local ring is said to be equidimensional if for each minimal prime ideal , . For example, a local Noetherian integral domain and a local Cohen–Macaulay ring are equidimensional.
See also equidimensional scheme and quasi-unmixed ring.
See also
Extension and contraction of ideals
Normalization
Notes
References
Further reading
http://stacks.math.columbia.edu/tag/035E
http://stacks.math.columbia.edu/tag/035P
Commutative algebra
Prime ideals | Minimal prime ideal | [
"Mathematics"
] | 996 | [
"Fields of abstract algebra",
"Commutative algebra"
] |
16,150,021 | https://en.wikipedia.org/wiki/Boris%20Mamyrin | Boris Aleksandrovich Mamyrin (25 May 1919 – 5 March 2007) was a Soviet and Russian physicist, best known for his invention of the electrostatic ion mirror mass spectrometer known as the reflectron.
Biography
Mamyrin was born in 1919 in Lipetsk, Soviet Russia, during the Russian Civil War. Both of his parents were medical doctors and his early aim was to follow in their footsteps. However, shortly after he obtained his M.S. degree in physics from the Leningrad Polytechnic Institute, World War II cut his studies short. He served in the army throughout the war, finally being discharged from military service in 1948. He returned to the Polytechnic Institute and obtained his doctoral degree within a year. He became the head and leading research scientist of the laboratory for mass spectrometry at Ioffe Physico-Technical Institute of the Russian Academy of Sciences. He was a corresponding member of the Russian Academy of Sciences and a full member of the Russian Academy of Natural Sciences.
See also
Time-of-flight mass spectrometry
References
External links
1919 births
2007 deaths
20th-century Russian physicists
21st-century Russian physicists
People from Lipetsk
Corresponding Members of the Russian Academy of Sciences
Peter the Great St. Petersburg Polytechnic University alumni
Recipients of the Order of Honour (Russia)
Recipients of the Order of the Red Banner of Labour
Mass spectrometrists
Soviet physicists
Russian scientists | Boris Mamyrin | [
"Physics",
"Chemistry"
] | 283 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
17,383,329 | https://en.wikipedia.org/wiki/Hemozoin | Haemozoin is a disposal product formed from the digestion of blood by some blood-feeding parasites. These hematophagous organisms such as malaria parasites (Plasmodium spp.), Rhodnius and Schistosoma digest haemoglobin and release high quantities of free heme, which is the non-protein component of haemoglobin. Heme is a prosthetic group consisting of an iron atom contained in the center of a heterocyclic porphyrin ring. Free heme is toxic to cells, so the parasites convert it into an insoluble crystalline form called hemozoin. In malaria parasites, hemozoin is often called malaria pigment.
Since the formation of hemozoin is essential to the survival of these parasites, it is an attractive target for developing drugs and is much-studied in Plasmodium as a way to find drugs to treat malaria (malaria's Achilles' heel). Several currently used antimalarial drugs, such as chloroquine and mefloquine, are thought to kill malaria parasites by inhibiting haemozoin biocrystallization.
Discovery
Black-brown pigment was observed by Johann Heinrich Meckel in 1847, in the blood and spleen of a person suffering from insanity. However, it was not until 1849 that the presence of this pigment was connected to infection with malaria. Initially, it was thought that this pigment was produced by the body in response to infection, but Charles Louis Alphonse Laveran realized in 1880 that "malaria pigment" is, instead, produced by the parasites, as they multiplied within the red blood cell. The link between pigment and malaria parasites was used by Ronald Ross to identify the stages in the Plasmodium life cycle that occur within the mosquito, since, although these forms of the parasite are different in appearance to the blood stages, they still contain traces of pigment.
Later, T. Carbone (1891) and W.H. Brown (1911) published papers linking hemoglobin degradation with pigment production, describing the malaria pigment as a form of hematin and disproving the widely held idea that it is related to melanin. Brown observed that all melanins bleached rapidly with potassium permanganate, while malarial pigment showed not the slightest sign of a true bleaching reaction with this reagent. The name "hemozoin" was proposed by Louis Westenra Sambon. In the 1930s several authors identified hemozoin as a pure crystalline form of α-hematin and showed that the substance did not contain proteins within the crystals, but no explanation for the solubility differences between malaria pigment and α-hematin crystals was given.
Formation
During its intraerythrocytic asexual reproduction cycle Plasmodium falciparum consumes up to 80% of the host cell hemoglobin. The digestion of hemoglobin releases monomeric α-hematin (ferriprotoporphyrin IX). This compound is toxic, since it is a pro-oxidant and catalyzes the production of reactive oxygen species. Oxidative stress is believed to be generated during the conversion of heme (ferroprotoporphyrin) to hematin (ferriprotoporphyrin). Free hematin can also bind to and disrupt cell membranes, damaging cell structures and causing the lysis of the host erythrocyte. The unique reactivity of this molecule has been demonstrated in several in vitro and in vivo experimental conditions.
The malaria parasite, therefore, detoxifies the hematin, which it does by biocrystallization—converting it into insoluble and chemically inert β-hematin crystals (called hemozoin). In Plasmodium the food vacuole fills with hemozoin crystals, which are about 100–200 nanometres long and each contain about 80,000 heme molecules. Detoxification through biocrystallization is distinct from the detoxification process in mammals, where an enzyme called heme oxygenase instead breaks excess heme into biliverdin, iron, and carbon monoxide.
Several mechanisms have been proposed for the production of hemozoin in Plasmodium, and the area is highly controversial, with membrane lipids, histidine-rich proteins, or even a combination of the two, being proposed to catalyse the formation of hemozoin. Other authors have described a heme detoxification protein, which is claimed to be more potent than either lipids or histidine-rich proteins. It is possible that many processes contribute to the formation of hemozoin.
The formation of hemozoin in other blood-feeding organisms is not as well-studied as in Plasmodium. However, studies on Schistosoma mansoni have revealed that this parasitic worm produces large amounts of hemozoin during its growth in the human bloodstream. Although the shapes of the crystals are different from those produced by malaria parasites, chemical analysis of the pigment showed that it is made of hemozoin. In a similar manner, the crystals formed in the gut of the kissing bug Rhodnius prolixus during digestion of the blood meal also have a unique shape, but are composed of hemozoin. Hz formation in R. prolixus midgut occurs at physiologically relevant physico-chemical conditions and lipids play an important role in heme biocrystallization. Autocatalytic heme crystallization to Hz is revealed to be an inefficient process and this conversion is further reduced as the Hz concentration increases.
Several other mechanisms have been developed to protect a large variety of hematophagous organisms against the toxic effects of free heme. Mosquitoes digest their blood meals extracellularly and do not produce hemozoin. Heme is retained in the peritrophic matrix, a layer of protein and polysaccharides that covers the midgut and separates gut cells from the blood bolus.
Although β-hematin can be produced in assays spontaneously at low pH, the development of a simple and reliable method to measure the production of hemozoin has been difficult. This is in part due to the continued uncertainty over what molecules are involved in producing hemozoin, and partly from the difficulty in measuring the difference between aggregated or precipitated heme, and genuine hemozoin. Current assays are sensitive and accurate, but require multiple washing steps so are slow and not ideal for high-throughput screening. However, some screens have been performed with these assays.
Structure
β-Hematin crystals are made of dimers of hematin molecules that are, in turn, joined together by hydrogen bonds to form larger structures. In these dimers, an iron-oxygen coordinate bond links the central iron of one hematin to the oxygen of the carboxylate side-chain of the adjacent hematin. These reciprocal iron–oxygen bonds are highly unusual and have not been observed in any other porphyrin dimer. Although β-hematin can be either a cyclic dimer or a linear polymer, a polymeric form has never been found in hemozoin, disproving the widely held idea that hemozoin is produced by the enzyme heme-polymerase.
Hemozoin crystals have a distinct triclinic structure and are weakly magnetic. The difference between diamagnetic low-spin oxyhemoglobin and paramagnetic hemozoin can be used for isolation. They also exhibit optical dichroism, meaning they absorb light more strongly along their length than across their width, enabling the automated detection of malaria. Hemozoin is produced in a form that, under the action of an applied magnetic field, gives rise to an induced optical dichroism characteristic of the hemozoin concentration; and precise measurement of this induced dichroism (Magnetic circular dichroism) may be used to determine the level of malarial infection.
Inhibitors
Hemozoin formation is an excellent drug target, since it is essential to malaria parasite survival and absent from the human host. The drug target hematin is host-derived and largely outside the genetic control of the parasite, which makes the development of drug resistance more difficult. Many clinically used drugs are thought to act by inhibiting the formation of hemozoin in the food vacuole. This prevents the detoxification of the heme released in this compartment, and kills the parasite.
The best-understood examples of such hematin biocrystallization inhibitors are quinoline drugs such as chloroquine and mefloquine. These drugs bind to both free heme and hemozoin crystals, and therefore block the addition of new heme units onto the growing crystals. The small, most rapidly growing face is the face to which inhibitors are believed to bind.
Role in pathophysiology
Hemozoin is released into the circulation during reinfection and phagocytosed in vivo and in vitro by host phagocytes and alters important functions in those cells. Most functional alterations were long-term postphagocytic effects, including erythropoiesis inhibition shown in vitro.
In contrast, a powerful, short-term stimulation of oxidative burst by human monocytes was also shown to occur during phagocytosis of nHZ.
Lipid peroxidation non-enzymatically catalysed by hemozoin iron was described in immune cells.
Lipoperoxidation products, such as hydroxyeicosatetraenoic acids (HETEs) and 4-hydroxynonenal (4-HNE), are functionally involved in immunomodulation.
See also
Biocrystallization
Drug discovery
History of malaria
Parasitic diseases
References
Malaria
Biomolecules | Hemozoin | [
"Chemistry",
"Biology"
] | 2,047 | [
"Natural products",
"Organic compounds",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Molecular biology"
] |
17,384,910 | https://en.wikipedia.org/wiki/Observer%20%28special%20relativity%29 | In special relativity, an observer is a frame of reference from which a set of objects or events are being measured. Usually this is an inertial reference frame or "inertial observer". Less often an observer may be an arbitrary non-inertial reference frame such as a Rindler frame which may be called an "accelerating observer".
The special relativity usage differs significantly from the ordinary English meaning of "observer". Reference frames are inherently nonlocal constructs, covering all of space and time or a nontrivial part of it; thus it does not make sense to speak of an observer (in the special relativistic sense) having a location. Also, an inertial observer cannot accelerate at a later time, nor can an accelerating observer stop accelerating.
Physicists use the term "observer" as shorthand for a specific reference frame from which a set of objects or events is being measured. Speaking of an observer in special relativity is not specifically hypothesizing an individual person who is experiencing events, but rather it is a particular mathematical context which objects and events are to be evaluated from. The effects of special relativity occur whether or not there is a sentient being within the inertial reference frame to witness them.
History
Einstein made frequent use of the word "observer" (Beobachter) in his original 1905 paper on special relativity and in his early popular exposition of the subject. However he used the term in its vernacular sense, referring for example to "the man at the railway-carriage window" or "observers who take the railway train as their reference-body" or "an observer inside who is equipped with apparatus". Here the reference body or coordinate system—a physical arrangement of metersticks and clocks which covers the region of spacetime where the events take place—is distinguished from the observer—an experimenter who assigns spacetime coordinates to events far from himself by observing (literally seeing) coincidences between those events and local features of the reference body.
This distinction between observer and the observer's "apparatus" like coordinate systems, measurement tools etc. was dropped by many later writers, and today it is common to find the term "observer" used to imply an observer's associated coordinate system (usually assumed to be a coordinate lattice constructed from an orthonormal right-handed set of spacelike vectors perpendicular to a timelike vector (a frame field), see Doran). Where Einstein referred to "an observer who takes the train as his reference body" or "an observer located at the origin of the coordinate system", this group of modern writers says, for example, "an observer is represented by a coordinate system in the four variables of space and time" or "the observer in frame S finds that a certain event A occurs at the origin of his coordinate system". However, there is no unanimity on this point, with a number of authors continuing to prefer distinguishing the observer (as a concept related to state of motion) from the more abstract general mathematical notion of coordinate system (which can be, but need not be, related to motion). This approach places more emphasis on the many choices for description open to an observer. The observer is then identified with an observational reference frame, rather than with the combination of coordinate system, measurement apparatus and state of motion.
It also has been suggested that the term "observer" is antiquated, and should be replaced by an observer team (or family of observers) in which each observer makes observations in their immediate vicinity, where delays are negligible, cooperating with the rest of the team to set up synchronized clocks across the entire region of observation, and all team members sending their various results back to a data collector for synthesis.
"Observer" as a form of relative coordinates
Relative direction is a concept found in many human languages. In English, a description of the spatial location of an object may use terms such as "left" and "right" which are relative to the speaker or relative to a particular object or perspective (e.g. "to your left, as you are facing the front door").
The degree to which such a description is subjective is rather subtle. See the Ozma Problem for an illustration of this.
Some impersonal examples of relative direction in language are the nautical terms bow, aft, port, and starboard. These are relative, egocentric-type spatial terms but they do not involve an ego: there is a bow, an aft, a port, and a starboard to a ship even when no one is aboard.
Special relativity statements involving an "observer" are in some measure articulating a similar kind of impersonal relative direction. An "observer" is a perspective in that it is a context from which events in other inertial reference frames are evaluated but it is not the sort of perspective that a single particular person would have: it is not localized and it is not associated with a particular point in space, but rather with an entire inertial reference frame that may exist anywhere in the universe (given certain lengthy mathematical specifications and caveats).
Usage in other scientific disciplines
The term observer also has special meaning in other areas of science, such as quantum mechanics, and information theory. See for example, Schrödinger's cat and Maxwell's demon.
In general relativity the term "observer" refers more commonly to a person (or a machine) making passive local measurements, a usage much closer to the ordinary English meaning of the word. In quantum mechanics, "observation" is synonymous with quantum measurement and "observer" with a measurement apparatus and observable with what can be measured. This conflict of usages within physics is sometimes a source of confusion.
See also
Frame of reference
Minkowski diagram
Observer (disambiguation)
References
Special relativity | Observer (special relativity) | [
"Physics"
] | 1,189 | [
"Special relativity",
"Theory of relativity"
] |
17,385,860 | https://en.wikipedia.org/wiki/Foldit | Foldit is an online puzzle video game about protein folding. It is part of an experimental research project developed by the University of Washington, Center for Game Science, in collaboration with the UW Department of Biochemistry. The objective of Foldit is to fold the structures of selected proteins as perfectly as possible, using tools provided in the game. The highest scoring solutions are analyzed by researchers, who determine whether or not there is a native structural configuration (native state) that can be applied to relevant proteins in the real world. Scientists can then use these solutions to target and eradicate diseases and create biological innovations. A 2010 paper in the science journal Nature credited Foldit's 57,000 players with providing useful results that matched or outperformed algorithmically computed solutions.
History
Rosetta
Prof. David Baker, a protein research scientist at the University of Washington, founded the Foldit project. Seth Cooper was the lead game designer. Before starting the project, Baker and his laboratory coworkers relied on another research project named Rosetta to predict the native structures of various proteins using special computer protein structure prediction algorithms. Rosetta was eventually extended to use the power of distributed computing: The Rosetta@home program was made available for public download, and displayed its protein-folding progress as a screensaver. Its results were sent to a central server for verification.
Some Rosetta@home users became frustrated when they saw ways to solve protein structures, but could not interact with the program. Hoping that humans could improve the computers' attempts to solve protein structures, Baker approached David Salesin and Zoran Popović, computer science professors at the same university, to help conceptualize and build an interactive program - a video game - that would appeal to the public and help efforts to find native protein structures.
Foldit
Many of the same people who created Rosetta@home worked on Foldit. The public beta version was released in May 2008 and has 240,000 registered players.
Since 2008, Foldit has participated in Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiments, submitting its best solutions to targets based on unknown protein structures. CASP is an international program to assess methods of protein structure prediction and identify those that are most productive.
Goals
Protein structure prediction is important in several fields of science, including bioinformatics, molecular biology, and medicine. Identifying natural proteins' structural configurations enables scientists to understand them better. This can lead to creating novel proteins by design, advances in treating disease, and solutions for other real-world problems such as invasive species, waste, and pollution.
The process by which living beings create the primary structure of proteins, protein biosynthesis, is reasonably well understood, as is the means by which proteins are encoded as DNA. However, determining how a given protein's primary structure becomes a functioning three-dimensional structure, how the molecule folds, is more difficult. The general process is understood, but predicting a protein's eventual, functioning structure is computationally demanding.
Methods
Similarly to Rosetta@home, Foldit is a means to discover native protein structures faster through distributed computing. However, Foldit has a greater emphasis on community collaboration through its forums, where users can collaborate on certain folds. Furthermore, Foldit's crowdsourced approach places a greater emphasis on the user. Foldit's virtual interaction and gamification create a unique and innovative environment with the potential to greatly advance protein folding research.
Virtual interaction
Foldit attempts to apply the human brain's three-dimensional pattern matching and spatial reasoning abilities to help solve the problem of protein structure prediction. 2016 puzzles are based on well-understood proteins. By analysing how humans intuitively approach these puzzles, researchers hope to improve the algorithms used by protein-folding software.
Foldit includes a series of tutorials where users manipulate simple protein-like structures and a periodically updated set of puzzles based on real proteins. It shows a graphical representation of each protein which users can manipulate using a set of tools.
Gamification
Foldit's developers wanted to attract as many people as possible to the cause of protein folding. So, rather than only building a useful science tool, they used gamification (the inclusion of gaming elements) to make Foldit appealing and engaging to the general public.
As a protein structure is modified, a score is calculated based on how well-folded the protein is, and a list of high scores for each puzzle is maintained. Foldit users may create and join groups, and members of groups can share puzzle solutions. Groups have been found to be useful in training new players. A separate list of group high scores is maintained, as well as two leaderboards for groups and individuals.
Accomplishments
Results from Foldit have been included in a number of scientific publications.
Foldit players have been cited collectively as "Foldit players" or "Players, F." in some cases. Individual players have also been listed as authors on at least one paper, and on four related Protein Data Bank depositions.
An August 2010 paper in the journal Nature credited Foldit's 57,000 players with providing useful results that matched or outperformed algorithmically computed solutions, stating "[p]layers working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only conformational space but also the space of possible search strategies".
A November 2011 article in PNAS compared "recipes" developed by Foldit players to Rosetta scripts developed by members of the Baker Lab at the University of Washington. The player-developed "Blue Fuse" recipe compared favorably with the scientists' "Fast Relax" algorithm.
In 2011, Foldit players helped decipher the crystal structure of a retroviral protease from Mason-Pfizer monkey virus (M-PMV), a monkey virus which causes HIV/AIDS-like symptoms, a scientific problem that had been unsolved for 15 years. While the puzzle was available for three weeks, players produced a 3D model of the enzyme in only ten days that is accurate enough for molecular replacement.
In January 2012, Scientific American reported that Foldit gamers achieved the first crowdsourced redesign of a protein, an enzyme that catalysed the Diels–Alder reactions widely used in synthetic chemistry. A team including David Baker in the Center for Game Science at University of Washington in Seattle computationally designed the enzyme from scratch but found its potency needed improvement. Foldit players reengineered the enzyme by adding 13 amino acids, increasing its activity by more than 18 times.
A September 2016 article in Nature Communications detailed a "crystallographic model-building competition between trained crystallographers, undergraduate students, Foldit players and automatic model-building algorithms" in which "a team of Foldit players achieved the most accurate structure" fitting a protein to the results of an X-ray crystallography experiment.
A July 2018 article in Nature Communications reviewed the collaboration between Foldit players and teams in the WeFold consortium in biennial CASP competitions CASP11 and CASP12.
A June 2019 letter in Nature described the analysis of proteins designed by Foldit players. Four player-designed proteins were successfully grown in E. coli and then "solved" via X-ray crystallography. The proteins were added to the Protein Data Bank as 6MRR, 6MRS, 6MSP, and 6NUK.
In November 2019, an article in PLOS Biology reported how Foldit players were able to "build protein structures into crystallographic, high-resolution maps more accurately than expert crystallographers or automated model-building algorithms" using data from cryo EM experiments.
Future development
Foldit's toolbox is mainly for the design of protein molecules. The game's creator announced the plan to add, by 2013, the chemical building blocks of organic subcomponents to enable players to design small molecules. The small molecule design system termed Drugit was tested on the Von Hippel-Lindau tumor suppressor (VHL). Results of the VHL experiment were presented in a March 2023 preprint paper and at an August 2023 American Chemical Society conference session.
See also
Citizen science
Rosetta@home
EteRNA
Eyewire
Folding@home
Human-based computation game
Molecular graphics
Comparison of software for molecular mechanics modeling
Predictor@home
Quantum Moves
Protein structure prediction
Protein structure prediction software
Serious game
References
External links
official Foldit website
2008 video games
Linux games
MacOS games
Windows games
Puzzle video games
Human-based computation games
Lua (programming language)-scripted video games
Structural bioinformatics software
Computational biology
Molecular biology
Protein folding
Protein structure
Gamification
Video games developed in the United States | Foldit | [
"Chemistry",
"Biology"
] | 1,759 | [
"Structural biology",
"Computational biology",
"Biochemistry",
"Protein structure",
"Molecular biology"
] |
17,387,312 | https://en.wikipedia.org/wiki/Cation-anion%20radius%20ratio | In condensed matter physics and inorganic chemistry, the cation-anion radius ratio can be used to predict the crystal structure of an ionic compound based on the relative size of its atoms. It is defined as the ratio of the ionic radius of the positively charged cation to the ionic radius of the negatively charged anion in a cation-anion compound. Anions are typically larger than cations. The large anions occupy lattice sites, while the smaller cations are found in the voids between them.
In a given structure, the ratio of cation radius to anion radius is called the radius ratio; it is simply the cation radius divided by the anion radius.
Ratio rule and stability
The radius ratio rule defines a critical radius ratio for different crystal structures, based on their coordination geometry. The idea is that the anions and cations can be treated as incompressible spheres, meaning the crystal structure can be seen as a kind of unequal sphere packing. The allowed size of the cation for a given structure is determined by the critical radius ratio. If the cation is too small, it will draw the anions into contact with one another, and the compound will be unstable due to anion-anion repulsion; this occurs when the radius ratio drops below the critical radius ratio for that particular structure. At the stability limit the cation is touching all the anions and the anions are just touching at their edges. For radius ratios greater than the critical radius ratio, the structure is expected to be stable.
The rule is not obeyed for all compounds. By one estimate, the crystal structure can only be guessed about 2/3 of the time. Errors in prediction are partly due to the fact that real chemical compounds are not purely ionic, they display some covalent character.
The relation between the critical radius ratio and the coordination number may be obtained from a simple geometrical proof.
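As an example of such a geometrical proof (a standard textbook derivation rather than material from this article), consider octahedral coordination at the stability limit:

```latex
% Octahedral (six-fold) coordination at the stability limit: the cation touches
% all six anions (centre-to-centre distance r_A + r_C along each axis) and
% neighbouring anions just touch each other (centre-to-centre distance 2 r_A).
\[
  2 r_A = \sqrt{2}\,\bigl(r_A + r_C\bigr)
  \quad\Longrightarrow\quad
  \frac{r_C}{r_A} = \sqrt{2} - 1 \approx 0.414 .
\]
% Analogous constructions give the other standard critical ratios:
% 0.155 for coordination number 3, 0.225 for 4, and 0.732 for 8.
```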
History
The radius ratio rule was first proposed by Gustav F. Hüttig in 1920. In 1926, Victor Goldschmidt extended the use to ionic lattices. In 1929, the rule was incorporated as the first of Pauling's rules for crystal structures.
See also
Goldschmidt tolerance factor
Pauling's rules
Cubic crystal system
Sphere packing
References
Crystallography
Inorganic chemistry
Ratios
Atomic radius | Cation-anion radius ratio | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 468 | [
"Materials science",
"Ratios",
"Atomic radius",
"Crystallography",
"Arithmetic",
"Condensed matter physics",
"nan",
"Atoms",
"Matter"
] |
17,388,125 | https://en.wikipedia.org/wiki/Oxide%20dispersion-strengthened%20alloy | Oxide dispersion strengthened alloys (ODS) are alloys that consist of a metal matrix with small oxide particles dispersed within it. They have high heat resistance, strength, and ductility. Alloys of nickel are the most common, but the class also includes iron aluminum alloys.
Applications include high temperature turbine blades and heat exchanger tubing, while steels are used in nuclear applications. ODS materials are used on spacecraft to protect the vehicle, especially during re-entry. Noble metal ODS alloys, for example, platinum-based alloys, are used in glass production.
During re-entry at hypersonic speeds, the properties of the surrounding gases change dramatically, and shock waves are created that can seriously damage any structure. At these speeds and temperatures, oxygen becomes highly aggressive.
Mechanism
Oxide dispersion strengthening is based on incoherency of the oxide particles within the lattice of the material. Coherent particles have a continuous lattice plane from the matrix to the particles whereas incoherent particles do not have this continuity and therefore both lattice planes end at the interface. This mismatch in interfaces results in a high interfacial energy, which impedes dislocation motion. The oxide particles instead are stable in the matrix, which helps prevent creep. Particle stability implies little dimensional change, little embrittlement, minimal effects on properties, stable particle spacing, and general resistance to change at high temperatures.
Since the oxide particles are incoherent, dislocations can only overcome the particles by climb. If instead the particles are semi-coherent or coherent with the lattice, dislocations can simply cut the particles by a more favourable process that requires less energy called dislocation glide or by Orowan bowing between particles, both of which are athermal mechanisms. Dislocation climb is a diffusional process, which is less energetically favourable, and mostly occurs at higher temperatures that provide enough energy to advance via the addition and removal of atoms. Because the particles are incoherent, glide mechanisms alone are not enough and the more energetically expensive climb process is dominant, meaning that dislocations are stopped more effectively. Climb can occur either at the particle-dislocation interface (local climb) or by overcoming multiple particles at once (general climb). In local climb, the part of the dislocation that is between two particles stays in the glide plane while the rest of the dislocation is climbing along the surface of the particle. For general climb, the dislocations come entirely out of the glide plane. General climb requires less energy because the mechanism decreases the dislocation line length which reduces the elastic strain energy and therefore is the common climb mechanism. For γ’ volume fractions of 0.4 to 0.6 in nickel-based alloys, the threshold stress for local climb is only about 1.25 to 1.40 times higher than that for general climb.
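For a rough sense of the stress scale involved in Orowan bowing between particles, the classical estimate is the shear modulus times the Burgers vector divided by the particle spacing. The sketch below evaluates this for values loosely typical of a nickel matrix; all numbers are assumptions chosen for the example, not data from this article, and this generic estimate is not the threshold-stress model referenced below.

```python
# Rough Orowan-bowing stress estimate: tau ≈ G * b / spacing.
# All values are illustrative assumptions for a nickel-based matrix.
G = 76e9          # shear modulus, Pa (approximate value for nickel)
b = 0.25e-9       # Burgers vector magnitude, m (approximate)
spacing = 100e-9  # centre-to-centre inter-particle spacing, m (assumed)

tau_orowan = G * b / spacing
print(f"Orowan stress ~ {tau_orowan / 1e6:.0f} MPa")  # ~190 MPa for these inputs
```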
Dislocations are not limited to either all local or all general climb as the path that requires less energy is taken. Cooperative climb is an example of a more nuanced mechanism where a dislocation travels around a group of particles rather than climbing past each particle individually. McLean stated that the dislocation is most relaxed when climbing over multiple particles because of the skipping of some of the abrupt interfaces between segments in the glide plane to segments that travel along the particle surface.
The presence of incoherent particles introduces a threshold stress (σt), since an additional stress will have to be applied for the dislocations to move past the oxides by climb. After overcoming a particle by climb, dislocations can remain pinned at the particle-matrix interface with an attractive phenomenon called interfacial pinning, which requires additional threshold stress to free a dislocation out of this pinning, which must be overcome for plastic deformation to occur. This detachment phenomenon is a result of the interaction between the particle and the dislocation where total elastic strain energy is reduced. Schroder and Arzt explain that the additional stress required is due to the relaxation caused by the reduction in the stress field as the dislocation climbs and accommodates the shear traction. The following equations represent the strain rate and stress as a result of oxide introduction.
Strain Rate:
Threshold Shear Stress:
Synthesis
Ball-milling
The creep properties of ODS steels are dependent on the characteristics of the oxide particles in the metal matrix, specifically their ability to prevent dislocation motion as well as the size and distribution of the particles. Hoelzer and coworkers showed that an alloy containing a homogeneous dispersion of 1-5 nm Y2Ti2O7 nanoclusters has superior creep properties to an alloy with a heterogeneous dispersion of 5-20 nm nanoclusters of the same composition.
ODS steels are commonly produced through ball-milling an oxide of interest (e.g. Y2O3, Al2O3) with pre-alloyed metal powders followed by compression and sintering. It is believed that the oxides enter into solid solution with the metal during ball-milling and subsequently precipitate during the thermal treatment. This process seems simple but many parameters need to be carefully controlled to produce a successful alloy. Leseigneur and coworkers carefully controlled some of these parameters and achieved more consistent and better microstructures. In this two step method the oxide is ball-milled for longer periods to ensure a homogeneous solid solution of the oxide. The powder is annealed at higher temperatures to begin a controlled nucleation of the oxide clusters. Finally the powder is again compressed and sintered to yield the final material.
Additive manufacturing
NASA used ResonantAcoustic mixing and additive manufacturing to synthesize an alloy they termed GRX-810, which survived temperatures over . The alloy also featured improved strength, malleability, and durability. The printer dispersed oxide particles uniformly throughout the metal matrix. The alloy was identified using 30 simulations of thermodynamic modeling.
Advantages and disadvantages
Advantages:
Can be machined, brazed, formed, cut with available processes.
Develops a protective oxide layer that is self-healing.
This oxide layer is stable and has a high emission coefficient.
Allows the design of thin-walled structures (sandwich).
Resistant to harsh weather conditions in the troposphere.
Low maintenance cost.
Low material cost.
Disadvantages:
It has a higher expansion coefficient than other materials, causing higher thermal stresses.
Higher density.
Lower maximum allowable temperature.
See also
Superalloy
References
Alloys
Metallurgy | Oxide dispersion-strengthened alloy | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,335 | [
"Metallurgy",
"Materials science",
"Alloys",
"Chemical mixtures",
"nan"
] |
2,212,479 | https://en.wikipedia.org/wiki/Georgi%E2%80%93Jarlskog%20mass%20relation | In grand unified theories of the SU(5) or SO(10) type, there are mass relations predicted between the electron and the down quark, the muon and the strange quark, and the tau lepton and the bottom quark, called the Georgi–Jarlskog mass relations. The relations were formulated by Howard Georgi and Cecilia Jarlskog.
At GUT scale, these are sometimes quoted as: m_b ≈ m_τ, m_s ≈ m_μ/3, and m_d ≈ 3 m_e.
In the same paper it is written that:
Meaning that:
References
Grand Unified Theory | Georgi–Jarlskog mass relation | [
"Physics"
] | 105 | [
"Unsolved problems in physics",
"Particle physics",
"Grand Unified Theory",
"Particle physics stubs",
"Physics beyond the Standard Model"
] |
2,212,557 | https://en.wikipedia.org/wiki/Crystallographic%20defects%20in%20diamond | Imperfections in the crystal lattice of diamond are common. Such defects may be the result of lattice irregularities or extrinsic substitutional or interstitial impurities, introduced during or after the diamond growth. The defects affect the material properties of diamond and determine to which type a diamond is assigned; the most dramatic effects are on the diamond color and electrical conductivity, as explained by the electronic band structure.
The defects can be detected by different types of spectroscopy, including electron paramagnetic resonance (EPR), luminescence induced by light (photoluminescence, PL) or electron beam (cathodoluminescence, CL), and absorption of light in the infrared (IR), visible and UV parts of the spectrum. The absorption spectrum is used not only to identify the defects, but also to estimate their concentration; it can also distinguish natural from synthetic or enhanced diamonds.
Labeling of diamond centers
There is a tradition in diamond spectroscopy to label a defect-induced spectrum by a numbered acronym (e.g. GR1). This tradition has been followed in general with some notable deviations, such as A, B and C centers. Many acronyms are confusing though:
Some symbols are too similar (e.g., 3H and H3).
Accidentally, the same labels were given to different centers detected by EPR and optical techniques (e.g., N3 EPR center and N3 optical center have no relation).
Whereas some acronyms are logical, such as N3 (N for natural, i.e. observed in natural diamond) or H3 (H for heated, i.e. observed after irradiation and heating), many are not. In particular, there is no clear distinction between the meaning of labels GR (general radiation), R (radiation) and TR (type-II radiation).
Defect symmetry
The symmetry of defects in crystals is described by the point groups. They differ from the space groups describing the symmetry of crystals by absence of translations, and thus are much fewer in number. In diamond, only defects of the following symmetries have been observed thus far: tetrahedral (Td), tetragonal (D2d), trigonal (D3d, C3v), rhombic (C2v), monoclinic (C2h, C1h, C2) and triclinic (C1 or CS).
The defect symmetry allows predicting many optical properties. For example, one-phonon (infrared) absorption in pure diamond lattice is forbidden because the lattice has an inversion center. However, introducing any defect (even "very symmetrical", such as N-N substitutional pair) breaks the crystal symmetry resulting in defect-induced infrared absorption, which is the most common tool to measure the defect concentrations in diamond.
In synthetic diamond grown by the high-pressure high-temperature synthesis or chemical vapor deposition, defects with symmetry lower than tetrahedral align to the direction of the growth. Such alignment has also been observed in gallium arsenide and thus is not unique to diamond.
Extrinsic defects
Various elemental analyses of diamond reveal a wide range of impurities. They mostly originate, however, from inclusions of foreign materials in diamond, which could be nanometer-small and invisible in an optical microscope. Also, virtually any element can be hammered into diamond by ion implantation. More essential are elements that can be introduced into the diamond lattice as isolated atoms (or small atomic clusters) during the diamond growth. As of 2008, those elements are nitrogen, boron, hydrogen, silicon, phosphorus, nickel, cobalt and perhaps sulfur. Manganese and tungsten have been unambiguously detected in diamond, but they might originate from foreign inclusions. Detection of isolated iron in diamond was later re-interpreted in terms of micro-particles of ruby produced during the diamond synthesis. Oxygen is believed to be a major impurity in diamond, but it has not been spectroscopically identified in diamond yet. Two electron paramagnetic resonance centers (OK1 and N3) were initially assigned to nitrogen–oxygen complexes, and later to titanium-related complexes. However, the assignment is indirect and the corresponding concentrations are rather low (a few parts per million).
Nitrogen
The most common impurity in diamond is nitrogen, which can comprise up to 1% of a diamond by mass. Previously, all lattice defects in diamond were thought to be the result of structural anomalies; later research revealed nitrogen to be present in most diamonds and in many different configurations. Most nitrogen enters the diamond lattice as a single atom (i.e. nitrogen-containing molecules dissociate before incorporation), however, molecular nitrogen incorporates into diamond as well.
Absorption of light and other material properties of diamond are highly dependent upon nitrogen content and aggregation state. Although all aggregate configurations cause absorption in the infrared, diamonds containing aggregated nitrogen are usually colorless, i.e. have little absorption in the visible spectrum. The four main nitrogen forms are as follows:
C-nitrogen center
The C center corresponds to electrically neutral single substitutional nitrogen atoms in the diamond lattice. These are easily seen in electron paramagnetic resonance spectra (in which they are confusingly called P1 centers). C centers impart a deep yellow to brown color; these diamonds are classed as type Ib and are commonly known as "canary diamonds", which are rare in gem form. Most synthetic diamonds produced by high-pressure high-temperature (HPHT) technique contain a high level of nitrogen in the C form; nitrogen impurity originates from the atmosphere or from the graphite source. One nitrogen atom per 100,000 carbon atoms will produce yellow color. Because the nitrogen atoms have five available electrons (one more than the carbon atoms they replace), they act as "deep donors"; that is, each substituting nitrogen has an extra electron to donate and forms a donor energy level within the band gap. Light with energy above ~2.2 eV can excite the donor electrons into the conduction band, resulting in the yellow color.
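A quick unit conversion (an illustrative calculation, not taken from the article) shows why a ~2.2 eV threshold produces a yellow color: the absorbed wavelengths are those shorter than about 564 nm, i.e. the green-to-violet part of the visible spectrum.

```python
# Convert the ~2.2 eV absorption threshold of the C center into a wavelength.
H_EV_S = 4.135667e-15  # Planck constant, eV*s
C_M_S = 2.998e8        # speed of light, m/s

threshold_ev = 2.2
wavelength_nm = H_EV_S * C_M_S / threshold_ev * 1e9
print(f"{wavelength_nm:.0f} nm")  # ~564 nm: light at this wavelength and shorter
# (green, blue, violet) is absorbed, so the transmitted light appears yellow.
```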
The C center produces a characteristic infrared absorption spectrum with a sharp peak at 1344 cm−1 and a broader feature at 1130 cm−1. Absorption at those peaks is routinely used to measure the concentration of single nitrogen. Another proposed way, using the UV absorption at ~260 nm, has later been discarded as unreliable.
Acceptor defects in diamond ionize the fifth nitrogen electron in the C center converting it into C+ center. The latter has a characteristic IR absorption spectrum with a sharp peak at 1332 cm−1 and broader and weaker peaks at 1115, 1046 and 950 cm−1.
A-nitrogen center
The A center is probably the most common defect in natural diamonds. It consists of a neutral nearest-neighbor pair of nitrogen atoms substituting for the carbon atoms. The A center produces a UV absorption threshold at ~4 eV (310 nm, i.e. invisible to the eye) and thus causes no coloration. Diamond containing nitrogen predominantly in the A form is classed as type IaA.
The A center is diamagnetic, but if ionized by UV light or deep acceptors, it produces an electron paramagnetic resonance spectrum W24, whose analysis unambiguously proves the N=N structure.
The A center shows an IR absorption spectrum with no sharp features, which is distinctly different from that of the C or B centers. Its strongest peak at 1282 cm−1 is routinely used to estimate the nitrogen concentration in the A form.
B-nitrogen center
There is a general consensus that B center (sometimes called B1) consists of a carbon vacancy surrounded by four nitrogen atoms substituting for carbon atoms. This model is consistent with other experimental results, but there is no direct spectroscopic data corroborating it. Diamonds where most nitrogen forms B centers are rare and are classed as type IaB; most gem diamonds contain a mixture of A and B centers, together with N3 centers.
Similar to the A centers, B centers do not induce color, and no UV or visible absorption can be attributed to the B centers. Early assignment of the N9 absorption system to the B center have been disproven later. The B center has a characteristic IR absorption spectrum (see the infrared absorption picture above) with a sharp peak at 1332 cm−1 and a broader feature at 1280 cm−1. The latter is routinely used to estimate the nitrogen concentration in the B form.
Many optical peaks in diamond accidentally have similar spectral positions, which causes much confusion among gemologists. Spectroscopists use the whole spectrum rather than one peak for defect identification and consider the history of the growth and processing of individual diamond.
N3 nitrogen center
The N3 center consists of three nitrogen atoms surrounding a vacancy. Its concentration is always just a fraction of the A and B centers. The N3 center is paramagnetic, so its structure is well justified from the analysis of the EPR spectrum P2. This defect produces a characteristic absorption and luminescence line at 415 nm and thus does not induce color on its own. However, the N3 center is always accompanied by the N2 center, having an absorption line at 478 nm (and no luminescence). As a result, diamonds rich in N3/N2 centers are yellow in color.
Boron
Diamonds containing boron as a substitutional impurity are termed type IIb. Only one percent of natural diamonds are of this type, and most are blue to grey. Boron is an acceptor in diamond: boron atoms have one less available electron than the carbon atoms; therefore, each boron atom substituting for a carbon atom creates an electron hole in the band gap that can accept an electron from the valence band. This allows red light absorption, and due to the small energy (0.37 eV) needed for the electron to leave the valence band, holes can be thermally released from the boron atoms to the valence band even at room temperature. These holes can move in an electric field and render the diamond electrically conductive (i.e., a p-type semiconductor). Very few boron atoms are required for this to happen—a typical ratio is one boron atom per 1,000,000 carbon atoms.
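For scale, the quoted ratio of one boron atom per 1,000,000 carbon atoms can be converted into an absolute concentration using the atomic density of diamond; the density and molar mass used below are standard textbook values, not figures from this article.

```python
# Convert "one boron atom per 1,000,000 carbon atoms" into atoms per cm^3.
AVOGADRO = 6.022e23    # atoms per mole
DENSITY = 3.52         # g/cm^3, approximate density of diamond
MOLAR_MASS_C = 12.011  # g/mol

carbon_atoms_per_cm3 = DENSITY / MOLAR_MASS_C * AVOGADRO  # ~1.8e23 atoms/cm^3
boron_atoms_per_cm3 = carbon_atoms_per_cm3 * 1e-6         # one boron per 10^6 carbons
print(f"{boron_atoms_per_cm3:.1e} boron atoms per cm^3")  # ~1.8e17 cm^-3
```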
Boron-doped diamonds transmit light down to ~250 nm and absorb some red and infrared light (hence the blue color); they may phosphoresce blue after exposure to shortwave ultraviolet light. Apart from optical absorption, boron acceptors have been detected by electron paramagnetic resonance.
Phosphorus
Phosphorus could be intentionally introduced into diamond grown by chemical vapor deposition (CVD) at concentrations up to ~0.01%. Phosphorus substitutes carbon in the diamond lattice. Similar to nitrogen, phosphorus has one more electron than carbon and thus acts as a donor; however, the ionization energy of phosphorus (0.6 eV) is much smaller than that of nitrogen (1.7 eV) and is small enough for room-temperature thermal ionization. This important property of phosphorus in diamond favors electronic applications, such as UV light-emitting diodes (LEDs, at 235 nm).
Hydrogen
Hydrogen is one of the most technologically important impurities in semiconductors, including diamond. Hydrogen-related defects are very different in natural diamond and in synthetic diamond films. Those films are produced by various chemical vapor deposition (CVD) techniques in an atmosphere rich in hydrogen (typical hydrogen/carbon ratio >100), under strong bombardment of the growing diamond by plasma ions. As a result, CVD diamond is always rich in hydrogen and lattice vacancies. In polycrystalline films, much of the hydrogen may be located at the boundaries between diamond 'grains', or in non-diamond carbon inclusions. Within the diamond lattice itself, hydrogen-vacancy and hydrogen-nitrogen-vacancy complexes have been identified in negative charge states by electron paramagnetic resonance. In addition, numerous hydrogen-related IR absorption peaks are documented.
It is experimentally demonstrated that hydrogen passivates electrically active boron and phosphorus impurities. As a result of such passivation, shallow donor centers are presumably produced.
In natural diamonds, several hydrogen-related IR absorption peaks are commonly observed; the strongest ones are located at 1405, 3107 and 3237 cm−1 (see IR absorption figure above). The microscopic structure of the corresponding defects is yet unknown and it is not even certain whether or not those defects originate in diamond or in foreign inclusions. Gray color in some diamonds from the Argyle mine in Australia is often associated with those hydrogen defects, but again, this assignment is yet unproven.
Nickel, cobalt and chromium
When diamonds are grown by the high-pressure high-temperature technique, nickel, cobalt, chromium or other metals are usually added to the growth medium to catalyze the conversion of graphite into diamond. As a result, metallic inclusions are formed. In addition, isolated nickel and cobalt atoms incorporate into the diamond lattice, as demonstrated by characteristic hyperfine structure in electron paramagnetic resonance, optical absorption and photoluminescence spectra; the concentration of isolated nickel can reach 0.01%. This is remarkable given the large difference in size between carbon and transition-metal atoms and the exceptional rigidity of the diamond lattice.
Numerous Ni-related defects have been detected by electron paramagnetic resonance, optical absorption and photoluminescence, in both synthetic and natural diamonds. Three major structures can be distinguished: substitutional Ni, a nickel-vacancy center, and a nickel-vacancy complex decorated by one or more substitutional nitrogen atoms. The "nickel-vacancy" structure, also called a "semi-divacancy", is typical of most large impurities in diamond and silicon (e.g., tin in silicon). Its generally accepted production mechanism is as follows: the large nickel atom incorporates substitutionally, then expels a nearby carbon atom (creating a neighboring vacancy), and shifts to a position midway between the two sites.
Although the physical and chemical properties of cobalt and nickel are rather similar, the concentrations of isolated cobalt in diamond are much smaller than those of nickel (parts-per-billion range). Several defects related to isolated cobalt have been detected by electron paramagnetic resonance and photoluminescence, but their structure is still unknown.
A chromium-related optical center was reported after ion implantation and subsequent annealing of type IIa synthetic diamonds. However, a subsequent study that repeated the annealing conditions without chromium implantation questioned the original attribution of the defect center to chromium.
Silicon, germanium, tin and lead
Silicon is a common impurity in diamond films grown by chemical vapor deposition; it originates either from the silicon substrate or from the silica windows or walls of the CVD reactor. It has also been observed in dispersed form in natural diamonds. Isolated silicon defects have been detected in the diamond lattice through a sharp optical absorption peak at 738 nm and by electron paramagnetic resonance. Similar to other large impurities, the major form of silicon in diamond has been identified as a Si-vacancy complex (semi-divacancy site). This center is a deep donor with an ionization energy of 2 eV, and thus is again unsuitable for electronic applications.
Si-vacancy centers constitute only a minor fraction of the total silicon. It is believed (though no proof exists) that much of the silicon substitutes for carbon, thus becoming invisible to most spectroscopic techniques because silicon and carbon atoms have the same configuration of their outer electronic shells.
Germanium, tin and lead are normally absent in diamond, but they can be introduced during the growth or by subsequent ion implantation. Those impurities can be detected optically via the germanium-vacancy, tin-vacancy and lead-vacancy centers, respectively, which have similar properties to those of the Si-vacancy center.
Similar to N-V centers, Si-V, Ge-V, Sn-V and Pb-V complexes all have potential applications in quantum computing.
Sulfur
Around the year 2000, there was a wave of attempts to dope synthetic CVD diamond films with sulfur, aiming at n-type conductivity with low activation energy. Successful reports were published, but they were later dismissed: the conductivity proved to be p-type rather than n-type and was associated not with sulfur but with residual boron, which is a highly efficient p-type dopant in diamond.
So far (2009), there is only one reliable piece of evidence (through hyperfine interaction structure in electron paramagnetic resonance) for isolated sulfur defects in diamond. The corresponding center, called W31, has been observed in natural type-Ib diamonds in small concentrations (parts per million). It was assigned to a sulfur-vacancy complex – again, as in the case of nickel and silicon, a semi-divacancy site.
Intrinsic defects
The easiest way to produce intrinsic defects in diamond is by displacing carbon atoms through irradiation with high-energy particles, such as alpha particles (helium nuclei), beta particles (electrons), gamma rays, protons, neutrons, ions, etc. The irradiation can occur in the laboratory or in nature (see Diamond enhancement – Irradiation); it produces primary defects named Frenkel defects (carbon atoms knocked off their normal lattice sites into interstitial sites) and the corresponding lattice vacancies. An important difference between vacancies and interstitials in diamond is that interstitials are mobile during irradiation, even at liquid-nitrogen temperatures, whereas vacancies start migrating only at temperatures of ~700 °C.
Vacancies and interstitials can also be produced in diamond by plastic deformation, though in much smaller concentrations.
Isolated carbon interstitial
The isolated interstitial has never been observed in diamond and is considered unstable. Its interaction with a regular carbon lattice atom produces a "split-interstitial", a defect in which two carbon atoms share a lattice site and are covalently bonded to their carbon neighbors. This defect has been thoroughly characterized by electron paramagnetic resonance (R2 center) and optical absorption, and, unlike most other defects in diamond, it does not produce photoluminescence.
Interstitial complexes
The isolated split-interstitial moves through the diamond crystal during irradiation. When it meets other interstitials, it aggregates into larger complexes of two and three split-interstitials, which have been identified by electron paramagnetic resonance (R1 and O3 centers), optical absorption and photoluminescence.
Vacancy-interstitial complexes
Most high-energy particles, besides displacing a carbon atom from its lattice site, also transfer to it enough surplus energy for rapid migration through the lattice. However, when relatively gentle gamma irradiation is used, this extra energy is minimal. The interstitials then remain near the original vacancies and form vacancy–interstitial pairs, which have been identified through optical absorption.
Vacancy-di-interstitial pairs have also been produced, though by electron irradiation and through a different mechanism: individual interstitials migrate during the irradiation and aggregate to form di-interstitials; this process occurs preferentially near the lattice vacancies.
Isolated vacancy
The isolated vacancy is the most studied defect in diamond, both experimentally and theoretically. Its most important practical property is optical absorption which, as in other color centers, gives diamond a green, or sometimes even green–blue, color (in pure diamond). The characteristic feature of this absorption is a series of sharp lines called GR1–GR8, of which the GR1 line at 741 nm is the most prominent and important.
The vacancy behaves as a deep electron donor/acceptor, whose electronic properties depend on the charge state. The energy level for the +/0 states is at 0.6 eV and for the 0/- states is at 2.5 eV above the valence band.
Multivacancy complexes
Upon annealing of pure diamond at ~700 °C, vacancies migrate and form divacancies, characterized by optical absorption and electron paramagnetic resonance.
Similar to single interstitials, divacancies do not produce photoluminescence. Divacancies, in turn, anneal out at ~900 °C, creating multivacancy chains detected by EPR and, presumably, hexavacancy rings. The latter should be invisible to most spectroscopies, and indeed they have not been detected thus far. Annealing of vacancies changes the diamond color from green to yellow-brown. A similar mechanism (vacancy aggregation) is also believed to cause the brown color of plastically deformed natural diamonds.
Dislocations
Dislocations are the most common structural defect in natural diamond. The two major types of dislocations are the glide set, in which bonds break between layers of atoms with different indices (those not lying directly above each other), and the shuffle set, in which the breaks occur between atoms of the same index. The dislocations produce dangling bonds, which introduce energy levels into the band gap and enable the absorption of light. Broadband blue photoluminescence has been reliably identified with dislocations by direct observation in an electron microscope; however, not all dislocations are luminescent, and there is no correlation between the dislocation type and the parameters of the emission.
Platelets
Most natural diamonds contain extended planar defects in the {100} lattice planes, which are called "platelets". Their size ranges from nanometers to many micrometers, and large ones are easily observed in an optical microscope via their luminescence. For a long time, platelets were tentatively associated with large nitrogen complexes (nitrogen sinks produced as a result of nitrogen aggregation at the high temperatures of diamond synthesis). However, direct measurement of nitrogen in the platelets by EELS (an analytical technique of electron microscopy) revealed very little nitrogen. The currently accepted model of platelets is a large, regular array of carbon interstitials.
Platelets produce sharp absorption peaks at 1359–1375 and 330 cm−1 in IR absorption spectra; remarkably, the position of the first peak depends on the platelet size. As with dislocations, a broad photoluminescence centered at ~1000 nm was associated with platelets by direct observation in an electron microscope. By studying this luminescence, it was deduced that platelets have a "bandgap" of ~1.7 eV.
Voidites
Voidites are octahedral nanometer-sized clusters present in many natural diamonds, as revealed by electron microscopy. Laboratory experiments demonstrated that annealing of type-IaB diamond at high temperatures and pressures (>2600 °C) results in break-up of the platelets and formation of dislocation loops and voidites, i.e. that voidites are a result of the thermal degradation of platelets. In contrast to platelets, voidites do contain much nitrogen, in molecular form.
Interaction between intrinsic and extrinsic defects
Extrinsic and intrinsic defects can interact, producing new defect complexes. Such interaction usually occurs if a diamond containing extrinsic defects (impurities) is either plastically deformed or irradiated and annealed.
Most important is the interaction of vacancies and interstitials with nitrogen. Carbon interstitials react with substitutional nitrogen producing a bond-centered nitrogen interstitial showing strong IR absorption at 1450 cm−1. Vacancies are efficiently trapped by the A, B and C nitrogen centers. The trapping rate is the highest for the C centers, 8 times lower for the A centers and 30 times lower for the B centers. The C center (single nitrogen) by trapping a vacancy forms the famous nitrogen-vacancy center, which can be neutral or negatively charged; the negatively charged state has potential applications in quantum computing. A and B centers upon trapping a vacancy create corresponding 2N-V (H3 and H2 centers, where H2 is simply a negatively charged H3 center) and the neutral 4N-2V (H4 center). The H2, H3 and H4 centers are important because they are present in many natural diamonds and their optical absorption can be strong enough to alter the diamond color (H3 or H4 – yellow, H2 – green).
Boron interacts with carbon interstitials forming a neutral boron–interstitial complex with a sharp optical absorption at 0.552 eV (2250 nm). No evidence is known so far (2009) for complexes of boron and vacancy.
In contrast, silicon does react with vacancies, creating the optical absorption at 738 nm described above. The assumed mechanism is trapping of a migrating vacancy by substitutional silicon, resulting in the Si-V (semi-divacancy) configuration.
A similar mechanism is expected for nickel, for which both substitutional and semi-divacancy configurations are reliably identified (see subsection "nickel and cobalt" above). In an unpublished study, diamonds rich in substitutional nickel were electron irradiated and annealed, with careful optical measurements performed after each annealing step, but no evidence for the creation or enhancement of Ni-vacancy centers was obtained.
See also
Chemical vapor deposition of diamond
Crystallographic defect
Diamond color
Diamond enhancement
Gemstone irradiation
Material properties of diamond
Nitrogen-vacancy center
Synthetic diamond
References
Diamond
Crystallographic defects | Crystallographic defects in diamond | [
"Chemistry",
"Materials_science",
"Engineering"
] | 5,211 | [
"Crystallographic defects",
"Crystallography",
"Materials degradation",
"Materials science"
] |
2,212,817 | https://en.wikipedia.org/wiki/Umklapp%20scattering | In crystalline materials, Umklapp scattering (also U-process or Umklapp process) is a scattering process that results in a wave vector (usually written k) which falls outside the first Brillouin zone. If a material is periodic, it has a Brillouin zone, and any point outside the first Brillouin zone can also be expressed as a point inside the zone. So, the wave vector is then mathematically transformed to a point inside the first Brillouin zone. This transformation allows for scattering processes which would otherwise violate the conservation of momentum: two wave vectors pointing to the right can combine to create a wave vector that points to the left. This non-conservation is why crystal momentum is not a true momentum.
Examples include electron-lattice potential scattering or an anharmonic phonon-phonon (or electron-phonon) scattering process, reflecting an electronic state or creating a phonon with a momentum k-vector outside the first Brillouin zone. Umklapp scattering is one process limiting the thermal conductivity in crystalline materials, the others being phonon scattering on crystal defects and at the surface of the sample.
The left panel of Figure 1 schematically shows the possible scattering processes of two incoming phonons with wave-vectors (k-vectors) k1 and k2 (red) creating one outgoing phonon with a wave vector k3 (blue). As long as the sum of k1 and k2 stay inside the first Brillouin zone (grey squares), k3 is the sum of the former two, thus conserving phonon momentum. This process is called normal scattering (N-process).
With increasing phonon momentum and thus larger wave vectors k1 and k2, their sum might point outside the first Brillouin zone (k'3). As shown in the right panel of Figure 1, k-vectors outside the first Brillouin zone are physically equivalent to vectors inside it and can be mathematically transformed into each other by the addition of a reciprocal lattice vector G. These processes are called Umklapp scattering and change the total phonon momentum.
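As a concrete illustration of this folding, the sketch below maps the sum of two one-dimensional phonon wave vectors back into the first Brillouin zone by subtracting a reciprocal lattice vector G. The lattice constant, the particular wave vectors and the helper function name are all illustrative assumptions, not part of the original description.

```python
import numpy as np

a = 1.0              # assumed lattice constant
G = 2 * np.pi / a    # magnitude of the shortest reciprocal lattice vector

def fold_into_first_bz(k):
    """Map a 1-D wave vector onto the first Brillouin zone [-G/2, G/2)."""
    return (k + G / 2) % G - G / 2

k1, k2 = 0.8 * np.pi, 0.7 * np.pi   # two incoming phonons near the zone boundary
k3_unfolded = k1 + k2               # lies outside the first Brillouin zone
k3 = fold_into_first_bz(k3_unfolded)

# The difference k3 - k3_unfolded equals -G: an Umklapp (U) process.
print(k3_unfolded, k3, k3 - k3_unfolded)
```

When k1 + k2 stays inside the zone, the function returns it unchanged, which corresponds to a normal (N) process.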
Umklapp scattering is the dominant process for electrical resistivity at low temperatures for low defect crystals (as opposed to phonon-electron scattering, which dominates at high temperatures, and high-defect lattices which lead to scattering at any temperature.)
Umklapp scattering is the dominant process for thermal resistivity at high temperatures for low defect crystals. The thermal conductivity for an insulating crystal where the U-processes are dominant has 1/T dependence.
History
The name derives from the German word umklappen (to turn over). Rudolf Peierls, in his autobiography Bird of Passage, states that he originated this phrase and coined it during his 1929 crystal lattice studies under the tutelage of Wolfgang Pauli. Peierls wrote, "…I used the German term Umklapp (flip-over) and this rather ugly word has remained in use…".
The term Umklapp already appears in Wilhelm Lenz's 1920 paper, the seed paper of the Ising model.
See also
Sampling theorem
References
Scattering | Umklapp scattering | [
"Physics",
"Chemistry",
"Materials_science"
] | 657 | [
"Condensed matter physics",
"Scattering",
"Particle physics",
"Nuclear physics"
] |
2,212,867 | https://en.wikipedia.org/wiki/Detailed%20balance | The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions). It states that at equilibrium, each elementary process is in equilibrium with its reverse process.
History
The principle of detailed balance was explicitly introduced for collisions by Ludwig Boltzmann. In 1872, he proved his H-theorem using this principle. The arguments in favor of this property are founded upon microscopic reversibility.
Five years before Boltzmann, James Clerk Maxwell used the principle of detailed balance for gas kinetics with reference to the principle of sufficient reason. He compared the idea of detailed balance with other types of balancing (like cyclic balance) and found that "Now it is impossible to assign a reason" why detailed balance should be rejected (p. 64).
In 1901, Rudolf Wegscheider introduced the principle of detailed balance for chemical kinetics. In particular, he demonstrated that the irreversible cycles A1 -> A2 -> ... -> An -> A1 are impossible and found explicitly the relations between kinetic constants that follow from the principle of detailed balance. In 1931, Lars Onsager used these relations in his works, for which he was awarded the 1968 Nobel Prize in Chemistry.
Albert Einstein in 1916 used the principle of detailed balance in a background for his quantum theory of emission and absorption of radiation.
The principle of detailed balance has been used in Markov chain Monte Carlo methods since their invention in 1953. In particular, in the Metropolis–Hastings algorithm and in its important particular case, Gibbs sampling, it is used as a simple and reliable condition to provide the desirable equilibrium state.
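A minimal sketch of how detailed balance enters the Metropolis algorithm is given below. The three-state target distribution, the symmetric proposal and the sample size are arbitrary assumptions chosen for illustration; the point is only that the acceptance rule min(1, πj/πi) makes the chain satisfy πi Pij = πj Pji, so the empirical state frequencies converge to π.

```python
import random

pi = [0.5, 0.3, 0.2]        # assumed target (equilibrium) distribution
state, counts = 0, [0, 0, 0]
random.seed(0)

for _ in range(100_000):
    proposal = random.randrange(3)                    # symmetric proposal
    # Metropolis acceptance rule: enforces detailed balance with respect to pi
    if random.random() < min(1.0, pi[proposal] / pi[state]):
        state = proposal
    counts[state] += 1

print([round(c / sum(counts), 3) for c in counts])    # approaches pi
```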
Now, the principle of detailed balance is a standard part of the university courses in statistical mechanics, physical chemistry, chemical and physical kinetics.
Microscopic background
The microscopic "reversing of time" turns at the kinetic level into the "reversing of arrows": the elementary processes transform into their reverse processes. For example, the reaction
transforms into
and conversely. (Here, are symbols of components or states, are coefficients). The equilibrium ensemble should be invariant with respect to this transformation because of microreversibility and the uniqueness of thermodynamic equilibrium. This leads us immediately to the concept of detailed balance: each process is equilibrated by its reverse process.
This reasoning is based on three assumptions:
The microscopic dynamics do not change under time reversal;
Equilibrium is invariant under time reversal;
The macroscopic elementary processes are microscopically distinguishable. That is, they represent disjoint sets of microscopic events.
Any of these assumptions may be violated. For example, Boltzmann's collision can be represented as Av + Aw -> Av′ + Aw′, where Av is a particle with velocity v. Under time reversal, Av transforms into A−v. Therefore, the collision is transformed into the reverse collision by the PT transformation, where P is the space inversion and T is the time reversal. Detailed balance for Boltzmann's equation requires PT-invariance of the collisions' dynamics, not just T-invariance. Indeed, after time reversal the collision Av + Aw -> Av′ + Aw′ transforms into A−v′ + A−w′ -> A−v + A−w. For detailed balance we need the transformation into Av′ + Aw′ -> Av + Aw.
For this purpose, we need to apply additionally the space reversal P. Therefore, for detailed balance in Boltzmann's equation not T-invariance but PT-invariance is needed.
Equilibrium may be not T- or PT-invariant even if the laws of motion are invariant. This non-invariance may be caused by the spontaneous symmetry breaking. There exist nonreciprocal media (for example, some bi-isotropic materials) without T and PT invariance.
If different macroscopic processes are sampled from the same elementary microscopic events then macroscopic detailed balance may be violated even when microscopic detailed balance holds.
Now, after almost 150 years of development, the scope of validity and the violations of detailed balance in kinetics seem to be clear.
Detailed balance
Reversibility
A Markov process is called a reversible Markov process or reversible Markov chain if there exists a positive stationary distribution π that satisfies the detailed balance equations
πi Pij = πj Pji,
where Pij is the Markov transition probability from state i to state j, i.e. Pij = P(Xt = j | Xt−1 = i), and πi and πj are the equilibrium probabilities of being in states i and j, respectively. When Pr(Xt−1 = i) = πi for all i, this is equivalent to the joint probability matrix Pr(Xt−1 = i, Xt = j) being symmetric in i and j; or symmetric in t−1 and t.
The definition carries over straightforwardly to continuous variables, where π becomes a probability density and P(s′, s) a transition kernel probability density from state s′ to state s:
π(s′) P(s′, s) = π(s) P(s, s′).
The detailed balance condition is stronger than that required merely for a stationary distribution, because there are Markov processes with stationary distributions that do not have detailed balance.
Transition matrices that are symmetric (Pij = Pji, or P(s′, s) = P(s, s′) in the continuous case) always have detailed balance. In these cases, a uniform distribution over the states is an equilibrium distribution.
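The condition is easy to test numerically. The sketch below checks πi Pij = πj Pji for a hypothetical symmetric 3-state transition matrix with the uniform stationary distribution; the matrix entries are made up for the example.

```python
import numpy as np

# Hypothetical symmetric, row-stochastic transition matrix
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
pi = np.array([1/3, 1/3, 1/3])   # uniform distribution is stationary here

flux = pi[:, None] * P           # flux[i, j] = pi_i * P_ij
print(np.allclose(flux, flux.T)) # True: the chain satisfies detailed balance
```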
Kolmogorov's criterion
Reversibility is equivalent to Kolmogorov's criterion: the product of transition rates over any closed loop of states is the same in both directions.
In particular, it implies that, for all states a, b and c,
P(a, b) P(b, c) P(c, a) = P(a, c) P(c, b) P(b, a).
For example, if we have a Markov chain with three states such that only the transitions a -> b -> c -> a are possible, then the chain violates Kolmogorov's criterion.
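A direct check of this loop condition for a hypothetical three-state chain is sketched below; the rate constants are invented for illustration and happen to satisfy the criterion.

```python
# Transition rates of a hypothetical 3-state chain, k[(i, j)] for i -> j
k = {(1, 2): 2.0, (2, 3): 1.0, (3, 1): 0.5,
     (2, 1): 1.0, (3, 2): 2.0, (1, 3): 0.5}

forward = k[(1, 2)] * k[(2, 3)] * k[(3, 1)]   # product around the loop 1 -> 2 -> 3 -> 1
backward = k[(1, 3)] * k[(3, 2)] * k[(2, 1)]  # product around the reversed loop

# Kolmogorov's criterion: the two products must coincide for every loop
print(forward, backward, abs(forward - backward) < 1e-12)
```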
Closest reversible Markov chain
For continuous systems with detailed balance, it may be possible to continuously transform the coordinates until the equilibrium distribution is uniform, with a transition kernel which then is symmetric. In the case of discrete states, it may be possible to achieve something similar by breaking the Markov states into appropriately-sized degenerate sub-states.
For a Markov transition matrix and a stationary distribution, the detailed balance equations may not be valid. However, it can be shown that a unique Markov transition matrix exists which is closest according to the stationary distribution and a given norm. The closest matrix can be computed by solving a quadratic-convex optimization problem.
Detailed balance and entropy increase
For many systems of physical and chemical kinetics, detailed balance provides sufficient conditions for the strict increase of entropy in isolated systems. For example, the famous Boltzmann H-theorem states that, according to the Boltzmann equation, the principle of detailed balance implies positivity of entropy production. The Boltzmann formula (1872) for entropy production in rarefied gas kinetics with detailed balance served as a prototype of many similar formulas for dissipation in mass action kinetics and generalized mass action kinetics with detailed balance.
Nevertheless, the principle of detailed balance is not necessary for entropy growth. For example, in the linear irreversible cycle A1 -> A2 -> A3 -> A1, entropy production is positive but the principle of detailed balance does not hold.
Thus, the principle of detailed balance is a sufficient but not necessary condition for entropy increase in Boltzmann kinetics. These relations between the principle of detailed balance and the second law of thermodynamics were clarified in 1887 when Hendrik Lorentz objected to the Boltzmann H-theorem for polyatomic gases. Lorentz stated that the principle of detailed balance is not applicable to collisions of polyatomic molecules.
Boltzmann immediately invented a new, more general condition sufficient for entropy growth. Boltzmann's condition holds for all Markov processes, irrespective of time-reversibility. Later, entropy increase was proved for all Markov processes by a direct method. These theorems may be considered as simplifications of the Boltzmann result. Later, this condition was referred to as the "cyclic balance" condition (because it holds for irreversible cycles) or the "semi-detailed balance" or the "complex balance". In 1981, Carlo Cercignani and Maria Lampis proved that the Lorentz arguments were wrong and the principle of detailed balance is valid for polyatomic molecules. Nevertheless, the extended semi-detailed balance conditions invented by Boltzmann in this discussion remain the remarkable generalization of the detailed balance.
Wegscheider's conditions for the generalized mass action law
In chemical kinetics, the elementary reactions are represented by the stoichiometric equations
αr1A1 + αr2A2 + ... -> βr1A1 + βr2A2 + ...,
where Ai are the components and αri, βri ≥ 0 are the stoichiometric coefficients. Here, the reverse reactions with positive constants are included in the list separately. We need this separation of direct and reverse reactions to apply later the general formalism to the systems with some irreversible reactions. The system of stoichiometric equations of elementary reactions is the reaction mechanism.
The stoichiometric matrix is Γ = (γri), γri = βri − αri (gain minus loss). This matrix need not be square. The stoichiometric vector γr is the rth row of Γ, with coordinates γri = βri − αri.
According to the generalized mass action law, the reaction rate for an elementary reaction is
wr = kr ∏i ai^αri,
where ai is the activity (the "effective concentration") of Ai.
The reaction mechanism includes reactions with the reaction rate constants kr > 0. For each r the following notations are used: kr+ = kr; wr+ = wr; kr− is the reaction rate constant for the reverse reaction if it is in the reaction mechanism and 0 if it is not; wr− is the reaction rate for the reverse reaction if it is in the reaction mechanism and 0 if it is not. For a reversible reaction, Kr = kr+/kr− is the equilibrium constant.
The principle of detailed balance for the generalized mass action law is: for given values kr there exists a positive equilibrium (ai^eq > 0) that satisfies detailed balance, that is, wr+ = wr− at this equilibrium. This means that the system of linear detailed balance equations
∑i γri xi = ln kr+ − ln kr−
is solvable (xi = ln ai^eq). The following classical result gives the necessary and sufficient conditions for the existence of a positive equilibrium with detailed balance (see, for example, the textbook).
Two conditions are sufficient and necessary for solvability of the system of detailed balance equations:
If kr+ > 0 then kr− > 0 and, conversely, if kr− > 0 then kr+ > 0 (reversibility);
For any solution λ = (λr) of the system
∑r λr γr = 0
the Wegscheider identity holds:
∏r (kr+)^λr = ∏r (kr−)^λr.
Remark. It is sufficient to use in the Wegscheider conditions a basis of solutions of the system ∑r λr γr = 0.
In particular, for any cycle in the monomolecular (linear) reactions the product of the reaction rate constants in the clockwise direction is equal to the product of the reaction rate constants in the counterclockwise direction. The same condition is valid for the reversible Markov processes (it is equivalent to the "no net flow" condition).
A simple nonlinear example gives us a linear cycle supplemented by one nonlinear step:
A1 <=> A2
A2 <=> A3
A3 <=> A1
{A1}+A2 <=> 2A3
There are two nontrivial independent Wegscheider identities for this system:
k1+ k2+ k3+ = k1− k2− k3−
and
k2+ k3− k4− = k2− k3+ k4+.
They correspond to the following linear relations between the stoichiometric vectors:
γ1 + γ2 + γ3 = 0
and
γ2 − γ3 − γ4 = 0.
The computational aspect of the Wegscheider conditions was studied by D. Colquhoun with co-authors.
The Wegscheider conditions demonstrate that whereas the principle of detailed balance states a local property of equilibrium, it implies the relations between the kinetic constants that are valid for all states far from equilibrium. This is possible because a kinetic law is known and relations between the rates of the elementary processes at equilibrium can be transformed into relations between kinetic constants which are used globally. For the Wegscheider conditions this kinetic law is the law of mass action (or the generalized law of mass action).
Dissipation in systems with detailed balance
To describe the dynamics of systems that obey the generalized mass action law, one has to represent the activities as functions of the concentrations cj and temperature. For this purpose, use the representation of the activity through the chemical potential:
ai = exp((μi − μi°)/RT),
where μi is the chemical potential of the species under the conditions of interest, μi° is the chemical potential of that species in the chosen standard state, R is the gas constant and T is the thermodynamic temperature.
The chemical potential can be represented as a function of c and T, where c is the vector of concentrations with components cj. For ideal systems, μi = μi° + RT ln ci and ai = ci: the activity is the concentration and the generalized mass action law is the usual law of mass action.
Consider a system in isothermal (T = const) isochoric (volume V = const) conditions. For these conditions, the Helmholtz free energy F(T, V, N) measures the "useful" work obtainable from a system. It is a function of the temperature T, the volume V and the amounts of the chemical components Nj (usually measured in moles); N is the vector with components Nj. For ideal systems,
F = ∑j Nj (RT ln(Nj/V) − RT + μj°(T)).
The chemical potential is a partial derivative: μj = ∂F(T, V, N)/∂Nj.
The chemical kinetic equations are
If the principle of detailed balance is valid then for any value of T there exists a positive point of detailed balance ceq:
Elementary algebra gives
where
For the dissipation we obtain from these formulas:
The inequality holds because ln is a monotone function and, hence, the expressions and have always the same sign.
Similar inequalities are valid for other classical conditions for the closed systems and the corresponding characteristic functions: for isothermal isobaric conditions the Gibbs free energy decreases, for the isochoric systems with the constant internal energy (isolated systems) the entropy increases as well as for isobaric systems with the constant enthalpy.
Onsager reciprocal relations and detailed balance
Let the principle of detailed balance be valid. Then, for small deviations from equilibrium, the kinetic response of the system can be approximated as linearly related to its deviation from chemical equilibrium, giving the reaction rates for the generalized mass action law as:
Therefore, again in the linear response regime near equilibrium, the kinetic equations are ():
This is exactly the Onsager form: following the original work of Onsager, we should introduce the thermodynamic forces and the matrix of coefficients in the form
The coefficient matrix is symmetric:
These symmetry relations, , are exactly the Onsager reciprocal relations. The coefficient matrix is non-positive. It is negative on the linear span of the stoichiometric vectors .
So, the Onsager relations follow from the principle of detailed balance in the linear approximation near equilibrium.
Semi-detailed balance
To formulate the principle of semi-detailed balance, it is convenient to count the direct and inverse elementary reactions separately. In this case, the kinetic equations have the form:
Let us use the notations , for the input and the output vectors of the stoichiometric coefficients of the rth elementary reaction. Let be the set of all these vectors .
For each , let us define two sets of numbers:
if and only if is the vector of the input stoichiometric coefficients for the rth elementary reaction; if and only if is the vector of the output stoichiometric coefficients for the rth elementary reaction.
The principle of semi-detailed balance means that in equilibrium the semi-detailed balance condition holds: for every
The semi-detailed balance condition is sufficient for the stationarity: it implies that
For the Markov kinetics the semi-detailed balance condition is just the elementary balance equation and holds for any steady state. For the nonlinear mass action law it is, in general, a sufficient but not necessary condition for stationarity.
The semi-detailed balance condition is weaker than the detailed balance one: if the principle of detailed balance holds then the condition of semi-detailed balance also holds.
For systems that obey the generalized mass action law the semi-detailed balance condition is sufficient for the dissipation inequality (for the Helmholtz free energy under isothermal isochoric conditions and for the dissipation inequalities under other classical conditions for the corresponding thermodynamic potentials).
Boltzmann introduced the semi-detailed balance condition for collisions in 1887 and proved that it guarantees the positivity of entropy production. For chemical kinetics, this condition (as the complex balance condition) was introduced by Horn and Jackson in 1972.
The microscopic backgrounds for the semi-detailed balance were found in the Markov microkinetics of the intermediate compounds that are present in small amounts and whose concentrations are in quasiequilibrium with the main components. Under these microscopic assumptions, the semi-detailed balance condition is just the balance equation for the Markov microkinetics according to the Michaelis–Menten–Stueckelberg theorem.
Dissipation in systems with semi-detailed balance
Let us represent the generalized mass action law in the equivalent form: the rate of the elementary process
is
where is the chemical potential and is the Helmholtz free energy. The exponential term is called the Boltzmann factor and the multiplier is the kinetic factor.
Let us count the direct and reverse reaction in the kinetic equation separately:
An auxiliary function of one variable is convenient for the representation of dissipation for the mass action law
This function may be considered as the sum of the reaction rates for deformed input stoichiometric coefficients . For it is just the sum of the reaction rates. The function is convex because .
Direct calculation gives that according to the kinetic equations
This is the general dissipation formula for the generalized mass action law.
Convexity of gives the sufficient and necessary conditions for the proper dissipation inequality:
The semi-detailed balance condition can be transformed into identity . Therefore, for the systems with semi-detailed balance .
Cone theorem and local equivalence of detailed and complex balance
For any reaction mechanism and a given positive equilibrium a cone of possible velocities for the systems with detailed balance is defined for any non-equilibrium state N
where cone stands for the conical hull and the piecewise-constant functions do not depend on (positive) values of equilibrium reaction rates and are defined by thermodynamic quantities under assumption of detailed balance.
The cone theorem states that for the given reaction mechanism and given positive equilibrium, the velocity (dN/dt) at a state N for a system with complex balance belongs to the cone . That is, there exists a system with detailed balance, the same reaction mechanism, the same positive equilibrium, that gives the same velocity at state N. According to cone theorem, for a given state N, the set of velocities of the semidetailed balance systems coincides with the set of velocities of the detailed balance systems if their reaction mechanisms and equilibria coincide. This means local equivalence of detailed and complex balance.
Detailed balance for systems with irreversible reactions
Detailed balance states that in equilibrium each elementary process is equilibrated by its reverse process and requires reversibility of all elementary processes. For many real physico-chemical complex systems (e.g. homogeneous combustion, heterogeneous catalytic oxidation, most enzyme reactions, etc.), detailed mechanisms include both reversible and irreversible reactions. If one represents irreversible reactions as limits of reversible steps, then it becomes obvious that not all reaction mechanisms with irreversible reactions can be obtained as limits of systems of reversible reactions with detailed balance. For example, the irreversible cycle A1 -> A2 -> A3 -> A1 cannot be obtained as such a limit, but the reaction mechanism A1 -> A2 -> A3 <- A1 can.
Gorban–Yablonsky theorem. A system of reactions with some irreversible reactions is a limit of systems with detailed balance when some constants tend to zero if and only if (i) the reversible part of this system satisfies the principle of detailed balance and (ii) the convex hull of the stoichiometric vectors of the irreversible reactions has empty intersection with the linear span of the stoichiometric vectors of the reversible reactions. Physically, the last condition means that the irreversible reactions cannot be included in oriented cyclic pathways.
See also
T-symmetry
Microscopic reversibility
Master equation
Balance equation
Gibbs sampling
Metropolis–Hastings algorithm
Atomic spectral line (deduction of the Einstein coefficients)
Random walks on graphs
References
Non-equilibrium thermodynamics
Statistical mechanics
Markov models
Chemical kinetics | Detailed balance | [
"Physics",
"Chemistry",
"Mathematics"
] | 4,096 | [
"Chemical reaction engineering",
"Non-equilibrium thermodynamics",
"Statistical mechanics",
"Chemical kinetics",
"Dynamical systems"
] |
2,213,942 | https://en.wikipedia.org/wiki/Anderson%20localization | In condensed matter physics, Anderson localization (also known as strong localization) is the absence of diffusion of waves in a disordered medium. This phenomenon is named after the American physicist P. W. Anderson, who was the first to suggest that electron localization is possible in a lattice potential, provided that the degree of randomness (disorder) in the lattice is sufficiently large, as can be realized for example in a semiconductor with impurities or defects.
Anderson localization is a general wave phenomenon that applies to the transport of electromagnetic waves, acoustic waves, quantum waves, spin waves, etc. This phenomenon is to be distinguished from weak localization, which is the precursor effect of Anderson localization (see below), and from Mott localization, named after Sir Nevill Mott, where the transition from metallic to insulating behaviour is not due to disorder, but to a strong mutual Coulomb repulsion of electrons.
Introduction
In the original Anderson tight-binding model, the evolution of the wave function ψ on the d-dimensional lattice Zd is given by the Schrödinger equation
iħ ∂ψ/∂t = Hψ,
where the Hamiltonian H is given by
(Hψ)(j) = Ej ψ(j) + ∑k≠j V(j − k) ψ(k),
where j, k are lattice locations. The self-energies Ej are taken as random and independently distributed, and the interaction potential V is required to fall off sufficiently fast with distance. For example, one may take the Ej uniformly distributed within a band of energies [−W, +W] and V restricted to nearest-neighbour hopping.
Starting with ψ localized at the origin, one is interested in how fast the probability distribution |ψ|² diffuses. Anderson's analysis shows the following:
If d is 1 or 2 and the disorder strength W is arbitrary, or if d ≥ 3 and W is sufficiently large, then the probability distribution remains localized: its spatial moments stay bounded uniformly in time. This phenomenon is called Anderson localization.
If d ≥ 3 and W is small, the probability distribution spreads diffusively, its mean squared width growing as Dt, where D is the diffusion constant.
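A minimal numerical sketch of the tight-binding model just described is given below for d = 1: diagonal disorder drawn uniformly from [−W, W], nearest-neighbour hopping t = 1, and the inverse participation ratio (IPR) of the eigenstates as a rough localization measure. The chain length, disorder values and random seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, t = 400, 1.0                      # chain length and hopping amplitude (assumed)

for W in (0.5, 5.0):                 # weak and strong disorder
    eps = rng.uniform(-W, W, size=N)                     # random on-site energies
    H = (np.diag(eps)
         + np.diag(-t * np.ones(N - 1), 1)
         + np.diag(-t * np.ones(N - 1), -1))
    _, vecs = np.linalg.eigh(H)
    # Inverse participation ratio: ~1/N for extended states, O(1) for localized ones
    ipr = np.sum(np.abs(vecs) ** 4, axis=0)
    print(f"W = {W}: mean IPR = {ipr.mean():.3f}")
```

The mean IPR rises markedly with W, reflecting the shrinking localization length; in 1D all states are in fact localized for any disorder, but the effect only becomes obvious once the localization length drops below the chain length.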
Analysis
The phenomenon of Anderson localization, particularly that of weak localization, finds its origin in the wave interference between multiple-scattering paths. In the strong scattering limit, the severe interferences can completely halt the waves inside the disordered medium.
For non-interacting electrons, a highly successful approach was put forward in 1979 by Abrahams et al. This scaling hypothesis of localization suggests that a disorder-induced metal-insulator transition (MIT) exists for non-interacting electrons in three dimensions (3D) at zero magnetic field and in the absence of spin-orbit coupling. Much further work has subsequently supported these scaling arguments both analytically and numerically (Brandes et al., 2003; see Further Reading). In 1D and 2D, the same hypothesis shows that there are no extended states and thus no MIT or only an apparent MIT. However, since 2 is the lower critical dimension of the localization problem, the 2D case is in a sense close to 3D: states are only marginally localized for weak disorder and a small spin-orbit coupling can lead to the existence of extended states and thus an MIT. Consequently, the localization lengths of a 2D system with potential-disorder can be quite large so that in numerical approaches one can always find a localization-delocalization transition when either decreasing system size for fixed disorder or increasing disorder for fixed system size.
Most numerical approaches to the localization problem use the standard tight-binding Anderson Hamiltonian with onsite-potential disorder. Characteristics of the electronic eigenstates are then investigated by studies of participation numbers obtained by exact diagonalization, multifractal properties, level statistics and many others. Especially fruitful is the transfer-matrix method (TMM) which allows a direct computation of the localization lengths and further validates the scaling hypothesis by a numerical proof of the existence of a one-parameter scaling function. Direct numerical solution of Maxwell equations to demonstrate Anderson localization of light has been implemented (Conti and Fratalocchi, 2008).
Recent work has shown that a non-interacting Anderson localized system can become many-body localized even in the presence of weak interactions. This result has been rigorously proven in 1D, while perturbative arguments exist even for two and three dimensions.
Experimental evidence
Anderson localization can be observed in a perturbed periodic potential where the transverse localization of light is caused by random fluctuations on a photonic lattice. Experimental realizations of transverse localization were reported for a 2D lattice (Schwartz et al., 2007) and a 1D lattice (Lahini et al., 2006). Transverse Anderson localization of light has also been demonstrated in an optical fiber medium (Karbasi et al., 2012) and a biological medium (Choi et al., 2018), and has also been used to transport images through the fiber (Karbasi et al., 2014). It has also been observed by localization of a Bose–Einstein condensate in a 1D disordered optical potential (Billy et al., 2008; Roati et al., 2008).
In 3D, observations are more rare. Anderson localization of elastic waves in a 3D disordered medium has been reported (Hu et al., 2008). The observation of the MIT has been reported in a 3D model with atomic matter waves (Chabé et al., 2008). The MIT, associated with the nonpropagative electron waves has been reported in a cm-sized crystal (Ying et al., 2016). Random lasers can operate using this phenomenon.
The existence of Anderson localization for light in 3D was debated for years (Skipetrov et al., 2016) and remains unresolved today. Reports of Anderson localization of light in 3D random media were complicated by the competing/masking effects of absorption (Wiersma et al., 1997; Storzer et al., 2006; Scheffold et al., 1999; see Further Reading) and/or fluorescence (Sperling et al., 2016). Recent experiments (Naraghi et al., 2016; Cobus et al., 2023) support theoretical predictions that the vector nature of light prohibits the transition to Anderson localization (John, 1992; Skipetrov et al., 2019).
Comparison with diffusion
Standard diffusion has no localization property, being in disagreement with quantum predictions. However, it turns out that it is based on approximation of the principle of maximum entropy, which says that the probability distribution which best represents the current state of knowledge is the one with largest entropy. This approximation is repaired in maximal entropy random walk, also repairing the disagreement: it turns out to lead to exactly the quantum ground state stationary probability distribution with its strong localization properties.
See also
Aubry–André model
Notes
Further reading
External links
Fifty years of Anderson localization, Ad Lagendijk, Bart van Tiggelen, and Diederik S. Wiersma, Physics Today 62(8), 24 (2009).
Example of an electronic eigenstate at the MIT in a system with 1367631 atoms Each cube indicates by its size the probability to find the electron at the given position. The color scale denotes the position of the cubes along the axis into the plane
Videos of multifractal electronic eigenstates at the MIT
Anderson localization of elastic waves
Popular scientific article on the first experimental observation of Anderson localization in matter waves
Mesoscopic physics
Condensed matter physics | Anderson localization | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,470 | [
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Mesoscopic physics",
"Matter"
] |
2,213,992 | https://en.wikipedia.org/wiki/Thouless%20energy | The Thouless energy is a characteristic energy scale of diffusive disordered conductors. It was first introduced by the Scottish-American physicist David J. Thouless when studying Anderson localization,
as a measure of the sensitivity of energy levels to a change in the boundary conditions of the system. Though being a classical quantity, it has been shown to play an important role in the quantum-mechanical treatment of disordered systems.
It is defined by
E_T = ħD/L²,
where D is the diffusion constant and L the size of the system, and is thereby inversely proportional to the diffusion time
t_D = L²/D
through the system.
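For a feel for the magnitude, the sketch below evaluates E_T = ħD/L² for an assumed diffusion constant and sample size; both numbers are illustrative, not taken from the text.

```python
hbar = 1.054571817e-34   # J*s
eV = 1.602176634e-19     # J

D = 1e-2                 # assumed diffusion constant, m^2/s
L = 1e-6                 # assumed sample size, 1 micrometre

E_T = hbar * D / L**2            # Thouless energy in joules
t_D = L**2 / D                   # diffusion time through the sample
print(f"E_T = {E_T / eV:.1e} eV, t_D = {t_D:.1e} s")
```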
References
Mesoscopic physics
Condensed matter physics | Thouless energy | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 127 | [
"Materials science stubs",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Condensed matter stubs",
"Mesoscopic physics",
"Matter"
] |
2,214,041 | https://en.wikipedia.org/wiki/Sector%20mass%20spectrometer | A sector instrument is a general term for a class of mass spectrometer that uses a static electric (E) or magnetic (B) sector or some combination of the two (separately in space) as a mass analyzer. Popular combinations of these sectors have been the EB, BE (of so-called reverse geometry), three-sector BEB and four-sector EBEB (electric-magnetic-electric-magnetic) instruments. Most modern sector instruments are double-focusing instruments (first developed by Francis William Aston, Arthur Jeffrey Dempster, Kenneth Bainbridge and Josef Mattauch in 1936) in that they focus the ion beams both in direction and velocity.
Theory
The behavior of ions in a homogeneous, linear, static electric or magnetic field (separately) as is found in a sector instrument is simple. The physics are described by a single equation called the Lorentz force law. This equation is the fundamental equation of all mass spectrometric techniques and applies in non-linear, non-homogeneous cases too and is an important equation in the field of electrodynamics in general.
F = q(E + v × B),
where E is the electric field strength, B is the magnetic field induction, q is the charge of the particle, v is its current velocity (expressed as a vector), and × is the cross product.
So the force on an ion in a linear homogeneous electric field (an electric sector) is:
F = qE,
in the direction of the electric field for positive ions and opposite to it for negative ions.
The force is only dependent on the charge and electric field strength. The lighter ions will be deflected more and heavier ions less due to the difference in inertia and the ions will physically separate from each other in space into distinct beams of ions as they exit the electric sector.
And the force on an ion in a linear homogeneous magnetic field (a magnetic sector) is:
F = qv × B,
perpendicular to both the magnetic field and the velocity vector of the ion itself, in the direction determined by the right-hand rule of cross products and the sign of the charge.
The force in the magnetic sector is complicated by the velocity dependence but with the right conditions (uniform velocity for example) ions of different masses will separate physically in space into different beams as with the electric sector.
Classic geometries
These are some of the classic geometries from mass spectrographs which are often used to distinguish different types of sector arrangements, although most current instruments do not fit precisely into any of these categories as the designs have evolved further.
Bainbridge–Jordan
The sector instrument geometry consists of a 127.30° electric sector without an initial drift length followed by a 60° magnetic sector with the same direction of curvature. Sometimes called a "Bainbridge mass spectrometer," this configuration is often used to determine isotopic masses. A beam of positive particles is produced from the isotope under study. The beam is subject to the combined action of perpendicular electric and magnetic fields. Since the forces due to these two fields are equal and opposite when the particles have a velocity given by
v = E/B,
they do not experience a resultant force; they pass freely through a slit and are then subject to another magnetic field, traversing a semi-circular path and striking a photographic plate. The mass of the isotope is determined through subsequent calculation.
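The "subsequent calculation" follows directly from the two force laws above: the crossed fields select v = E/B, and in the analysing field B′ the relation qvB′ = mv²/r gives m = qB′r/v. The sketch below works through one hypothetical set of field values and a measured radius; all numbers are invented for illustration.

```python
e = 1.602176634e-19      # elementary charge, C
u = 1.66053906660e-27    # atomic mass unit, kg

E = 1.0e5    # velocity-selector electric field, V/m (assumed)
B = 0.50     # velocity-selector magnetic field, T (assumed)
B2 = 0.50    # analysing magnetic field, T (assumed)
r = 0.25     # measured radius of the semicircular path, m (assumed)

v = E / B                 # speed selected by the crossed fields
m = e * B2 * r / v        # mass of a singly charged ion
print(f"v = {v:.2e} m/s, m = {m / u:.1f} u")
```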
Mattauch–Herzog
The Mattauch–Herzog geometry consists of a 31.82° ( radians) electric sector, a drift length which is followed by a 90° magnetic sector of opposite curvature direction. The entry of the ions sorted primarily by charge into the magnetic field produces an energy focussing effect and much higher transmission than a standard energy filter. This geometry is often used in applications with a high energy spread in the ions produced where sensitivity is nonetheless required, such as spark source mass spectrometry (SSMS) and secondary ion mass spectrometry (SIMS).
The advantage of this geometry over the Nier–Johnson geometry is that the ions of different masses are all focused onto the same flat plane. This allows the use of a photographic plate or other flat detector array.
Nier–Johnson
The Nier–Johnson geometry consists of a 90° electric sector, a long intermediate drift length and a 60° magnetic sector of the same curvature direction.
Hinterberger–Konig
The Hinterberger–Konig geometry consists of a 42.43° electric sector, a long intermediate drift length and a 130° magnetic sector of the same curvature direction.
Takeshita
The Takeshita geometry consists of a 54.43° electric sector, and short drift length, a second electric sector of the same curvature direction followed by another drift length before a 180° magnetic sector of opposite curvature direction.
Matsuda
The Matsuda geometry consists of an 85° electric sector, a quadrupole lens and a 72.5° magnetic sector of the same curvature direction. This geometry is used in the SHRIMP and Panorama (gas source, high-resolution, multicollector to measure isotopologues in geochemistry).
See also
Mass-analyzed ion kinetic energy spectrometry
Charge remote fragmentation
Kenneth Bainbridge
Alfred O. C. Nier
References
Further reading
Thomson, J. J.: Rays of Positive Electricity and their Application to Chemical Analyses; Longmans Green: London, 1913
Mass spectrometry
Measuring instruments | Sector mass spectrometer | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,081 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Measuring instruments",
"Mass spectrometry",
"Matter"
] |
2,214,609 | https://en.wikipedia.org/wiki/Ullmann%20reaction | The Ullmann reaction or Ullmann coupling, named after Fritz Ullmann, couples two aryl or alkyl groups with the help of copper. The reaction was first reported by Ullmann and his student Bielecki in 1901. It has been later shown that palladium and nickel can also be effectively used.
Aryl–aryl bond formation is a fundamental tool in modern organic synthesis, with applications spanning natural product synthesis, pharmaceuticals, agrochemicals, and the development of commercial dyes and polyaromatics. With over a century of history, the Ullmann reaction was one of the first to use a transition metal, primarily copper, in its higher oxidation states. Despite the significant importance of biaryl coupling in industry, the Ullmann reaction was plagued by a number of problems in its early development. In modern times, however, the reaction has attracted renewed interest owing to several advantages of copper over other catalytic metals.
Mechanism
The reaction mechanism of the Ullmann reaction has been extensively studied. Electron spin resonance rules out a radical intermediate. This was confirmed in a set of experiments performed in 2008 by Hartwig and co-workers. The oxidative addition / reductive elimination sequence observed with palladium catalysts is unlikely for copper because copper(III) is rarely observed. The reaction likely involves the formation of an organocopper compound (RCuX) which reacts with the other aryl reactant in a nucleophilic aromatic substitution. Alternative mechanisms have been proposed such as σ-bond metathesis. The simplified mechanism shown below is generally accepted.
Scope
Fritz Ullmann and his student Bielecki were the first to report the reaction. This groundbreaking result was the first to show that a transition metal could help perform an aryl carbon-carbon bond formation.
A typical example of classic Ullmann biaryl coupling is the conversion of ortho-chloronitrobenzene into 2,2'-dinitrobiphenyl with a copper - bronze alloy.
The reaction has been applied to fairly elaborate substrates.
The traditional version of the Ullmann reaction requires stoichiometric equivalents of copper and harsh reaction conditions, and the reaction has a reputation for erratic yields. The traditional Ullmann reaction thus had poor atom economy and produced toxic CuI. Because of these problems many improvements and alternative procedures have been introduced.
The classical Ullmann reaction is limited to electron-deficient aryl halides (hence the example of 2-nitrophenyl chloride above) and requires harsh reaction conditions. Modern variants of the Ullmann reaction employing palladium and nickel have widened the substrate scope and made the reaction conditions milder. Yields are generally still moderate, however. In organic synthesis this reaction is often replaced by palladium coupling reactions such as the Heck reaction, the Hiyama coupling, and the Sonogashira coupling.
Biphenylenes had been obtained before with reasonable yields using 2,2-diiodobiphenyl or 2,2-diiodobiphenylonium ion as starting material.
Closure of 5-membered rings is more facile, but larger rings have also been made using this approach.
Modern developments also include the use of heterogeneous copper catalysts and nanoparticles. These are highly desirable as the catalyst can be easily separated from the products, reducing waste and cost. In the case of copper nanoparticles, the catalytic activity depended on its size and the formation of aggregates.
Bidentate ligands for Ullmann Coupling
Around the year 2000, various bidentate ligands were found to improve the efficiency of the Ullmann reaction. Bidentate ligands allow for milder reaction conditions and higher functional group tolerance. They included amino acids, oxines, Schiff bases, and many other O-O or N-N bidentates. These initial bidentate systems elevated the practicality of Ullmann reactions, but they still had drawbacks: high loadings of copper and ligand were required, and activation of the notoriously difficult aryl chlorides was still not possible. These problems were solved in 2015 with the design of special oxalic diamide ligands, making the Ullmann reaction viable for industrial application.
Unsymmetric and asymmetric couplings
Ullmann synthesis of biaryl compounds can be used to generate chiral products from chiral reactants. Nelson and collaborators worked on the synthesis of asymmetric biaryl compounds and obtained the thermodynamically controlled product.
The diastereomeric ratio of the products is enhanced with bulkier R groups in the auxiliary oxazoline group.
Unsymmetrical Ullmann reactions are rarely pursued but have been achieved when one of the two coupling components is in excess.
Imidazole Ullmann reaction
In a variation of the Ullmann reaction, β-bromostyrene is reacted with imidazole in an ionic liquid such as 1-butyl-3-methylimidazolium tetrafluoroborate to give an N-styrylimidazole. The reaction requires L-proline in addition to copper iodide as catalyst.
Industrial Applications
Aqueous Ullmann reactions have been used on the pilot plant scale.
See also
Ullmann condensation - copper-promoted conversion of aryl halides to ethers, also developed by Fritz Ullmann
Copper(I) thiophene-2-carboxylate, a copper reagent used in the Ullmann reaction
Wurtz–Fittig reaction, a similar reaction useful for alkylbenzenes synthesis
References
Carbon-carbon bond forming reactions
Name reactions | Ullmann reaction | [
"Chemistry"
] | 1,243 | [
"Coupling reactions",
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
2,216,044 | https://en.wikipedia.org/wiki/Nuclear%20reaction%20analysis | Nuclear reaction analysis (NRA) is a nuclear method of nuclear spectroscopy in materials science to obtain concentration vs. depth distributions for certain target chemical elements in a solid thin film.
Mechanism of NRA
If irradiated with select projectile nuclei at kinetic energies Ekin, target solid thin-film chemical elements can undergo a nuclear reaction under resonance conditions for a sharply defined resonance energy. The reaction product is usually a nucleus in an excited state which immediately decays, emitting ionizing radiation.
To obtain depth information the initial kinetic energy of the projectile nucleus (which has to exceed the resonance energy) and its stopping power (energy loss per distance traveled) in the sample has to be known. To contribute to the nuclear reaction the projectile nuclei have to slow down in the sample to reach the resonance energy. Thus each initial kinetic energy corresponds to a depth in the sample where the reaction occurs (the higher the energy, the deeper the reaction).
NRA profiling of hydrogen
For example, a commonly used reaction to profile hydrogen with an energetic 15N ion beam is
15N + 1H → 12C + α + γ (4.43 MeV)
with a sharp resonance in the reaction cross section at 6.385 MeV with a width of only 1.8 keV. Since the incident 15N ion loses energy along its trajectory in the material, it must have an energy higher than the resonance energy to induce the nuclear reaction with hydrogen nuclei deeper in the target.
This reaction is usually written 1H(15N,αγ)12C. It is inelastic because the Q-value is not zero (in this case it is 4.965 MeV). Rutherford backscattering (RBS) reactions are elastic (Q = 0), and the interaction (scattering) cross-section σ is given by the famous formula derived by Lord Rutherford in 1911. But non-Rutherford cross-sections (so-called EBS, elastic backscattering spectrometry) can also be resonant: for example, the 16O(α,α)16O reaction has a strong and very useful resonance at 3038.1 ± 1.3 keV.
In the 1H(15N,αγ)12C reaction (or indeed the 15N(p,αγ)12C inverse reaction), the energetic emitted γ ray is characteristic of the reaction, and the number detected at any incident energy is proportional to the hydrogen concentration at the respective depth in the sample. Due to the narrow peak in the reaction cross section, primarily ions at the resonance energy undergo a nuclear reaction. Thus, information on the hydrogen distribution can be obtained straightforwardly by varying the 15N incident beam energy.
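A minimal Python sketch of the depth estimate implied by this method, assuming a constant stopping power over the probed depth; the stopping-power value below is an illustrative placeholder, not a value from this article:

```python
# Convert the 15N beam energy into the depth probed by the 6.385 MeV resonance.
# Assumes a constant stopping power over the probed depth (first-order model);
# the 3 keV/nm figure is a hypothetical placeholder for the target material.

E_RESONANCE_KEV = 6385.0   # resonance energy of 1H(15N,alpha-gamma)12C, keV

def probing_depth_nm(beam_energy_kev, stopping_power_kev_per_nm):
    """Depth at which the incident ions have slowed to the resonance energy."""
    if beam_energy_kev < E_RESONANCE_KEV:
        raise ValueError("beam energy must exceed the resonance energy")
    return (beam_energy_kev - E_RESONANCE_KEV) / stopping_power_kev_per_nm

for e_beam in (6400.0, 6600.0, 6800.0, 7000.0):
    print(f"{e_beam:7.1f} keV -> {probing_depth_nm(e_beam, 3.0):6.1f} nm")
```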
Hydrogen is an element inaccessible to Rutherford backscattering spectrometry, since nothing can backscatter from H (all other atoms are heavier than hydrogen). It is, however, often analysed by elastic recoil detection.
Non-resonant NRA
NRA can also be used non-resonantly (of course, RBS is non-resonant). For example, deuterium can easily be profiled with a 3He beam without changing the incident energy by using the
3He + D = α + p + 18.353 MeV
reaction, usually written 2H(3He,p)α. The energy of the fast proton detected depends on the depth of the deuterium atom in the sample.
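A minimal Python sketch of the corresponding Q-value calculation from E = mc², using rounded atomic masses; atomic rather than nuclear masses can be used here because the electron counts balance on both sides:

```python
# Q value of 2H(3He,p)4He from approximate atomic masses (in u) and E = mc^2.
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, MeV

ATOMIC_MASS_U = {
    "2H": 2.014102,
    "3He": 3.016029,
    "1H": 1.007825,   # the emitted proton is counted as a hydrogen atom
    "4He": 4.002602,
}

def q_value_mev(reactants, products):
    """Q = (total mass of reactants - total mass of products) * c^2."""
    dm = sum(ATOMIC_MASS_U[n] for n in reactants) - sum(ATOMIC_MASS_U[n] for n in products)
    return dm * U_TO_MEV

print(round(q_value_mev(["3He", "2H"], ["1H", "4He"]), 3))  # ~18.35 MeV, as quoted above
```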
See also
Rutherford backscattering spectrometry (RBS)
References
External links
Details of many known reactions are hosted by the IAEA at http://www-nds.iaea.org/ibandl/.
The energy released in nuclear reactions (the "Q value") can easily be calculated (from E = mc²): see http://nucleardata.nuclear.lu.se/database/masses/.
NRA at JSI Microanalytical center in Ljubljana, Slovenia
Materials science
Surface science | Nuclear reaction analysis | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 808 | [
"Ion beam methods",
"Applied and interdisciplinary physics",
"Materials science",
"Surface science",
"Condensed matter physics",
"nan"
] |
1,010,127 | https://en.wikipedia.org/wiki/Carbon-13 | Carbon-13 (13C) is a natural, stable isotope of carbon with a nucleus containing six protons and seven neutrons. As one of the environmental isotopes, it makes up about 1.1% of all natural carbon on Earth.
Detection by mass spectrometry
A mass spectrum of an organic compound will usually contain a small peak of one mass unit greater than the apparent molecular ion peak (M) of the whole molecule. This is known as the M+1 peak and comes from the few molecules that contain a 13C atom in place of a 12C. A molecule containing one carbon atom will be expected to have an M+1 peak of approximately 1.1% of the size of the M peak, as 1.1% of the molecules will have a 13C rather than a 12C. Similarly, a molecule containing two carbon atoms will be expected to have an M+1 peak of approximately 2.2% of the size of the M peak, as there is double the previous likelihood that any molecule will contain a 13C atom.
In the above, the mathematics and chemistry have been simplified; however, the method can be used effectively to give the number of carbon atoms for small- to medium-sized organic molecules. In the following formula the result should be rounded to the nearest integer:
C = 100Y / (1.1X)
where C = number of C atoms, X = amplitude of the M ion peak, and Y = amplitude of the M+1 ion peak.
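As an illustration, a minimal Python sketch of this estimate (the peak amplitudes are hypothetical):

```python
# Estimate the number of carbon atoms from the M and M+1 peak amplitudes:
# each carbon contributes roughly 1.1% to the M+1 peak, so C = 100*Y / (1.1*X).

def estimate_carbon_count(m_peak, m_plus_1_peak, percent_13c=1.1):
    """Round 100*Y/(1.1*X) to the nearest integer."""
    return round(100.0 * m_plus_1_peak / (percent_13c * m_peak))

# Hypothetical spectrum: the M+1 peak is 6.6% of the M peak -> about 6 carbons.
print(estimate_carbon_count(100.0, 6.6))
```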
13C-enriched compounds are used in the research of metabolic processes by means of mass spectrometry. Such compounds are safe because they are non-radioactive. In addition, 13C is used to quantify proteins (quantitative proteomics). One important application is in stable isotope labeling by amino acids in cell culture (SILAC). 13C-enriched compounds are used in medical diagnostic tests such as the urea breath test. Analysis in these tests is usually of the ratio of 13C to 12C by isotope ratio mass spectrometry.
The ratio of 13C to 12C is slightly higher in plants employing C4 carbon fixation than in plants employing C3 carbon fixation. Because the different isotope ratios for the two kinds of plants propagate through the food chain, it is possible to determine if the principal diet of a human or other animal consists primarily of C3 plants or C4 plants by measuring the isotopic signature of their collagen and other tissues.
Uses in science
Due to differential uptake in plants as well as marine carbonates of 13C, it is possible to use these isotopic signatures in earth science. Biological processes preferentially take up the lower mass isotope through kinetic fractionation. In aqueous geochemistry, by analyzing the δ13C value of carbonaceous material found in surface and ground waters, the source of the water can be identified. This is because atmospheric, carbonate, and plant derived δ13C values all differ. In biology, the ratio of carbon-13 and carbon-12 isotopes in plant tissues is different depending on the type of plant photosynthesis and this can be used, for example, to determine which types of plants were consumed by animals. Greater carbon-13 concentrations indicate stomatal limitations, which can provide information on plant behaviour during drought. Tree ring analysis of carbon isotopes can be used to retrospectively understand forest photosynthesis and how it is impacted by drought.
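A minimal Python sketch of the δ13C delta notation commonly used to report such ratios; the reference ratio and sample value are given only for illustration:

```python
# delta-13C (per mil) = (R_sample / R_standard - 1) * 1000, where R = 13C/12C.
R_VPDB = 0.0112372  # commonly quoted 13C/12C ratio of the VPDB reference standard

def delta_13c_permil(r_sample, r_standard=R_VPDB):
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample depleted in 13C relative to the standard gives a negative value,
# e.g. roughly -30 per mil, in the range typical of C3 plant material.
print(round(delta_13c_permil(0.0109), 1))
```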
In geology, the 13C/12C ratio is used to identify the layer in sedimentary rock created at the time of the Permian extinction 252 Mya when the ratio changed abruptly by 1%. More information about usage of 13C/12C ratio in science can be found in the article about isotopic signatures.
Carbon-13 has a non-zero spin quantum number of 1/2, and hence allows the structure of carbon-containing substances to be investigated using carbon-13 nuclear magnetic resonance.
The carbon-13 urea breath test is a safe and highly accurate diagnostic tool to detect the presence of Helicobacter pylori infection in the stomach. The urea breath test utilizing carbon-13 is preferred to carbon-14 for certain vulnerable populations due to its non-radioactive nature.
Production
Bulk carbon-13 for commercial use, e.g. in chemical synthesis, is enriched from its natural abundance of about 1%. Although carbon-13 can be separated from the major carbon-12 isotope via techniques such as thermal diffusion, chemical exchange, gas diffusion, and laser and cryogenic distillation, currently only cryogenic distillation of methane (boiling point −161.5 °C) or carbon monoxide (b.p. −191.5 °C) is an economically feasible industrial production technique. Industrial carbon-13 production plants represent a substantial investment: cryogenic distillation columns more than 100 meters tall are needed to separate the carbon-12- and carbon-13-containing compounds. The largest reported commercial carbon-13 production plant in the world as of 2014 had a production capability of ~400 kg of carbon-13 annually. In contrast, a 1969 carbon monoxide cryogenic distillation pilot plant at Los Alamos Scientific Laboratories could produce 4 kg of carbon-13 annually.
See also
Isotopes of carbon
Isotope fractionation
Notes
Isotopes of carbon
Medical isotopes
Environmental isotopes | Carbon-13 | [
"Chemistry"
] | 1,080 | [
"Environmental isotopes",
"Isotopes of carbon",
"Isotopes",
"Chemicals in medicine",
"Medical isotopes"
] |
1,010,167 | https://en.wikipedia.org/wiki/Ceruloplasmin | Ceruloplasmin (or caeruloplasmin) is a ferroxidase enzyme that in humans is encoded by the CP gene.
Ceruloplasmin is the major copper-carrying protein in the blood, and in addition plays a role in iron metabolism. It was first described in 1948. Another protein, hephaestin, is noted for its homology to ceruloplasmin, and also participates in iron and probably copper metabolism.
Function
Ceruloplasmin (CP) is an enzyme synthesized in the liver containing 6 atoms of copper in its structure. Ceruloplasmin carries more than 95% of the total copper in healthy human plasma. The rest is accounted for by macroglobulins. Ceruloplasmin exhibits a copper-dependent oxidase activity, which is associated with possible oxidation of Fe2+ (ferrous iron) into Fe3+ (ferric iron), therefore assisting in its transport in the plasma in association with transferrin, which can carry iron only in the ferric state. The molecular weight of human ceruloplasmin is reported to be 151 kDa.
Despite extensive research, much is still unknown about the exact functions of CP; most of the functions attributed to CP depend on the presence of the Cu centers. These include copper transport to deliver the Cu to extrahepatic tissues, amine oxidase activity that controls the level of biogenic amines in intestinal fluids and plasma, removal of oxygen and other free radicals from plasma, and the export of iron from cells for transport through transferrin.
Mutations have been known to disrupt the binding of copper to CP and will disrupt iron metabolism and cause an iron overload.
Ceruloplasmin is a relatively large enzyme (~10 nm); the larger size prevents the bound copper from being lost in a person's urine during transport.
Active site structure
The multicopper active site of CP contains a type I (T1) mononuclear copper site and a trinuclear copper center ~ 12-13 Å away. The tricopper center consists of two type III (T3) coppers and one type II (T2) copper ion. The two T3 copper ions are bridged by a hydroxide ligand while another hydroxide ligand links the T2 copper ion to the protein. The T1 center is bridged to the tricopper center by two histidine (His1020, His1022) residues and one Cys(1021) residue. The substrate binds near the T1 center and is oxidized by the T1 Cu2+ ion forming the reduced Cu+ oxidation state. The reduced T1 Cu+ then transfers the electron through the one Cys and two His bridging residues to the tricopper center. After four electrons have been transferred from the substrates to the copper centers, an O2 binds at the tricopper center and undergoes a four-electron reduction to form two molecules of water.
Regulation
A cis-regulatory element called the GAIT element is involved in the selective translational silencing of the Ceruloplasmin transcript.
The silencing requires binding of a cytosolic inhibitor complex called IFN-gamma-activated inhibitor of translation (GAIT) to the GAIT element.
Clinical significance
Like any other plasma protein, levels drop in patients with hepatic disease due to reduced synthesizing capabilities.
Mechanisms of low ceruloplasmin levels:
Gene expression genetically low (aceruloplasminemia)
Copper levels are low in general
Malnutrition/trace metal deficiency in the food source
Zinc toxicity, due to induced copper deficiency
Copper does not cross the intestinal barrier due to ATP7A deficiency (Menkes disease and Occipital horn syndrome)
Delivery of copper into the lumen of the ER-Golgi network is absent in hepatocytes due to absent ATP7B (Wilson's disease)
Copper availability doesn't affect the translation of the nascent protein. However, the apoenzyme without copper is unstable. Apoceruloplasmin is largely degraded intracellularly in the hepatocyte and the small amount that is released has a short circulation half life of 5 hours as compared to the 5.5 days for the holo-ceruloplasmin.
Ceruloplasmin can be measured by means of a blood test; this can be done using immunoassays. The sample is spun and separated; it is stored at around 4 °C for three days. This test is to determine if there are signs of Wilson disease. Another test that can be done is a urine copper level test; this has been found to be less accurate than the blood test. A liver tissue test can be done as well.
Mutations in the ceruloplasmin gene (CP), which are very rare, can lead to the genetic disease aceruloplasminemia, characterized by hyperferritinemia with iron overload. In the brain, this iron overload may lead to characteristic neurologic signs and symptoms, such as cerebellar ataxia, progressive dementia, and extrapyramidal signs. Excess iron may also deposit in the liver, pancreas, and retina, leading to cirrhosis, endocrine abnormalities, and loss of vision, respectively.
Deficiency
Lower-than-normal ceruloplasmin levels may indicate the following:
Wilson disease (a rare [UK incidence 2/100,000] copper storage disease).
Menkes disease (Menkes kinky hair syndrome) (rare – UK incidence 1/100,000)
Copper deficiency
Aceruloplasminemia
Zinc toxicity
Excess
Greater-than-normal ceruloplasmin levels may indicate or be noticed in:
copper toxicity / zinc deficiency
pregnancy
oral contraceptive pill use
lymphoma
acute and chronic inflammation (it is an acute-phase reactant)
rheumatoid arthritis
Angina
Alzheimer's disease
Schizophrenia
Obsessive-compulsive disorder
Reference ranges
Normal blood concentration of ceruloplasmin in humans is 20–50 mg/dL.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Aceruloplasminemia
OMIM entries on Aceruloplasminemia
Acute-phase proteins
Chemical pathology
EC 1.16.3
Hepatology
Iron metabolism
Copper enzymes | Ceruloplasmin | [
"Chemistry",
"Biology"
] | 1,351 | [
"Biochemistry",
"Chemical pathology"
] |
1,010,189 | https://en.wikipedia.org/wiki/Retinal | Retinal (also known as retinaldehyde) is a polyene chromophore. Retinal, bound to proteins called opsins, is the chemical basis of visual phototransduction, the light-detection stage of visual perception (vision).
Some microorganisms use retinal to convert light into metabolic energy. One study suggests that approximately three billion years ago, most living organisms on Earth used retinal, rather than chlorophyll, to convert sunlight into energy. Because retinal absorbs mostly green light and transmits purple light, this gave rise to the Purple Earth hypothesis.
Retinal itself is considered to be a form of vitamin A when eaten by an animal. There are many forms of vitamin A, all of which are converted to retinal, which cannot be made without them. The number of different molecules that can be converted to retinal varies from species to species. Retinal was originally called retinene, and was renamed after it was discovered to be vitamin A aldehyde.
Vertebrate animals ingest retinal directly from meat, or they produce retinal from carotenoids – either from α-carotene or β-carotene – both of which are carotenes. They also produce it from β-cryptoxanthin, a type of xanthophyll. These carotenoids must be obtained from plants or other photosynthetic organisms. No other carotenoids can be converted by animals to retinal. Some carnivores cannot convert any carotenoids at all. The other main forms of vitamin A – retinol and a partially active form, retinoic acid – may both be produced from retinal.
Invertebrates such as insects and squid use hydroxylated forms of retinal in their visual systems, which derive from conversion from other xanthophylls.
Vitamin A metabolism
Living organisms produce retinal by irreversible oxidative cleavage of carotenoids.
For example:
β-carotene + O2 → 2 retinal
catalyzed by a beta-carotene 15,15'-monooxygenase or a beta-carotene 15,15'-dioxygenase.
Just as carotenoids are the precursors of retinal, retinal is the precursor of the other forms of vitamin A. Retinal is interconvertible with retinol, the transport and storage form of vitamin A:
retinal + NAD(P)H + H+ ⇌ retinol + NAD(P)+
catalyzed by retinol dehydrogenases (RDHs) and alcohol dehydrogenases (ADHs).
Retinol is called vitamin A alcohol or, more often, simply vitamin A. Retinal can also be oxidized to retinoic acid:
retinal + NAD+ + H2O → retinoic acid + NADH + H+
catalyzed by retinal dehydrogenases, also known as retinaldehyde dehydrogenases (RALDHs), as well as retinal oxidases.
Retinoic acid, sometimes called vitamin A acid, is an important signaling molecule and hormone in vertebrate animals.
Vision
Retinal is a conjugated chromophore. In vertebrate eyes, retinal begins in an 11-cis-retinal configuration, which — upon capturing a photon of the correct wavelength — straightens out into an all-trans-retinal configuration. This configuration change pushes against an opsin protein in the retina, which triggers a chemical signaling cascade, which results in perception of light or images by the brain. The absorbance spectrum of the chromophore depends on its interactions with the opsin protein to which it is bound, so that different retinal-opsin complexes will absorb photons of different wavelengths (i.e., different colors of light).
Opsins
Retinal is bound to opsins, which are G protein-coupled receptors (GPCRs). Opsins, like other GPCRs, have seven transmembrane alpha-helices connected by six loops. They are found in the photoreceptor cells in the retina of the eye. The opsin in the vertebrate rod cells is rhodopsin. The rods form disks, which contain the rhodopsin molecules in their membranes and which are entirely inside of the cell. The N-terminus head of the molecule extends into the interior of the disk, and the C-terminus tail extends into the cytoplasm of the cell. The opsins in the cone cells are OPN1SW, OPN1MW, and OPN1LW. The cones form incomplete disks that are part of the plasma membrane, so that the N-terminus head extends outside of the cell. In opsins, retinal binds covalently to a lysine in the seventh transmembrane helix through a Schiff base. Forming the Schiff base linkage involves removing the oxygen atom from retinal and two hydrogen atoms from the free amino group of lysine, giving H2O. Retinylidene is the divalent group formed by removing the oxygen atom from retinal, and so opsins have been called retinylidene proteins.
Opsins are prototypical G protein-coupled receptors (GPCRs). Cattle rhodopsin, the opsin of the rod cells, was the first GPCR to have its amino acid sequence and 3D-structure (via X-ray crystallography) determined. Cattle rhodopsin contains 348 amino acid residues. Retinal binds as chromophore at Lys296. This lysine is conserved in almost all opsins, only a few opsins have lost it during evolution. Opsins without the retinal binding lysine are not light sensitive. Such opsins may have other functions.
Although mammals use retinal exclusively as the opsin chromophore, other groups of animals additionally use four chromophores closely related to retinal: 3,4-didehydroretinal (vitamin A2), (3R)-3-hydroxyretinal, (3S)-3-hydroxyretinal (both vitamin A3), and (4R)-4-hydroxyretinal (vitamin A4). Many fish and amphibians use 3,4-didehydroretinal, also called dehydroretinal. With the exception of the dipteran suborder Cyclorrhapha (the so-called higher flies), all insects examined use the (R)-enantiomer of 3-hydroxyretinal. The (R)-enantiomer is to be expected if 3-hydroxyretinal is produced directly from xanthophyll carotenoids. Cyclorrhaphans, including Drosophila, use (3S)-3-hydroxyretinal. Firefly squid have been found to use (4R)-4-hydroxyretinal.
Visual cycle
The visual cycle is a circular enzymatic pathway, which is the front-end of phototransduction. It regenerates 11-cis-retinal. For example, the visual cycle of mammalian rod cells is as follows:
all-trans-retinyl ester + H2O → 11-cis-retinol + fatty acid; RPE65 isomerohydrolases;
11-cis-retinol + NAD+ → 11-cis-retinal + NADH + H+; 11-cis-retinol dehydrogenases;
11-cis-retinal + aporhodopsin → rhodopsin + H2O; forms Schiff base linkage to lysine, -CH=N+H-;
rhodopsin + hν → metarhodopsin II (i.e., 11-cis photoisomerizes to all-trans):
(rhodopsin + hν → photorhodopsin → bathorhodopsin → lumirhodopsin → metarhodopsin I → metarhodopsin II);
metarhodopsin II + H2O → aporhodopsin + all-trans-retinal;
all-trans-retinal + NADPH + H+ → all-trans-retinol + NADP+; all-trans-retinol dehydrogenases;
all-trans-retinol + fatty acid → all-trans-retinyl ester + H2O; lecithin retinol acyltransferases (LRATs).
Steps 3, 4, 5, and 6 occur in rod cell outer segments; Steps 1, 2, and 7 occur in retinal pigment epithelium (RPE) cells.
RPE65 isomerohydrolases are homologous with beta-carotene monooxygenases; the homologous ninaB enzyme in Drosophila has both retinal-forming carotenoid-oxygenase activity and all-trans to 11-cis isomerase activity.
Microbial rhodopsins
All-trans-retinal is also an essential component of microbial opsins such as bacteriorhodopsin, channelrhodopsin, and halorhodopsin, which are important in bacterial and archaeal anoxygenic photosynthesis. In these molecules, light causes the all-trans-retinal to become 13-cis retinal, which then cycles back to all-trans-retinal in the dark state. These proteins are not evolutionarily related to animal opsins and are not GPCRs; the fact that they both use retinal is a result of convergent evolution.
History
The American biochemist George Wald and others had outlined the visual cycle by 1958. For his work, Wald won a share of the 1967 Nobel Prize in Physiology or Medicine with Haldan Keffer Hartline and Ragnar Granit.
See also
Purple Earth hypothesis
Sensory nervous system
Visual perception
Visual phototransduction
References
Further reading
Good historical review.
The oceans are full of type 1 rhodopsin.
External links
First Steps of Vision - National Health Museum
Vision and Light-Induced Molecular Changes
Retinal Anatomy and Visual Capacities
Retinal, Imperial College v-chemlib
Aldehydes
Apocarotenoids
Cyclohexenes
Photosynthetic pigments
Signal transduction
Vision
Vitamin A
he:אופסין#רטינל | Retinal | [
"Chemistry",
"Biology"
] | 2,222 | [
"Vitamin A",
"Photosynthetic pigments",
"Photosynthesis",
"Signal transduction",
"Biomolecules",
"Biochemistry",
"Neurochemistry"
] |
1,010,309 | https://en.wikipedia.org/wiki/Bleomycin | Bleomycin is a medication primarily used to treat cancer. This includes Hodgkin's lymphoma, non-Hodgkin's lymphoma, testicular cancer, ovarian cancer, and cervical cancer among others. Typically used with other cancer medications, it can be given intravenously, by injection into a muscle or under the skin. It may also be administered inside the chest to help prevent the recurrence of a pleural effusion due to cancer; however talc is better for this. It may sometimes be used to treat other difficult-to-treat skin lesions such as plantars warts in immunocompromised patients.
Common side effects include fever, weight loss, vomiting, and rash. A severe type of anaphylaxis may occur. It may also cause inflammation of the lungs that can result in lung scarring. Chest X-rays every couple of weeks are recommended to check for this. Bleomycin may cause harm to the baby if used during pregnancy. It is believed to primarily work by preventing the synthesis of DNA.
Bleomycin was discovered in 1962. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. It is made by the bacterium Streptomyces verticillus.
Medical uses
Cancer
Bleomycin is mostly used to treat cancer. This includes testicular cancer, ovarian cancer, and Hodgkin's disease, and less commonly non-Hodgkin's disease. It can be given intravenously, by intramuscular injection, or under the skin.
Other uses
It may also be put inside the chest to help prevent the recurrence of a pleural effusion due to cancer. However, for scarring down the pleura, talc appears to be the better option, although indwelling pleural catheters are at least as effective in reducing the symptoms of an effusion (such as dyspnea).
While potentially effective against bacterial infections, its toxicity prevents its use for this purpose. It has been studied in the treatment of warts but is of unclear benefit.
Side effects
The most common side effects are flu-like symptoms and include fever, rash, dermatographism, hyperpigmentation, alopecia (hair loss), chills, and Raynaud's phenomenon (discoloration of fingers and toes). The most serious complication of bleomycin, occurring upon increasing dosage, is pulmonary fibrosis and impaired lung function. It has been suggested that bleomycin induces sensitivity to oxygen toxicity and recent studies support the role of the proinflammatory cytokines IL-18 and IL-1beta in the mechanism of bleomycin-induced lung injury. Any previous treatment with bleomycin should therefore always be disclosed to the anaesthetist prior to undergoing a procedure requiring general anaesthesia. Due to the oxygen sensitive nature of bleomycin, and the theorised increased likelihood of developing pulmonary fibrosis following supplemental oxygen therapy, it has been questioned whether patients should take part in scuba diving following treatment with the drug. Bleomycin has also been found to disrupt the sense of taste.
Lifetime cumulative dose
Bleomycin should not exceed a lifetime cumulative dose greater than 400 units. Pulmonary toxicities, most commonly presenting as pulmonary fibrosis, are associated with doses of bleomycin greater than 400 units.
Mechanism of action
Bleomycin acts by induction of DNA strand breaks. Some studies suggest bleomycin also inhibits incorporation of thymidine into DNA strands. DNA cleavage by bleomycin depends on oxygen and metal ions, at least in vitro. The exact mechanism of DNA strand scission is unresolved, but it has been suggested that bleomycin chelates metal ions (primarily iron), producing a pseudoenzyme that reacts with oxygen to produce superoxide and hydroxide free radicals that cleave DNA. An alternative hypothesis states that bleomycin may bind at specific sites in the DNA strand and induce scission by abstracting the hydrogen atom from the base, resulting in strand cleavage as the base undergoes a Criegee-type rearrangement, or forms an alkali-labile lesion.
Biosynthesis
Biosynthesis of bleomycin is completed by glycosylation of the aglycones. Bleomycin naturally occurring-analogues have two to three sugar molecules, and DNA cleavage activities of these analogues have been assessed, primarily by the plasmid relaxation and break light assays.
History
Bleomycin was first discovered in 1962 when the Japanese scientist Hamao Umezawa found anticancer activity while screening culture filtrates of Streptomyces verticillus. Umezawa published his discovery in 1966. The drug was launched in Japan by Nippon Kayaku in 1969. In the US, bleomycin gained FDA approval in July 1973. It was initially marketed in the US by the Bristol-Myers Squibb precursor, Bristol Laboratories, under the brand name Blenoxane.
Research
Bleomycin is used in research to induce pulmonary fibrosis in mice. It accomplishes this by preventing alveolar cell proliferation, which in turn leads to cellular senescence.
See also
Flagellate pigmentation from bleomycin
Pingyangmycin (Bleomycin A5)
References
Further reading
Cancer research
DNA intercalaters
DNA replication inhibitors
Glycopeptide antibiotics
IARC Group 2B carcinogens
Sulfonium compounds
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
Eukaryotic selection compounds
Hydroxymethyl compounds
Japanese inventions | Bleomycin | [
"Chemistry"
] | 1,197 | [
"Glycopeptide antibiotics",
"Glycopeptides"
] |
1,010,454 | https://en.wikipedia.org/wiki/Maldevelopment | Maldevelopment is the state of an organism or an organisation that did not develop in the "normal" way (used in medicine, e.g. "brain maldevelopment of a fetus"). It was introduced as a human and social development term in France in the 1990s by Samir Amin to challenge the concept of "underdevelopment." The word maldéveloppement did not exist before then (the medical terms are malformation or développement anormal), so the word is a neologism meant to be analogous to the difference between undernutrition and malnutrition.
Maldevelopment is a global concept that includes human and social development. Under the philosophy of sustainable development, economic development is only a "tool" that allows for greater human and social development, not the final goal. Under-development is a quantitative notion, implying that a nation has a lack and must gain something to reach a particular reference state—the state of the nation that judges another nation as underdeveloped. So this notion also implies a unique development model—the one of the judging nation.
Mal-development, or ill-development, is a qualitative notion that expresses a mismatch, a discrepancy between the conditions (economic, political, meteorological, cultural, etc.) and the needs and means of the people.
See also
Human development theory.
References
Human development | Maldevelopment | [
"Biology"
] | 295 | [
"Behavioural sciences",
"Behavior",
"Human development"
] |
1,010,522 | https://en.wikipedia.org/wiki/Disjunction%20and%20existence%20properties | In mathematical logic, the disjunction and existence properties are the "hallmarks" of constructive theories such as Heyting arithmetic and constructive set theories (Rathjen 2005).
Definitions
The disjunction property is satisfied by a theory if, whenever a sentence A ∨ B is a theorem, then either A is a theorem, or B is a theorem.
The existence property or witness property is satisfied by a theory if, whenever a sentence (∃x)A(x) is a theorem, where A(x) has no other free variables, then there is some term t such that the theory proves A(t).
Related properties
Rathjen (2005) lists five properties that a theory may possess. These include the disjunction property (DP), the existence property (EP), and three additional properties:
The numerical existence property (NEP) states that if the theory proves (∃x)φ(x), where φ has no other free variables, then the theory proves φ(n̄) for some natural number n. Here n̄ is a numeral in the language of the theory representing the number n.
Church's rule (CR) states that if the theory proves (∀x)(∃y)φ(x,y), then there is a natural number e such that, letting f_e denote the computable function with index e, the theory proves (∀x)φ(x, f_e(x)).
A variant of Church's rule, CR1, states that if the theory proves (∃f)ψ(f), where f ranges over functions from ℕ to ℕ, then there is a natural number e such that the theory proves that f_e is total and proves ψ(f_e).
These properties can only be directly expressed for theories that have the ability to quantify over natural numbers and, for CR1, quantify over functions from ℕ to ℕ. In practice, one may say that a theory has one of these properties if a definitional extension of the theory has the property stated above (Rathjen 2005).
Results
Non-examples and examples
Almost by definition, a theory that accepts excluded middle while having independent statements does not have the disjunction property. So all classical theories expressing Robinson arithmetic do not have it. Most classical theories, such as Peano arithmetic and ZFC in turn do not validate the existence property either, e.g. because they validate the least number principle existence claim. But some classical theories, such as ZFC plus the axiom of constructibility, do have a weaker form of the existence property (Rathjen 2005).
Heyting arithmetic is well known for having the disjunction property and the (numerical) existence property.
While the earliest results were for constructive theories of arithmetic, many results are also known for constructive set theories (Rathjen 2005). John Myhill (1973) showed that IZF with the axiom of replacement eliminated in favor of the axiom of collection has the disjunction property, the numerical existence property, and the existence property. Michael Rathjen (2005) proved that CZF has the disjunction property and the numerical existence property.
Freyd and Scedrov (1990) observed that the disjunction property holds in free Heyting algebras and free topoi. In categorical terms, in the free topos, that corresponds to the fact that the terminal object, 1, is not the join of two proper subobjects. Together with the existence property it translates to the assertion that 1 is an indecomposable projective object—the functor it represents (the global-section functor) preserves epimorphisms and coproducts.
Relationship between properties
There are several relationship between the five properties discussed above.
In the setting of arithmetic, the numerical existence property implies the disjunction property. The proof uses the fact that a disjunction can be rewritten as an existential formula quantifying over natural numbers:
A ∨ B ≡ (∃n)[(n = 0 → A) ∧ (n ≠ 0 → B)].
Therefore, if A ∨ B is a theorem of the theory, so is (∃n)[(n = 0 → A) ∧ (n ≠ 0 → B)]. Thus, assuming the numerical existence property, there exists some numeral n̄ such that (n̄ = 0 → A) ∧ (n̄ ≠ 0 → B) is a theorem. Since n̄ is a numeral, one may concretely check the value of n: if n = 0 then A is a theorem, and if n ≠ 0 then B is a theorem.
Harvey Friedman (1974) proved that in any recursively enumerable extension of intuitionistic arithmetic, the disjunction property implies the numerical existence property. The proof uses self-referential sentences in a way similar to the proof of Gödel's incompleteness theorems. The key step is to find a bound on the existential quantifier in a formula (∃x)A(x), producing a bounded existential formula
(∃x<n)A(x). The bounded formula may then be written as a finite disjunction A(1)∨A(2)∨...∨A(n). Finally, disjunction elimination may be used to show that one of the disjuncts is provable.
History
Kurt Gödel (1932) stated without proof that intuitionistic propositional logic (with no additional axioms) has the disjunction property; this result was proven and extended to intuitionistic predicate logic by Gerhard Gentzen (1934, 1935). Stephen Cole Kleene (1945) proved that Heyting arithmetic has the disjunction property and the existence property. Kleene's method introduced the technique of realizability, which is now one of the main methods in the study of constructive theories (Kohlenbach 2008; Troelstra 1973).
See also
Constructive set theory
Heyting arithmetic
Law of excluded middle
Realizability
Existential quantifier
References
Peter J. Freyd and Andre Scedrov, 1990, Categories, Allegories. North-Holland.
Harvey Friedman, 1975, The disjunction property implies the numerical existence property, State University of New York at Buffalo.
Gerhard Gentzen, 1934, "Untersuchungen über das logische Schließen. I", Mathematische Zeitschrift v. 39 n. 2, pp. 176–210.
Gerhard Gentzen, 1935, "Untersuchungen über das logische Schließen. II", Mathematische Zeitschrift v. 39 n. 3, pp. 405–431.
Kurt Gödel, 1932, "Zum intuitionistischen Aussagenkalkül", Anzeiger der Akademie der Wissenschaftischen in Wien, v. 69, pp. 65–66.
Stephen Cole Kleene, 1945, "On the interpretation of intuitionistic number theory," Journal of Symbolic Logic, v. 10, pp. 109–124.
Ulrich Kohlenbach, 2008, Applied proof theory, Springer.
John Myhill, 1973, "Some properties of Intuitionistic Zermelo-Fraenkel set theory", in A. Mathias and H. Rogers, Cambridge Summer School in Mathematical Logic, Lectures Notes in Mathematics v. 337, pp. 206–231, Springer.
Michael Rathjen, 2005, "The Disjunction and Related Properties for Constructive Zermelo-Fraenkel Set Theory", Journal of Symbolic Logic, v. 70 n. 4, pp. 1233–1254.
Anne S. Troelstra, ed. (1973), Metamathematical investigation of intuitionistic arithmetic and analysis, Springer.
External links
Proof theory
Constructivism (mathematics) | Disjunction and existence properties | [
"Mathematics"
] | 1,473 | [
"Mathematical logic",
"Constructivism (mathematics)",
"Proof theory"
] |
1,010,708 | https://en.wikipedia.org/wiki/Neem%20oil | Neem oil, also known as margosa oil, is a vegetable oil pressed from the fruits and seeds of the neem (Azadirachta indica), a tree which is indigenous to the Indian subcontinent and has been introduced to many other areas in the tropics. It is the most important of the commercially available products of neem, and its chemical properties have found widespread use as a pesticide in organic farming.
Composition
Azadirachtin is the most well known and studied triterpenoid in neem oil. Nimbin is another triterpenoid which has been credited with some of neem oil's properties as an antiseptic, antifungal, antipyretic and antihistamine.
Uses
Ayurveda
Neem oil has a history of use in Ayurvedic folk medicine.
Pesticide
Formulations that include neem oil have found wide usage as a biopesticide for horticulturists and for organic farming, as it repels a wide variety of insect pests including mealy bugs, beet armyworms, aphids, cabbage worms, thrips, whiteflies, mites, fungus gnats, beetles, moth larvae, mushroom flies, leaf miners, caterpillars, locusts, nematodes and Japanese beetles.
When sufficiently diluted and not concentrated directly into their area of habitat or on their food source, neem oil is not known to be harmful to mammals, birds, earthworms or some beneficial insects such as butterflies, honeybees and ladybugs. It can be used as a household pesticide for ants, bedbugs, cockroaches, houseflies, sand flies, snails, termites and mosquitoes both as a repellent and as a larvicide.
Neem extracts act as an antifeedant and block the action of the insect molting hormone ecdysone. Azadirachtin is the most active of these growth regulators (limonoids), occurring at 0.2–0.4% in the seeds of the neem tree.
Toxicity
The ingestion of neem oil is potentially toxic and can cause metabolic acidosis, seizures, kidney failure, encephalopathy and severe brain ischemia in infants and young children. Neem oil should not be consumed alone without any other solutions, particularly by pregnant women, women trying to conceive or children. It can also be associated with allergic contact dermatitis.
References
Plant toxin insecticides
Vegetable oils | Neem oil | [
"Chemistry"
] | 529 | [
"Plant toxin insecticides",
"Chemical ecology"
] |
1,010,773 | https://en.wikipedia.org/wiki/Heiligenschein | Heiligenschein (German for "halo") is an optical phenomenon in which a bright spot appears around the shadow of the viewer's head in the presence of dew. In photogrammetry and remote sensing, it is more commonly known as the hotspot. It is also occasionally known as Cellini's halo after the Italian artist and writer Benvenuto Cellini (1500–1571), who described the phenomenon in his memoirs in 1562.
Nearly spherical dew droplets act as lenses to focus the light onto the surface behind them. When this light scatters or reflects off that surface, the same lens re-focuses that light into the direction from which it came. This configuration is similar to a cat's eye retroreflector. However a cat's eye retroreflector needs a refractive index of around 2, while water has a much smaller refractive index of approximately 1.33. This means that the water droplets focus the light about 20% to 50% of the diameter beyond the rear surface of the droplet. When dew droplets are suspended on trichomes at approximately this distance away from the surface of a plant, the combination of droplet and plant acts as a retroreflector. Any retroreflective surface is brightest around the antisolar point.
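A minimal Python sketch of the paraxial ball-lens estimate behind these figures; real droplets are not ideal paraxial lenses, so this is only a rough illustration:

```python
# Paraxial ball lens: a sphere of diameter D and refractive index n has an
# effective focal length EFL = n*D / (4*(n-1)) measured from its centre,
# so the focus lies BFL = EFL - D/2 behind its rear surface.

def back_focal_fraction(n):
    """Back focal distance behind the rear surface, as a fraction of the diameter."""
    efl = n / (4.0 * (n - 1.0))  # in units of the diameter D
    return efl - 0.5

print(back_focal_fraction(1.33))  # ~0.51: near the upper end of the 20-50% range quoted above
print(back_focal_fraction(2.0))   # 0.0: focus on the rear surface, the cat's-eye retroreflector case
```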
Opposition surge by other particles than water and the glory in water vapour are similar effects caused by different mechanisms.
See also
Aureole effect
Brocken spectre, the magnified shadow of an observer cast upon the upper surfaces of clouds opposite the Sun
Gegenschein, a faint spot of dust lit by sunlight focused by Earth's atmosphere, visible in the night sky toward the antisolar point
Retroreflector
Subparhelic circle
Sylvanshine
References
External links
A site showing examples of a Heiligenschein
What causes heiligenschein
Atmospheric optical phenomena | Heiligenschein | [
"Physics"
] | 379 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
1,011,242 | https://en.wikipedia.org/wiki/Value%20investing | Value investing is an investment paradigm that involves buying securities that appear underpriced by some form of fundamental analysis. Modern value investing derives from the investment philosophy taught by Benjamin Graham and David Dodd at Columbia Business School starting in 1928 and subsequently developed in their 1934 text Security Analysis.
The early value opportunities identified by Graham and Dodd included stock in public companies trading at discounts to book value or tangible book value, those with high dividend yields, and those having low price-to-earnings multiples or low price-to-book ratios.
Proponents of value investing, including Berkshire Hathaway chairman Warren Buffett, have argued that the essence of value investing is buying stocks at less than their intrinsic value. The discount of the market price to the intrinsic value is what Benjamin Graham called the "margin of safety". Buffett further expanded the value investing concept with a focus on "finding an outstanding company at a sensible price" rather than generic companies at a bargain price. Hedge fund manager Seth Klarman has described value investing as rooted in a rejection of the efficient-market hypothesis (EMH). While the EMH proposes that securities are accurately priced based on all available data, value investing proposes that some equities are not accurately priced.
Graham himself did not use the phrase value investing; the term was coined later to help describe his ideas. The term has, however, also led to misinterpretation of his principles, most notably the notion that Graham simply recommended cheap stocks. Columbia Business School is the current home of value investing.
History
Early predecessors
The concept of intrinsic value for equities was recognized as early as the 1600s, as was the idea that paying substantially above intrinsic value was likely to be a poor long-term investment. Daniel Defoe observed in the 1690s how stock for the East India Company was trading at what he believed was an elevated price of over 300% more than face value, "without any material difference in Intrinsick [sic] value."
Hetty Green (1834-1916) was retrospectively described as "America's first value investor." She had a habit of buying unwanted assets at low prices, which she held, as she stated in 1905, "until they go up [in price] and people are anxious to buy."
The investing firm Tweedy, Browne was founded in 1920 and has been described as "the oldest value investing firm on Wall Street". Founder Forest Berwind "Bill" Tweedy initially focused on shares of smaller companies, often family owned, which traded in lower numbers and lower volume than stock for larger companies. This niche allowed Tweedy to buy stocks at a significant discount to estimated book value due to the limited options for sellers. Tweedy and Benjamin Graham eventually became friends and worked out of the same New York City office building at 52 Broadway.
Economist John Maynard Keynes is also recognized as an early value investor. While managing the endowment of King's College, Cambridge starting in the 1920s, Keynes first attempted a stock trading strategy based on market timing. When this method was unsuccessful, he turned to a strategy similar to value investing. In 2017, Joel Tillinghast of Fidelity Investments wrote:
Instead of using big-picture economics, Keynes increasingly focused on a small number of companies that he knew very well. Rather than chasing momentum, he bought undervalued stocks with generous dividends. [...] Most were small and midsize companies in dull or out of favor industries, such as mining and autos in the midst of the Great Depression. Despite his rough start [by timing markets], Keynes beat the market averages by 6 percent a year over more than two decades.
Keynes used similar terms and concepts as Graham and Dodd (e.g. an emphasis on the intrinsic value of equities). A review of his archives at King's College found no evidence of contact between Keynes and his American counterparts and Keynes is believed to have developed his investing theories independently. Keynes did not teach his concepts in classes or seminars, unlike Graham and Dodd, and details of his investing theories became widely known only decades after his death in 1946. There was "considerable overlap" of Keynes's ideas with those of Graham and Dodd, though their ideas were not entirely congruent.
Benjamin Graham
Value investing was established by Benjamin Graham and David Dodd. Both were professors at Columbia Business School. In Graham's book The Intelligent Investor, he advocated the concept of margin of safety. The concept was introduced in the book Security Analysis which he co-authored with David Dodd in 1934 and calls for an approach to investing that is focused on purchasing equities at prices less than their intrinsic values. In terms of picking or screening stocks, he recommended purchasing firms which have steady profits, are trading at low prices to book value, have low price-to-earnings (P/E) ratios and which have relatively low debt.
Further evolution
However, the concept of value (as well as "book value") has evolved significantly since the 1970s. Book value is most useful in industries where most assets are tangible. Intangible assets such as patents, brands, or goodwill are difficult to quantify, and may not survive the break-up of a company. When an industry is going through fast technological advancements, the value of its assets is not easily estimated. Sometimes, the production power of an asset can be significantly reduced due to competitive disruptive innovation and therefore its value can suffer permanent impairment. One good example of decreasing asset value is a personal computer. An example of where book value does not mean much is the service and retail sectors. One modern model of calculating value is the discounted cash flow model (DCF), where the value of an asset is the sum of its future cash flows, discounted back to the present.
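A minimal Python sketch of such a DCF calculation (the cash flows and discount rate are purely illustrative):

```python
# Discounted cash flow: the value of an asset is the sum of its projected
# future cash flows, each discounted back to the present.

def discounted_cash_flow(cash_flows, discount_rate):
    """Present value of a series of year-end cash flows."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Five years of hypothetical cash flows discounted at 10% per year.
print(round(discounted_cash_flow([100, 110, 120, 130, 140], 0.10), 2))
```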
Quantitative value investing
Quantitative value investing, also known as Systematic value investing, is a form of value investing that analyzes fundamental data such as financial statement line items, economic data, and unstructured data in a rigorous and systematic manner. Practitioners often employ quantitative applications such as statistical / empirical finance or mathematical finance, behavioral finance, natural language processing, and machine learning.
Quantitative investment analysis can trace its origin back to Security Analysis by Benjamin Graham and David Dodd in which the authors advocated detailed analysis of objective financial metrics of specific stocks. Quantitative investing replaces much of the ad-hoc financial analysis used by human fundamental investment analysts with a systematic framework designed and programmed by a person but largely executed by a computer in order to avoid cognitive biases that lead to inferior investment decisions. In an interview, Benjamin Graham admitted that even by that time ad-hoc detailed financial analysis of single stocks was unlikely to produce good risk-adjusted returns. Instead, he advocated a rules-based approach focused on constructing a coherent portfolio based on a relatively limited set of objective fundamental financial factors.
Joel Greenblatt's magic formula investing is a simple illustration of a quantitative value investing strategy. Many modern practitioners employ more sophisticated forms of quantitative analysis and evaluate numerous financial metrics, as opposed to just two as in the "magic formula". James O'Shaughnessy's What Works on Wall Street is a classic guide to quantitative value investing, containing backtesting performance data of various quantitative value strategies and value factors based on Compustat data from January 1927 until December 2009.
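A minimal Python sketch of a two-factor screen in the spirit of the magic formula described above (the tickers and financial figures are hypothetical):

```python
# Rank stocks on earnings yield and on return on capital, then order them
# by the sum of the two ranks (lower combined rank = more attractive).

stocks = {
    # ticker: (earnings_yield, return_on_capital) -- hypothetical values
    "AAA": (0.12, 0.35),
    "BBB": (0.06, 0.50),
    "CCC": (0.15, 0.15),
    "DDD": (0.09, 0.25),
}

def rank(values):
    """Map each ticker to its rank, 1 = highest value."""
    ordered = sorted(values, key=values.get, reverse=True)
    return {ticker: position + 1 for position, ticker in enumerate(ordered)}

ey_rank = rank({t: v[0] for t, v in stocks.items()})
roc_rank = rank({t: v[1] for t, v in stocks.items()})
screen = sorted(stocks, key=lambda t: ey_rank[t] + roc_rank[t])

print(screen)  # tickers ordered from best to worst combined rank
```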
Value investing performance
Performance of value strategies
Value investing has proven to be a successful investment strategy. There are several ways to evaluate the success. One way is to examine the performance of simple value strategies, such as buying low PE ratio stocks, low price-to-cash-flow ratio stocks, or low price-to-book ratio stocks. Numerous academics have published studies investigating the effects of buying value stocks. These studies have consistently found that value stocks outperform growth stocks and the market as a whole, not necessarily over short periods but when tracked over long periods, even going back to the 19th century. A review of 26 years of data (1990 to 2015) from US markets found that the over-performance of value investing was more pronounced in stocks for smaller and mid-size companies than for larger companies and recommended a "value tilt" with greater emphasis on value than growth investing in personal portfolios.
Performance of value investors
Since examining only the performance of the best known value investors introduces a selection bias (as typically investors might not become well known unless they are successful) a way to investigate the performance of a group of value investors was suggested by Warren Buffett in his 1984 speech The Superinvestors of Graham-and-Doddsville. In this speech, Buffett examined the performance of those investors who worked at Graham-Newman Corporation and were influenced by Benjamin Graham. Buffett's conclusion was that value investing is on average successful in the long run. This was also the conclusion of the academic research on simple value investing strategies.
From 1965 to 1990 there was little published research on value investing, and few articles on it appeared in leading journals.
Well-known value investors
The Graham-and-Dodd Disciples
Ben Graham's students
Benjamin Graham is regarded by many to be the father of value investing. Along with David Dodd, he wrote Security Analysis, first published in 1934. The most lasting contribution of this book to the field of security analysis was to emphasize the quantifiable aspects of security analysis (such as the evaluations of earnings and book value) while minimizing the importance of more qualitative factors such as the quality of a company's management. Graham later wrote The Intelligent Investor, a book that brought value investing to individual investors. Aside from Buffett, many of Graham's other students, such as William J. Ruane, Irving Kahn, Walter Schloss, and Charles Brandes went on to become successful investors in their own right.
Irving Kahn was one of Graham's teaching assistants at Columbia University in the 1930s. He was a close friend and confidant of Graham's for decades and made research contributions to Graham's texts Security Analysis, Storage and Stability, World Commodities and World Currencies and The Intelligent Investor. Kahn was a partner at various finance firms until 1978 when he and his sons, Thomas Graham Kahn and Alan Kahn, started the value investing firm, Kahn Brothers & Company. Irving Kahn remained chairman of the firm until his death at age 109.
Walter Schloss was another Graham-and-Dodd disciple. Schloss never had a formal education. When he was 18, he started working as a runner on Wall Street. He then attended investment courses taught by Ben Graham at the New York Stock Exchange Institute, and eventually worked for Graham in the Graham-Newman Partnership. In 1955, he left Graham’s company and set up his own investment firm, which he ran for nearly 50 years. Walter Schloss was one of the investors Warren Buffett profiled in his famous Superinvestors of Graham-and-Doddsville article.
Christopher H. Browne of Tweedy, Browne was well known for value investing. According to The Wall Street Journal, Tweedy, Browne was the favorite brokerage firm of Benjamin Graham during his lifetime; also, the Tweedy, Browne Value Fund and Global Value Fund have both beat market averages since their inception in 1993. In 2006, Christopher H. Browne wrote The Little Book of Value Investing in order to teach ordinary investors how to value invest.
Peter Cundill was a well-known Canadian value investor who followed the Graham teachings. His flagship Cundill Value Fund allowed Canadian investors access to fund management according to the strict principles of Graham and Dodd. Warren Buffett had indicated that Cundill had the credentials he's looking for in a chief investment officer.
Warren Buffett and Charlie Munger
Graham's most famous student, however, is Warren Buffett, who ran successful investing partnerships before closing them in 1969 to focus on running Berkshire Hathaway. Buffett was a strong advocate of Graham's approach and strongly credits his success back to his teachings. Another disciple, Charlie Munger, who joined Buffett at Berkshire Hathaway in the 1970s and has since worked as Vice Chairman of the company, followed Graham's basic approach of buying assets below intrinsic value, but focused on companies with robust qualitative characteristics, even if they weren't statistically cheap. This approach by Munger gradually influenced Buffett by reducing his emphasis on quantitatively cheap assets, and instead encouraged him to look for long-term sustainable competitive advantages in companies, even if they weren't quantitatively cheap relative to intrinsic value. Buffett is often quoted saying, "It's better to buy a great company at a fair price, than a fair company at a great price."
Buffett is a particularly skilled investor because of his temperament. He has a famous quote stating "be greedy when others are fearful, and fearful when others are greedy." In essence, he updated the teachings of Graham to fit a style of investing that prioritizes fundamentally good businesses over those that are deemed cheap by statistical measures. He is further known for a talk he gave titled the Super Investors of Graham and Doddsville. The talk was an outward appreciation for the fundamentals that Benjamin Graham instilled in him.
Michael Burry
Dr. Michael Burry, the founder of Scion Capital, is another strong proponent of value investing. Burry is famous for being the first investor to recognize and profit from the impending subprime mortgage crisis, as portrayed by Christian Bale in the movie The Big Short. Burry has said on multiple occasions that his investment style is built upon Benjamin Graham and David Dodd’s 1934 book Security Analysis: "All my stock picking is 100% based on the concept of a margin of safety."
Other Columbia Business School value investors
Columbia Business School has played a significant role in shaping the principles of the value investor, with professors and students making their mark on history and on each other. Ben Graham's book, The Intelligent Investor, was Warren Buffett's bible and he referred to it as "the greatest book on investing ever written."
A young Warren Buffett studied under Ben Graham, took his course and worked for his small investment firm, Graham Newman, from 1954 to 1956. Twenty years after Ben Graham, Roger Murray arrived and taught value investing to a young student named Mario Gabelli.
About a decade or so later, Bruce Greenwald arrived and produced his own protégés, including Paul Sonkin—just as Ben Graham had Buffett as a protégé, and Roger Murray had Gabelli.
Mutual Series and Franklin Templeton disciples
Mutual Series has a well-known reputation of producing top value managers and analysts in this modern era. This tradition stems from two individuals: Max Heine, founder of the well regarded value investment firm Mutual Shares fund in 1949 and his protégé legendary value investor Michael F. Price. Mutual Series was sold to Franklin Templeton Investments in 1996. The disciples of Heine and Price quietly practice value investing at some of the most successful investment firms in the country. Franklin Templeton Investments takes its name from Sir John Templeton, another contrarian value oriented investor.
Seth Klarman, a Mutual Series alum, is the founder and president of The Baupost Group, a Boston-based private investment partnership, and author of Margin of Safety: Risk-Averse Value Investing Strategies for the Thoughtful Investor, which has since become a value investing classic. Now out of print, Margin of Safety has sold on Amazon for $1,200 and on eBay for $2,000.
Other value investors
Laurence Tisch, who led Loews Corporation with his brother, Robert Tisch, for more than half a century, also embraced value investing. Shortly after his death in 2003 at age 80, Fortune wrote, "Larry Tisch was the ultimate value investor. He was a brilliant contrarian: He saw value where other investors didn't -- and he was usually right." By 2012, Loews Corporation, which continues to follow the principles of value investing, had revenues of $14.6 billion and assets of more than $75 billion.
Michael Larson is the Chief Investment Officer of Cascade Investment, which is the investment vehicle for the Bill & Melinda Gates Foundation and the Gates personal fortune. Cascade is a diversified investment shop established in 1994 by Gates and Larson. Larson graduated from Claremont McKenna College in 1980 and the Booth School of Business at the University of Chicago in 1981. Larson is a well known value investor but his specific investment and diversification strategies are not known. Larson has consistently outperformed the market since the establishment of Cascade and has rivaled or outperformed Berkshire Hathaway's returns as well as other funds based on the value investing strategy.
Martin J. Whitman is another well-regarded value investor. His approach is called "safe and cheap", previously referred to as the financial-integrity approach. Whitman focuses on acquiring common shares of companies with extremely strong financial positions at prices reflecting a meaningful discount to the estimated NAV of the company concerned. Whitman believes it is ill-advised for investors to pay much attention to the trend of macro factors (such as employment, interest-rate movements, or GDP) because they are not as important and because attempts to predict their movement are almost always futile. Whitman's letters to shareholders of his Third Avenue Value Fund (TAVF) are considered valuable resources "for investors to pirate good ideas" by Joel Greenblatt in his book on special-situation investing, You Can Be a Stock Market Genius.
Joel Greenblatt achieved annual returns at the hedge fund Gotham Capital of over 50% per year for 10 years from 1985 to 1995 before closing the fund and returning his investors' money. He is known for investing in special situations such as spin-offs, mergers, and divestitures.
Charles de Vaulx and Jean-Marie Eveillard are well-known global value managers. For a time, the two were paired at First Eagle Funds, compiling an enviable track record of risk-adjusted outperformance. For example, Morningstar designated them the 2001 "International Stock Manager of the Year", and de Vaulx earned second place from Morningstar for 2006. Eveillard is known for his Bloomberg appearances in which he insists that securities investors never use margin or leverage. The point is that margin should be considered anathema to value investing, since a negative price move could prematurely force a sale. In contrast, a value investor must be able and willing to wait patiently for the rest of the market to recognize and correct whatever pricing issue created the momentary undervaluation. Eveillard correctly labels the use of margin or leverage as speculation, the opposite of value investing.
Other notable value investors include: Mason Hawkins, Thomas Forester, Whitney Tilson, Mohnish Pabrai, Li Lu, Guy Spier and Tom Gayner who manages the investment portfolio of Markel Insurance. San Francisco investing firm Dodge & Cox, founded in 1931 and with one of the oldest US mutual funds still in existence as of 2019, emphasizes value investing.
Criticism
Value stocks do not always beat growth stocks, as demonstrated in the late 1990s. Moreover, when value stocks perform well, it may not mean that the market is inefficient, though it may imply that value stocks are simply riskier and thus require greater returns. Furthermore, Foye and Mramor (2016) find that country-specific factors have a strong influence on measures of value (such as the book-to-market ratio). This leads them to conclude that the reasons why value stocks outperform are country-specific.
Also, one of the biggest criticisms of price-centric value investing is that an emphasis on low prices (and recently depressed prices) regularly misleads retail investors, because fundamentally low (and recently depressed) prices often reflect a genuine difference (or change) in a company's relative financial health. To that end, Warren Buffett has regularly emphasized that "it's far better to buy a wonderful company at a fair price, than to buy a fair company at a wonderful price."
In 2000, Stanford accounting professor Joseph Piotroski developed the F-score, which discriminates higher potential members within a class of value candidates. The F-score aims to discover additional value from signals in a firm's series of annual financial statements, after initial screening of static measures like book-to-market value. The F-score formula inputs financial statements and awards points for meeting predetermined criteria. Piotroski retrospectively analyzed a class of high book-to-market stocks in the period 1976–1996, and demonstrated that high F-score selections increased returns by 7.5% annually versus the class as a whole. The American Association of Individual Investors examined 56 screening methods in a retrospective analysis of the financial crisis of 2008, and found that only F-score produced positive results.
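As a rough illustration of how such a points-based screen might be implemented, the sketch below awards one point per criterion, in the spirit of the nine Piotroski signals; the dictionary field names and the exact criteria chosen here are assumptions for illustration, not the published formula or any library's actual API.

```python
# Illustrative sketch of a Piotroski-style F-score. The field names and the
# exact signal definitions below are simplifying assumptions, not the
# published formula.

def f_score(cur, prev):
    """Award one point per criterion met, comparing the current and prior
    annual financial statements (both passed in as plain dicts)."""
    points = 0
    points += cur["net_income"] > 0                                # positive earnings
    points += cur["operating_cash_flow"] > 0                       # positive cash flow
    points += cur["operating_cash_flow"] > cur["net_income"]       # cash flow exceeds earnings
    points += (cur["net_income"] / cur["total_assets"]
               > prev["net_income"] / prev["total_assets"])        # improving return on assets
    points += (cur["long_term_debt"] / cur["total_assets"]
               < prev["long_term_debt"] / prev["total_assets"])    # falling leverage
    points += cur["current_ratio"] > prev["current_ratio"]         # improving liquidity
    points += cur["gross_margin"] > prev["gross_margin"]           # improving margin
    points += cur["asset_turnover"] > prev["asset_turnover"]       # improving efficiency
    points += cur["shares_outstanding"] <= prev["shares_outstanding"]  # no dilution
    return points  # 0 (weakest) to 9 (strongest)
```

A screen of this kind is applied after an initial static filter (such as high book-to-market), with higher scores taken as a signal of improving fundamentals.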
Over-simplification of value
The term "value investing" causes confusion because it suggests that it is a distinct strategy, as opposed to something that all investors (including growth investors) should do. In a 1992 letter to shareholders, Warren Buffett said, "We think the very term 'value investing' is redundant". In other words, there is no such thing as "non-value investing" because putting your money into assets that you believe are overvalued would be better described as speculation, conspicuous consumption, etc., but not investing. Unfortunately, the term still exists, and therefore the quest for a distinct "value investing" strategy leads to over-simplification, both in practice and in theory.
Firstly, various naive "value investing" schemes, promoted as simple, are grossly inaccurate because they completely ignore the value of growth, or even of earnings altogether. For example, many investors look only at dividend yield. Thus they would prefer a 5% dividend yield at a declining company over a modestly higher-priced company that earns twice as much, reinvests half of earnings to achieve 20% growth, pays out the rest in the form of buybacks (which is more tax efficient), and has huge cash reserves. These "dividend investors" tend to gravitate toward older companies with huge payrolls that are already highly indebted and behind technologically, and that can least afford to deteriorate further. By consistently voting for increased debt, dividends, etc., these naive "value investors" (and the type of management they tend to appoint) serve to slow innovation, and to prevent the majority of the population from working at healthy businesses.
Furthermore, the method of calculating the "intrinsic value" may not be well-defined. Some analysts believe that two investors can analyze the same information and reach different conclusions regarding the intrinsic value of the company, and that there is no systematic or standard way to value a stock. In other words, a value investing strategy can only be considered successful if it delivers excess returns after allowing for the risk involved, where risk may be defined in many different ways, including market risk, multi-factor models or idiosyncratic risk.
See also
Contrarian investing
Index investing
Low-volatility investing
Quality investing
Value (economics)
Value averaging
Value premium
References
Further reading
The Theory of Investment Value (1938), by John Burr Williams.
The Intelligent Investor (1949), by Benjamin Graham.
You Can Be a Stock Market Genius (1997), by Joel Greenblatt.
Contrarian Investment Strategies: The Next Generation (1998), by David Dreman.
The Essays of Warren Buffett (2001), edited by Lawrence A. Cunningham.
The Little Book That Beats the Market (2006), by Joel Greenblatt.
The Little Book of Value Investing (2006), by Chris Browne.
The Rediscovered Benjamin Graham: Selected Writings of the Wall Street Legend, by Janet Lowe. John Wiley & Sons.
Benjamin Graham on Value Investing, by Janet Lowe. Dearborn.
Value Investing: From Graham to Buffett and Beyond (2004), by Bruce C. N. Greenwald, Judd Kahn, Paul D. Sonkin, and Michael van Biema.
Modern Security Analysis: Understand Wall Street Fundamentals (2013), by Fernando Diz and Martin J. Whitman.
The Most Important Thing Illuminated (2013), by Howard Marks
"Stocks and Exchange - the only Book you need" (2021), by Ladis Konecny, ISBN 9783848220656
Business terms
Finance theories
Financial risk
Investment
Market trends
Mathematical finance
Personal finance
Securities (finance)
Stock market
Valuation (finance) | Value investing | [
"Mathematics"
] | 4,986 | [
"Applied mathematics",
"Mathematical finance"
] |
1,011,270 | https://en.wikipedia.org/wiki/Bourbaki%E2%80%93Witt%20theorem | In mathematics, the Bourbaki–Witt theorem in order theory, named after Nicolas Bourbaki and Ernst Witt, is a basic fixed-point theorem for partially ordered sets. It states that if X is a non-empty chain complete poset, and f : X → X is a function such that f(x) ≥ x for all x in X, then f has a fixed point. Such a function f is called inflationary or progressive.
Special case of a finite poset
If the poset X is finite then the statement of the theorem has a clear interpretation that leads to the proof. The sequence of successive iterates, x0, x1 = f(x0), x2 = f(x1), …, where x0 is any element of X, is monotone increasing. By the finiteness of X, it stabilizes: xn = x∞ for all n sufficiently large. It follows that x∞ is a fixed point of f.
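The finite case can be illustrated with a minimal sketch that simply iterates f until the value stops changing; the example poset (divisibility on {1, …, 12}) and the particular inflationary map are hypothetical choices made only for the demonstration.

```python
# Minimal sketch of the finite-poset case: starting from any x0, iterate an
# inflationary map f until the sequence stabilizes; the stabilized value is a
# fixed point. The example poset ({1,...,12} ordered by divisibility) and the
# map f below are illustrative assumptions, not part of the theorem.

def iterate_to_fixed_point(f, x0):
    x = x0
    while f(x) != x:    # each step strictly increases x, so on a finite
        x = f(x)        # poset the loop must terminate
    return x

# f(x) = 2x while 2x stays in {1,...,12}, otherwise f(x) = x.
# f is inflationary for the divisibility order, since x always divides 2x.
f = lambda x: 2 * x if 2 * x <= 12 else x
print(iterate_to_fixed_point(f, 3))   # prints 12, a fixed point (f(12) = 12)
```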
Proof of the theorem
Pick some y in X. Define a function K recursively on the ordinals as follows: K(0) = y and K(α + 1) = f(K(α)). If β is a limit ordinal, then by construction {K(α) : α < β} is a chain in X. Define K(β) = sup {K(α) : α < β}.
This is now an increasing function from the ordinals into X. It cannot be strictly increasing, as if it were we would have an injective function from the ordinals into a set, violating Hartogs' lemma. Therefore the function must be eventually constant, so for some α, K(α + 1) = K(α); that is, f(K(α)) = K(α). So letting x = K(α), we have our desired fixed point. Q.E.D.
Applications
The Bourbaki–Witt theorem has various important applications. One of the most common is in the proof that the axiom of choice implies Zorn's lemma. We first prove it for the case where X is chain complete and has no maximal element. Let g be a choice function on the collection of non-empty subsets of X. Define a function f : X → X by f(x) = g({y : y > x}). This is allowed as, by assumption, the set {y : y > x} is non-empty, since X has no maximal element. Then f(x) > x, so f is an inflationary function with no fixed point, contradicting the theorem.
This special case of Zorn's lemma is then used to prove the Hausdorff maximality principle, that every poset has a maximal chain, which is easily seen to be equivalent to Zorn's Lemma.
Bourbaki–Witt has other applications. In particular in computer science, it is used in the theory of computable functions.
It is also used to define recursive data types, e.g. linked lists, in domain theory.
See also
Kleene fixed-point theorem for Scott-continuous functions
Knaster–Tarski theorem for complete lattices
References
Order theory
Fixed-point theorems
Theorems in the foundations of mathematics
Articles containing proofs | Bourbaki–Witt theorem | [
"Mathematics"
] | 528 | [
"Theorems in mathematical analysis",
"Order theory",
"Foundations of mathematics",
"Mathematical logic",
"Fixed-point theorems",
"Theorems in topology",
"Mathematical problems",
"Articles containing proofs",
"Mathematical theorems",
"Theorems in the foundations of mathematics"
] |
1,011,379 | https://en.wikipedia.org/wiki/Multilevel%20security | Multilevel security or multiple levels of security (MLS) is the application of a computer system to process information with incompatible classifications (i.e., at different security levels), permit access by users with different security clearances and needs-to-know, and prevent users from obtaining access to information for which they lack authorization.
There are two contexts for the use of multilevel security. One context is to refer to a system that is adequate to protect itself from subversion and has robust mechanisms to separate information domains, that is, trustworthy. Another context is to refer to an application of a computer that will require the computer to be strong enough to protect itself from subversion, and have adequate mechanisms to separate information domains, that is, a system we must trust. This distinction is important because systems that need to be trusted are not necessarily trustworthy.
Trusted operating systems
An MLS operating environment often requires a highly trustworthy information processing system often built on an MLS operating system (OS), but not necessarily. Most MLS functionality can be supported by a system composed entirely from untrusted computers, although it requires multiple independent computers linked by hardware security-compliant channels (see section B.6.2 of the Trusted Network Interpretation, NCSC-TG-005). An example of hardware enforced MLS is asymmetric isolation. If one computer is being used in MLS mode, then that computer must use a trusted operating system. Because all information in an MLS environment is physically accessible by the OS, strong logical controls must exist to ensure that access to information is strictly controlled. Typically this involves mandatory access control that uses security labels, like the Bell–LaPadula model.
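As a hedged sketch of what such label-based mandatory access control amounts to, the following code checks the two Bell–LaPadula rules ("no read up", "no write down") over a simple linear ordering of labels; real systems use richer label lattices with compartments and categories, so this is only an illustration, not any product's actual enforcement logic.

```python
# Hedged sketch of Bell-LaPadula-style mandatory access control.
# The linear label ordering below is an illustrative assumption; real MLS
# systems use lattices of (level, compartment-set) labels.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a, b):
    """True if label a is at least as high as label b."""
    return LEVELS[a] >= LEVELS[b]

def may_read(subject_label, object_label):
    # Simple security property: a subject may only read objects it dominates
    # ("no read up").
    return dominates(subject_label, object_label)

def may_write(subject_label, object_label):
    # *-property: a subject may only write to objects at or above its own
    # level, so information cannot flow downward ("no write down").
    return dominates(object_label, subject_label)

assert may_read("SECRET", "CONFIDENTIAL")        # read down: allowed
assert not may_read("SECRET", "TOP SECRET")      # read up: denied
assert not may_write("SECRET", "UNCLASSIFIED")   # write down: denied
```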
Customers that deploy trusted operating systems typically require that the product complete a formal computer security evaluation. The evaluation is stricter for a broader security range, that is, the span between the lowest and highest classification levels the system can process. The Trusted Computer System Evaluation Criteria (TCSEC) was the first evaluation criteria developed to assess MLS in computer systems. Under those criteria there was a clear uniform mapping between the security requirements and the breadth of the MLS security range. Historically few implementations have been certified capable of MLS processing with a security range of Unclassified through Top Secret. Among them were Honeywell's SCOMP, USAF SACDIN, NSA's Blacker, and Boeing's MLS LAN, all under TCSEC, 1980s vintage and Intel 80386-based. Currently, MLS products are evaluated under the Common Criteria. In late 2008, the first operating system (more below) was certified to a high evaluated assurance level: Evaluation Assurance Level (EAL) - EAL 6+ / High Robustness, under the auspices of a U.S. government program requiring multilevel security in a high threat environment. While this assurance level has many similarities to that of the old Orange Book A1 (such as formal methods), the functional requirements focus on fundamental isolation and information flow policies rather than higher level policies such as Bell-La Padula. Because the Common Criteria decoupled TCSEC's pairing of assurance (EAL) and functionality (Protection Profile), the clear uniform mapping between security requirements and MLS security range capability documented in CSC-STD-004-85 was largely lost when the Common Criteria superseded the Rainbow Series.
Freely available operating systems with some features that support MLS include Linux with the Security-Enhanced Linux feature enabled and FreeBSD. Security evaluation was once thought to be a problem for these free MLS implementations for three reasons:
It is always very difficult to implement a kernel self-protection strategy with the precision needed for MLS trust, and these examples were not designed for, or certified against, an MLS protection profile, so they may not offer the self-protection needed to support MLS.
Aside from EAL levels, the Common Criteria lacks an inventory of appropriate high assurance protection profiles that specify the robustness needed to operate in MLS mode.
Even if (1) and (2) were met, the evaluation process is very costly and imposes special restrictions on configuration control of the evaluated software.
Notwithstanding such suppositions, Red Hat Enterprise Linux 5 was certified against LSPP, RBACPP, and CAPP at EAL4+ in June 2007. It uses Security-Enhanced Linux to implement MLS and was the first Common Criteria certification to enforce TOE security properties with Security-Enhanced Linux.
Vendor certification strategies can be misleading to laypersons. A common strategy exploits the layperson's overemphasis of EAL level with over-certification, such as certifying an EAL 3 protection profile (like CAPP) to elevated levels, like EAL 4 or EAL 5. Another is adding and certifying MLS support features (such as role-based access control protection profile (RBACPP) and labeled security protection profile (LSPP)) to a kernel that is not evaluated to an MLS-capable protection profile. Those types of features are services run on the kernel and depend on the kernel to protect them from corruption and subversion. If the kernel is not evaluated to an MLS-capable protection profile, MLS features cannot be trusted regardless of how impressive the demonstration looks. It is particularly noteworthy that CAPP is specifically not an MLS-capable profile as it specifically excludes self-protection capabilities critical for MLS.
General Dynamics offers PitBull, a trusted, MLS operating system. PitBull is currently offered only as an enhanced version of Red Hat Enterprise Linux, but earlier versions existed for Sun Microsystems Solaris, IBM AIX, and SVR4 Unix. PitBull provides a Bell LaPadula security mechanism, a Biba integrity mechanism, a privilege replacement for superuser, and many other features.
PitBull has provided the security base for General Dynamics' Trusted Network Environment (TNE) product since 2009. TNE enables multilevel information sharing and access for users in the Department of Defense and Intelligence communities operating at varying classification levels. It is also the foundation for the multilevel coalition sharing environment, the Battlefield Information Collection and Exploitation Systems Extended (BICES-X).
Sun Microsystems, now Oracle Corporation, offers Solaris Trusted Extensions as an integrated feature of the commercial OSs Solaris and OpenSolaris. In addition to the controlled access protection profile (CAPP), and role-based access control (RBAC) protection profiles, Trusted Extensions have also been certified at EAL4 to the labeled security protection profile (LSPP). The security target includes both desktop and network functionality. LSPP mandates that users are not authorized to override the labeling policies enforced by the kernel and X Window System (X11 server). The evaluation does not include a covert channel analysis. Because these certifications depend on CAPP, no Common Criteria certifications suggest this product is trustworthy for MLS.
BAE Systems offers XTS-400, a commercial system that supports MLS at what the vendor claims is "high assurance". Predecessor products (including the XTS-300) were evaluated at the TCSEC B3 level, which is MLS-capable. The XTS-400 has been evaluated under the Common Criteria at EAL5+ against the CAPP and LSPP protection profiles. CAPP and LSPP are both EAL3 protection profiles that are not inherently MLS-capable, but the security target for the Common Criteria evaluation of this product contains an enriched set of security functions that provide MLS capability.
Problem areas
Sanitization is a problem area for MLS systems. Systems that implement MLS restrictions, like those defined by Bell–LaPadula model, only allow sharing when it obviously does not violate security restrictions. Users with lower clearances can easily share their work with users holding higher clearances, but not vice versa. There is no efficient, reliable mechanism by which a Top Secret user can edit a Top Secret file, remove all Top Secret information, and then deliver it to users with Secret or lower clearances. In practice, MLS systems circumvent this problem via privileged functions that allow a trustworthy user to bypass the MLS mechanism and change a file's security classification. However, the technique is not reliable.
Covert channels pose another problem for MLS systems. For an MLS system to keep secrets perfectly, there must be no possible way for a Top Secret process to transmit signals of any kind to a Secret or lower process. This includes side effects such as changes in available memory or disk space, or changes in process timing. When a process exploits such a side effect to transmit data, it is exploiting a covert channel. It is extremely difficult to close all covert channels in a practical computing system, and it may be impossible in practice. The process of identifying all covert channels is a challenging one by itself. Most commercially available MLS systems do not attempt to close all covert channels, even though this makes it impractical to use them in high security applications.
Bypass is problematic when introduced as a means to treat a system high object as if it were MLS trusted. A common example is to extract data from a secret system high object to be sent to an unclassified destination, citing some property of the data as trusted evidence that it is 'really' unclassified (e.g. 'strict' format). A system high system cannot be trusted to preserve any trusted evidence, and the result is that an overt data path is opened with no logical way to securely mediate it. Bypass can be risky because, unlike narrow bandwidth covert channels that are difficult to exploit, bypass can present a large, easily exploitable overt leak in the system. Bypass often arises out of failure to use trusted operating environments to maintain continuous separation of security domains all the way back to their origin. When that origin lies outside the system boundary, it may not be possible to validate the trusted separation to the origin. In that case, the risk of bypass can be unavoidable if the flow truly is essential.
A common example of unavoidable bypass is a subject system that is required to accept secret IP packets from an untrusted source, encrypt the secret userdata and not the header and deposit the result to an untrusted network. The source lies outside the sphere of influence of the subject system. Although the source is untrusted (e.g. system high) it is being trusted as if it were MLS because it provides packets that have unclassified headers and secret plaintext userdata, an MLS data construct. Since the source is untrusted, it could be corrupt and place secrets in the unclassified packet header. The corrupted packet headers could be nonsense but it is impossible for the subject system to determine that with any reasonable reliability. The packet userdata is cryptographically well protected but the packet header can contain readable secrets. If the corrupted packets are passed to an untrusted network by the subject system they may not be routable but some cooperating corrupt process in the network could grab the packets and acknowledge them and the subject system may not detect the leak. This can be a large overt leak that is hard to detect. Viewing classified packets with unclassified headers as system high structures instead of the MLS structures they really are presents a very common but serious threat.
Most bypass is avoidable. Avoidable bypass often results when system architects design a system before correctly considering security, then attempt to apply security after the fact as add-on functions. In that situation, bypass appears to be the only (easy) way to make the system work. Some pseudo-secure schemes are proposed (and approved!) that examine the contents of the bypassed data in a vain attempt to establish that bypassed data contains no secrets. This is not possible without trusting something about the data such as its format, which is contrary to the assumption that the source is not trusted to preserve any characteristics of the source data. Assured "secure bypass" is a myth, just as a so-called High Assurance Guard (HAG) that transparently implements bypass. The risk these introduce has long been acknowledged; extant solutions are ultimately procedural, rather than technical. There is no way to know with certainty how much classified information is taken from our systems by exploitation of bypass.
Debate: "There is no such thing as MLS"
Some laypersons who design secure computing systems have drawn the conclusion that MLS does not exist.
An explanation could be that there is a decline in COMPUSEC experts and that the term MLS has been overloaded with two different meanings/uses: MLS as a processing environment versus MLS as a capability. The belief that MLS is non-existent is based on the belief that there are no products certified to operate in an MLS environment or mode, and that therefore MLS as a capability does not exist. One does not imply the other. Many systems operate in an environment containing data that has unequal security levels and therefore is MLS by the Computer Security Intermediate Value Theorem (CS-IVT). The consequence of this confusion runs deeper. NSA-certified MLS operating systems, databases, and networks have existed in operational mode since the 1970s, and MLS products continue to be built, marketed, and deployed.
Laypersons often conclude that to admit that a system operates in an MLS environment (environment-centric meaning of MLS) is to be backed into the perceived corner of having a problem with no MLS solution (capability-centric meaning of MLS). MLS is deceptively complex and just because simple solutions are not obvious does not justify a conclusion that they do not exist. This can lead to a crippling ignorance about COMPUSEC that manifests itself as whispers that "one cannot talk about MLS," and "There's no such thing as MLS." These MLS-denial schemes change so rapidly that they cannot be addressed. Instead, it is important to clarify the distinction between MLS-environment and MLS-capable.
MLS as a security environment or security mode: A community whose users have differing security clearances may perceive MLS as a data sharing capability: users can share information with recipients whose clearance allows receipt of that information. A system is operating in MLS Mode when it has (or could have) connectivity to a destination that is cleared to a lower security level than any of the data the MLS system contains. This is formalized in the CS-IVT. Determination of security mode of a system depends entirely on the system's security environment; the classification of data it contains, the clearance of those who can get direct or indirect access to the system or its outputs or signals, and the system's connectivity and ports to other systems. Security mode is independent of capabilities, although a system should not be operated in a mode for which it is not worthy of trust.
MLS as a capability: Developers of products or systems intended to allow MLS data sharing tend to loosely perceive it in terms of a capability to enforce data-sharing restrictions or a security policy, like mechanisms that enforce the Bell–LaPadula model. A system is MLS-capable if it can be shown to robustly implement a security policy.
The original use of the term MLS applied to the security environment, or mode. One solution to this confusion is to retain the original definition of MLS and be specific about MLS-capable when that context is used.
MILS architecture
Multiple Independent Levels of Security (MILS) is an architecture that addresses the domain separation component of MLS. Note that UCDMO (the US government lead for cross domain and multilevel systems) created a term Cross Domain Access as a category in its baseline of DoD and Intelligence Community accredited systems, and this category can be seen as essentially analogous to MILS.
Security models such as the Biba model (for integrity) and the Bell–LaPadula model (for confidentiality) allow one-way flow between certain security domains that are otherwise assumed to be isolated. MILS addresses the isolation underlying MLS without addressing the controlled interaction between the domains addressed by the above models. Trusted security-compliant channels mentioned above can link MILS domains to support more MLS functionality.
The MILS approach pursues a strategy characterized by an older term, MSL (multiple single level), that isolates each level of information within its own single-level environment (System High).
The rigid process communication and isolation offered by MILS may be more useful to ultra high reliability software applications than MLS. MILS notably does not address the hierarchical structure that is embodied by the notion of security levels. This requires the addition of specific import/export applications between domains each of which needs to be accredited appropriately. As such, MILS might be better called Multiple Independent Domains of Security (MLS emulation on MILS would require a similar set of accredited applications for the MLS applications). By declining to address out of the box interaction among levels consistent with the hierarchical relations of Bell-La Padula, MILS is (almost deceptively) simple to implement initially but needs non-trivial supplementary import/export applications to achieve the richness and flexibility expected by practical MLS applications.
Any MILS/MLS comparison should consider if the accreditation of a set of simpler export applications is more achievable than accreditation of one, more complex MLS kernel. This question depends in part on the extent of the import/export interactions that the stakeholders require. In favour of MILS is the possibility that not all the export applications will require maximal assurance.
MSL systems
There is another way of solving such problems known as multiple single-level. Each security level is isolated in a separate untrusted domain. The absence of a medium of communication between the domains assures no interaction is possible. The mechanism for this isolation is usually physical separation in separate computers. This is often used to support applications or operating systems which have no possibility of supporting MLS such as Microsoft Windows.
Applications
Infrastructure such as trusted operating systems are an important component of MLS systems, but in order to fulfill the criteria required under the definition of MLS by CNSSI 4009 (paraphrased at the start of this article), the system must provide a user interface that is capable of allowing a user to access and process content at multiple classification levels from one system. The UCDMO ran a track specifically focused on MLS at the NSA Information Assurance Symposium in 2009, in which it highlighted several accredited (in production) and emergent MLS systems. Note the use of MLS in SELinux.
There are several databases classified as MLS systems. Oracle has a product named Oracle Label Security (OLS) that implements mandatory access controls - typically by adding a 'label' column to each table in an Oracle database. OLS is being deployed at the US Army INSCOM as the foundation of an "all-source" intelligence database spanning the JWICS and SIPRNet networks. There is a project to create a labeled version of PostgreSQL, and there are also older labeled-database implementations such as Trusted Rubix. These MLS database systems provide a unified back-end system for content spanning multiple labels, but they do not resolve the challenge of having users process content at multiple security levels in one system while enforcing mandatory access controls.
There are also several MLS end-user applications. Another MLS capability currently on the UCDMO baseline is called MLChat, a chat server that runs on the XTS-400 operating system; it was created by the US Naval Research Laboratory. Given that content from users at different domains passes through the MLChat server, dirty-word scanning is employed to protect classified content, and there has been some debate about whether this is truly an MLS system or more a form of cross-domain transfer data guard. Mandatory access controls are maintained by a combination of XTS-400 and application-specific mechanisms.
Joint Cross Domain eXchange (JCDX) is another example of an MLS capability currently on the UCDMO baseline. JCDX is the only Department of Defense (DoD), Defense Intelligence Agency (DIA) accredited Multilevel Security (MLS) Command, Control, Communication, Computers and Intelligence (C4I) system that provides near real-time intelligence and warning support to theater and forward deployed tactical commanders. The JCDX architecture is comprehensively integrated with a high assurance Protection Level Four (PL4) secure operating system, utilizing data labeling to disseminate near real-time data information on force activities and potential terrorist threats on and around the world's oceans. It is installed at locations in United States and Allied partner countries where it is capable of providing data from Top Secret/SCI down to Secret-Releasable levels, all on a single platform.
MLS applications not currently part of the UCDMO baseline include several applications from BlueSpace. BlueSpace has several MLS applications, including an MLS email client, an MLS search application and an MLS C2 system. BlueSpace uses a middleware strategy to enable its applications to be platform neutral, orchestrating one user interface across multiple Windows OS instances (virtualized or remote terminal sessions). The US Naval Research Laboratory has also implemented a multilevel web application framework called MLWeb which integrates the Ruby on Rails framework with a multilevel database based on SQLite3.
Trends
Perhaps the greatest change going on in the multilevel security arena today is the convergence of MLS with virtualization. An increasing number of trusted operating systems are moving away from labeling files and processes, and are instead moving towards UNIX containers or virtual machines. Examples include zones in Solaris 10 TX, the padded cell hypervisor in systems such as Green Hills' Integrity platform, and XenClient XT from Citrix. The High Assurance Platform from NSA as implemented in General Dynamics' Trusted Virtualization Environment (TVE) is another example - it uses SELinux at its core, and can support MLS applications that span multiple domains.
See also
Bell–LaPadula model
Biba model, Biba Integrity Model
Clark–Wilson model
Discretionary access control (DAC)
Evaluation Assurance Level (EAL)
Graham-Denning model
Mandatory access control (MAC)
Multi categories security (MCS)
Multifactor authentication
Non-interference (security) model
Role-based access control (RBAC)
Security modes of operation
System high mode
Take-grant model
References
Further reading
(a.k.a. the TCSEC or "Orange Book").
(a.k.a. the TNI or "Red Book").
.
P. A. Loscocco, S. D. Smalley, P. A. Muckelbauer, R. C. Taylor, S. J. Turner, and J. F. Farrell. The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments. In Proceedings of the 21st National Information Systems Security Conference, pages 303–314, Oct. 1998. .
External links
First RTOS Integrity 178B certified to support MILS
INTEGRITY 178B product Page
PitBull Trusted Operating System
Computer security models | Multilevel security | [
"Engineering"
] | 4,668 | [
"Cybersecurity engineering",
"Computer security models"
] |
1,011,474 | https://en.wikipedia.org/wiki/Potassium%20channel | Potassium channels are the most widely distributed type of ion channel found in virtually all organisms. They form potassium-selective pores that span cell membranes. Potassium channels are found in most cell types and control a wide variety of cell functions.
Function
Potassium channels function to conduct potassium ions down their electrochemical gradient, doing so both rapidly (up to the diffusion rate of K+ ions in bulk water) and selectively (excluding, most notably, sodium despite the sub-angstrom difference in ionic radius). Biologically, these channels act to set or reset the resting potential in many cells. In excitable cells, such as neurons, the delayed counterflow of potassium ions shapes the action potential.
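As an illustrative aside (not drawn from the article itself), the potential toward which a potassium-selective conductance pulls the membrane can be estimated with the Nernst equation; the intracellular and extracellular concentrations below are typical textbook values, used here purely as assumptions.

```python
# Illustrative calculation of the potassium equilibrium potential via the
# Nernst equation, showing why a K+-selective conductance drives the membrane
# toward a negative resting potential. The concentrations are assumed
# textbook-style values, not data from the article.
import math

R = 8.314      # J/(mol*K), gas constant
T = 310.0      # K, roughly body temperature
F = 96485.0    # C/mol, Faraday constant
z = 1          # valence of K+

K_out = 5.0    # mM, extracellular potassium (assumed)
K_in = 140.0   # mM, intracellular potassium (assumed)

E_K = (R * T) / (z * F) * math.log(K_out / K_in)   # volts
print(f"E_K ~= {E_K * 1000:.0f} mV")               # roughly -89 mV
```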
By contributing to the regulation of the cardiac action potential duration in cardiac muscle, malfunction of potassium channels may cause life-threatening arrhythmias. Potassium channels may also be involved in maintaining vascular tone.
They also regulate cellular processes such as the secretion of hormones (e.g., insulin release from beta-cells in the pancreas) so their malfunction can lead to diseases (such as diabetes).
Some toxins, such as dendrotoxin, are potent because they block potassium channels.
Types
There are four major classes of potassium channels:
Calcium-activated potassium channel - open in response to the presence of calcium ions or other signalling molecules.
Inwardly rectifying potassium channel - passes current (positive charge) more easily in the inward direction (into the cell).
Tandem pore domain potassium channel - are constitutively open or possess high basal activation, such as the "resting potassium channels" or "leak channels" that set the negative membrane potential of neurons.
Voltage-gated potassium channel - are voltage-gated ion channels that open or close in response to changes in the transmembrane voltage.
The following table contains a comparison of the major classes of potassium channels with representative examples (for a complete list of channels within each class, see the respective class pages).
For more examples of pharmacological modulators of potassium channels, see potassium channel blocker and potassium channel opener.
Structure
Potassium channels have a tetrameric structure in which four identical protein subunits associate to form a fourfold symmetric (C4) complex arranged around a central ion conducting pore (i.e., a homotetramer). Alternatively four related but not identical protein subunits may associate to form heterotetrameric complexes with pseudo C4 symmetry. All potassium channel subunits have a distinctive pore-loop structure that lines the top of the pore and is responsible for potassium selective permeability.
There are over 80 mammalian genes that encode potassium channel subunits. However potassium channels found in bacteria are amongst the most studied of ion channels, in terms of their molecular structure. Using X-ray crystallography, profound insights have been gained into how potassium ions pass through these channels and why (smaller) sodium ions do not. The 2003 Nobel Prize for Chemistry was awarded to Rod MacKinnon for his pioneering work in this area.
Selectivity filter
Potassium ion channels remove the hydration shell from the ion when it enters the selectivity filter. The selectivity filter is formed by a five residue sequence, TVGYG, termed the signature sequence, within each of the four subunits. This signature sequence is within a loop between the pore helix and TM2/6, historically termed the P-loop. This signature sequence is highly conserved, with the exception that a valine residue in prokaryotic potassium channels is often substituted with an isoleucine residue in eukaryotic channels. This sequence adopts a unique main chain structure, structurally analogous to a nest protein structural motif. The four sets of electronegative carbonyl oxygen atoms are aligned toward the center of the filter pore and form a square antiprism similar to a water-solvating shell around each potassium binding site. The distance between the carbonyl oxygens and potassium ions in the binding sites of the selectivity filter is the same as between water oxygens in the first hydration shell and a potassium ion in water solution, providing an energetically-favorable route for de-solvation of the ions. Sodium ions, however, are too small to fill the space between the carbonyl oxygen atoms. Thus, it is energetically favorable for sodium ions to remain bound with water molecules in the extracellular space, rather than to pass through the potassium-selective ion pore. This width appears to be maintained by hydrogen bonding and van der Waals forces within a sheet of aromatic amino acid residues surrounding the selectivity filter. The selectivity filter opens towards the extracellular solution, exposing four carbonyl oxygens in a glycine residue (Gly79 in KcsA). The next residue toward the extracellular side of the protein is the negatively charged Asp80 (KcsA). This residue together with the five filter residues form the pore that connects the water-filled cavity in the center of the protein with the extracellular solution.
Selectivity mechanism
The mechanism of potassium channel selectivity remains under continued debate. The carbonyl oxygens are strongly electro-negative and cation-attractive. The filter can accommodate potassium ions at 4 sites usually labelled S1 to S4 starting at the extracellular side. In addition, one ion can bind in the cavity at a site called SC or one or more ions at the extracellular side at more or less well-defined sites called S0 or Sext. Several different occupancies of these sites are possible. Since the X-ray structures are averages over many molecules, it is, however, not possible to deduce the actual occupancies directly from such a structure. In general, there is some disadvantage due to electrostatic repulsion to have two neighboring sites occupied by ions. Proposals for the mechanism of selectivity have been made based on molecular dynamics simulations, toy models of ion binding, thermodynamic calculations, topological considerations, and structural differences between selective and non-selective channels.
The mechanism for ion translocation in KcsA has been studied extensively by theoretical calculations and simulation. The prediction of an ion conduction mechanism in which the two doubly occupied states (S1, S3) and (S2, S4) play an essential role has been affirmed by both techniques. Molecular dynamics (MD) simulations suggest the two extracellular states, Sext and S0, reflecting ions entering and leaving the filter, also are important actors in ion conduction.
Hydrophobic region
This region neutralizes the environment around the potassium ion so that it is not attracted to any charges; this, in turn, speeds the ion's passage through the channel.
Central cavity
A central pore, 10 Å wide, is located near the center of the transmembrane channel, where the energy barrier is highest for the traversing ion due to the hydrophobicity of the channel wall. The water-filled cavity and the polar C-terminus of the pore helices ease the energetic barrier for the ion. Repulsion by multiple preceding potassium ions is thought to aid the throughput of the ions.
The presence of the cavity can be understood intuitively as one of the channel's mechanisms for overcoming the dielectric barrier, or repulsion by the low-dielectric membrane, by keeping the K+ ion in a watery, high-dielectric environment.
Regulation
The flux of ions through the potassium channel pore is regulated by two related processes, termed gating and inactivation. Gating is the opening or closing of the channel in response to stimuli, while inactivation is the rapid cessation of current from an open potassium channel and the suppression of the channel's ability to resume conducting. While both processes serve to regulate channel conductance, each process may be mediated by a number of mechanisms.
Generally, gating is thought to be mediated by additional structural domains which sense stimuli and in turn open the channel pore. These domains include the RCK domains of BK channels, and voltage sensor domains of voltage gated K+ channels. These domains are thought to respond to the stimuli by physically opening the intracellular gate of the pore domain, thereby allowing potassium ions to traverse the membrane. Some channels have multiple regulatory domains or accessory proteins, which can act to modulate the response to stimulus. While the mechanisms continue to be debated, there are known structures of a number of these regulatory domains, including RCK domains of prokaryotic and eukaryotic channels, pH gating domain of KcsA, cyclic nucleotide gating domains, and voltage gated potassium channels.
N-type inactivation is typically the faster inactivation mechanism, and is termed the "ball and chain" model. N-type inactivation involves interaction of the N-terminus of the channel, or an associated protein, which interacts with the pore domain and occludes the ion conduction pathway like a "ball". Alternatively, C-type inactivation is thought to occur within the selectivity filter itself, where structural changes within the filter render it non-conductive. There are a number of structural models of C-type inactivated K+ channel filters, although the precise mechanism remains unclear.
Pharmacology
Blockers
Potassium channel blockers inhibit the flow of potassium ions through the channel. They either compete with potassium binding within the selectivity filter or bind outside the filter to occlude ion conduction. An example of one of these competitors is quaternary ammonium ions, which bind at the extracellular face or central cavity of the channel. For blocking from the central cavity quaternary ammonium ions are also known as open channel blockers, as binding classically requires the prior opening of the cytoplasmic gate.
Barium ions can also block potassium channel currents, by binding with high affinity within the selectivity filter. This tight binding is thought to underlie barium toxicity by inhibiting potassium channel activity in excitable cells.
Medically potassium channel blockers, such as 4-aminopyridine and 3,4-diaminopyridine, have been investigated for the treatment of conditions such as multiple sclerosis. Off target drug effects can lead to drug induced Long QT syndrome, a potentially life-threatening condition. This is most frequently due to action on the hERG potassium channel in the heart. Accordingly, all new drugs are preclinically tested for cardiac safety.
Activators
Muscarinic potassium channel
Some types of potassium channels are activated by muscarinic receptors and these are called muscarinic potassium channels (IKACh). These channels are a heterotetramer composed of two GIRK1 and two GIRK4 subunits. Examples are potassium channels in the heart, which, when activated by parasympathetic signals through M2 muscarinic receptors, cause an outward current of potassium, which slows down the heart rate.
In fine art
Roderick MacKinnon commissioned Birth of an Idea, a tall sculpture based on the KcsA potassium channel. The artwork contains a wire object representing the channel's interior with a blown glass object representing the main cavity of the channel structure.
See also
References
External links
in 3D
Ion channels
Electrophysiology
Integral membrane proteins | Potassium channel | [
"Chemistry"
] | 2,314 | [
"Neurochemistry",
"Ion channels"
] |
1,013,718 | https://en.wikipedia.org/wiki/RecQ%20helicase | RecQ helicase is a family of helicase enzymes initially found in Escherichia coli that has been shown to be important in genome maintenance. They function through catalyzing the reaction ATP + H2O → ADP + P and thus driving the unwinding of paired DNA and translocating in the 3' to 5' direction. These enzymes can also drive the reaction NTP + H2O → NDP + P to drive the unwinding of either DNA or RNA.
Function
In prokaryotes RecQ is necessary for plasmid recombination and DNA repair from UV-light, free radicals, and alkylating agents. This protein can also reverse damage from replication errors. In eukaryotes, replication does not proceed normally in the absence of RecQ proteins, which also function in aging, silencing, recombination and DNA repair.
Structure
RecQ family members share three regions of conserved protein sequence referred to as the:
N-terminal – Helicase
middle – RecQ-conserved (RecQ-Ct) and
C-terminal – Helicase-and-RNase-D C-terminal (HRDC) domains.
The removal of the N-terminal residues (Helicase and, RecQ-Ct domains) impairs both helicase and ATPase activity but has no effect on the binding ability of RecQ implying that the N-terminus functions as the catalytic end. Truncations of the C-terminus (HRDC domain) compromise the binding ability of RecQ but not the catalytic function. The importance of RecQ in cellular functions is exemplified by human diseases, which all lead to genomic instability and a predisposition to cancer.
Clinical significance
There are at least five human RecQ genes; and mutations in three human RecQ genes are implicated in heritable human diseases: WRN gene in Werner syndrome (WS), BLM gene in Bloom syndrome (BS), and RECQL4 in Rothmund–Thomson syndrome. These syndromes are characterized by premature aging, and can give rise to the diseases of cancer, type 2 diabetes, osteoporosis, and atherosclerosis, which are commonly found in old age. These diseases are associated with high incidence of chromosomal abnormalities, including chromosome breaks, complex rearrangements, deletions and translocations, site specific mutations, and in particular sister chromatid exchanges (more common in BS) that are believed to be caused by a high level of somatic recombination.
Mechanism
The proper function of RecQ helicases requires the specific interaction with topoisomerase III (Top 3). Top 3 changes the topological status of DNA by binding and cleaving single stranded DNA and passing either a single stranded or a double stranded DNA segment through the transient break and finally re-ligating the break. The interaction of RecQ helicase with topoisomerase III at the N-terminal region is involved in the suppression of spontaneous and damage induced recombination and the absence of this interaction results in a lethal or very severe phenotype. The emerging picture clearly is that RecQ helicases in concert with Top 3 are involved in maintaining genomic stability and integrity by controlling recombination events, and repairing DNA damage in the G2-phase of the cell cycle. The importance of RecQ for genomic integrity is exemplified by the diseases that arise as a consequence of mutations or malfunctions in RecQ helicases; thus it is crucial that RecQ is present and functional to ensure proper human growth and development.
WRN helicase
The Werner syndrome ATP-dependent helicase (WRN helicase) is unusual among RecQ DNA family helicases in having an additional exonuclease activity. WRN interacts with DNA-PKcs and the Ku protein complex. This observation, combined with evidence that WRN deficient cells produce extensive deletions at sites of joining of non-homologous DNA ends, suggests a role for WRN protein in the DNA repair process of non-homologous end joining (NHEJ). WRN also physically interacts with the major NHEJ factor X4L4 (XRCC4-DNA ligase 4 complex). X4L4 stimulates WRN exonuclease activity that likely facilitates DNA end processing prior to final ligation by X4L4.
WRN also appears to play a role in resolving recombination intermediate structures during homologous recombinational repair (HRR) of DNA double-strand breaks.
WRN participates in a complex with RAD51, RAD54, RAD54B and ATR proteins in carrying out the recombination step during inter-strand DNA cross-link repair.
Evidence was presented that WRN plays a direct role in the repair of methylation induced DNA damage. The process likely involves the helicase and exonuclease activities of WRN that operate together with DNA polymerase beta in long patch base excision repair.
WRN was found to have a specific role in preventing or repairing DNA damages resulting from chronic oxidative stress, particularly in slowly replicating cells. This finding suggested that WRN may be important in dealing with oxidative DNA damages that underlie normal aging (see DNA damage theory of aging).
BLM helicase
Cells from humans with Bloom syndrome are sensitive to DNA damaging agents such as UV and methyl methanesulfonate indicating deficient DNA repair capability.
The budding yeast Saccharomyces cerevisiae encodes an ortholog of the Bloom syndrome (BLM) protein that is designated Sgs1 (Small growth suppressor 1). Sgs1(BLM) is a helicase that functions in homologous recombinational repair of DNA double-strand breaks. The Sgs1(BLM) helicase appears to be a central regulator of most of the recombination events that occur during S. cerevisiae meiosis. During normal meiosis Sgs1(BLM) is responsible for directing recombination towards the alternate formation of either early non-crossovers or Holliday junction joint molecules, the latter being subsequently resolved as crossovers.
In the plant Arabidopsis thaliana, homologs of the Sgs1(BLM) helicase act as major barriers to meiotic crossover formation. These helicases are thought to displace the invading strand allowing its annealing with the other 3'overhang end of the double-strand break, leading to non-crossover recombinant formation by a process called synthesis-dependent strand annealing (SDSA) (see Wikipedia article "Genetic recombination"). It is estimated that only about 5% of double-strand breaks are repaired by crossover recombination. Sequela-Arnaud et al. suggested that crossover numbers are restricted because of the long-term costs of crossover recombination, that is, the breaking up of favorable genetic combinations of alleles built up by past natural selection.
RECQL4 helicase
In humans, individuals with Rothmund–Thomson syndrome, and carrying the RECQL4 germline mutation, have several clinical features of accelerated aging. These features include atrophic skin and pigment changes, alopecia, osteopenia, cataracts and an increased incidence of cancer. RECQL4 mutant mice also show features of accelerated aging.
RECQL4 has a crucial role in DNA end resection that is the initial step required for homologous recombination (HR)-dependent double-strand break repair. When RECQL4 is depleted, HR-mediated repair and 5' end resection are severely reduced in vivo. RECQL4 also appears to be necessary for other forms of DNA repair including non-homologous end joining, nucleotide excision repair and base excision repair. The association of deficient RECQL4 mediated DNA repair with accelerated aging is consistent with the DNA damage theory of aging.
See also
Bloom syndrome
References
Further reading
External links
RecQ Helicases , introduction at UNC's Sekelsky Lab.
BLM gene encodes a RecQ Helicase, description of the gene
EC 3.6.1
Aging-related enzymes
Helicases
Senescence
DNA repair | RecQ helicase | [
"Chemistry",
"Biology"
] | 1,746 | [
"DNA repair",
"Aging-related enzymes",
"Senescence",
"Molecular genetics",
"Cellular processes",
"Metabolism"
] |
1,013,950 | https://en.wikipedia.org/wiki/Heilbronn%20triangle%20problem | In discrete geometry and discrepancy theory, the Heilbronn triangle problem is a problem of placing points in the plane, avoiding triangles of small area. It is named after Hans Heilbronn, who conjectured that, no matter how n points are placed in a given area, the smallest triangle area will be at most inversely proportional to the square of the number of points. His conjecture was proven false, but the asymptotic growth rate of the minimum triangle area remains unknown.
Definition
The Heilbronn triangle problem concerns the placement of n points within a shape in the plane, such as the unit square or the unit disk, for a given number n. Each triple of points forms the three vertices of a triangle, and among these triangles, the problem concerns the smallest one, as measured by area. Different placements of points will have different smallest triangles, and the problem asks: how should n points be placed to maximize the area of the smallest triangle?
More formally, the shape may be assumed to be a compact set D in the plane, meaning that it stays within a bounded distance from the origin and that points are allowed to be placed on its boundary. In most work on this problem, D is additionally a convex set of nonzero area. When three of the placed points lie on a line, they are considered as forming a degenerate triangle whose area is defined to be zero, so placements that maximize the smallest triangle will not have collinear triples of points. The assumption that the shape is compact implies that there exists an optimal placement of n points, rather than only a sequence of placements approaching optimality. The number Δ_D(n) may be defined as the area of the smallest triangle in this optimal placement. An example is shown in the figure, with six points in a unit square. These six points form 20 different triangles, four of which are shaded in the figure. Six of these 20 triangles, including two of the shaded shapes, have area 1/8; the remaining 14 triangles have larger areas. This is the optimal placement of six points in a unit square: all other placements form at least one triangle with area 1/8 or smaller. Therefore, Δ(6) = 1/8.
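To make the quantity being optimized concrete, the following short sketch (not from the source) evaluates a candidate placement by computing the area of the smallest of its C(n, 3) triangles; the random placement used as input is just a stand-in, since the optimal six-point coordinates are not listed here.

```python
# Small sketch: given a candidate placement of n points in the unit square,
# compute the area of the smallest of the C(n,3) triangles, i.e. the quantity
# the Heilbronn problem asks to maximize over placements.
from itertools import combinations
import random

def min_triangle_area(points):
    def area(p, q, r):
        # half the absolute cross product of the edge vectors
        return abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2.0
    return min(area(p, q, r) for p, q, r in combinations(points, 3))

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(6)]
print(min_triangle_area(pts))   # typically far below the optimal 1/8 for n = 6
```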
Although researchers have studied the value of this quantity for specific shapes and specific small numbers of points, Heilbronn was concerned instead about its asymptotic behavior: if the shape is held fixed, but the number of points varies, how does the area of the smallest triangle vary with the number of points? That is, Heilbronn's question concerns the growth rate of the optimal smallest-triangle area as a function of the number of points. For any two shapes, the corresponding values differ only by a constant factor, as any placement of points within one shape can be scaled by an affine transformation to fit the other, changing the minimum triangle area only by a constant. Therefore, in bounds on the growth rate that omit the constant of proportionality of that growth, the choice of shape is irrelevant and may be left unspecified.
Heilbronn's conjecture and its disproof
Heilbronn conjectured prior to 1951 that the minimum triangle area always shrinks rapidly as a function of the number of points $n$, more specifically, inversely proportional to the square of $n$. In terms of big O notation, this can be expressed as the bound that the optimal smallest-triangle area is $O\!\left(\tfrac{1}{n^2}\right)$.
In the other direction, Paul Erdős found examples of point sets with minimum triangle area proportional to $1/n^2$, demonstrating that, if true, Heilbronn's conjectured bound could not be strengthened. Erdős formulated the no-three-in-line problem, on large sets of grid points with no three in a line, to describe these examples. As Erdős observed, when $n$ is a prime number, the set of $n$ points $(i, i^2 \bmod n)$ on an $n \times n$ integer grid (for $0 \le i < n$) has no three collinear points, and therefore by Pick's formula each of the triangles they form has area at least $1/2$. When these grid points are scaled to fit within a unit square, their smallest triangle area is proportional to $1/n^2$, matching Heilbronn's conjectured upper bound. If $n$ is not prime, then a similar construction using a prime number close to $n$ achieves the same asymptotic lower bound.
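A minimal sketch of the grid construction just described, assuming the standard choice of points $(i, i^2 \bmod p)$ for a prime $p$, scaled into the unit square; the brute-force area check is repeated here so the snippet runs on its own:

```python
from itertools import combinations

def erdos_points(p):
    """Points (i, i^2 mod p) on a p x p grid, scaled into the unit square."""
    return [(i / p, (i * i % p) / p) for i in range(p)]

def smallest_triangle_area(points):
    best = float("inf")
    for (x1, y1), (x2, y2), (x3, y3) in combinations(points, 3):
        area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2
        best = min(best, area)
    return best

for p in (7, 11, 13, 17):
    a = smallest_triangle_area(erdos_points(p))
    # The rescaled smallest area stays bounded below by 1/(2 p^2),
    # so a * p * p should never drop below 0.5.
    print(p, a, a * p * p)
```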
Komlós, Pintz, and Szemerédi eventually disproved Heilbronn's conjecture, by using the probabilistic method to find sets of points whose smallest triangle area is larger than the ones found by Erdős. Their construction involves the following steps:
Randomly place points in the unit square, starting with more points than are eventually needed.
Remove all pairs of points that are unexpectedly close together.
Prove that there are few remaining low-area triangles and therefore only a sublinear number of cycles formed by two, three, or four low-area triangles. Remove all points belonging to these cycles.
Apply a triangle removal lemma for 3-uniform hypergraphs of high girth to show that, with high probability, the remaining points include a subset of points that do not form any small-area triangles.
The area resulting from their construction grows asymptotically as $\frac{\log n}{n^2}$.
The proof can be derandomized, leading to a polynomial-time algorithm for constructing placements with this triangle area.
Upper bounds
Every set of points in the unit square forms a triangle of area at most inversely proportional to the number of points. One way to see this is to triangulate the convex hull of the given point set and choose the smallest of the triangles in the triangulation. Another is to sort the points by their x-coordinates and to choose the three consecutive points in this ordering whose x-coordinates are the closest together. In the first paper published on the Heilbronn triangle problem, in 1951, Klaus Roth proved a stronger upper bound, of the form $O\!\left(\frac{1}{n\sqrt{\log\log n}}\right)$.
The best bound known to date is of the form $\frac{c^{\sqrt{\log n}}}{n^{8/7}}$ for some constant $c$, proven by Komlós, Pintz, and Szemerédi.
A new upper bound equal to $n^{-8/7-1/2000}$ was proven by Cohen, Pohoata, and Zakharov in 2023.
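The elementary argument above, choosing the three consecutive points in x-sorted order whose x-coordinates are closest together, is easy to check numerically. A minimal sketch, using a random placement purely for illustration:

```python
import random

def consecutive_triple_area(points):
    """Smallest triangle area among consecutive triples in x-sorted order.

    For n points in the unit square this is always O(1/n), which gives the
    trivial upper bound discussed above.
    """
    pts = sorted(points)
    best = float("inf")
    for (x1, y1), (x2, y2), (x3, y3) in zip(pts, pts[1:], pts[2:]):
        area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2
        best = min(best, area)
    return best

n = 200
points = [(random.random(), random.random()) for _ in range(n)]
print(consecutive_triple_area(points), "vs 1/n =", 1 / n)
```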
Specific shapes and numbers
Goldberg has investigated the optimal arrangements of $n$ points in a square, for $n$ up to 16. Goldberg's constructions for up to six points lie on the boundary of the square, and are placed to form an affine transformation of the vertices of a regular polygon. For larger values of $n$, later work improved Goldberg's bounds, and for these values the solutions include points interior to the square. These constructions have been proven optimal for up to seven points. The proof used a computer search to subdivide the configuration space of possible arrangements of the points into 226 different subproblems, and used nonlinear programming techniques to show that in 225 of those cases, the best arrangement was not as good as the known bound. In the remaining case, including the eventual optimal solution, its optimality was proven using symbolic computation techniques.
The following are the best known solutions for 7–12 points in a unit square, found through simulated annealing; the arrangement for seven points is known to be optimal.
Instead of looking for optimal placements for a given shape, one may look for an optimal shape for a given number of points. Among convex shapes with area one, the regular hexagon is the one that maximizes the smallest-triangle area for six points, with the six points optimally placed at the hexagon vertices. The analogous question, of which convex shapes of unit area maximize this quantity, can be asked for other numbers of points as well.
Variations
There have been many variations of this problem
including the case of a uniformly random set of points, for which arguments based on either Kolmogorov complexity or Poisson approximation show that the expected value of the minimum area is inversely proportional to the cube of the number of points. Variations involving the volume of higher-dimensional simplices have also been studied.
Rather than considering simplices, another higher-dimensional version adds another parameter $k$, and asks for placements of $n$ points in the unit hypercube that maximize the minimum volume of the convex hull of any subset of $k$ points. For $k = d + 1$ these subsets form simplices, but for larger values of $k$, relative to the dimension $d$, they can form more complicated shapes. When $k$ is sufficiently large relative to $n$, randomly placed point sets have minimum convex hull volume proportional to $k/n$. No better bound is possible; any placement has $k$ points with convex hull volume proportional to $k/n$, obtained by choosing some $k$ consecutive points in coordinate order. This result has applications in range searching data structures.
See also
Danzer set, a set of points that avoids empty triangles of large area
Notes
References
External links
Erich's Packing Center, by Erich Friedman, including the best known solutions to the Heilbronn problem for small values of $n$, for squares, circles, equilateral triangles, and convex regions of variable shape but fixed area
Discrete geometry
Triangle problems
Area
Discrepancy theory | Heilbronn triangle problem | [
"Physics",
"Mathematics"
] | 1,634 | [
"Scalar physical quantities",
"Geometry problems",
"Discrete mathematics",
"Physical quantities",
"Discrete geometry",
"Quantity",
"Size",
"Combinatorics",
"Discrepancy theory",
"Wikipedia categories named after physical quantities",
"Mathematical problems",
"Area",
"Triangle problems"
] |
1,014,414 | https://en.wikipedia.org/wiki/Antiporter | An antiporter (also called exchanger or counter-transporter) is an integral membrane protein that uses secondary active transport to move two or more molecules in opposite directions across a phospholipid membrane. It is a type of cotransporter, which means that uses the energetically favorable movement of one molecule down its electrochemical gradient to power the energetically unfavorable movement of another molecule up its electrochemical gradient. This is in contrast to symporters, which are another type of cotransporter that moves two or more ions in the same direction, and primary active transport, which is directly powered by ATP.
Transport may involve one or more of each type of solute. For example, the Na+/Ca2+ exchanger, found in the plasma membrane of many cells, moves three sodium ions in one direction, and one calcium ion in the other. As with sodium in this example, antiporters rely on an established gradient that makes entry of one ion energetically favorable to force the unfavorable movement of a second molecule in the opposite direction. Through their diverse functions, antiporters are involved in various important physiological processes, such as regulation of the strength of cardiac muscle contraction, transport of carbon dioxide by erythrocytes, regulation of cytosolic pH, and accumulation of sucrose in plant vacuoles.
Background
Cotransporters are found in all organisms and fall under the broader category of transport proteins, a diverse group of transmembrane proteins that includes uniporters, symporters, and antiporters. Each of them are responsible for providing a means of movement for water-soluble molecules that otherwise would not be able to pass through lipid-based plasma membrane. The simplest of these are the uniporters, which facilitate the movement of one type of molecule in the direction that follows its concentration gradient. In mammals, they are most commonly responsible for bringing glucose and amino acids into cells.
Symporters and antiporters are more complex because they move more than one ion and the movement of one of those ions is in an energetically unfavorable direction. As multiple molecules are involved, multiple binding processes must occur as the transporter undergoes a cycle of conformational changes to move them from one side of the membrane to the other. The mechanism used by these transporters limits their functioning to moving only a few molecules at a time. As a result, symporters and antiporters are characterized by a slower transport speed, moving between 10² and 10⁴ molecules per second. Compare this to ion channels, which provide a means for facilitated diffusion to occur and allow between 10⁷ and 10⁸ ions to pass through the plasma membrane per second.
Though ATP-powered pumps also move molecules in an energetically unfavorable direction and undergo conformational changes to do so, they fall under a different category of membrane proteins because they couple the energy derived from ATP hydrolysis to transport their respective ions. These ion pumps are very selective, consisting of a double gating system where at least one of the gates is always shut. The ion is allowed to enter from one side of the membrane while one of the gates is open, after which it will shut. Only then will the second gate open to allow the ion to leave on the membrane's opposite side. The time between the alternating gate openings is referred to as the occluded state, where the ions are bound and both gates are shut. These gating reactions limit the speed of these pumps, causing them to function even more slowly than cotransporters, moving between 10⁰ and 10³ ions per second.
Structure and function
To function in active transport, a membrane protein must meet certain requirements. The first of these is that the interior of the protein must contain a cavity that is able to contain its corresponding molecule or ion. Next, the protein must be able to assume at least two different conformations, one with its cavity open to the extracellular space and the other with its cavity open to the cytosol. This is crucial for the movement of molecules from one side of the membrane to the other. Finally, the cavity of the protein must contain binding sites for its ligands, and these binding sites must have a different affinity for the ligand in each of the protein's conformations. Without this, the ligand will not be able to bind to the transporter on one side of the plasma membrane and be released from it on the other side. As transporters, antiporters have all of these features.
Because antiporters are highly diverse, their structure can vary widely depending upon the type of molecules being transported and their location in the cell. However, there are some common features that all antiporters share. One of these is multiple transmembrane regions that span the lipid bilayer of the plasma membrane and form a channel through which hydrophilic molecules can pass. These transmembrane regions are typically structured from alpha helices and are connected by loops in both the extracellular space and cytosol. These loops are what contain the binding sites for the molecules associated with the antiporter.
These features of antiporters allow them to carry out their function in maintaining cellular homeostasis. They provide a space where a hydrophilic molecule can pass through the hydrophobic lipid bilayer, allowing them to bypass the hydrophobic interactions of the plasma membrane. This enables the efficient movement of molecules needed for the environment of the cell, such as in the acidification of organelles. The varying affinity of the antiporter for each ion or molecule on either side of the plasma membrane allows it to bind to and release its ligands on the appropriate side of the membrane according to the electrochemical gradient of the ion being harnessed for its energetically favorable concentration.
Mechanism
The mechanism of antiporter transport involves several key steps and a series of conformational changes that are dictated by the structural element described above:
The substrate binds to its specific binding site on the extracellular side of the plasma membrane, forming a temporary substrate-bound open form of the antiporter.
This becomes an occluded, substrate-bound state that is still facing the extracellular space.
The antiporter undergoes a conformational change to become an occluded, substrate-bound protein that is now facing the cytosol. As it does so, it passes through a temporary fully-occluded intermediate stage.
The substrate is released from the antiporter as it takes on an open, inward-facing conformation.
The antiporter can now bind to its second substrate and transport it in the opposite direction by taking on its transient substrate-bound open state.
This is followed by an occluded, substrate-bound state that is still facing the cytosol, a conformation change with a temporary fully-occluded intermediate stage, and a return to the antiporter's open, outward-facing conformation.
The second substrate is released and the antiporter can return to its original conformation state, where it is ready to bind to new molecules or ions and repeat its transport process.
History
Antiporters were discovered as scientists were exploring ion transport mechanisms across biological membranes. The early studies took place in the mid-20th century and were focused on the mechanisms that transported ions such as sodium, potassium, and calcium across the plasma membrane. Researchers made the observation that these ions were moved in opposite directions and hypothesized the existence of membrane proteins that could facilitate this type of transport.
In the 1960s, biochemist Efraim Racker made a breakthrough in the discovery of antiporters. Through purification from bovine heart mitochondria, Racker and his colleagues found a mitochondrial protein that could exchange inorganic phosphate for hydroxide ions. The protein is located in the inner mitochondrial membrane and transports phosphate ions for use in oxidative phosphorylation. It became known as the phosphate-hydroxide antiporter, or mitochondrial phosphate carrier protein, and was the first example of an antiporter identified in living cells.
As time went on, researchers discovered other antiporters in different membranes and in various organisms. This includes the sodium-calcium exchanger (NCX), another crucial antiporter that regulates intracellular calcium levels through the exchange of sodium ions for calcium ions across the plasma membrane. It was discovered in the 1970s and is now a well-characterized antiporter known to be found in many different types of cells.
Advances in the fields of biochemistry and molecular biology have enabled the identification and characterization of a wide range of antiporters. Understanding the transport processes of various molecules and ions has provided insight into cellular transport mechanisms, as well as the role of antiporters in various physiological functions and in the maintenance of homeostasis.
Role in homeostasis
Sodium-calcium exchanger
The sodium-calcium exchanger, also known as the Na+/Ca2+ exchanger or NCX, is an antiporter responsible for removing calcium from cells. This title encompasses a class of ion transporters that are commonly found in the heart, kidney, and brain. They use the energy stored in the electrochemical gradient of sodium to exchange the flow of three sodium ions into the cell for the export of one calcium ion. Though this exchanger is most common in the membranes of the mitochondria and the endoplasmic reticulum of excitable cells, it can be found in many different cell types in various species.
Although the sodium-calcium exchanger has a low affinity for calcium ions, it can transport a high amount of the ion in a short period of time. Because of these properties, it is useful in situations where there is an urgent need to export high amounts of calcium, such as after an action potential has occurred. Its characteristics also enable NCX to work with other proteins that have a greater affinity for calcium ions without interfering with their functions. NCX works with these proteins to carry out functions such as cardiac muscle relaxation, excitation-contraction coupling, and photoreceptor activity. They also maintain the concentration of calcium ions in the sarcoplasmic reticulum of cardiac cells, endoplasmic reticulum of excitable and nonexcitable cells, and the mitochondria.
Another key characteristic of this antiporter is its reversibility. This means that if the cell is depolarized enough, the extracellular sodium level is low enough, or the intracellular level of sodium is high enough, NCX will operate in the reverse direction and begin bringing calcium into the cell. For example, when NCX functions during excitotoxicity, this characteristic allows it to have a protective effect because the accompanying increase in intracellular calcium levels enables the exchanger to work in its normal direction regardless of the sodium concentration. Another example is the depolarization of cardiac muscle cells, which is accompanied by a large increase in the intracellular sodium concentration that causes NCX to work in reverse. Because the concentration of calcium is carefully regulated during the cardiac action potential, this is only a temporary effect as calcium is pumped out of the cell.
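Whether the exchanger runs in its forward (calcium-exporting) or reverse mode can be estimated from the membrane potential and the Nernst potentials of sodium and calcium. The following Python sketch assumes the 3 Na+ : 1 Ca2+ stoichiometry described above; the concentrations are illustrative textbook-style values, not measurements from any particular cell type:

```python
import math

R = 8.314     # gas constant, J/(mol*K)
F = 96485.0   # Faraday constant, C/mol
T = 310.0     # absolute temperature, K (about body temperature)

def nernst(z, conc_out, conc_in):
    """Equilibrium (Nernst) potential in volts for an ion of valence z."""
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

# Illustrative ion concentrations in millimolar.
E_Na = nernst(1, 145.0, 12.0)
E_Ca = nernst(2, 1.8, 0.0001)

# For 3 Na+ in : 1 Ca2+ out, the exchange reverses at Vm = 3*E_Na - 2*E_Ca.
E_rev = 3 * E_Na - 2 * E_Ca
Vm = -0.070  # resting membrane potential, volts

print(f"E_Na = {E_Na * 1000:.1f} mV, E_Ca = {E_Ca * 1000:.1f} mV")
print(f"NCX reversal potential = {E_rev * 1000:.1f} mV")
print("forward mode (Ca2+ export)" if Vm < E_rev else "reverse mode (Ca2+ entry)")
```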
The sodium-calcium exchanger's role in maintaining calcium homeostasis in cardiac muscle cells allows it to help relax the heart muscle as it exports calcium during diastole. Therefore, its dysfunction can result in abnormal calcium movement and the development of various cardiac diseases. Abnormally high intracellular calcium levels can hinder diastole and cause abnormal systole and arrhythmias. Arrhythmias can occur when calcium is not properly exported by NCX, causing delayed afterdepolarizations and triggering abnormal activity that can possibly lead to atrial fibrillation and ventricular tachycardia.
If the heart experiences ischemia, the inadequate oxygen supply can disrupt ion homeostasis. When the body tries to stabilize this by returning blood to the area, ischemia-reperfusion injury, a type of oxidative stress, occurs. If NCX is dysfunctional, it can exacerbate the increase of calcium that accompanies reperfusion, causing cell death and tissue damage. Similarly, NCX dysfunction has been found to be involved in ischemic strokes. Its activity is upregulated, causing an increased cytosolic calcium level, which can lead to neuronal cell death.
The Na+/Ca2+ exchanger has also been implicated in neurological disorders such as Alzheimer's disease and Parkinson's disease. Its dysfunction can result in oxidative stress and neuronal cell death, contributing to the cognitive decline that characterizes Alzheimer's disease. The dysregulation of calcium homeostasis has been found to be a key part of neuron death and Alzheimer's pathogenesis. For example, neurons that have neurofibrillary tangles contain high levels of calcium and show hyperactivation of calcium-dependent proteins. The abnormal calcium handling of atypical NCX function can also cause the mitochondrial dysfunction, oxidative stress, and neuronal cell death that characterize Parkinson's. In this case, if dopaminergic neurons of the substantia nigra are affected, it can contribute to the onset and development of Parkinson's disease. Although the mechanism is not entirely understood, disease models have shown a link between NCX and Parkinson's and that NCX inhibitors can prevent death of dopaminergic neurons.
Sodium-hydrogen antiporter
The sodium–hydrogen antiporter, also known as the sodium-proton exchanger, Na+/H+ exchanger, or NHE, is an antiporter responsible for transporting sodium into the cell and hydrogen out of the cell. As such, it is important in the regulation of cellular pH and sodium levels. There are differences among the types of NHE antiporter families present in eukaryotes and prokaryotes. The 9 isoforms of this transporter that are found in the human genome fall under several families, including the cation-proton antiporters (CPA 1, CPA 2, and CPA 3) and sodium-transporting carboxylic acid decarboxylase (NaT-DC). Prokaryotic organisms contain the Na+/H+ antiporter families NhaA, NhaB, NhaC, NhaD, and NhaE.
Because enzymes can only function at certain pH ranges, it is critical for cells to tightly regulate cytosolic pH. When a cell's pH is outside of the optimal range, the sodium-hydrogen antiporter detects this and is activated to transport ions as a homeostatic mechanism to restore pH balance. Since ion flux can be reversed in mammalian cells, NHE can also be used to transport sodium out of the cell to prevent excess sodium from accumulating and causing toxicity.
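For an electroneutral exchange of one Na+ in for one H+ out (an assumption; the article does not state a stoichiometry), the sodium gradient sets a thermodynamic ceiling on how far NHE alone can alkalinize the cytosol: at equilibrium, [Na+]out/[Na+]in equals [H+]out/[H+]in. A rough sketch of that bound, with illustrative concentrations:

```python
import math

na_out, na_in = 145.0, 12.0   # mM, illustrative mammalian values
ph_out = 7.4                  # extracellular pH

# Equilibrium of a 1:1 electroneutral Na+/H+ exchange:
#   [Na]out / [Na]in = [H]out / [H]in
# so the lowest cytosolic [H+] the exchanger can reach is:
h_in_eq = 10 ** (-ph_out) * na_in / na_out
ph_in_limit = -math.log10(h_in_eq)

print(f"maximum cytosolic pH reachable by NHE alone: about {ph_in_limit:.2f}")
```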
As suggested by its functions, this antiporter is located in the kidney for sodium reabsorption regulation and in the heart for intracellular pH and contractility regulation. NHE plays an important role in the nephron of the kidney, especially in the cells of the proximal convoluted tubule and collecting duct. The sodium-hydrogen antiporter's function is upregulated by Angiotensin II in the proximal convoluted tubule when the body needs to reabsorb sodium and excrete hydrogen.
Plants are sensitive to high amounts of salt, which can halt certain necessary functions of the eukaryotic organism, including photosynthesis. For the organisms to maintain homeostasis and carry out crucial functions, Na+/H+ antiporters are used to rid the cytoplasm of excess sodium by pumping Na+ out of the cell. These antiporters can also close their channel to stop sodium from entering the cell, along with allowing excess sodium within the cell to enter into a vacuole.
Dysregulation of the sodium-hydrogen antiporter's activity has been linked to cardiovascular diseases, renal disorders, and neurological conditions. NHE inhibitors are being developed to treat these issues. One of the isoforms of the antiporter, NHE1, is essential to the function of the mammalian myocardium. NHE is involved in hypertrophy and in damage to the heart muscle, such as during ischemia and reperfusion. Studies have shown that NHE1 is more active in animal models experiencing myocardial infarction and left ventricular hypertrophy. During these cardiac events, the function of the sodium-hydrogen antiporter causes an increase in the sodium levels of cardiac muscle cells. In turn, the work of the sodium-calcium antiporter leads to more calcium being brought into the cell, which is what results in damage to the myocardium.
Five isoforms of NHE are found in the kidney's epithelial cells. The best-studied one is NHE3, which is mainly located in the proximal tubules of the kidney and plays a key role in acid-base homeostasis. Issues with NHE3 disrupt the reabsorption of sodium and secretion of hydrogen. The main conditions that NHE3 dysregulation can cause are hypertension and renal tubular acidosis (RTA). Hypertension can occur when more sodium is reabsorbed in the kidneys because water will follow the sodium ions and create an elevated blood volume. This, in turn, leads to elevated blood pressure. RTA is characterized by the inability of the kidneys to acidify the urine due to underactive NHE3 and reduced secretion of hydrogen ions, resulting in metabolic acidosis. On the other hand, overactive NHE3 can lead to excess secretion of hydrogen ions and metabolic alkalosis, where the blood is too alkaline.
NHE can also be linked to neurodegeneration. The dysregulation or loss of the isoform NHE6 can lead to pathological changes in the tau proteins of human neurons, which can have huge consequences. For example, Christianson Syndrome (CS) is an X-linked disorder caused by a loss-of-function mutation in NHE6, which leads to the over acidification of endosomes. In studies done on postmortem brains of individuals with CS, lower NHE6 function was linked to higher levels of tau deposition. The level of tau phosphorylation was also found to be elevated, which leads to the formation of insoluble tangles that can cause neuronal damage and death. Tau proteins are also implicated in other neurodegenerative diseases, such as Alzheimer's and Parkinson's diseases.
Chloride-bicarbonate antiporter
The chloride-bicarbonate antiporter is crucial to maintaining pH and fluid balance through its function of exchanging bicarbonate and chloride ions through cell membranes. This exchange occurs in many different types of body cells. In the cardiac Purkinje fibers and smooth muscle cells of the ureters, this antiporter is the main mechanism of chloride transport into the cells. Epithelial cells such as those of the kidney use chloride-bicarbonate exchange to regulate their volume, intracellular pH, and extracellular pH. Gastric parietal cells, osteoclasts, and other acid-secreting cells have chloride-bicarbonate antiporters that function in the basolateral membrane to dispose of excess bicarbonate left behind by the function of carbonic anhydrase and apical proton pumps. However, base-secreting cells exhibit apical chloride-bicarbonate exchange and basolateral proton pumps.
An example of a chloride-bicarbonate antiporter is the chloride anion exchanger, also known as down-regulated in adenoma (protein DRA). It is found in the intestinal mucosa, especially in the columnar epithelium and goblet cells of the apical surface of the membrane, where it carries out the function of chloride and bicarbonate exchange. Protein DRA's reuptake of chloride is critical to creating an osmotic gradient that allows the intestine to reabsorb water.
Another well-studied chloride-bicarbonate antiporter is anion exchanger 1 (AE1), which is also known as band 3 anion transport protein or solute carrier family 4 member 1 (SLC4A1). This exchanger is found in red blood cells, where it helps transport bicarbonate and carbon dioxide between the lungs and tissues to maintain acid-base homeostasis. AE1 is also expressed on the basolateral side of cells of the renal tubules. It is crucial in the collecting duct of the nephron, which is where its acid-secreting α-intercalated cells are located. These cells use carbon dioxide and water to generate hydrogen and bicarbonate ions, a reaction catalyzed by carbonic anhydrase. The hydrogen is exchanged across the membrane into the lumen of the collecting duct, and thus acid is excreted into the urine.
Because of its importance to the reabsorption of water in the intestine, mutations in protein DRA cause a condition called congenital chloride diarrhea (CCD). This disorder is caused by an autosomal recessive mutation in the DRA gene on chromosome 7. CCD symptoms in newborns are chronic diarrhea with failure to thrive, and the disorder is characterized by diarrhea that causes metabolic alkalosis.
Mutations of kidney AE1 can lead to distal renal tubular acidosis, a disorder characterized by the inability to secrete acid into the urine. This causes metabolic acidosis, where the blood is too acidic. A chronic state of metabolic acidosis can compromise the health of the bones, kidneys, muscles, and cardiovascular system. Mutations in erythrocyte AE1 cause alterations of its function, leading to changes in red blood cell morphology and function. This can have serious consequences because the shape of red blood cells is closely tied to their function of gas exchange in the lungs and tissues. One such condition is hereditary spherocytosis, a genetic disorder characterized by spherical red blood cells. Another is Southeast Asian ovalocytosis, where a deletion in the AE1 gene generates oval-shaped erythrocytes. Finally, overhydrated hereditary stomatocytosis is a rare genetic disorder where red blood cells have an abnormally high volume, leading to changes in hydration status.
The proper function of AE2, an isoform of AE1, is important in gastric secretion, osteoclast differentiation and function, and the synthesis of enamel. The hydrochloric acid secretion at the apical surface of both gastric parietal cells and osteoclasts relies on chloride-bicarbonate exchange in the basolateral surface. Studies found that mice with nonfunctional AE2 did not secrete hydrochloric acid, and it was concluded that the exchanger is necessary for hydrochloric acid loading in parietal cells. When AE2 expression was suppressed in an animal model, cell lines were unable to differentiate into osteoclasts and perform their functions. Additionally, cells that had osteoclast markers but were deficient in AE2 were abnormal compared to the wild-type cells and were unable to resorb mineralized tissue. This demonstrates the importance of AE2 in osteoclast function. Finally, as the hydroxyapatite crystals of enamel are being formed, a lot of hydrogen is produced, which must be neutralized so that mineralization can proceed. Mice with inactivated AE2 were toothless and suffered from incomplete enamel maturation.
Chloride-hydrogen antiporter
The chloride-hydrogen antiporter facilitates the exchange of chloride ions for hydrogen ions across plasma membranes, thus playing a critical role in maintaining acid-base balance and chloride homeostasis. It is found in various tissues, including the gastrointestinal tract, kidneys, and pancreas. The well-known chloride-hydrogen antiporters belong in the CLC family, which have isoforms from CLC-1 to CLC-7, each with a distinct tissue distribution. Their structure involves two CLC proteins coming together to form a homodimer or a heterodimer where both monomers contain an ion translocation pathway. CLC proteins can either be ion channels or anion-proton exchangers, so CLC-1 and CLC-2 are membrane chloride channels, while CLC-3 through CLC-7 are chloride-hydrogen exchangers.
CLC-4 is a member of the CLC family that is prominent in the brain, but is also located in the liver, kidneys, heart, skeletal muscle, and intestine. It likely resides in endosomes and participates in their acidification, but can also be expressed in the endoplasmic reticulum and plasma membrane. Its roles are not entirely clear, but CLC-4 has been found to possibly participate in endosomal acidification, transferrin trafficking, renal endocytosis, and the hepatic secretory pathway.
CLC-5 is one of the best-studied members of this protein family. It shares 80% of its amino acid sequence with CLC-3 and CLC-4, but it is mainly found in the kidney, especially in the proximal tubule, collecting duct, and ascending limb of the loop of Henle. It functions to transport substances through the endosomal membrane, so it is crucial for pinocytosis, receptor-mediated endocytosis, and endocytosis of plasma membrane proteins from the apical surface.
CLC-7 is another example of a CLC family protein. It is ubiquitously expressed as the chloride-hydrogen antiporter in lysosomes and in the ruffled border of osteoclasts. CLC-7 may be important for regulating the concentration of chloride in lysosomes. It is associated with a protein called Ostm1, forming a complex that allows CLC-7 to carry out its functions. For example, these proteins are crucial to the process of acidifying the resorption lacuna, which enables bone remodeling to occur.
CLC-4 has been connected with mental retardation involving seizure disorders, facial abnormalities, and behavior disorders. Studies found frameshift and missense mutations in patients exhibiting these symptoms. Because these symptoms were mostly exhibited in males, with less severe pathology in females, it is likely X-linked. Studies done on animal models have also shown the possibility of a connection between nonfunctional CLC-4 and impaired neural branching of hippocampus neurons.
Defects in the CLC-5 gene were shown to be the cause of 60% of cases of Dent's disease, which is characterized by tubular proteinuria, formation of kidney stones, excess calcium in the urine, nephrocalcinosis, and chronic kidney failure. This is caused by abnormalities that occur in the endocytosis process when CLC-5 is mutated. Dent's disease itself is one of the causes of Fanconi syndrome, which occurs when the proximal convoluted tubules of the kidney do not perform an adequate level of reabsorption. It causes molecules produced by metabolic pathways, such as amino acids, glucose, and uric acid to be excreted in the urine instead of being reabsorbed. The result is polyuria, dehydration, rickets in children, osteomalacia in adults, acidosis, and hypokalemia.
CLC-7's role in osteoclast function was revealed by studies on knockout mice that developed severe osteopetrosis. These mice were smaller, had shortened long bones, disorganized trabecular structure, a missing medullary cavity, and their teeth did not erupt. This was found to be caused by deletion mutations, missense mutations, and gain-of-function mutations that sped up the gating of CLC-7. CLC-7 is expressed in almost every neuronal cell type, and its loss led to widespread neurodegeneration in mice, especially in the hippocampus. In longer-lived models, the cortex and hippocampus had almost entirely disappeared after 1.5 years. Finally, because of its importance in lysosomes, altered expression of CLC-7 can lead to lysosomal storage disorders. Mice with a mutation introduced to the CLC-7 gene developed lysosomal storage disease and retinal degeneration.
Reduced folate carrier protein
The reduced folate carrier protein (RFC) is a transmembrane protein responsible for the transport of folate, or vitamin B9, into cells. It uses the large gradient of organic phosphate to move folate into the cell against its concentration gradient. The RFC protein can transport folates, reduced folates, the derivatives of reduced folate, and the drug methotrexate. The transporter is encoded by the SLC19A1 gene and is ubiquitously expressed in human cells. Its peak activity occurs at pH 7.4, with no activity occurring below pH 6.4. The RFC protein is critical because folates take the form of hydrophilic anions at physiological pH, so they do not diffuse naturally across biological membranes. Folate is essential for processes such as DNA synthesis, repair, and methylation, and without entry into cells, these could not occur.
Because folates are essential for various life-sustaining processes, a deficiency in this molecule can lead to fetal abnormalities, neurological disorders, cardiovascular disease, and cancer. Folates cannot be synthesized in the body, so they must be taken in through the diet and moved into cells. Without the RFC protein facilitating this movement, processes such as embryological development and DNA repair cannot occur.
Adequate folate levels are required for the development of the neural tube in the fetus. Folate deficiency during pregnancy increases the risk of defects such as spina bifida and anencephaly. In mouse models, inactivating both alleles of the RFC protein gene causes death of the embryo. Even when folate was supplemented during gestation, the mice died within two weeks of birth from the failure of hematopoietic tissues.
Altered function of the RFC protein can increase folate deficiency, enhancing cardiovascular disease, neurodegenerative diseases, and cancer. In terms of cardiovascular issues, folate contributes to homocysteine metabolism. Low folate levels result in elevated homocysteine levels, which is a risk factor for cardiovascular diseases. In terms of cancer, folate deficiency is related to an increased risk, especially that of colorectal cancers. Mouse models with altered RFC protein expression showed increased transcripts of genes related to colon cancer and increased proliferation of colonocytes. The cancer risk is likely related to the RFC protein's role in DNA synthesis because inadequate levels of folate can lead to DNA damage and aberrant DNA methylation.
Vesicle neurotransmitter antiporters
Vesicle neurotransmitter antiporters are responsible for packaging neurotransmitters into vesicles in neurons. They utilize the electrochemical gradient of hydrogen protons across the membranes of synaptic vesicles to move neurotransmitters into them. This is essential for the process of synaptic transmission, which requires neurotransmitters to be released into the synapse to bind to receptors on the next neuron.
One of the best characterized of these antiporters is the vesicular monoamine transporter (VMAT). It is responsible for the storage, sorting, and release of neurotransmitters, as well as for protecting them from autoxidation. VMAT's transport functions are dependent on the electrochemical gradient created by a vesicular hydrogen proton-ATPase. VMAT1 and VMAT2 are two isoforms that can transport monoamines such as serotonin, norepinephrine, and dopamine in a proton-dependent fashion. VMAT1 can be found in neuroendocrine cells, while VMAT2 can be found in the neurons of the central and peripheral nervous systems, as well as in adrenal chromaffin cells.
Another important vesicle neurotransmitter antiporter is the vesicular glutamate transporter (VGLUT). This family of proteins includes three isoforms, VGLUT1, VGLUT2, and VGLUT3, that are responsible for packaging glutamate - the most abundant excitatory neurotransmitter in the brain - into synaptic vesicles. These antiporters vary by location. VGLUT1 is found in areas of the brain related to higher cognitive functions, such as the neocortex. VGLUT2 works to regulate basic physiological functions and is expressed in subcortical regions such as the brainstem and hypothalamus. Finally, VGLUT3 can be seen in neurons that also express other neurotransmitters.
VMAT2 has been found to contribute to neurological conditions such as mood disorders and Parkinson's disease. Studies done on an animal model of clinical depression showed that functional alterations of VMAT2 were associated with depression. The nucleus accumbens, pars compacta of the substantia nigra, and ventral tegmental area - all subregions of the brain involved in clinical depression - were found to have lower VMAT2 levels. The likely cause for this is VMAT's relationship with serotonin and norepinephrine, neurotransmitters that are related to depression. VMAT dysfunction may contribute to the altered levels of these neurotransmitters that occur in mood disorders.
Lower expression of VMAT2 was found to correlate with a higher susceptibility to Parkinson's disease and the antiporter's mRNA was found in all cell groups damaged by Parkinson's. This is likely because VMAT2 dysfunction can lead to a decrease in dopamine packaging into vesicles, accounting for the dopamine depletion that characterizes the disease. For this reason, the antiporter has been identified as a protective factor that could be targeted for the prevention of Parkinson's.
Because alterations in glutamate release have been linked to the generation of seizures in epilepsy, alterations in the function of VGLUT may be implicated. A study was conducted where the VGLUT1 gene was inactivated in the astrocytes and neurons of an animal model. When the gene was inactivated in astrocytes, there was an 80% loss in the antiporter protein itself and, in turn, a reduction in glutamate uptake. The mice in this condition experienced seizures, lower body mass, and higher mortality rates. The researchers concluded that VGLUT1 function in astrocytes is therefore critical to epilepsy resistance and normal weight gain.
There is considerable evidence that the glutamate system plays a role in long-term cell growth and synaptic plasticity. Disturbances of these processes have been linked to the pathology of mood disorders. The link between the function of the glutamatergic neurotransmitter system and mood disorders sets up VGLUT as one of the targets for treatment.
See also
Active transport
Adenine nucleotide translocator
Cotransporter
Reduced folate carrier family
Sodium-calcium exchanger
Sodium-hydrogen antiporter
Symporter
Uniporter
Vesicular monoamine transporter
References
Further reading
External links
Integral membrane proteins
Transport phenomena | Antiporter | [
"Physics",
"Chemistry",
"Engineering"
] | 7,406 | [
"Transport phenomena",
"Chemical engineering",
"Physical phenomena"
] |
11,917,122 | https://en.wikipedia.org/wiki/Scanning%20Hall%20probe%20microscope | Scanning Hall probe microscope (SHPM) is a variety of a scanning probe microscope which incorporates accurate sample approach and positioning of the scanning tunnelling microscope with a semiconductor Hall sensor. Developed in 1996 by Oral, Bending and Henini, SHPM allows mapping the magnetic induction associated with a sample. Current state of the art SHPM systems utilize 2D electron gas materials (e.g. GaAs/AlGaAs) to provide high spatial resolution (~300 nm) imaging with high magnetic field sensitivity. Unlike the magnetic force microscope the SHPM provides direct quantitative information on the magnetic state of a material. The SHPM can also image magnetic induction under applied fields up to ~1 tesla and over a wide range of temperatures (millikelvins to 300 K).
The SHPM can be used to image many types of magnetic structures such as thin films, permanent magnets, MEMS structures, current carrying traces on PCBs, permalloy disks, and recording media.
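In practice the image is built up by converting the Hall voltage recorded at each raster position into a local magnetic induction. A minimal sketch of that conversion, assuming a probe whose field-to-voltage sensitivity and zero-field offset (both hypothetical numbers below) have been calibrated beforehand:

```python
# Convert a raster scan of Hall voltages into a map of magnetic induction.
HALL_SENSITIVITY = 0.12   # volts per tesla at the chosen bias current (hypothetical)
VOLTAGE_OFFSET = 1.5e-6   # zero-field offset voltage of the probe (hypothetical)

def hall_voltage_to_field(voltage_map):
    """Map each measured Hall voltage (V) to a local magnetic induction (T)."""
    return [[(v - VOLTAGE_OFFSET) / HALL_SENSITIVITY for v in row]
            for row in voltage_map]

# Example 3 x 3 scan in volts; the values are illustrative only.
scan = [[2.0e-5, 1.0e-5, 0.0],
        [1.0e-5, 5.0e-6, -1.0e-5],
        [0.0, -1.0e-5, -2.0e-5]]
for row in hall_voltage_to_field(scan):
    print(["%.2e" % b for b in row])
```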
Advantages to other magnetic raster scanning methods
SHPM offers advantages over other magnetic imaging techniques for several reasons. Although MFM provides higher spatial resolution (~30 nm), the Hall probe, unlike the MFM tip, exerts negligible force on the underlying magnetic structure and is noninvasive. Unlike the magnetic decoration technique, the same area can be scanned repeatedly. The magnetic field produced by the Hall probe itself is so small that it has a negligible effect on the sample being measured. The sample does not need to be an electrical conductor, unless STM is used for height control. Measurements can be performed from 5 – 500 K in ultra-high vacuum (UHV) and are nondestructive to the crystal lattice or structure. They require no special surface preparation or coating. The detectable magnetic field range is approximately 0.1 μT – 10 T. SHPM can be combined with other scanning methods such as STM.
Limitations
There are some shortcomings or difficulties when working with an SHPM. High-resolution scans become difficult due to the thermal noise of extremely small Hall probes. There is a minimum scanning height due to the construction of the Hall probe (this is especially significant with 2DEG semiconductor probes because of their multi-layer design). The scanning (lift) height affects the obtained image. Scanning large areas takes a significant amount of time. The practical scanning range is relatively short (on the order of thousands of micrometres) along any direction. The housing is important to shield electromagnetic noise (Faraday cage), acoustic noise (anti-vibration tables), air flow (air isolation cupboard), and static charge on the sample (ionizing units).
References
Scanning probe microscopy | Scanning Hall probe microscope | [
"Chemistry",
"Materials_science"
] | 549 | [
"Nanotechnology",
"Scanning probe microscopy",
"Microscopy"
] |
11,917,751 | https://en.wikipedia.org/wiki/Brownout%20%28electricity%29 | A brownout is a drop in the magnitude of voltage in an electrical power system.
Unintentional brownouts can be caused by excessive electricity demand, severe weather events, or a malfunction or error affecting electrical grid control or monitoring systems. Intentional brownouts are used for load reduction in an emergency, or to prevent a total grid power outage due to high demand. The term brownout comes from the dimming of incandescent lighting when voltage reduces.
In some countries, the term brownout refers not to a drop in voltage but to an intentional or unintentional power outage (or blackout).
Effects
Different types of electrical apparatus will react in different ways to a voltage reduction. Some devices will be severely affected, while others may not be affected at all.
Resistive loads
The heat output of any resistive device, such as an electric space heater, toaster, oven, or incandescent bulb, is equal to the power consumption, which is directly proportional to the square of the applied voltage if the resistance stays constant. Therefore, a significant reduction in heat output will occur with a relatively small reduction in voltage. An incandescent lamp will dim due to lower heat generation in the filament, as well as lower conversion of heat to light. Generally speaking, no damage will occur, but functionality will be impaired.
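Because a fixed resistance dissipates power proportional to the square of the applied voltage, the effect of a brownout on a resistive load is easy to quantify. A short sketch, using a nominal 120 V supply and a heater rating purely as illustrative numbers:

```python
def resistive_power(rated_power_w, rated_voltage_v, actual_voltage_v):
    """Power dissipated by a constant resistance at a reduced supply voltage.

    P = V^2 / R, with R fixed by the rated power and voltage, so
    P_actual = P_rated * (V_actual / V_rated)^2.
    """
    return rated_power_w * (actual_voltage_v / rated_voltage_v) ** 2

# A 1500 W heater rated for 120 V, during a brownout to 102 V (a 15% sag):
print(resistive_power(1500, 120, 102))   # about 1084 W, roughly a 28% drop in heat output
```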
Motors
Commutated electric motors, such as universal motors, will run at reduced speed or reduced torque. Depending on the motor design, no harm may occur. However, under load, the motor may draw more current due to the reduced back-EMF developed at the lower armature speed. Unless the motor has ample cooling capacity, it may eventually overheat and burn out.
An induction motor will draw more current to compensate for the decreased voltage, which may lead to overheating and burnout. If a substantial part of a grid's load is electric motors, reducing voltage may not actually reduce load and can result in damage to customers' equipment.
Power supplies
An unregulated DC supply will produce a lower output voltage. The output voltage ripple will decrease in line with the usually reduced load current. In a cathode-ray tube television, the reduced output voltage will make the screen image smaller, dimmer and fuzzier.
A linear DC regulated supply will maintain the output voltage unless the brownout is severe and the input voltage drops below the dropout voltage for the regulator, at which point the output voltage will fall and high levels of ripple from the rectifier/reservoir capacitor will appear on the output.
A switched-mode power supply will be affected if the brownout voltage is lower than the minimum input voltage of the power supply. As the input voltage falls, the current draw will increase to maintain the same output voltage and current, until such a point that the power supply malfunctions or its under-voltage protection kicks in and disables the output.
Digital systems
Brownouts can cause unexpected behavior in systems with digital control circuits. Reduced voltages can bring control signals below the threshold at which logic circuits can reliably detect which state is being represented. As the voltage returns to normal levels the logic can latch at an incorrect state; to the extent that even "can't happen" states become possible. The seriousness of this effect and whether steps need to be taken by the designer to prevent it depends on the nature of the equipment being controlled; for instance, a brownout may cause a motor to begin running backwards.
See also
Black start
Dumsor
Power outage
Undervoltage lockout (UVLO)
Voltage drop
References
Electrical grid
Voltage stability | Brownout (electricity) | [
"Physics"
] | 740 | [
"Voltage",
"Voltage stability",
"Physical quantities"
] |
11,918,162 | https://en.wikipedia.org/wiki/Environmental%20engineering%20law | Environmental engineering law is a profession that requires an expertise in both environmental engineering and law. This field includes professionals with both a legal and environmental engineering education. This dual educational requirement is typically satisfied through an ABET accredited degree in environmental engineering and an ABA accredited law degree. Likewise, this profession requires both licensure in professional environmental engineering and admittance to one bar.
Environmental engineering law is the professional application of law and engineering principles to improve the environment (air, water, and/or land resources), to provide healthy water, air, and land for human habitation and for other organisms, and to remediate polluted sites. Environmental engineering lawyers seek to promote the advancement of technical engineering knowledge in the legal profession and to enhance informed legal analysis of complex environmental matters.
Practice areas
Environmental engineering law professionals offer a sound knowledge base in the fields of both environmental engineering and law to address complex environmental problems which demand both professional technical practice and legal expertise. Areas of practice are continually expanding, but frequently include complex land transactions, such as:
Brownfields redevelopment
Asbestos baseline survey and building revaluation due to forthcoming asbestos abatements
Soil contamination assessment & remediation, the development of a remedial action workplan (RAWP) and engineering controls, including an environmental land use restriction (ELUR)
Total maximum daily load (TMDL) nutrient loading studies (e.g., for NPDES wastewater discharges) and regulatory negotiation of discharge limits for nutrients such as phosphorus and nitrogen from waste treatment plants.
See also
Engineering law
Environmental law
Environmental agreements
Environmental Engineering Science
Environmental impact statement
Environmental justice
International environmental law
References
Environmental engineering
Environmental law | Environmental engineering law | [
"Chemistry",
"Engineering"
] | 328 | [
"Chemical engineering",
"Civil engineering",
"Environmental engineering"
] |
4,175,003 | https://en.wikipedia.org/wiki/Magnetic%20pressure | In physics, magnetic pressure is an energy density associated with a magnetic field. In SI units, the energy density of a magnetic field with strength can be expressed as
where is the vacuum permeability.
Any magnetic field has an associated magnetic pressure contained by the boundary conditions on the field. It is identical to any other physical pressure except that it is carried by the magnetic field rather than (in the case of a gas) by the kinetic energy of gas molecules. A gradient in field strength causes a force due to the magnetic pressure gradient called the magnetic pressure force.
Mathematical statement
In SI units, the magnetic pressure in a magnetic field of strength $B$ is
$P_B = \frac{B^2}{2\mu_0},$
where $\mu_0$ is the vacuum permeability and $P_B$ has units of energy density.
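As a numerical illustration of the expression above, the following short sketch evaluates the magnetic pressure for a few field strengths (the field values are chosen arbitrarily):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def magnetic_pressure(b_tesla):
    """Magnetic pressure P_B = B^2 / (2 * mu_0), in pascals."""
    return b_tesla ** 2 / (2 * MU_0)

for b in (0.1, 1.0, 10.0):
    p = magnetic_pressure(b)
    print(f"B = {b:5.1f} T  ->  P_B = {p:.3e} Pa  ({p / 101325:.2f} atm)")
```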
Magnetic pressure force
In ideal magnetohydrodynamics (MHD) the magnetic pressure force in an electrically conducting fluid with a bulk plasma velocity field $\mathbf{v}$, current density $\mathbf{j}$, mass density $\rho$, magnetic field $\mathbf{B}$, and plasma pressure $p$ can be derived from the Cauchy momentum equation:
$\rho\left(\frac{\partial}{\partial t} + \mathbf{v}\cdot\nabla\right)\mathbf{v} = \mathbf{j}\times\mathbf{B} - \nabla p,$
where the first term on the right-hand side represents the Lorentz force and the second term represents pressure gradient forces. The Lorentz force can be expanded using Ampère's law, $\mathbf{j} = \tfrac{1}{\mu_0}\nabla\times\mathbf{B}$, and the vector identity
$\tfrac{1}{2}\nabla(\mathbf{B}\cdot\mathbf{B}) = (\mathbf{B}\cdot\nabla)\mathbf{B} + \mathbf{B}\times(\nabla\times\mathbf{B})$
to give
$\mathbf{j}\times\mathbf{B} = \frac{(\mathbf{B}\cdot\nabla)\mathbf{B}}{\mu_0} - \nabla\!\left(\frac{B^2}{2\mu_0}\right),$
where the first term on the right-hand side is the magnetic tension and the second term is the magnetic pressure force.
Magnetic tension and pressure are both implicitly included in the Maxwell stress tensor. Terms representing these two forces are present along the main diagonal where they act on differential area elements normal to the corresponding axis.
Wire loops
The magnetic pressure force is readily observed in an unsupported loop of wire. If an electric current passes through the loop, the wire serves as an electromagnet, such that the magnetic field strength inside the loop is much greater than the field strength just outside the loop. This gradient in field strength gives rise to a magnetic pressure force that tends to stretch the wire uniformly outward. If enough current travels through the wire, the loop of wire will form a circle. At even higher currents, the magnetic pressure can create tensile stress that exceeds the tensile strength of the wire, causing it to fracture, or even explosively fragment. Thus, management of magnetic pressure is a significant challenge in the design of ultrastrong electromagnets.
The force (in cgs) exerted on a coil by its own current is
where Y is the internal inductance of the coil, defined by the distribution of current. Y is 0 for high frequency currents carried mostly by the outer surface of the conductor, and 0.25 for DC currents distributed evenly throughout the conductor. See inductance for more information.
Interplay between magnetic pressure and ordinary gas pressure is important to magnetohydrodynamics and plasma physics. Magnetic pressure can also be used to propel projectiles; this is the operating principle of a railgun.
Force-free fields
When all electric currents present in a conducting fluid are parallel to the magnetic field, the magnetic pressure gradient and magnetic tension force are balanced, and the Lorentz force vanishes. If non-magnetic forces are also neglected, the field configuration is referred to as force-free. Furthermore, if the current density is zero, the magnetic field is the gradient of a magnetic scalar potential, and the field is subsequently referred to as potential.
See also
Magnetic tension force
Maxwell stress tensor
Electromagnetically induced acoustic noise and vibration
Alfvén wave
References
Plasma parameters
Electromagnetism | Magnetic pressure | [
"Physics"
] | 691 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
4,175,450 | https://en.wikipedia.org/wiki/Shockley%20diode%20equation | The Shockley diode equation, or the diode law, named after transistor co-inventor William Shockley of Bell Labs, models the exponential current–voltage (I–V) relationship of semiconductor diodes in moderate constant current forward bias or reverse bias:
where
is the diode current,
is the reverse-bias saturation current (or scale current),
is the voltage across the diode,
is the thermal voltage, and
is the ideality factor, also known as the quality factor, emission coefficient, or the material constant.
The equation is called the Shockley ideal diode equation when the ideality factor $n$ equals 1, and thus $n$ is sometimes omitted. The ideality factor typically varies from 1 to 2 (though it can in some cases be higher), depending on the fabrication process and semiconductor material. The ideality factor was added to account for imperfect junctions observed in real transistors, mainly due to carrier recombination as charge carriers cross the depletion region.
The thermal voltage is defined as
$V_T = \frac{k T}{q},$
where
$k$ is the Boltzmann constant,
$T$ is the absolute temperature of the p–n junction, and
$q$ is the elementary charge (the magnitude of an electron's charge).
For example, it is approximately 25.852 mV at 300 K.
The reverse saturation current $I_S$ is not constant for a given device, but varies with temperature; usually more significantly than $V_T$, so that $V_D$ typically decreases as the temperature increases.
Under reverse bias, the diode equation's exponential term is near 0, so the current is near the somewhat constant reverse current value (roughly a picoampere for silicon diodes or a microampere for germanium diodes, although this is obviously a function of size).
For moderate forward bias voltages the exponential becomes much larger than 1, since the thermal voltage is very small in comparison. The $-1$ in the diode equation is then negligible, so the forward diode current will approximate
$I_D \approx I_S e^{\frac{V_D}{n V_T}}.$
The use of the diode equation in circuit problems is illustrated in the article on diode modeling.
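A short numerical sketch of the diode law and of the forward-bias approximation discussed above; the saturation current and ideality factor are typical illustrative values for a small silicon diode, not parameters of any specific device:

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def thermal_voltage(temp_kelvin):
    """Thermal voltage V_T = k*T/q, in volts."""
    return K_B * temp_kelvin / Q_E

def diode_current(v_d, i_s=1e-12, n=1.0, temp_kelvin=300.0):
    """Shockley diode equation: I_D = I_S * (exp(V_D / (n*V_T)) - 1)."""
    v_t = thermal_voltage(temp_kelvin)
    return i_s * (math.exp(v_d / (n * v_t)) - 1.0)

print(thermal_voltage(300.0))   # about 0.025852 V
print(diode_current(0.6))       # forward bias: a milliampere-scale current
print(diode_current(-1.0))      # reverse bias: approximately -I_S
```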
Limitations
Internal resistance causes "leveling off" of a real diode's I–V curve at high forward bias. The Shockley equation doesn't model this, but adding a resistance in series will.
The reverse breakdown region (particularly of interest for Zener diodes) is not modeled by the Shockley equation.
The Shockley equation doesn't model noise (such as Johnson–Nyquist noise from the internal resistance, or shot noise).
The Shockley equation is a constant current (steady state) relationship, and thus doesn't account for the diode's transient response, which includes the influence of its internal junction and diffusion capacitance and reverse recovery time.
Derivation
Shockley derives an equation for the voltage across a p-n junction in a long article published in 1949. Later he gives a corresponding equation for current as a function of voltage under additional assumptions, which is the equation we call the Shockley ideal diode equation. He calls it "a theoretical rectification formula giving the maximum rectification", with a footnote referencing a paper by Carl Wagner, Physikalische Zeitschrift 32, pp. 641–645 (1931).
To derive his equation for the voltage, Shockley argues that the total voltage drop can be divided into three parts:
the drop of the quasi-Fermi level of holes from the level of the applied voltage at the p terminal to its value at the point where doping is neutral (which we may call the junction),
the difference between the quasi-Fermi level of the holes at the junction and that of the electrons at the junction,
the drop of the quasi-Fermi level of the electrons from the junction to the n terminal.
He shows that the first and the third of these can be expressed as a resistance times the current. As for the second, the difference between the quasi-Fermi levels at the junction, he says that we can estimate the current flowing through the diode from this difference. He points out that the current at the p terminal is all holes, whereas at the n terminal it is all electrons, and the sum of these two is the constant total current. So the total current is equal to the decrease in hole current from one side of the diode to the other. This decrease is due to an excess of recombination of electron-hole pairs over generation of electron-hole pairs. The rate of recombination is equal to the rate of generation when at equilibrium, that is, when the two quasi-Fermi levels are equal. But when the quasi-Fermi levels are not equal, then the recombination rate exceeds the rate of generation by a factor that is exponential in the quasi-Fermi level difference. We then assume that most of the excess recombination (or decrease in hole current) takes place in a layer going by one hole diffusion length into the n material and one electron diffusion length into the p material, and that the difference between the quasi-Fermi levels is constant in this layer. Then we find that the total current, or the drop in hole current, is
where
and is the generation rate. We can solve for in terms of :
and the total voltage drop is then
When we assume that is small, we obtain and the Shockley ideal diode equation.
The small current that flows under high reverse bias is then the result of thermal generation of electron–hole pairs in the layer. The electrons then flow to the n terminal, and the holes to the p terminal. The concentrations of electrons and holes in the layer is so small that recombination there is negligible.
In 1950, Shockley and coworkers published a short article describing a germanium diode that closely followed the ideal equation.
In 1954, Bill Pfann and W. van Roosbroek (who were also of Bell Telephone Laboratories) reported that while Shockley's equation was applicable to certain germanium junctions, for many silicon junctions the current (under appreciable forward bias) was proportional to e^(V/(n V_T)), with n having a value as high as 2 or 3. This is the ideality factor n above.
Feynman gave a derivation using the Brownian ratchet in The Feynman Lectures on Physics I.46.
Photovoltaic energy conversion
In 1981, Alexis de Vos and Herman Pauwels showed that a more careful analysis of the quantum mechanics of a junction, under certain assumptions, gives a current versus voltage characteristic of the form
in which is the cross-sectional area of the junction, and is the number of incoming photons per unit area, per unit time, with energy over the band-gap energy, and is outgoing photons, given by
The factor of 2 multiplying the outgoing flux is needed because photons are emitted from both sides, but the incoming flux is assumed to come from just one side.
Although the analysis was done for photovoltaic cells under illumination, it applies also when the illumination is simply background thermal radiation, provided that a factor of 2 is then used for this incoming flux as well. The analysis gives a more rigorous expression for ideal diodes in general, except that it assumes that the cell is thick enough that it can produce this flux of photons. When the illumination is just background thermal radiation, the characteristic is
Note that, in contrast to the Shockley law, the current goes to infinity as the voltage goes to the gap voltage. This of course would require an infinite thickness to provide an infinite amount of recombination.
This equation was recently revised to account for a new temperature scaling in the revised saturation current, using a recent model for 2D-material-based Schottky diodes.
References
Diodes
Electrical engineering
Eponymous equations of physics | Shockley diode equation | [
"Physics",
"Engineering"
] | 1,566 | [
"Electrical engineering",
"Eponymous equations of physics",
"Equations of physics"
] |
4,175,709 | https://en.wikipedia.org/wiki/Science%20in%20the%20Renaissance | During the Renaissance, great advances occurred in geography, astronomy, chemistry, physics, mathematics, manufacturing, anatomy and engineering. The collection of ancient scientific texts began in earnest at the start of the 15th century and continued up to the Fall of Constantinople in 1453, and the invention of printing allowed a faster propagation of new ideas. Nevertheless, some have seen the Renaissance, at least in its initial period, as one of scientific backwardness. Historians like George Sarton and Lynn Thorndike criticized how the Renaissance affected science, arguing that scientific progress slowed for a time. Humanists favored human-centered subjects like politics and history over study of natural philosophy or applied mathematics. More recently, however, scholars have acknowledged the positive influence of the Renaissance on mathematics and science, pointing to factors like the rediscovery of lost or obscure texts and the increased emphasis on the study of language and the correct reading of texts.
Marie Boas Hall coined the term Scientific Renaissance to designate the early phase of the Scientific Revolution, 1450–1630. More recently, Peter Dear has argued for a two-phase model of early modern science: a Scientific Renaissance of the 15th and 16th centuries, focused on the restoration of the natural knowledge of the ancients; and a Scientific Revolution of the 17th century, when scientists shifted from recovery to innovation.
Context
During and after the Renaissance of the 12th century, Europe experienced an intellectual revitalization, especially with regard to the investigation of the natural world. In the 14th century, however, a series of events that would come to be known as the Crisis of the Late Middle Ages was underway. When the Black Death came, it wiped out so many lives that it affected the entire system, bringing a sudden end to the previous period of massive scientific change. The plague killed 25–50% of the people in Europe, especially in the crowded conditions of the towns, where the heart of innovations lay. Recurrences of the plague and other disasters caused a continuing decline of population for a century.
The Renaissance
The 14th century saw the beginning of the cultural movement of the Renaissance. By the early 15th century, an international search for ancient manuscripts was underway and would continue unabated until the Fall of Constantinople in 1453, when many Byzantine scholars had to seek refuge in the West, particularly Italy. Likewise, the invention of the printing press was to have great effect on European society: the facilitated dissemination of the printed word democratized learning and allowed a faster propagation of new ideas.
Initially, there were no new developments in physics or astronomy, and the reverence for classical sources further enshrined the Aristotelian and Ptolemaic views of the universe. Renaissance philosophy lost much of its rigor as the rules of logic and deduction were seen as secondary to intuition and emotion. At the same time, under the influence of Renaissance humanism, nature came to be viewed as an animate spiritual creation not governed by laws or mathematics. Only later, when no more manuscripts could be found, did humanists turn from collecting to editing and translating them, and new scientific work began with the work of such figures as Copernicus, Cardano, and Vesalius.
Important developments
Alchemy and chemistry
While differing in some respects, alchemy and chemistry often had similar goals during the Renaissance period, and together they are sometimes referred to as chymistry. Alchemy is the study of the transmutation of materials through obscure processes. Although it is often viewed as a pseudoscientific endeavor, many of its practitioners utilized widely accepted scientific theories of their times to formulate hypotheses about the constituents of matter and the ways matter could be changed. One of the main aims of alchemists was to find a method of creating gold and other precious metals from the transmutation of base materials. A common belief of alchemists was that there is an essential substance from which all other substances formed, and that if you could reduce a substance to this original material, you could then construct it into another substance, like lead to gold. Medieval alchemists worked with two main elements or "principles", sulphur and mercury.
Paracelsus was a chymist and physician of the Renaissance period who believed that, in addition to sulphur and mercury, salt served as one of the primary alchemical principles from which everything else was made. Paracelsus was also instrumental in helping to put chemical practices to practical medicinal use through a recognition that the body operates through processes which may be seen as chemical in nature. These lines of thinking directly conflicted with many long-held traditional beliefs, such as those popularized by Aristotle; however, Paracelsus was insistent that questioning principles of nature was essential to continue the general growth of knowledge.
Despite its frequent basis in what may be considered scientific practices by modern standards, numerous factors caused chymistry as a discipline to remain separate from general academia until near the end of the Renaissance, when it finally began appearing as a portion of some university education. The commercial nature of chymistry at the time, along with the lack of classical basis for the practice, were some of the contributing factors which led to the general view of the discipline as a craft rather than a respectable academic discipline.
Astronomy
The astronomy of the late Middle Ages was based on the geocentric model described by Claudius Ptolemy in antiquity. Probably very few practicing astronomers or astrologers actually read Ptolemy's Almagest, which had been translated into Latin by Gerard of Cremona in the 12th century. Instead they relied on introductions to the Ptolemaic system such as the De sphaera mundi of Johannes de Sacrobosco and the genre of textbooks known as Theorica planetarum. For the task of predicting planetary motions they turned to the Alfonsine tables, a set of astronomical tables based on the Almagest models but incorporating some later modifications, mainly the trepidation model attributed to Thabit ibn Qurra. Contrary to popular belief, astronomers of the Middle Ages and Renaissance did not resort to "epicycles on epicycles" in order to correct the original Ptolemaic models—until one comes to Copernicus himself.
Sometime around 1450, mathematician Georg Purbach (1423–1461) began a series of lectures on astronomy at the University of Vienna. Regiomontanus (1436–1476), who was then one of his students, collected his notes on the lecture and later published them as Theoricae novae planetarum in the 1470s. This "New Theorica" replaced the older theorica as the textbook of advanced astronomy. Purbach also began to prepare a summary and commentary on the Almagest. He died after completing only six books, however, and Regiomontanus continued the task, consulting a Greek manuscript brought from Constantinople by Cardinal Bessarion. When it was published in 1496, the Epitome of the Almagest made the highest levels of Ptolemaic astronomy widely accessible to many European astronomers for the first time.
The last major event in Renaissance astronomy is the work of Nicolaus Copernicus (1473–1543). He was among the first generation of astronomers to be trained with the Theoricae novae and the Epitome. Shortly before 1514 he began to revive Aristarchus's idea that the Earth revolves around the Sun. He spent the rest of his life attempting a mathematical proof of heliocentrism. When De revolutionibus orbium coelestium was finally published in 1543, Copernicus was on his deathbed. A comparison of his work with the Almagest shows that Copernicus was in many ways a Renaissance scientist rather than a revolutionary, because he followed Ptolemy's methods and even his order of presentation. Not until the works of Johannes Kepler (1571–1630) and Galileo Galilei (1564–1642) was Ptolemy's manner of doing astronomy superseded. The use of more advanced tables and mathematics would provide the impetus for the establishment of the Gregorian calendar in 1582 (primarily to reform the calculation of the date of Easter), replacing the Julian calendar, which had several errors.
Mathematics
The accomplishments of Greek mathematicians survived throughout Late Antiquity and the Middle Ages through a long and indirect history. Much of the work of Euclid, Archimedes, and Apollonius, along with later authors such as Hero and Pappus, were copied and studied in both Byzantine culture and in Islamic centers of learning. Translations of these works began already in the 12th century, with the work of translators in Spain and Sicily, working mostly from Arabic and Greek sources into Latin. Two of the most prolific were Gerard of Cremona and William of Moerbeke.
The greatest of all translation efforts, however, took place in the 15th and 16th centuries in Italy, as attested by the numerous manuscripts dating from this period currently found in European libraries. Virtually all leading mathematicians of the era were obsessed with the need for restoring the mathematical works of the ancients. Not only did humanists assist mathematicians with the retrieval of Greek manuscripts, they also took an active role in translating these works into Latin, often commissioned by religious leaders such as Nicholas V and Cardinal Bessarion.
Some of the leading figures in this effort include Regiomontanus, who made a copy of the Latin Archimedes and had a program for printing mathematical works; Commandino (1509–1575), who likewise produced an edition of Archimedes, as well as editions of works by Euclid, Hero, and Pappus; and Maurolyco (1494–1575), who not only translated the work of ancient mathematicians but added much of his own work to these. Their translations ensured that the next generation of mathematicians would be in possession of techniques far in advance of what was generally available during the Middle Ages.
It must be borne in mind that the mathematical output of the 15th and 16th centuries was not exclusively limited to the works of the ancient Greeks. Some mathematicians, such as Tartaglia and Luca Pacioli, welcomed and expanded on the medieval traditions of both Islamic scholars and people like Jordanus and Fibonacci. Giordano Bruno also critiqued the works of figures like Aristotle, whose logic he believed to be flawed, and developed a mathematical doctrine for the computation of partial physics in an attempt to transform theories of nature.
Physics
The progress being made in mathematics was complemented by advancements in physics, with people like Galileo attempting to bridge the gap between the two fields and question Aristotelian ideas. The revived investigation of physics opened up many opportunities in subfields like mechanics, optics, navigation, and cartography.
Mechanical theories had originated with the Greeks, especially Aristotle and Archimedes. Mechanics and philosophy had been related disciplines in ancient Greece, and only in the Renaissance did the two subjects begin to split. A lot of the work of developing new mechanical ideas and theories was carried out by Italians such as Rafael Bombelli, though the Fleming Simon Stevin also provided many ideas. Galileo also contributed to the advancement of this field with a treatise on mechanics in 1593, helping to develop ideas on relativity, freely falling bodies, and accelerated linear motion, though he lacked the means to properly communicate his findings at the time. In June 1609, Galileo's interests shifted to his telescopic investigations after having been close to revolutionizing the science of mechanics.
Navigation was an important topic of the time, and many innovations were made that, with the introduction of better ships and applications of the compass, would later lead to geographical discoveries. The calculations involved in navigation proved to be difficult, with the technology of the time unable to accurately predict weather or determine one's geographic position. Determining one's longitude proved especially challenging, since one's local time needed to be calculated on the basis of an astronomical observation. One theory that was tested was to record the time of an eclipse and use Regiomontanus' Ephemerides to compare it with Nuremberg time or Zacuto's Almanach perpetuum to compare it with Salamanca time, though the margin of error in such calculations was unacceptably great (around 25.5 degrees). Until longitude could be accurately determined, navigators had to rely on dead reckoning, with its many uncertainties.
Medicine
With the Renaissance came an increase in experimental investigation, principally in the field of dissection and body examination, thus advancing our knowledge of human anatomy. The development of modern neurology began in the 16th century with Andreas Vesalius, who described the anatomy of the brain and other organs; he had little knowledge of the brain's function, thinking that it resided mainly in the ventricles. Understanding of medical sciences and diagnosis improved, but with little direct benefit to health care. Few effective drugs existed, beyond opium and quinine. William Harvey provided a refined and complete description of the circulatory system. The most useful tomes in medicine, used both by students and expert physicians, were materiae medicae and pharmacopoeiae.
Geography and the New World
In the history of geography, the key classical text was the Geographia of Claudius Ptolemy (2nd century). It was translated into Latin in the 15th century by Jacopo d'Angelo. It was widely read in manuscript and went through many print editions after it was first printed in 1475. Regiomontanus worked on preparing an edition for print prior to his death; his manuscripts were consulted by later mathematicians in Nuremberg. Ptolemy's Geographia became the basis for most maps made in Europe throughout the 15th century. Even as new knowledge began to replace the content of old maps, the rediscovery of Ptolemy's mapping system, including the use of coordinates and projection, helped to redefine the overall field of cartography as a scientific pursuit rather than an artistic one.
The information provided by Ptolemy, as well as Pliny the Elder and other classical sources, was soon seen to be in contradiction to the lands explored in the Age of Discovery. The new discoveries revealed shortcomings in classical knowledge; they also opened European imagination to new possibilities. In particular, Christopher Columbus' voyage to the New World in 1492 helped set the tone for what would soon after become a wave of European expansion. Thomas More's Utopia was inspired partly by the discovery of the New World. Most maps developed prior to this period grossly underestimated the extent of the lands separating Europe from India on a westward route through the New World; however, through contributions of explorers such as Ferdinand Magellan, efforts were made to create more accurate maps during this period.
See also
Continuity thesis
The Copernican Question
Renaissance magic
Renaissance technology
Notes
References
Dear, Peter. Revolutionizing the Sciences: European Knowledge and Its Ambitions, 1500–1700. Princeton: Princeton University Press, 2001.
Debus, Allen G. Man and Nature in the Renaissance. Cambridge: Cambridge University Press, 1978.
Grafton, Anthony, et al. New Worlds, Ancient Texts: The Power of Tradition and the Shock of Discovery. Cambridge: Belknap Press of Harvard University Press, 1992.
Hall, Marie Boas. The Scientific Renaissance, 1450–1630. New York: Dover Publications, 1962, 1994.
External links
Renaissance science and technology at Britannica.com
Renaissance
Science | Science in the Renaissance | [
"Technology"
] | 3,143 | [
"History of science",
"History of science and technology"
] |
4,177,188 | https://en.wikipedia.org/wiki/Thermal%20management%20%28electronics%29 | All electronic devices and circuitry generate excess heat and thus require thermal management to improve reliability and prevent premature failure. The amount of heat output is equal to the power input, if there are no other energy interactions. There are several techniques for cooling including various styles of heat sinks, thermoelectric coolers, forced air systems and fans, heat pipes, and others. In cases of extreme low environmental temperatures, it may actually be necessary to heat the electronic components to achieve satisfactory operation.
Overview
Thermal resistance of devices
This is usually quoted as the thermal resistance from junction to case of the semiconductor device. The units are °C/W. For example, a heatsink rated at 10 °C/W will get 10 °C hotter than the surrounding air when it dissipates 1 Watt of heat. Thus, a heatsink with a low °C/W value is more efficient than a heatsink with a high °C/W value.
Given two semiconductor devices in the same package, a lower junction-to-ambient resistance (RθJ-C) indicates a more efficient device. However, when comparing two devices with different die-free package thermal resistances (e.g., DirectFET MT vs. wirebond 5×6 mm PQFN), their junction-to-ambient or junction-to-case resistance values may not correlate directly to their comparative efficiencies. Different semiconductor packages may have different die orientations, different copper (or other metal) mass surrounding the die, different die attach mechanics, and different molding thickness, all of which could yield significantly different junction-to-case or junction-to-ambient resistance values, and could thus obscure overall efficiency numbers.
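A minimal sketch of how these °C/W figures are used in practice: the junction-to-case, case-to-sink (interface), and sink-to-ambient thermal resistances simply add in series, like electrical resistances. All numeric values below are illustrative assumptions, not data for a specific device.

def junction_temperature(power_w, t_ambient_c, r_jc, r_cs, r_sa):
    """Steady-state junction temperature for a device mounted on a heat sink.

    r_jc, r_cs, r_sa : junction-to-case, case-to-sink, and sink-to-ambient
    thermal resistances, all in degC/W; they add in series.
    """
    r_total = r_jc + r_cs + r_sa
    return t_ambient_c + power_w * r_total

# Illustrative values: 20 W dissipation, 25 degC ambient,
# R_jc = 1.0, interface = 0.5, heat sink = 2.0 degC/W.
t_j = junction_temperature(20.0, 25.0, 1.0, 0.5, 2.0)
print(f"Estimated junction temperature: {t_j:.1f} degC")  # prints 95.0 degC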
Thermal time constants
A heatsink's thermal mass can be considered as a capacitor (storing heat instead of charge) and the thermal resistance as an electrical resistance (giving a measure of how fast stored heat can be dissipated). Together, these two components form a thermal RC circuit with an associated time constant given by the product of R and C. This quantity can be used to calculate the dynamic heat dissipation capability of a device, in an analogous way to the electrical case.
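A minimal sketch of that RC analogy, assuming an illustrative thermal resistance and thermal capacitance; the step response to a constant power input has the same exponential form as an electrical RC circuit.

import math

def temperature_rise(t_s, power_w, r_th, c_th, t_ambient_c=25.0):
    """First-order thermal RC response to a constant power step.

    r_th : thermal resistance (degC/W), c_th : thermal capacitance (J/degC).
    The time constant tau = r_th * c_th governs how fast the heat sink
    approaches its steady-state temperature.
    """
    tau = r_th * c_th
    return t_ambient_c + power_w * r_th * (1.0 - math.exp(-t_s / tau))

# Assumed values: 2 degC/W heat sink with 50 J/degC thermal mass, 10 W load.
tau = 2.0 * 50.0  # 100 s
for t in (0, tau, 3 * tau, 10 * tau):
    print(f"t = {t:6.0f} s -> {temperature_rise(t, 10.0, 2.0, 50.0):.1f} degC")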
Thermal interface material
A thermal interface material or mastic (aka TIM) is used to fill the gaps between thermal transfer surfaces, such as between microprocessors and heatsinks, in order to increase thermal transfer efficiency.
It has a higher thermal conductivity in the Z-direction (through the interface) than in the XY-plane.
Applications
Personal computers
Due to recent technological developments and public interest, the retail heat sink market has reached an all-time high. In the early 2000s, CPUs were produced that emitted more and more heat, escalating requirements for quality cooling systems.
Overclocking has always meant greater cooling needs, and the inherently hotter chips meant more concerns for the enthusiast. Efficient heat sinks are vital to overclocked computer systems because the higher a microprocessor's cooling rate, the faster the computer can operate without instability; generally, faster operation leads to higher performance. Many companies now compete to offer the best heat sink for PC overclocking enthusiasts. Prominent aftermarket heat sink manufacturers include: Aero Cool, Foxconn, Thermalright, Thermaltake, Swiftech, and Zalman.
Soldering
Temporary heat sinks were sometimes used while soldering circuit boards, preventing excessive heat from damaging sensitive nearby electronics. In the simplest case, this means partially gripping a component using a heavy metal crocodile clip or similar clamp. Modern semiconductor devices, which are designed to be assembled by reflow soldering, can usually tolerate soldering temperatures without damage. On the other hand, electrical components such as magnetic reed switches can malfunction if exposed to higher powered soldering irons, so this practice is still very much in use.
Batteries
In batteries used for electric vehicles, nominal performance is usually specified for working temperatures somewhere in the +20 °C to +30 °C range; however, the actual performance can deviate substantially from this if the battery is operated at higher or, in particular, lower temperatures, so some electric cars have heating and cooling for their batteries.
Methodologies
Heat sinks
Heat sinks are widely used in electronics and have become essential to modern microelectronics. In common use, it is a metal object brought into contact with an electronic component's hot surface—though in most cases, a thin thermal interface material mediates between the two surfaces. Microprocessors and power handling semiconductors are examples of electronics that need a heat sink to reduce their temperature through increased thermal mass and heat dissipation (primarily by conduction and convection and to a lesser extent by radiation). Heat sinks have become almost essential to modern integrated circuits like microprocessors, DSPs, GPUs, and more.
A heat sink usually consists of a metal structure with one or more flat surfaces to ensure good thermal contact with the components to be cooled, and an array of comb or fin like protrusions to increase the surface contact with the air, and thus the rate of heat dissipation.
A heat sink is sometimes used in conjunction with a fan to increase the rate of airflow over the heat sink. This maintains a larger temperature gradient by replacing warmed air faster than convection would. This is known as a forced air system.
Cold plate
Placing a conductive thick metal plate, referred to as a cold plate, as a heat transfer interface between a heat source and a cold flowing fluid (or any other heat sink) may improve the cooling performance. In such an arrangement, the heat source is cooled under the thick plate instead of being cooled in direct contact with the cooling fluid. It is shown that the thick plate can significantly improve the heat transfer between the heat source and the cooling fluid by way of conducting the heat current in an optimal manner. The two most attractive advantages of this method are that it requires no additional pumping power and no extra heat transfer surface area, which distinguishes it from fins (extended surfaces).
Principle
Heat sinks function by efficiently transferring thermal energy ("heat") from an object at high temperature to a second object at a lower temperature with a much greater heat capacity. This rapid transfer of thermal energy quickly brings the first object into thermal equilibrium with the second, lowering the temperature of the first object, fulfilling the heat sink's role as a cooling device. Efficient function of a heat sink relies on rapid transfer of thermal energy from the first object to the heat sink, and the heat sink to the second object.
The most common design of a heat sink is a metal device with many fins. The high thermal conductivity of the metal combined with its large surface area result in the rapid transfer of thermal energy to the surrounding, cooler, air. This cools the heat sink and whatever it is in direct thermal contact with. Use of fluids (for example coolants in refrigeration) and thermal interface material (in cooling electronic devices) ensures good transfer of thermal energy to the heat sink. Similarly, a fan may improve the transfer of thermal energy from the heat sink to the air.
Construction and materials
A heat sink usually consists of a base with one or more flat surfaces and an array of comb or fin-like protrusions to increase the heat sink's surface area contacting the air, and thus increasing the heat dissipation rate. While a heat sink is a static object, a fan often aids a heat sink by providing increased airflow over the heat sink—thus maintaining a larger temperature gradient by replacing the warmed air more quickly than passive convection achieves alone—this is known as a forced-air system.
Ideally, heat sinks are made from a good thermal conductor such as silver, gold, copper, or aluminum alloy. Copper and aluminum are among the most-frequently used materials for this purpose within electronic devices. Copper (401 W/(m·K) at 300 K) is significantly more expensive than aluminum (237 W/(m·K) at 300 K) but is also roughly twice as efficient as a thermal conductor. Aluminum has the significant advantage that it can be easily formed by extrusion, thus making complex cross-sections possible. Aluminum is also much lighter than copper, offering less mechanical stress on delicate electronic components. Some heat sinks made from aluminum have a copper core as a trade off. The heat sink's contact surface (the base) must be flat and smooth to ensure the best thermal contact with the object needing cooling. Frequently a thermally conductive grease is used to ensure optimal thermal contact; such compounds often contain colloidal silver. Further, a clamping mechanism, screws, or thermal adhesive hold the heat sink tightly onto the component, but specifically without pressure that would crush the component.
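A rough sketch of the material trade-off described above, comparing the one-dimensional conduction resistance R = L/(kA) of a heat-sink base plate in copper and aluminium; the plate dimensions are assumed for illustration, and the conductivities are the values quoted above.

def conduction_resistance(thickness_m, area_m2, k_w_mk):
    """1-D conduction resistance of a flat plate: R = L / (k * A), in K/W."""
    return thickness_m / (k_w_mk * area_m2)

# Assumed base plate: 5 mm thick, 40 mm x 40 mm footprint.
L, A = 0.005, 0.04 * 0.04
for name, k in (("copper", 401.0), ("aluminium", 237.0)):
    r = conduction_resistance(L, A, k)
    print(f"{name}: {r * 1000:.2f} mK/W")

For a base of this size the conduction resistance is tiny either way; the choice between copper and aluminium is usually driven by weight, cost, and manufacturability rather than by this term alone.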
Performance
Heat sink performance (including free convection, forced convection, liquid cooled, and any combination thereof) is a function of material, geometry, and overall surface heat transfer coefficient. Generally, forced convection heat sink thermal performance is improved by increasing the thermal conductivity of the heat sink materials, increasing the surface area (usually by adding extended surfaces, such as fins or foam metal) and by increasing the overall area heat transfer coefficient (usually by increase fluid velocity, such as adding fans, pumps, etc.).
Online heat sink calculators from companies such as Novel Concepts, Inc. and at www.heatsinkcalculator.com can accurately estimate forced and natural convection heat sink performance. For more complex heat sink geometries, or heat sinks with multiple materials or multiple fluids, computational fluid dynamics (CFD) analysis is recommended.
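As a back-of-envelope alternative to a full calculator or CFD run, the convective part of a heat sink's thermal resistance can be estimated as R = 1/(h·η·A); the heat transfer coefficients, fin area, and fin efficiency below are assumed illustrative values only.

def convective_resistance(h_w_m2k, fin_area_m2, fin_efficiency=1.0):
    """Convective thermal resistance of a finned surface: R = 1 / (h * eta * A), in K/W."""
    return 1.0 / (h_w_m2k * fin_efficiency * fin_area_m2)

# Assumed: 0.05 m^2 of total fin area, fin efficiency 0.9.
area, eta = 0.05, 0.9
print(f"natural convection (h ~ 10 W/m2K): {convective_resistance(10, area, eta):.2f} K/W")
print(f"forced convection  (h ~ 50 W/m2K): {convective_resistance(50, area, eta):.2f} K/W")

The roughly fivefold drop in resistance when a fan raises the heat transfer coefficient is exactly why forced-air systems are so common.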
Convective air cooling
This term describes device cooling by the convection currents of the warm air being allowed to escape the confines of the component to be replaced by cooler air. Since warm air normally rises, this method usually requires venting at the top or sides of the casing to be effective.
Forced air cooling
If there is more air being forced into a system than being pumped out (due to an imbalance in the number of fans), this is referred to as a 'positive' airflow, as the pressure inside the unit is higher than outside.
A balanced or neutral airflow is the most efficient, although a slightly positive airflow can result in less dust build-up if filtered properly.
Heat pipes
A heat pipe is a heat transfer device that uses evaporation and condensation of a two-phase "working fluid" or coolant to transport large quantities of heat with a very small difference in temperature between the hot and cold interfaces. A typical heat pipe consists of sealed hollow tube made of a thermoconductive metal such as copper or aluminium, and a wick to return the working fluid from the evaporator to the condenser. The pipe contains both saturated liquid and vapor of a working fluid (such as water, methanol or ammonia), all other gases being excluded. The most common heat pipe for electronics thermal management has a copper envelope and wick, with water as the working fluid. Copper/methanol is used if the heat pipe needs to operate below the freezing point of water, and aluminum/ammonia heat pipes are used for electronics cooling in space.
The advantage of heat pipes is their great efficiency in transferring heat. The thermal conductivity of heat pipes can be as high as 100,000 W/m K, in contrast to copper, which has a thermal conductivity of around 400 W/m K.
Peltier cooling plates
Peltier cooling plates take advantage of the Peltier effect to create a heat flux between the junction of two different conductors of electricity by applying an electric current. This effect is commonly used for cooling electronic components and small instruments. In practice, many such junctions may be arranged in series to increase the effect to the amount of heating or cooling required.
There are no moving parts, so a Peltier plate is maintenance free. It has a relatively low efficiency, so thermoelectric cooling is generally used for electronic devices, such as infra-red sensors, that need to operate at temperatures below ambient. For cooling these devices, the solid state nature of the Peltier plates outweighs their poor efficiency. Thermoelectric junctions are typically around 10% as efficient as the ideal Carnot cycle refrigerator, compared with 40% achieved by conventional compression cycle systems.
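A minimal sketch of that efficiency comparison; the hot-side and cold-side temperatures are assumed for illustration, and the 10% and 40% figures are simply the fractions of the Carnot coefficient of performance quoted above.

def carnot_cop(t_cold_k, t_hot_k):
    """Ideal (Carnot) coefficient of performance for a refrigerator."""
    return t_cold_k / (t_hot_k - t_cold_k)

# Assumed: sensor held at -10 degC while the hot side sits at 35 degC.
t_cold, t_hot = 263.0, 308.0
ideal = carnot_cop(t_cold, t_hot)
print(f"Carnot COP: {ideal:.2f}")
print(f"~10% of Carnot (typical thermoelectric): {0.10 * ideal:.2f}")
print(f"~40% of Carnot (compression cycle):      {0.40 * ideal:.2f}")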
Synthetic jet air cooling
A synthetic jet is produced by a continual flow of vortices that are formed by alternating brief ejection and suction of air across an opening such that the net mass flux is zero. A unique feature of these jets is that they are formed entirely from the working fluid of the flow system in which they are deployed, and can thus impart net momentum to the flow of a system without net mass injection into the system.
Synthetic jet air movers have no moving parts and are thus maintenance free. Due to their high heat transfer coefficients and high reliability but lower overall flow rates, synthetic jet air movers are usually used at the chip level rather than at the system level for cooling. However, depending on the size and complexity of the system, they can at times be used for both.
Electrostatic fluid acceleration
An electrostatic fluid accelerator (EFA) is a device which pumps a fluid such as air without any moving parts. Instead of using rotating blades, as in a conventional fan, an EFA uses an electric field to propel electrically charged air molecules. Because air molecules are normally neutrally charged, the EFA has to create some charged molecules, or ions, first. Thus there are three basic steps in the fluid acceleration process: ionize air molecules, use those ions to push many more neutral molecules in a desired direction, and then recapture and neutralize the ions to eliminate any net charge.
The basic principle has been understood for some time but only in recent years have seen developments in the design and manufacture of EFA devices that may allow them to find practical and economical applications, such as in micro-cooling of electronics components.
Recent developments
More recently, high thermal conductivity materials such as synthetic diamond and boron arsenide cooling sinks are being researched to provide better cooling. Boron arsenide has been reported with high thermal conductivity and high thermal boundary conductance with gallium nitride transistors and thus better performance than diamond and silicon carbide cooling technologies. For example, funded by the U.S. Department of Defense, research has been underway using high-power density gallium nitride transistors with synthetic diamonds as thermal conductors. Also, some heat sinks are constructed of multiple materials with desirable characteristics, such as phase change materials, which can store a great deal of energy due to their heat of fusion.
Thermal simulation of electronics
Thermal simulations give engineers a visual representation of the temperature and airflow inside the equipment. Thermal simulations enable engineers to design the cooling system; to optimise a design to reduce power consumption, weight and cost; and to verify the thermal design to ensure there are no issues when the equipment is built. Most thermal simulation software uses computational fluid dynamics techniques to predict temperature and airflow of an electronics system.
Design
Thermal simulation is often required to determine how to effectively cool components within design constraints. Simulation enables the design and verification of the thermal design of the equipment at a very early stage and throughout the design of the electronic and mechanical parts. Designing with thermal properties in mind from the start reduces the risk of last minute design changes to fix thermal issues.
Using thermal simulation as part of the design process enables the creation of an optimal and innovative product design that performs to specification and meets customers' reliability requirements.
Optimise
It is easy to design a cooling system for almost any equipment if there is unlimited space, power and budget. However, the majority of equipment will have a rigid specification that leaves a limited margin for error. There is a constant pressure to reduce power requirements, system weight and cost parts, without compromising performance or reliability. Thermal simulation allows experimentation with optimisation, such as modifying heatsink geometry or reducing fan speeds in a virtual environment, which is faster, cheaper and safer than physical experiment and measurement.
Verify
Traditionally, the first time the thermal design of the equipment is verified is after a prototype has been built. The device is powered up, perhaps inside an environmental chamber, and temperatures of the critical parts of the system are measured using sensors such as thermocouples. If any problems are discovered, the project is delayed while a solution is sought. A change to the design of a PCB or enclosure part may be required to fix the issue, which will take time and cost a significant amount of money. If thermal simulation is used as part of the design process of the equipment, thermal design issues will be identified before a prototype is built. Fixing an issue at the design stage is both quicker and cheaper than modifying the design after a prototype is created.
Software
A wide range of software tools is designed for thermal simulation of electronics, including 6SigmaET, Ansys' IcePak, and Mentor Graphics' FloTHERM.
Telecommunications environments
Thermal management measures must be taken to accommodate high heat release equipment in telecommunications rooms. Generic supplemental/spot cooling techniques, as well as turnkey cooling solutions developed by equipment manufacturers are viable solutions. Such solutions could allow very high heat release equipment to be housed in a central office that has a heat density at or near the cooling capacity available from the central air handler.
According to Telcordia GR-3028, Thermal Management in Telecommunications Central Offices, the most common way of cooling modern telecommunications equipment internally is by utilizing multiple high-speed fans to create forced convection cooling. Although direct and indirect liquid cooling may be introduced in the future, the current design of new electronic equipment is geared towards maintaining air as the cooling medium.
A well-developed "holistic" approach is required to understand current and future thermal management problems. Space cooling on one hand, and equipment cooling on the other, cannot be viewed as two isolated parts of the overall thermal challenge. The main purpose of an equipment facility's air-distribution system is to distribute conditioned air in such a way that the electronic equipment is cooled effectively. The overall cooling efficiency depends on how the air distribution system moves air through the equipment room, how the equipment moves air through the equipment frames, and how these airflows interact with one another. High heat-dissipation levels rely heavily on a seamless integration of equipment-cooling and room-cooling designs.
The existing environmental solutions in telecommunications facilities have inherent limitations. For example, most mature central offices have limited space available for large air duct installations that are required for cooling high heat density equipment rooms. Furthermore, steep temperature gradients develop quickly should a cooling outage occur; this has been well documented through computer modeling and direct measurements and observations. Although environmental backup systems may be in place, there are situations when they will not help. In a recent case, telecommunications equipment in a major central office was overheated, and critical services were interrupted by a complete cooling shut down initiated by a false smoke alarm.
A major obstacle for effective thermal management is the way heat-release data is currently reported. Suppliers generally specify the maximum (nameplate) heat release from the equipment. In reality, equipment configuration and traffic diversity will result in significantly lower heat release numbers.
Equipment cooling classes
As stated in GR-3028, most equipment environments maintain cool front (maintenance) aisles and hot rear (wiring) aisles, where cool supply air is delivered to the front aisles and hot air is removed from the rear aisles. This scheme provides multiple benefits, including effective equipment cooling and high thermal efficiency.
In the traditional room cooling class utilized by the majority of service providers, equipment cooling would benefit from air intake and exhaust locations that help move air from the front aisle to the rear aisle. The traditional front-bottom to top-rear pattern, however, has been replaced in some equipment with other airflow patterns that may not ensure adequate equipment cooling in high heat density areas.
A classification of equipment (shelves and cabinets) into Equipment-Cooling (EC) classes serves the purpose of classifying the equipment with regard to the cooling air intake and hot air exhaust locations, i.e., the equipment airflow schemes or protocols.
The EC-Class syntax provides a flexible and important “common language.” It is used for developing Heat-Release Targets (HRTs), which are important for network reliability, equipment and space planning, and infrastructure capacity planning. HRTs take into account physical limitations of the environment and environmental baseline criteria, including the supply airflow capacity, air diffusion into the equipment space, and air-distribution/equipment interactions. In addition to being used for developing the HRTs, the EC Classification can be used to show compliance on product sheets, provide internal design specifications, or specify requirements in purchase orders.
The Room-Cooling classification (RC-Class) refers to the way the overall equipment space is air-conditioned (cooled). The main purpose of RC-Classes is to provide a logical classification and description of legacy and non-legacy room-cooling schemes or protocols in the central office environment. In addition to being used for developing HRTs, the RC-classification can be used in internal central office design specifications or in purchase orders.
Supplemental-Cooling classes (SC-Class) provide a classification of supplemental cooling techniques. Service providers use supplemental/spot-cooling solutions to supplement the cooling capacity (e.g., to treat occurrences of "hot spots") provided by the general room-cooling protocol as expressed by the RC-Class.
Economic impact
Energy consumption by telecommunications equipment currently accounts for a high percentage of the total energy consumed in central offices. Most of this energy is subsequently released as heat to the surrounding equipment space. Since most of the remaining central office energy use goes to cool the equipment room, the economic impact of making the electronic equipment energy-efficient would be considerable for companies that use and operate telecommunications equipment. It would reduce capital costs for support systems, and improve thermal conditions in the equipment room.
See also
Heat generation in integrated circuits
Thermal resistance in electronics
Thermal management of high-power LEDs
Thermal design power
Heat pipe
Computer cooling
Radiator
Active cooling
References
Further reading
External links
Computer hardware cooling
Electronic design | Thermal management (electronics) | [
"Engineering"
] | 4,615 | [
"Electronic design",
"Electronic engineering",
"Design"
] |
4,179,425 | https://en.wikipedia.org/wiki/Royal%20Netherlands%20Meteorological%20Institute | The Royal Netherlands Meteorological Institute (, ; KNMI) is the Dutch national weather forecasting service, which has its headquarters in De Bilt, in the province of Utrecht, central Netherlands.
The primary tasks of KNMI are weather forecasting, monitoring of climate changes and monitoring seismic activity. KNMI is also the national research and information centre for climate, climate change and seismology.
History
KNMI was established by royal decree of King William III on 21 January 1854 under the title "Royal Meteorological Observatory". Professor C. H. D. Buys Ballot was appointed as the first Director. The year before Professor Ballot had moved the Utrecht University Observatory to the decommissioned fort at Sonnenborgh. It was only later, in 1897, that the headquarters of the KNMI moved to the Koelenberg estate in De Bilt.
The "Royal Meteorological Observatory" originally had two divisions, the land branch under Dr. Frederik Wilhelm Christiaan Krecke and the marine branch under navy Lt. Marin H. Jansen.
Like Robert FitzRoy, who founded the Meteorological Office in Britain the same year, Ballot was disenchanted with the non-scientific weather reports found in European newspapers at the time. Like the Met Office, the KNMI also pioneered daily weather predictions, which Ballot called by a newly coined term, "weervoorspelling" (weather prognostication).
Research
Applied research at KNMI is focused on three areas:
Research aimed at improving the quality, usefulness and accessibility of meteorological and oceanographical data in support of operational weather forecasting and other applications of such data.
Climate-related research on oceanography; atmospheric boundary layer processes, clouds and radiation; the chemical composition of the atmosphere (e.g. ozone); climate variability research; the analysis of climate, climate variability and climatic change; modelling support and policy support to the Dutch Government with respect to climate and climatic change.
Seismological research as well as monitoring of seismic activity (earthquakes).
Development of atmospheric dispersion models
KNMI's applied research also encompasses the development and operational use of atmospheric dispersion models.
Whenever a disaster occurs within Europe which causes the emission of toxic gases or radioactive material into the atmosphere, it is of utmost importance to quickly determine where the atmospheric plume of toxic material is being transported by the prevailing winds and other meteorological factors. At such times, KNMI activates a special calamity service. For this purpose, a group of seven meteorologists is constantly on call day and night. KNMI's role in supplying information during emergencies is included in municipal and provincial disaster management plans. Civil services, fire departments and the police can be provided with weather and other relevant information directly by the meteorologist on duty, through dedicated telephone connections.
KNMI has available two atmospheric dispersion models for use by their calamity service:
PUFF - In cooperation with the Netherlands National Institute for Public Health and the Environment (Dutch: Rijksinstituut voor Volksgezondheid en Milieuhygiene or simply RIVM), KNMI has developed the dispersion model PUFF. It has been designed to calculate the dispersion of air pollution on European scales. The model was originally tested by using measurements of the dispersion of radioactivity caused by the accident in the nuclear power plant of Chernobyl in 1986. A few years later, in 1994, a dedicated dispersion experiment called ETEX (European Tracer EXperiment) was carried out, which also provided useful data for further testing of PUFF.
CALM - CALM is a CALamity Model designed for the calculation of air pollution dispersion on small spatial scales, within the Netherlands. The algorithms and parameters contained in the CALM model are practically identical to that of the PUFF model. However, the meteorological input can only be supplied manually in CALM. The user provides both observed and predicted values for wind velocity at the 10 meter height level, the atmospheric stability classification and the mixing height. After the model calculations have been performed, a map is created and displayed with the derived trajectories of the pollution plume and an indication of how and where the cloud will disperse.
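PUFF and CALM themselves are not documented here; the following is a generic, highly simplified single-puff Gaussian dispersion sketch of the kind of calculation such models perform. The wind speed, released mass, and dispersion lengths are assumed illustrative values, not KNMI parameters, and ground reflection is ignored for brevity.

import math

def gaussian_puff_concentration(x, y, z, t, q_kg, u_ms, sigma_y, sigma_z):
    """Concentration (kg/m^3) of a single instantaneous puff.

    The puff of mass q_kg is released at the origin at t = 0 and advected
    downwind (along x) at speed u_ms; sigma_y and sigma_z are the lateral
    and vertical dispersion lengths (m) at time t.
    """
    sigma_x = sigma_y  # crude assumption: isotropic horizontal spread
    norm = q_kg / ((2.0 * math.pi) ** 1.5 * sigma_x * sigma_y * sigma_z)
    return norm * math.exp(
        -((x - u_ms * t) ** 2) / (2 * sigma_x ** 2)
        - (y ** 2) / (2 * sigma_y ** 2)
        - (z ** 2) / (2 * sigma_z ** 2)
    )

# Illustrative case: 1 kg released, 5 m/s wind, point 1 km downwind after 200 s.
c = gaussian_puff_concentration(x=1000.0, y=0.0, z=0.0, t=200.0,
                                q_kg=1.0, u_ms=5.0, sigma_y=80.0, sigma_z=40.0)
print(f"concentration at the plume centreline: {c:.3e} kg/m^3")

Operational models such as PUFF add many refinements on top of this picture, including time-varying wind fields, mixing-height capping, and deposition.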
Storm naming
In 2019, KNMI decided to join the western storm naming group to help raise awareness of the danger of storms; the first named storm was Storm Ciara on 9 February 2020.
See also
Atmospheric dispersion modeling
List of atmospheric dispersion models
National Center for Atmospheric Research
NERI, the National Environmental Research Institute of Denmark
NILU, the Norwegian Institute for Air Research
Roadway air dispersion modeling
Swedish Meteorological and Hydrological Institute
TA Luft
UK Atmospheric Dispersion Modelling Liaison Committee
UK Dispersion Modelling Bureau
University Corporation for Atmospheric Research
References
External links
KNMI website (in Dutch)
KNMI website (in English)
KNMI atmospheric dispersion models
RIVM website (in English)
Atmospheric dispersion modeling
Organisations based in De Bilt
Governmental meteorological agencies in Europe
Independent government agencies of the Netherlands
Organisations based in the Netherlands with royal patronage
Research institutes in the Netherlands | Royal Netherlands Meteorological Institute | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,022 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
4,180,667 | https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff%20limit | The Tolman–Oppenheimer–Volkoff limit (or TOV limit) is an upper bound to the mass of cold, non-rotating neutron stars, analogous to the Chandrasekhar limit for white dwarf stars. Stars more massive than the TOV limit collapse into a black hole. The original calculation in 1939, which neglected complications such as nuclear forces between neutrons, placed this limit at approximately 0.7 solar masses (M☉). Later, more refined analyses have resulted in larger values.
Theoretical work in 1996 placed the limit at approximately 1.5 to 3.0 M☉, corresponding to an original stellar mass of 15 to 20 M☉; additional work in the same year gave a more precise range of 2.2 to 2.9 M☉.
Data from GW170817, the first gravitational wave observation attributed to merging neutron stars (thought to have collapsed into a black hole within a few seconds after merging), placed the limit in the range of 2.01 to 2.17 M☉.
In the case of a rigidly spinning neutron star, meaning that different levels in the interior of the star all rotate at the same rate, the mass limit is thought to increase by up to 18–20%.
History
The idea that there should be an absolute upper limit for the mass of a cold (as distinct from thermal pressure supported) self-gravitating body dates back to the 1932 work of Lev Landau, based on the Pauli exclusion principle. Pauli's principle shows that the fermionic particles in sufficiently compressed matter would be forced into energy states so high that their rest mass contribution would become negligible when compared with the relativistic kinetic contribution (RKC). RKC is determined just by the relevant quantum wavelength λ, which would be of the order of the mean interparticle separation. In terms of Planck units, with the reduced Planck constant ħ, the speed of light c, and the gravitational constant G all set equal to one, there will be a corresponding pressure given roughly by
P ~ 1/λ^4.
At the upper mass limit, that pressure will equal the pressure needed to resist gravity. The pressure to resist gravity for a body of mass M will be given according to the virial theorem roughly by
P^3 ~ M^2 ρ^4,
where ρ is the density. This will be given by ρ = m/λ^3, where m is the relevant mass per particle. It can be seen that the wavelength λ cancels out, so that one obtains an approximate mass limit formula of the very simple form
M ~ 1/m^2.
In this relationship, m can be taken to be given roughly by the proton mass. This even applies in the white dwarf case (that of the Chandrasekhar limit) for which the fermionic particles providing the pressure are electrons. This is because the mass density is provided by the nuclei in which the neutrons are at most about as numerous as the protons. Likewise the protons, for charge neutrality, must be exactly as numerous as the electrons outside.
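Restoring the factors of ħ, c and G, the limit M ~ 1/m^2 in Planck units becomes M ~ m_Planck^3 / m_proton^2. A minimal order-of-magnitude sketch of that estimate (no numerical prefactor is included):

import math

HBAR = 1.054571817e-34     # reduced Planck constant, J*s
C = 2.99792458e8           # speed of light, m/s
G = 6.67430e-11            # gravitational constant, m^3 kg^-1 s^-2
M_PROTON = 1.67262192e-27  # proton mass, kg
M_SUN = 1.989e30           # solar mass, kg (approximate)

# Planck mass: m_Pl = sqrt(hbar * c / G)
m_planck = math.sqrt(HBAR * C / G)

# Order-of-magnitude limit: M ~ m_Pl^3 / m_proton^2
m_limit = m_planck ** 3 / M_PROTON ** 2

print(f"Planck mass: {m_planck:.3e} kg")
print(f"M_limit ~ {m_limit:.3e} kg ~ {m_limit / M_SUN:.1f} solar masses")

Despite the crudeness of the argument, this lands within a factor of a few of both the Chandrasekhar limit and modern TOV estimates.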
In the case of neutron stars this limit was first worked out by J. Robert Oppenheimer and George Volkoff in 1939, using the work of Richard Chace Tolman. Oppenheimer and Volkoff assumed that the neutrons in a neutron star formed a degenerate cold Fermi gas. They thereby obtained a limiting mass of approximately 0.7 solar masses, which was less than the Chandrasekhar limit for white dwarfs.
Oppenheimer and Volkoff's paper notes that "the effect of repulsive forces, i.e., of raising the pressure for a given density above the value given by the Fermi equation of state ... could tend to prevent the collapse." And indeed, the most massive neutron star detected so far, PSR J0952–0607, is estimated at about 2.35 M☉, much heavier than Oppenheimer and Volkoff's TOV limit. More realistic models of neutron stars that include baryon strong force repulsion predict a neutron star mass limit of 2.2 to 2.9 M☉. The uncertainty in the value reflects the fact that the equations of state for extremely dense matter are not well known.
Applications
In a star less massive than the limit, the gravitational compression is balanced by short-range repulsive neutron–neutron interactions mediated by the strong force and also by the quantum degeneracy pressure of neutrons, preventing collapse. If its mass is above the limit, the star will collapse to some denser form. It could form a black hole, or change composition and be supported in some other way (for example, by quark degeneracy pressure if it becomes a quark star). Because the properties of hypothetical, more exotic forms of degenerate matter are even more poorly known than those of neutron-degenerate matter, most astrophysicists assume, in the absence of evidence to the contrary, that a neutron star above the limit collapses directly into a black hole.
A black hole formed by the collapse of an individual star must have mass exceeding the Tolman–Oppenheimer–Volkoff limit. Theory predicts that because of mass loss during stellar evolution, a black hole formed from an isolated star of solar metallicity can have a mass of no more than approximately 10 solar masses. Observationally, because of their large mass, relative faintness, and X-ray spectra, a number of massive objects in X-ray binaries are thought to be stellar black holes. These black hole candidates are estimated to have masses between 3 and 20 solar masses. LIGO has detected black hole mergers involving black holes in the 7.5–50 solar mass range; it is possible – although unlikely – that these black holes were themselves the result of previous mergers.
Oppenheimer and Volkoff discounted the influence of heat, stating in reference to work by Landau (1932), 'even [at] 10^7 degrees... the pressure is determined essentially by the density only and not by the temperature' – yet it has been estimated that temperatures can reach in excess of 10^9 K during formation of a neutron star, mergers and binary accretion. Another source of heat and therefore collapse-resisting pressure in neutron stars is 'viscous friction in the presence of differential rotation.'
Oppenheimer and Volkoff's calculation of the mass limit of neutron stars also neglected to consider the rotation of neutron stars; however, we now know that neutron stars are capable of spinning at much faster rates than were known in Oppenheimer and Volkoff's time. The fastest-spinning neutron star known is PSR J1748-2446ad, rotating at a rate of 716 times per second or 43,000 revolutions per minute, giving a linear (tangential) speed at the surface on the order of 0.24c (i.e., nearly a quarter the speed of light). Star rotation interferes with convective heat loss during supernova collapse, so rotating stars are more likely to collapse directly to form a black hole.
List of least massive black holes
List of objects in mass gap
This list contains objects that may be neutron stars, black holes, quark stars, or other exotic objects. It is distinct from the list of least massive black holes because the nature of these objects is undetermined, largely owing to indeterminate mass or other poor observational data.
See also
Tolman–Oppenheimer–Volkoff equation
Oppenheimer–Snyder model
Bekenstein bound
Quark star
Notes
References
Astrophysics
Neutron stars
Black holes
J. Robert Oppenheimer | Tolman–Oppenheimer–Volkoff limit | [
"Physics",
"Astronomy"
] | 1,513 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
8,771,002 | https://en.wikipedia.org/wiki/Hanna%20Nasser%20%28academic%29 | Hanna Nasir (; born 14 Jan 1935), alternately transliterated Hanna Nasser, is a Palestinian academic and political figure.
Early life and education
Nasser was born in Jaffa in 1935. His cousin was Kamal Nasser, who was assassinated by the Israelis in Beirut in 1973.
Nasser holds a PhD in Nuclear Physics from Purdue University in the United States.
Career and activities
Nasser was a long-time president of Birzeit University, which his father, Musa Nasser, founded. He directed the school's transition from a community college to an accredited university. In November 1974 Nasser was exiled by the Israeli authorities. He continued to serve as Birzeit's president in exile; while the school's vice-president managed its day-to-day business, Birzeit officials regularly visited Nasser in Amman to receive his input on major decisions.
Nasir served on the Executive Committee of the Palestine Liberation Organization between 1981 and 1984 and held the position of Head of the Palestine National Fund between 1982 and 1984. Nasir, along with 29 other exiles, was allowed to return to the West Bank in May 1993 as the peace process got under way. He remained president of Birzeit until his retirement in 2004.
In 2002, Yasser Arafat appointed Nasir to the post of Chairman of the Palestinian Central Elections Commission (CEC). The CEC was established by the Palestinian Authority in 1995 as an independent body, responsible for the conduct of elections in the Palestinian territories. In the post, Nasir oversaw the presidential election in 2005, the legislative election in 2006, and the local elections in the West Bank in 2012 and 2017.
Personal life
Born to a Palestinian Christian family, Hanna is the father of three sons and one daughter.
Awards
He holds honorary titles including the French Legion of Honour and an honorary doctorate from the American University in Cairo.
Notes
Further reading
An extensive discussion of Nasir's career can be found in Gabi Baramki's Peaceful Resistance: Building a Palestinian University under Occupation, Pluto Press, October 2009.
Nuclear physicists
Palestinian physicists
Palestinian Christians
20th-century Palestinian politicians
Palestine Liberation Organization members
Academic staff of Birzeit University
Purdue University alumni
Living people
Arab people in Mandatory Palestine
People from Jaffa
1935 births
Presidents of Birzeit University | Hanna Nasser (academic) | [
"Physics"
] | 456 | [
"Nuclear physicists",
"Nuclear physics"
] |
8,774,050 | https://en.wikipedia.org/wiki/Telecommunications%20engineering | Telecommunications engineering is a subfield of electronics engineering which seeks to design and devise systems of communication at a distance. The work ranges from basic circuit design to strategic mass developments. A telecommunication engineer is responsible for designing and overseeing the installation of telecommunications equipment and facilities, such as complex electronic switching system, and other plain old telephone service facilities, optical fiber cabling, IP networks, and microwave transmission systems. Telecommunications engineering also overlaps with broadcast engineering.
Telecommunication is a diverse field of engineering connected to electronic, civil and systems engineering. Ultimately, telecom engineers are responsible for providing high-speed data transmission services. They use a variety of equipment and transport media to design the telecom network infrastructure; the most common media used by wired telecommunications today are twisted pair, coaxial cables, and optical fibers. Telecommunications engineers also provide solutions revolving around wireless modes of communication and information transfer, such as wireless telephony services, radio and satellite communications, internet, Wi-Fi and broadband technologies.
History
Telecommunication systems are generally designed by telecommunication engineers. The discipline sprang from technological improvements in the telegraph industry in the late 19th century and the radio and telephone industries in the early 20th century. Today, telecommunication is widespread and devices that assist the process, such as the television, radio and telephone, are common in many parts of the world. There are also many networks that connect these devices, including computer networks, the public switched telephone network (PSTN), radio networks, and television networks. Computer communication across the Internet is one of many examples of telecommunication. Telecommunication plays a vital role in the world economy, and the telecommunication industry's revenue has been placed at just under 3% of the gross world product.
Telegraph and telephone
Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837. Soon after he was joined by Alfred Vail who developed the register — a telegraph terminal that integrated a logging device for recording messages to paper tape. This was demonstrated successfully over three miles (five kilometres) on 6 January 1838 and eventually over forty miles (sixty-four kilometres) between Washington, D.C. and Baltimore on 24 May 1844. The patented invention proved lucrative and by 1851 telegraph lines in the United States spanned over 20,000 miles (32,000 kilometres).
The first successful transatlantic telegraph cable was completed on 27 July 1866, allowing transatlantic telecommunication for the first time. Earlier transatlantic cables installed in 1857 and 1858 only operated for a few days or weeks before they failed. The international use of the telegraph has sometimes been dubbed the "Victorian Internet".
The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven and London. Alexander Graham Bell held the master patent for the telephone that was needed for such services in both countries. The technology grew quickly from this point, with inter-city lines being built and telephone exchanges in every major city of the United States by the mid-1880s. Despite this, transatlantic voice communication remained impossible for customers until January 7, 1927, when a connection was established using radio. However no cable connection existed until TAT-1 was inaugurated on September 25, 1956, providing 36 telephone circuits.
In 1880, Bell and co-inventor Charles Sumner Tainter conducted the world's first wireless telephone call via modulated lightbeams projected by photophones. The scientific principles of their invention would not be utilized for several decades, when they were first deployed in military and fiber-optic communications.
Radio and television
Over several years starting in 1894, the Italian inventor Guglielmo Marconi built the first complete, commercially successful wireless telegraphy system based on airborne electromagnetic waves (radio transmission). In December 1901, he went on to establish wireless communication between Britain and Newfoundland, earning him the Nobel Prize in Physics in 1909 (which he shared with Karl Braun). In 1900, Reginald Fessenden was able to wirelessly transmit a human voice. On March 25, 1925, Scottish inventor John Logie Baird publicly demonstrated the transmission of moving silhouette pictures at the London department store Selfridges. In October 1925, Baird was successful in obtaining moving pictures with halftone shades, which were by most accounts the first true television pictures. This led to a public demonstration of the improved device on 26 January 1926, again at Selfridges. Baird's first devices relied upon the Nipkow disk and thus became known as the mechanical television. It formed the basis of semi-experimental broadcasts done by the British Broadcasting Corporation beginning September 30, 1929.
Satellite
The first U.S. satellite to relay communications was Project SCORE in 1958, which used a tape recorder to store and forward voice messages. It was used to send a Christmas greeting to the world from U.S. President Dwight D. Eisenhower. In 1960 NASA launched an Echo satellite; the aluminized PET film balloon served as a passive reflector for radio communications. Courier 1B, built by Philco, also launched in 1960, was the world's first active repeater satellite. Satellites are now used for many applications, including GPS, television, internet access, and telephony.
Telstar was the first active, direct relay commercial communications satellite. Belonging to AT&T as part of a multi-national agreement between AT&T, Bell Telephone Laboratories, NASA, the British General Post Office, and the French National PTT (Post Office) to develop satellite communications, it was launched by NASA from Cape Canaveral on July 10, 1962, the first privately sponsored space launch. Relay 1 was launched on December 13, 1962, and became the first satellite to broadcast across the Pacific on November 22, 1963.
The first and historically most important application for communication satellites was in intercontinental long distance telephony. The fixed Public Switched Telephone Network relays telephone calls from land line telephones to an earth station, where they are then transmitted to a receiving satellite dish via a geostationary satellite in Earth orbit. Improvements in submarine communications cables, through the use of fiber-optics, caused some decline in the use of satellites for fixed telephony in the late 20th century, but they still exclusively service remote islands such as Ascension Island, Saint Helena, Diego Garcia, and Easter Island, where no submarine cables are in service. There are also some continents and some regions of countries where landline telecommunications are rare to nonexistent, for example Antarctica, plus large regions of Australia, South America, Africa, Northern Canada, China, Russia and Greenland.
After commercial long distance telephone service was established via communication satellites, a host of other commercial telecommunications were also adapted to similar satellites starting in 1979, including mobile satellite phones, satellite radio, satellite television and satellite Internet access. The earliest adaptation for most such services occurred in the 1990s as the pricing for commercial satellite transponder channels continued to drop significantly.
Computer networks and the Internet
On 11 September 1940, George Stibitz was able to transmit problems using teleprinter to his Complex Number Calculator in New York and receive the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer or mainframe computer with remote "dumb terminals" remained popular throughout the 1950s and into the 1960s. However, it was not until the 1960s that researchers started to investigate packet switching — a technology that allows chunks of data to be sent between different computers without first passing through a centralized mainframe. A four-node network emerged on 5 December 1969. This network soon became the ARPANET, which by 1981 would consist of 213 nodes.
ARPANET's development centered around the Request for Comment process and on 7 April 1969, RFC 1 was published. This process is important because ARPANET would eventually merge with other networks to form the Internet, and many of the communication protocols that the Internet relies upon today were specified through the Request for Comment process. In September 1981, RFC 791 introduced the Internet Protocol version 4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP) — thus creating the TCP/IP protocol that much of the Internet relies upon today.
Optical fiber
Optical fiber can be used as a medium for telecommunication and computer networking because it is flexible and can be bundled into cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters.
In 1966 Charles K. Kao and George Hockham proposed optical fibers at STC Laboratories (STL) at Harlow, England, when they showed that the losses of 1000 dB/km in existing glass (compared to 5–10 dB/km in coaxial cable) were due to contaminants, which could potentially be removed.
Optical fiber was successfully developed in 1970 by Corning Glass Works, with attenuation low enough for communication purposes (about 20 dB/km), and at the same time GaAs (gallium arsenide) semiconductor lasers were developed that were compact and therefore suitable for transmitting light through fiber optic cables for long distances.
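The dB/km figures above translate directly into how much optical power survives a fiber span. A minimal sketch of that conversion is below; the 1 km span is an illustrative choice, while the 1000 dB/km and 20 dB/km attenuation values are the ones quoted in the preceding paragraphs.

```python
def remaining_fraction(atten_db_per_km: float, length_km: float) -> float:
    """Fraction of optical power left after a fiber span, given its dB/km loss."""
    total_loss_db = atten_db_per_km * length_km
    return 10.0 ** (-total_loss_db / 10.0)

# 1966-era glass (~1000 dB/km) versus the 1970 Corning fiber (~20 dB/km),
# both over an illustrative 1 km span.
for atten in (1000.0, 20.0):
    frac = remaining_fraction(atten, 1.0)
    print(f"{atten:6.0f} dB/km over 1 km -> {frac:.2e} of input power remains")
```

At 1000 dB/km essentially no light survives even a single kilometre, whereas at 20 dB/km about 1% does, which is why the 1970 result made fiber communication practical.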
After a period of research starting from 1975, the first commercial fiber-optic communications system was developed, which operated at a wavelength around 0.8 μm and used GaAs semiconductor lasers. This first-generation system operated at a bit rate of 45 Mbps with repeater spacing of up to 10 km. Soon on 22 April 1977, General Telephone and Electronics sent the first live telephone traffic through fiber optics at a 6 Mbit/s throughput in Long Beach, California.
The first wide area network fibre optic cable system in the world seems to have been installed by Rediffusion in Hastings, East Sussex, UK in 1978. The cables were placed in ducting throughout the town and had over 1000 subscribers. They were used at that time for the transmission of television channels that were otherwise unavailable because of local reception problems.
The first transatlantic telephone cable to use optical fiber was TAT-8, based on Desurvire optimized laser amplification technology. It went into operation in 1988.
In the late 1990s through 2000, industry promoters and research companies such as KMI and RHK predicted massive increases in demand for communications bandwidth due to increased use of the Internet and the commercialization of various bandwidth-intensive consumer services such as video on demand. Internet Protocol data traffic was increasing exponentially, at a faster rate than integrated circuit complexity had increased under Moore's Law.
Concepts
Basic elements of a telecommunication system
Transmitter
Transmitter (information source) that takes information and converts it to a signal for transmission. In electronics and telecommunications a transmitter or radio transmitter is an electronic device which, with the aid of an antenna, produces radio waves. In addition to their use in broadcasting, transmitters are necessary component parts of many electronic devices that communicate by radio, such as cell phones.
Transmission medium
Transmission medium over which the signal is transmitted. For example, the transmission medium for sounds is usually air, but solids and liquids may also act as transmission media for sound. Many transmission media are used as communications channel. One of the most common physical media used in networking is copper wire. Copper wire is used to carry signals to long distances using relatively low amounts of power. Another example of a physical medium is optical fiber, which has emerged as the most commonly used transmission medium for long-distance communications. Optical fiber is a thin strand of glass that guides light along its length.
The absence of a material medium in vacuum may also constitute a transmission medium for electromagnetic waves such as light and radio waves.
Receiver
Receiver (information sink) that receives and converts the signal back into required information. In radio communications, a radio receiver is an electronic device that receives radio waves and converts the information carried by them to a usable form. It is used with an antenna. The information produced by the receiver may be in the form of sound (an audio signal), images (a video signal) or digital data.
Wired communication
Wired communications make use of underground communications cables (less often, overhead lines), electronic signal amplifiers (repeaters) inserted into connecting cables at specified points, and terminal apparatus of various types, depending on the type of wired communications used.
Wireless communication
Wireless communication involves the transmission of information over a distance without help of wires, cables or any other forms of electrical conductors. Wireless operations permit services, such as long-range communications, that are impossible or impractical to implement with the use of wires. The term is commonly used in the telecommunications industry to refer to telecommunications systems (e.g. radio transmitters and receivers, remote controls etc.) which use some form of energy (e.g. radio waves, acoustic energy, etc.) to transfer information without the use of wires. Information is transferred in this manner over both short and long distances.
Roles
Telecom equipment engineer
A telecom equipment engineer is an electronics engineer that designs equipment such as routers, switches, multiplexers, and other specialized computer/electronics equipment designed to be used in the telecommunication network infrastructure.
Network engineer
A network engineer is a computer engineer who is in charge of designing, deploying and maintaining computer networks. In addition, they oversee network operations from a network operations center, designs backbone infrastructure, or supervises interconnections in a data center.
Central-office engineer
A central-office engineer is responsible for designing and overseeing the implementation of telecommunications equipment in a central office (CO for short), also referred to as a wire center or telephone exchange. A CO engineer is responsible for integrating new technology into the existing network, assigning the equipment's location in the wire center, and providing power, clocking (for digital equipment), and alarm monitoring facilities for the new equipment. The CO engineer is also responsible for providing more power, clocking, and alarm monitoring facilities if there are currently not enough available to support the new equipment being installed. Finally, the CO engineer is responsible for designing how the massive amounts of cable will be distributed to various equipment and wiring frames throughout the wire center and overseeing the installation and turn up of all new equipment.
Sub-roles
As structural engineers, CO engineers are responsible for the structural design and placement of racking and bays for the equipment to be installed in as well as for the plant to be placed on.
As electrical engineers, CO engineers are responsible for the resistance, capacitance, and inductance (RCL) design of all new plant to ensure telephone service is clear and crisp and data service is clean as well as reliable. Attenuation or gradual loss in intensity and loop loss calculations are required to determine cable length and size required to provide the service called for. In addition, power requirements have to be calculated and provided to power any electronic equipment being placed in the wire center.
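The loop-loss and cable-length calculations mentioned above can be illustrated with a very simple DC loop-resistance estimate. This is only a sketch: the wire gauge (24 AWG, ~0.511 mm), the copper resistivity, and the 1500-ohm loop-resistance budget are assumed illustrative values, not figures from the article, and real designs also account for attenuation, capacitance, and inductance.

```python
import math

RHO_CU = 1.72e-8        # copper resistivity, ohm*m (approximate)

def loop_resistance_ohm_per_km(wire_diameter_mm: float) -> float:
    """DC loop resistance of a two-conductor copper pair, per km of route."""
    area_m2 = math.pi * (wire_diameter_mm * 1e-3 / 2.0) ** 2
    per_conductor = RHO_CU * 1_000.0 / area_m2   # ohms per km, one conductor
    return 2.0 * per_conductor                   # out-and-back loop

# 24 AWG (~0.511 mm) pair; 1500 ohms is an illustrative loop budget only.
r_per_km = loop_resistance_ohm_per_km(0.511)
print(f"{r_per_km:.0f} ohm/km loop -> max ~{1500.0 / r_per_km:.1f} km of cable")
```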
Overall, CO engineers have seen new challenges emerging in the CO environment. With the advent of Data Centers, Internet Protocol (IP) facilities, cellular radio sites, and other emerging-technology equipment environments within telecommunication networks, it is important that a consistent set of established practices or requirements be implemented.
Installation suppliers or their sub-contractors are expected to provide requirements with their products, features, or services. These services might be associated with the installation of new or expanded equipment, as well as the removal of existing equipment.
Several other factors must be considered such as:
Regulations and safety in installation
Removal of hazardous material
Commonly used tools to perform installation and removal of equipment
Outside-plant engineer
Outside plant (OSP) engineers are also often called field engineers, because they frequently spend much time in the field taking notes about the civil environment, aerial, above ground, and below ground. OSP engineers are responsible for taking plant (copper, fiber, etc.) from a wire center to a distribution point or destination point directly. If a distribution point design is used, then a cross-connect box is placed in a strategic location to feed a determined distribution area.
The cross-connect box, also known as a serving area interface, is then installed to allow connections to be made more easily from the wire center to the destination point and ties up fewer facilities by not having dedication facilities from the wire center to every destination point. The plant is then taken directly to its destination point or to another small closure called a terminal, where access can also be gained to the plant, if necessary. These access points are preferred as they allow faster repair times for customers and save telephone operating companies large amounts of money.
The plant facilities can be delivered via underground facilities, either direct buried or through conduit or in some cases laid under water, via aerial facilities such as telephone or power poles, or via microwave radio signals for long distances where either of the other two methods is too costly.
Sub-roles
As structural engineers, OSP engineers are responsible for the structural design and placement of cellular towers and telephone poles as well as calculating pole capabilities of existing telephone or power poles onto which new plant is being added. Structural calculations are required when boring under heavy traffic areas such as highways or when attaching to other structures such as bridges. Shoring also has to be taken into consideration for larger trenches or pits. Conduit structures often include encasements of slurry that needs to be designed to support the structure and withstand the environment around it (soil type, high traffic areas, etc.).
As electrical engineers, OSP engineers are responsible for the resistance, capacitance, and inductance (RCL) design of all new plant to ensure telephone service is clear and crisp and data service is clean as well as reliable. Attenuation or gradual loss in intensity and loop loss calculations are required to determine cable length and size required to provide the service called for. In addition, power requirements have to be calculated and provided to power any electronic equipment being placed in the field. Ground potential has to be taken into consideration when placing equipment, facilities, and plant in the field to account for lightning strikes, high voltage intercept from improperly grounded or broken power company facilities, and from various sources of electromagnetic interference.
As civil engineers, OSP engineers are responsible for drafting plans, either by hand or using Computer-aided design (CAD) software, for how telecom plant facilities will be placed. Often when working with municipalities trenching or boring permits are required and drawings must be made for these. Often these drawings include about 70% or so of the detailed information required to pave a road or add a turn lane to an existing street. Structural calculations are required when boring under heavy traffic areas such as highways or when attaching to other structures such as bridges. As civil engineers, telecom engineers provide the modern communications backbone for all technological communications distributed throughout civilizations today.
Unique to telecom engineering is the use of air-core cable which requires an extensive network of air handling equipment such as compressors, manifolds, regulators and hundreds of miles of air pipe per system that connects to pressurized splice cases all designed to pressurize this special form of copper cable to keep moisture out and provide a clean signal to the customer.
As political and social ambassador, the OSP engineer is a telephone operating company's face and voice to the local authorities and other utilities. OSP engineers often meet with municipalities, construction companies and other utility companies to address their concerns and educate them about how the telephone utility works and operates. Additionally, the OSP engineer has to secure real estate in which to place outside facilities, such as an easement to place a cross-connect box.
See also
Computer engineering
Computer networking
Electronic design automation
Electronic engineering
Electronic media
Fiber-optic communication
History of telecommunication
Information theory
List of electrical engineering topics (alphabetical)
List of electrical engineering topics (thematic)
Professional engineer
Radio
Receiver (radio)
Telecommunication
Telephone
Television
Telecommunications cable
Transmission medium
Transmitter
Two-way radio
Wired communication
Wireless
References
Further reading
External links
Telecommunications engineering | Telecommunications engineering | [
"Engineering"
] | 3,996 | [
"Electrical engineering",
"Telecommunications engineering"
] |
8,777,614 | https://en.wikipedia.org/wiki/Atomic%20Industrial%20Forum | The Atomic Industrial Forum (AIF) was an industrial policy organization for the commercial development of nuclear power and energy.
History
1950s
The Atomic Industrial Forum's history dates to autumn 1952, when it was first being organized.
In response, some 30 industrialists, engineers, and educators met in January 1953 to establish the forum. The AIF was formally incorporated on April 10, 1953, in New York City, and marked the beginning of the commercial nuclear power industry in the United States. The first Executive Director of AIF was Charles Robbins.
As a trade association, the AIF advocated the peaceful uses of atomic energy and increasing the role of the private sector in its development. Its first order of business was to advocate revising the Atomic Energy Act of 1946 to allow and foster the commercial ownership of non-weapons nuclear facilities, such as production of radioactive isotopes and nuclear power plants. AIF established strong working relationships with the U.S. Atomic Energy Commission and the Congressional Joint Committee on Atomic Energy. AIF's efforts helped to achieve the passage of the Atomic Energy Act of 1954, which resulted in the growth of a commercial nuclear industry. AIF was organized on the basis of an executive committee, the annual election of officers and a permanent operations staff, headed by an Executive Director, Mr. Charles Robbins.
1960s
In 1963 AIF established an international public information program. Working with other forums around the world, the program sought, through publications, workshops, exhibitions, speeches and outreach, to foster and achieve better understanding of the peaceful uses of atomic energy. Its first program director was Charles B. Yulish.
Both government and private-sector involvement in atomic energy grew steadily; eventually, more than 125 commercial nuclear power plants provided 20 percent of America's electricity.
At the same time there were increasing debates on safeguards and regulation. The Atomic Energy Commission, which promoted, developed, and regulated nuclear development, was split into two agencies: the Energy Research and Development Administration, now the Department of Energy, and the independent U.S. Nuclear Regulatory Commission.
As new challenges and opportunities evolved, new industry efforts and resources were required to address these matters.
1980s
In 1987 the AIF was reconfigured into the Nuclear Utility Management and Resources Council (NUMARC), which addressed generic regulatory and technical issues, and the U.S. Council for Energy Awareness (USCEA), founded in 1979. In 1994 these two organizations were again reorganized and re-purposed. The Nuclear Energy Institute and the American Nuclear Energy Council (ANEC) conducted public affairs, and the nuclear division of the Edison Electric Institute (EEI) was responsible for issues involving nuclear fuel supply and management, and the economics of nuclear energy.
2000s
In 2011, the Nuclear Energy Institute became the leading organization representing the nuclear industry. NEI headquarters is in Washington, DC.
References
Trade associations based in the United States
Nuclear organizations
Organizations established in 1953
1953 establishments in the United States | Atomic Industrial Forum | [
"Engineering"
] | 592 | [
"Nuclear organizations",
"Energy organizations"
] |
8,778,829 | https://en.wikipedia.org/wiki/Biogenic%20sulfide%20corrosion | Biogenic sulfide corrosion is a bacterially mediated process of forming hydrogen sulfide gas and the subsequent conversion to sulfuric acid that attacks concrete and steel within wastewater environments. The hydrogen sulfide gas is biochemically oxidized in the presence of moisture to form sulfuric acid. The effect of sulfuric acid on concrete and steel surfaces exposed to severe wastewater environments can be devastating. In the USA alone, corrosion causes sewer asset losses estimated at $14 billion per year. This cost is expected to increase as the aging infrastructure continues to fail.
Environment
Corrosion may occur where stale sewage generates hydrogen sulfide gas into an atmosphere containing oxygen gas and high relative humidity. There must be an underlying anaerobic aquatic habitat containing sulfates and an overlying aerobic aquatic habitat separated by a gas phase containing both oxygen and hydrogen sulfide at concentrations in excess of 2 ppm.
Conversion of sulfate to hydrogen sulfide
Fresh domestic sewage entering a wastewater collection system contains proteins including organic sulfur compounds oxidizable to sulfates and may contain inorganic sulfates. Dissolved oxygen is depleted as bacteria begin to catabolize organic material in sewage. In the absence of dissolved oxygen and nitrates, sulfates are reduced to hydrogen sulfide (H2S) as an alternative source of oxygen for catabolizing organic waste by sulfate-reducing bacteria (SRB), identified primarily from the obligate anaerobic species Desulfovibrio.
Hydrogen sulfide production depends on various physicochemical, topographic, and hydraulic parameters such as:
Sewage oxygen concentration. The threshold is 0.1 mg/l; above this value, sulfides produced in sludge and sediments are oxidized by oxygen; below this value, sulfides are emitted in the gaseous phase.
Temperature. The higher the temperature, the faster the kinetics of H2S production.
Sewage pH. It must be included between 5.5 and 9 with an optimum at 7.5–8.
Sulfate concentration
Nutrient concentration, associated with the biochemical oxygen demand
Design of the sewer network. H2S is formed only in anaerobic conditions; slow flow and long retention times give aerobic bacteria more time to consume all available dissolved oxygen in the water, creating anaerobic conditions. The flatter the land, the less slope can be given to the sewer network, which favors slower flow and more pumping stations (where retention time is generally longer).
Conversion of hydrogen sulfide to sulfuric acid
Some hydrogen sulfide gas diffuses into the headspace environment above the wastewater. Moisture evaporated from warm sewage may condense on unsubmerged walls of sewers, and is likely to hang in partially formed droplets from the horizontal crown of the sewer. As a portion of the hydrogen sulfide gas and oxygen gas from the air above the sewage dissolves into these stationary droplets, they become a habitat for sulfur oxidizing bacteria (SOB), of the genus Acidithiobacillus. Colonies of these aerobic bacteria metabolize the hydrogen sulfide gas to sulfuric acid (H2SO4).
Corrosion
Sulfuric acid produced by microorganisms will interact with the surface of the structure material. For ordinary Portland cement, it reacts with the calcium hydroxide in concrete to form calcium sulfate. This change simultaneously destroys the polymeric nature of calcium hydroxide and substitutes a larger molecule into the matrix, causing pressure and spalling of the adjacent concrete and aggregate particles. The weakened crown may then collapse under heavy overburden loads. Even within a well-designed sewer network, a rule of thumb in the industry suggests that 5% of the total length may suffer from biogenic corrosion. In these specific areas, biogenic sulfide corrosion can deteriorate metal or concrete at rates of several millimeters per year.
For calcium aluminate cements, processes are completely different because they are based on another chemical composition. At least three different mechanisms contribute to the better resistance to biogenic corrosion:
The first barrier is the larger acid neutralizing capacity of calcium aluminate cements vs. ordinary Portland Cement; one gram of calcium aluminate cement can neutralize around 40% more acid than a gram of ordinary Portland cement. For a given production of acid by the biofilm, a calcium aluminate cement concrete will last longer.
The second barrier is due to the precipitation, when the surficial pH gets below 10, of a layer of alumina gel (AH3 in cement chemistry notation). AH3 is a stable compound down to a pH of 4 and it will form an acid-resistant barrier as long as the surface pH is not lowered below 3–4 by the bacterial activity.
The third barrier is the bacteriostatic effect locally activated when the surface reaches pH values less than 3–4. At this level, the alumina gel is no longer stable and will dissolve, liberating aluminum ions. These ions will accumulate in the thin biofilm. Once the concentration reaches 300–500 ppm, it will produce a bacteriostatic effect on bacteria metabolism. In other words, bacteria will stop oxidizing the sulfur from H2S to produce acid, and the pH will stop decreasing.
A mortar made of calcium aluminate cement combined with calcium aluminate aggregates, i.e. a 100% calcium aluminate material, will last much longer, as aggregates can also limit microorganisms' growth and inhibit the acid generation at the source itself.
Prevention
There are several options to address biogenic sulfide corrosion problems: impairing H2S formation, venting out the H2S, or using materials resistant to biogenic corrosion. For example, sewage flows more rapidly through steeper gradient sewers, reducing the time available for hydrogen sulfide generation. Likewise, removing sludge and sediments from the bottom of the pipes reduces the amount of anoxic areas responsible for sulfate-reducing bacteria growth. Providing good ventilation of sewers can reduce atmospheric concentrations of hydrogen sulfide gas and may dry exposed sewer crowns, but this may create odor issues with neighbors around the venting shafts. Three other efficient methods involve continuous operation of mechanical equipment: a chemical reactant like calcium nitrate can be continuously added to the sewage water to impair H2S formation, active ventilation through odor treatment units can remove H2S, or compressed air can be injected into pressurized mains to prevent anaerobic conditions from developing. In sewerage areas where biogenic sulfide corrosion is expected, acid-resistant materials like calcium aluminate cements, PVC or vitrified clay pipe may be substituted for ordinary concrete or steel sewers.
Existing structures that have extensive exposure to biogenic corrosion, such as sewer manholes and pump station wet wells, can be rehabilitated. Rehabilitation can be done with materials such as a structural epoxy coating designed both to resist acid and to strengthen the compromised concrete structure.
See also
Corrosion
Microbial corrosion
Sulfide
References
Brongers, M.P.H., Virmani, P.Y., Payer, J.H., 2002. Drinking Water and Sewer Systems in Corrosion Costs and preventive Strategies in the United States. United States Department of Transportation Federal Highway Administration.
Sydney, R., Esfandi, E., Surapaneni, S., 1996. Control concrete sewer corrosion via the crown spray process. Water Environ. Res. 68 (3), 338–347.
United States Environmental Protection Agency, 1991. Hydrogen Sulphide Corrosion in Wastewater Collection and Treatment Systems (Technical Report).
United States Environmental Protection Agency (1985) Design Manual, Odor and Corrosion Control in Sanitary Sewerage Systems and Treatment Plants (Technical Report).
Morton R.L., Yanko W.A., Grahom D.W., Arnold R.G. (1991) Relationship between metal concentrations and crown corrosion in Los Angeles County sewers. Research Journal of Water Pollution Control Federation, 63, 789–798.
Mori T., Nonaka T., Tazaki K., Koga M., Hikosaka Y., Noda S. (1992) Interactions of nutrients, moisture, and pH on microbial corrosion of concrete sewer pipes. Water Research, 26, 29–37.
Ismail N., Nonaka T., Noda S., Mori T. (1993) Effect of carbonation on microbial corrosion of concrete. Journal of Construction Management and Engineering, 20, 133–138.
Davis J.L. (1998) Characterization and modeling of microbially induced corrosion of concrete sewer pipes. Ph.D. Dissertation, University of Houston, Houston, TX.
Monteny J., De Belie N., Vincke E., Verstraete W., Taerwe L. (2001) Chemical and microbiological tests to simulate sulfuric acid corrosion of polymer-modified concrete. Cement and Concrete Research, 31, 1359–1365.
Vincke E., Van Wanseele E., Monteny J., Beeldens A., De Belie N., Taerwe L., Van Gemert D., Verstraete W. (2002) Influence of polymer addition on biogenic sulfuric acid attack. International Biodeterioration and Biodegradation, 49, 283–292.
Herisson J., Van Hullebusch E., Gueguen Minerbe M., Chaussadent T. (2014) Biogenic corrosion mechanism: study of parameters explaining calcium aluminate cement durability. CAC 2014 – International Conference on Calcium Aluminates, May 2014, France. 12 p.
Hammer, Mark J. Water and Waste-Water Technology John Wiley & Sons (1975)
Metcalf & Eddy Wastewater Engineering McGraw-Hill (1972)
Pomeroy, R.D., 1976, "The problem of hydrogen sulphide in sewers". Published by the Clay Pipes Development Association
Pomeroy's report contains errors in the equation: the pipeline slope (S, p. 8) is quoted as m/100m, but should be m/m. This introduces a factor of 10 underestimate in the calculation of the "Z factor", used to indicate if there is a risk of sulfide-induced corrosion, if the published units are used. The web link is to the revised 1992 edition, which contains the units error - the 1976 edition has the correct units.
Sawyer, Clair N. & McCarty, Perry L. Chemistry for Sanitary Engineers (2nd edition) McGraw-Hill (1967)
United States Department of the Interior (USDI) Concrete Manual (8th edition) United States Government Printing Office (1975)
Weismann, D. & Lohse, M. (Hrsg.): "Sulfid-Praxishandbuch der Abwassertechnik; Geruch, Gefahr, Korrosion verhindern und Kosten beherrschen!" 1. Auflage, VULKAN-Verlag, 2007,
Notes
Bacteria
Cement
Concrete
Corrosion
Sewerage | Biogenic sulfide corrosion | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology",
"Environmental_science"
] | 2,293 | [
"Structural engineering",
"Metallurgy",
"Prokaryotes",
"Corrosion",
"Water pollution",
"Sewerage",
"Electrochemistry",
"Bacteria",
"Environmental engineering",
"Concrete",
"Materials degradation",
"Microorganisms"
] |
9,448,193 | https://en.wikipedia.org/wiki/Boolean%20network | A Boolean network consists of a discrete set of Boolean variables each of which has a Boolean function (possibly different for each variable) assigned to it which takes inputs from a subset of those variables and output that determines the state of the variable it is assigned to. This set of functions in effect determines a topology (connectivity) on the set of variables, which then become nodes in a network. Usually, the dynamics of the system is taken as a discrete time series where the state of the entire network at time t+1 is determined by evaluating each variable's function on the state of the network at time t. This may be done synchronously or asynchronously.
Boolean networks have been used in biology to model regulatory networks. Although Boolean networks are a crude simplification of genetic reality, where genes are not simple binary switches, there are several cases where they correctly convey the pattern of expressed and suppressed genes.
The seemingly simple (synchronous) model was only fully understood mathematically in the mid-2000s.
Classical model
A Boolean network is a particular kind of sequential dynamical system, where time and states are discrete, i.e. both the set of variables and the set of states in the time series each have a bijection onto an integer series.
A random Boolean network (RBN) is one that is randomly selected from the set of all possible Boolean networks of a particular size, N. One then can study statistically, how the expected properties of such networks depend on various statistical properties of the ensemble of all possible networks. For example, one may study how the RBN behavior changes as the average connectivity is changed.
The first Boolean networks were proposed by Stuart A. Kauffman in 1969, as random models of genetic regulatory networks but their mathematical understanding only started in the 2000s.
Attractors
Since a Boolean network has only 2^N possible states, a trajectory will sooner or later reach a previously visited state, and thus, since the dynamics are deterministic, the trajectory will fall into a steady state or cycle called an attractor (though in the broader field of dynamical systems a cycle is only an attractor if perturbations from it lead back to it). If the attractor has only a single state it is called a point attractor, and if the attractor consists of more than one state it is called a cycle attractor. The set of states that lead to an attractor is called the basin of the attractor. States which occur only at the beginning of trajectories (no trajectories lead to them) are called garden-of-Eden states, and the dynamics of the network flow from these states towards attractors. The time it takes to reach an attractor is called transient time.
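Because the dynamics are deterministic over a finite state space, the attractor and transient time reachable from any initial state can be found simply by iterating until a state repeats. A minimal sketch follows; the two-node map at the end is an invented toy example.

```python
def find_attractor(step, initial_state):
    """Iterate a deterministic map until a state repeats; return the
    transient time and the attractor cycle the trajectory falls into."""
    seen = {}                          # state -> time of first visit
    state, t = initial_state, 0
    while state not in seen:
        seen[state] = t
        state, t = step(state), t + 1
    transient = seen[state]            # steps taken before entering the cycle
    cycle_len = t - transient
    cycle = [s for s, when in sorted(seen.items(), key=lambda kv: kv[1])
             if when >= transient]
    return transient, cycle[:cycle_len]

# Tiny hand-made 2-node network (illustrative only): falls into a point attractor.
step = lambda s: (s[0] and s[1], not s[0])
print(find_attractor(step, (True, True)))   # -> (3, [(False, True)])
```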
With growing computer power and increasing understanding of the seemingly simple model, different authors have given different estimates for the mean number and length of the attractors.
Stability
In dynamical systems theory, the structure and length of the attractors of a network corresponds to the dynamic phase of the network. The stability of Boolean networks depends on the connections of their nodes. A Boolean network can exhibit stable, critical or chaotic behavior. This phenomenon is governed by a critical value of the average number of connections of nodes (K_c), and can be characterized by the Hamming distance as distance measure. In the unstable regime, the distance between two initially close states on average grows exponentially in time, while in the stable regime it decreases exponentially. In this, with "initially close states" one means that the Hamming distance is small compared with the number of nodes (N) in the network.
For the N–K model, the network is stable if K < K_c, critical if K = K_c, and unstable if K > K_c.
The state of a given node is updated according to its truth table, whose outputs are randomly populated. The bias p_i denotes the probability of assigning an "off" output to a given series of input signals.
If p_i = p is constant for every node, the transition between the stable and chaotic range depends on p. According to Bernard Derrida and Yves Pomeau, the critical value of the average number of connections is K_c = 1/[2p(1 - p)].
If the connectivity K_i is not constant, and there is no correlation between the in-degrees and out-degrees, the condition for stability is determined by the mean in-degree ⟨K_in⟩. The network is stable if ⟨K_in⟩ < K_c, critical if ⟨K_in⟩ = K_c, and unstable if ⟨K_in⟩ > K_c.
The conditions of stability are the same in the case of networks with scale-free topology, where the in- and out-degree distributions follow a power law, P(K) ∝ K^(-γ), and ⟨K_in⟩ = ⟨K_out⟩, since every out-link from a node is an in-link to another.
Sensitivity shows the probability that the output of the Boolean function of a given node changes if its input changes. For random Boolean networks, the sensitivity of node i is q_i = 2p_i(1 - p_i). In the general case, stability of the network is governed by the largest eigenvalue λ_Q of the matrix Q, where Q_ij = q_i A_ij and A is the adjacency matrix of the network. The network is stable if λ_Q < 1, critical if λ_Q = 1, and unstable if λ_Q > 1.
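A minimal sketch of these stability criteria is given below. The formulas follow the Derrida–Pomeau critical connectivity and the eigenvalue condition described above (as reconstructed here); the random sparse adjacency matrix, its density, and the constant bias p = 0.5 are arbitrary toy choices for illustration.

```python
import numpy as np

def critical_connectivity(p: float) -> float:
    """Derrida-Pomeau critical mean connectivity: K_c = 1 / (2 p (1 - p))."""
    return 1.0 / (2.0 * p * (1.0 - p))

def regime_from_eigenvalue(A: np.ndarray, q: np.ndarray) -> str:
    """Classify stability from the largest eigenvalue of Q, with Q_ij = q_i * A_ij."""
    Q = q[:, None] * A
    lam = np.abs(np.linalg.eigvals(Q)).max()
    if np.isclose(lam, 1.0):
        return "critical"
    return "stable" if lam < 1.0 else "chaotic"

print(critical_connectivity(0.5))                 # 2.0 for unbiased functions

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.04).astype(float)   # toy sparse random wiring
q = np.full(50, 2 * 0.5 * 0.5)                    # sensitivity 2p(1-p) with p = 0.5
print(regime_from_eigenvalue(A, q))
```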
Variations of the model
Other topologies
One theme is to study different underlying graph topologies.
The homogeneous case simply refers to a grid, which reduces the model to the famous Ising model.
Scale-free topologies may be chosen for Boolean networks. One can distinguish the cases where only the in-degree distribution is power-law distributed, where only the out-degree distribution is, or where both are.
Other updating schemes
Classical Boolean networks (sometimes called CRBN, i.e. Classic Random Boolean Network) are synchronously updated. Motivated by the fact that genes don't usually change their state simultaneously, different alternatives have been introduced. A common classification is the following:
Deterministic asynchronously updated Boolean networks (DRBNs) are not synchronously updated, but a deterministic solution still exists: a node i will be updated when t ≡ Q_i (mod P_i), where t is the time step (a minimal sketch of this scheduling appears after this list).
The most general case is full stochastic updating (GARBN, general asynchronous random Boolean networks). Here, one (or more) node(s) are selected at each computational step to be updated.
The Partially-Observed Boolean Dynamical System (POBDS) signal model differs from all previous deterministic and stochastic Boolean network models by removing the assumption of direct observability of the Boolean state vector and allowing uncertainty in the observation process, addressing the scenario encountered in practice.
Autonomous Boolean networks (ABNs) are updated in continuous time (t is a real number, not an integer), which leads to race conditions and complex dynamical behavior such as deterministic chaos.
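The DRBN scheduling rule mentioned in the list above can be sketched as follows, taking the straightforward reading that node i refreshes exactly at the steps t with t mod P_i = Q_i while all other nodes keep their previous state. The wiring, periods, and phases are invented toy values.

```python
# Deterministic asynchronous updating (DRBN): node i is refreshed only at the
# time steps t satisfying t % P[i] == Q[i].  All values below are toy choices.
functions = {
    0: lambda s: not s[1],
    1: lambda s: s[0] and s[2],
    2: lambda s: s[0] or s[1],
}
P = {0: 1, 1: 2, 2: 3}      # update periods
Q = {0: 0, 1: 1, 2: 2}      # update phases

state = [True, False, False]
for t in range(8):
    # Nodes scheduled at this step read the old state; the rest are unchanged.
    state = [functions[i](state) if t % P[i] == Q[i] else state[i]
             for i in range(3)]
    print(t, state)
```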
Application of Boolean Networks
Classification
The Scalable Optimal Bayesian Classification method developed an optimal classification of trajectories accounting for potential model uncertainty, and also proposed a particle-based trajectory classification that is highly scalable for large networks, with much lower complexity than the optimal solution.
See also
NK model
References
Dubrova, E., Teslenko, M., Martinelli, A. (2005). Kauffman Networks: Analysis and Applications, in "Proceedings of International Conference on Computer-Aided Design", pages 479-484.
External links
Analysis of Dynamic Algebraic Models (ADAM) v1.1
bioasp/bonesis: Synthesis of Most Permissive Boolean Networks from network architecture and dynamical properties
CoLoMoTo (Consortium for Logical Models and Tools)
DDLab
NetBuilder Boolean Networks Simulator
Open Source Boolean Network Simulator
JavaScript Kauffman Network
Probabilistic Boolean Networks (PBN)
RBNLab
A SAT-based tool for computing attractors in Boolean Networks
Bioinformatics
Logic
Spin models
Exactly solvable models
Statistical mechanics | Boolean network | [
"Physics",
"Engineering",
"Biology"
] | 1,586 | [
"Biological engineering",
"Spin models",
"Quantum mechanics",
"Bioinformatics",
"Statistical mechanics"
] |
9,453,024 | https://en.wikipedia.org/wiki/Vroman%20effect | The Vroman effect, named after Leo Vroman, describes the process of competitive protein adsorption to a surface by blood serum proteins. The highest mobility proteins generally arrive first and are later replaced by less mobile proteins that have a higher affinity for the surface. The order of protein adsorption also depends on the molecular weight of the species adsorbing. Typically, low molecular weight proteins are displaced by high molecular weight protein while the opposite, high molecular weight being displaced by low molecular weight, does not occur. A typical example of this occurs when fibrinogen displaces earlier adsorbed proteins on a biopolymer surface and is later replaced by high molecular weight kininogen. The process is delayed in narrow spaces and on hydrophobic surfaces, fibrinogen is usually not displaced. Under stagnant conditions initial protein deposition takes place in the sequence: albumin; globulin; fibrinogen; fibronectin; factor XII, and HMWK.
Molecular Mechanisms of Action
While the exact mechanism of action is still unknown, many important physical properties of proteins play a part in the Vroman effect. Proteins have many properties that are important to take into consideration when discussing protein adsorption. These properties include the protein's size, charge, mobility, stability, and the structure and composition of the different protein domains that make up its tertiary structure. Protein size determines the molecular weight. Protein charge determines whether preferential or selectively favorable interactions will exist between the protein and a biomaterial. Protein mobility plays a role in adsorption kinetics.
Adsorption - Desorption Model
The simplest molecular explanation for the exchange of proteins on a surface is the adsorption/desorption model. Here, proteins interact with the surface of a biomaterial and "stick" to the material through interactions between the protein and the biomaterial surface. Once a protein has adsorbed onto the surface of a biomaterial, the protein may change conformation (structure) and even become nonfunctional. The spaces between the proteins on the biomaterial then become available for new proteins to adsorb. Desorption occurs when the protein leaves the biomaterial surface. This simple model is limited, however, since Vroman-like behavior has been observed on hydrophobic surfaces as well as hydrophilic ones. Furthermore, adsorption and desorption alone do not completely explain competitive protein exchange on hydrophilic surfaces.
Transient Complex Model
A "transient complex" model was first proposed by Huetz et al. to explain this competitive exchange. This transient complex exchange occurs in three distinct steps. Initially a protein embeds itself into the monolayer of an already adsorbed homogenous protein monolayer. The aggregation of this new heterogenous protein mixture causes the "turning" of the double-protein complex which exposes the initially adsorbed protein to the solution. In the third step, the protein that was initially adsorbed can now diffuse out into the solution and the new protein takes over. This 3 part "transient complex mechanism" is further explained and verified through AFM imaging by Hirsh et al.
pH Cycling
Jung et al. also describe a molecular mechanism for fibrinogen displacement involving pH cycling. Here the αC domains of fibrinogen change charge after pH cycling, which results in conformational changes to the protein that lead to stronger interactions between the protein and the biomaterial.
Mathematical Models
The simplest mathematical model used to explain the Vroman effect is the Langmuir model, based on the Langmuir isotherm. More complex models include the Freundlich isotherm and other modifications of the Langmuir model. The Langmuir model describes the kinetics of reversible adsorption and desorption, assuming the adsorbate behaves as an ideal gas under isothermal conditions.
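A minimal sketch of the Langmuir isotherm mentioned above is given below, using the standard form theta = K*c / (1 + K*c) for equilibrium fractional surface coverage. The affinity constants and bulk concentration are purely illustrative values, not measurements from the literature cited in this article.

```python
def langmuir_coverage(K: float, c: float) -> float:
    """Langmuir isotherm: equilibrium fractional coverage theta = K*c / (1 + K*c)
    for adsorption equilibrium constant K and bulk concentration c."""
    return K * c / (1.0 + K * c)

# Illustrative only: a low-affinity protein versus a high-affinity one
# competing at the same bulk concentration.
for name, K in (("low-affinity protein", 1e4), ("high-affinity protein", 1e6)):
    print(name, f"coverage = {langmuir_coverage(K, 1e-5):.2f}")
```

At equal concentration the higher-affinity species dominates the equilibrium coverage, which is consistent with the later, higher-affinity proteins ultimately displacing the early arrivals in the Vroman sequence.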
See also
Protein adsorption
Langmuir adsorption model
References
Surface science
Blood | Vroman effect | [
"Physics",
"Chemistry",
"Materials_science"
] | 810 | [
"Condensed matter physics",
"Surface science"
] |
9,458,068 | https://en.wikipedia.org/wiki/RNA%20silencing | RNA silencing or RNA interference refers to a family of gene silencing effects by which gene expression is negatively regulated by non-coding RNAs such as microRNAs. RNA silencing may also be defined as sequence-specific regulation of gene expression triggered by double-stranded RNA (dsRNA). RNA silencing mechanisms are conserved among most eukaryotes. The most common and well-studied example is RNA interference (RNAi), in which endogenously expressed microRNA (miRNA) or exogenously derived small interfering RNA (siRNA) induces the degradation of complementary messenger RNA. Other classes of small RNA have been identified, including piwi-interacting RNA (piRNA) and its subspecies repeat associated small interfering RNA (rasiRNA).
Background
RNA silencing describes several mechanistically related pathways which are involved in controlling and regulating gene expression. RNA silencing pathways are associated with the regulatory activity of small non-coding RNAs (approximately 20–30 nucleotides in length) that function as factors involved in inactivating homologous sequences, promoting endonuclease activity, translational arrest, and/or chromatin or DNA modification. In the context in which the phenomenon was first studied, small RNA was found to play an important role in defending plants against viruses. For example, these studies demonstrated that enzymes detect double-stranded RNA (dsRNA) not normally found in cells and digest it into small pieces that are not able to cause disease.
While some functions of RNA silencing and its machinery are understood, many are not. For example, RNA silencing has been shown to be important in the regulation of development and in the control of transposition events. RNA silencing has been shown to play a role in antiviral protection in plants as well as insects. Also in yeast, RNA silencing has been shown to maintain heterochromatin structure. However, the varied and nuanced role of RNA silencing in the regulation of gene expression remains an ongoing scientific inquiry. A range of diverse functions have been proposed for a growing number of characterized small RNA sequences, e.g., regulation of development, neuronal cell fate, cell death, proliferation, fat storage, haematopoietic cell fate, and insulin secretion.
RNA silencing functions by repressing translation or by cleaving messenger RNA (mRNA), depending on the amount of complementarity of base-pairing. RNA has been largely investigated within its role as an intermediary in the translation of genes into proteins. More active regulatory functions, however, only began to be addressed by researchers beginning in the late-1990s. The landmark study providing an understanding of the first identified mechanism was published in 1998 by Fire et al., demonstrating that double-stranded RNA could act as a trigger for gene silencing. Since then, various other classes of RNA silencing have been identified and characterized. Presently, the therapeutic potential of these discoveries is being explored, for example, in the context of targeted gene therapy.
While RNA silencing is an evolving class of mechanisms, a common theme is the fundamental relationship between small RNAs and gene expression. It has also been observed that the major RNA silencing pathways currently identified have mechanisms of action which may involve both post-transcriptional gene silencing (PTGS) as well as chromatin-dependent gene silencing (CDGS) pathways. CDGS involves the assembly of small RNA complexes on nascent transcripts and is regarded as encompassing mechanisms of action which implicate transcriptional gene silencing (TGS) and co-transcriptional gene silencing (CTGS) events. This is significant at least because the evidence suggests that small RNAs play a role in the modulation of chromatin structure and TGS.
Despite early focus in the literature on RNA interference (RNAi) as a core mechanism which occurs at the level of messenger RNA translation, others have since been identified in the broader family of conserved RNA silencing pathways acting at the DNA and chromatin level. RNA silencing refers to the silencing activity of a range of small RNAs and is generally regarded as a broader category than RNAi. While the terms have sometimes been used interchangeably in the literature, RNAi is generally regarded as a branch of RNA silencing. To the extent it is useful to craft a distinction between these related concepts, RNA silencing may be thought of as referring to the broader scheme of small RNA related controls involved in gene expression and the protection of the genome against mobile repetitive DNA sequences, retroelements, and transposons to the extent that these can induce mutations. The molecular mechanisms for RNA silencing were initially studied in plants but have since broadened to cover a variety of subjects, from fungi to mammals, providing strong evidence that these pathways are highly conserved.
At least three primary classes of small RNA have currently been identified, namely: small interfering RNA (siRNA), microRNA (miRNA), and piwi-interacting RNA (piRNA).
small interfering RNA (siRNA)
siRNAs act in the nucleus and the cytoplasm and are involved in RNAi as well as CDGS. siRNAs come from long dsRNA precursors derived from a variety of single-stranded RNA (ssRNA) precursors, such as sense and antisense RNAs. siRNAs also come from hairpin RNAs derived from transcription of inverted repeat regions. siRNAs may also arise enzymatically from non-coding RNA precursors. The volume of literature on siRNA within the framework of RNAi is extensive. One potent application of siRNAs is their ability to distinguish target from non-target sequences that differ by a single nucleotide. This approach is considered therapeutically important for silencing dominant gain-of-function (GOF) disorders, in which the disease-causing mutant allele differs from the wild-type allele by a single nucleotide (nt). siRNAs capable of discriminating such a single-nucleotide difference are termed allele-specific siRNAs.
microRNA (miRNA)
The majority of miRNAs act in the cytoplasm and mediate mRNA degradation or translational arrest. However, some plant miRNAs have been shown to act directly to promote DNA methylation. miRNAs come from hairpin precursors generated by the RNaseIII enzymes Drosha and Dicer. Both miRNA and siRNA form either the RNA-induced silencing complex (RISC) or the nuclear form of RISC known as RNA-induced transcriptional silencing complex (RITS). The volume of literature on miRNA within the framework of RNAi is extensive.
Three prime untranslated regions and microRNAs
Three prime untranslated regions (3'UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally cause RNA interference. Such 3'-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins. By binding to specific sites within the 3'-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3'-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of a mRNA.
The 3'-UTR often contains microRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3'-UTRs. Among all regulatory motifs within the 3'-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.
As of 2014, the miRBase web site, an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes). Friedman et al. estimate that >45,000 miRNA target sites within human mRNA 3'UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.
Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold).
The effects of miRNA dysregulation of gene expression seem to be important in cancer. For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down regulating DNA repair enzymes.
The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders.
piwi-interacting RNA (piRNA)
piRNAs represent the largest class of small non-coding RNA molecules expressed in animal cells, deriving from a large variety of sources, including repetitive DNA and transposons. However, the biogenesis of piRNAs is also the least well understood. piRNAs appear to act both at the post-transcriptional and chromatin levels. They are distinct from miRNAs in being, at a minimum, larger and more complex. Repeat-associated small interfering RNAs (rasiRNAs) are considered to be a subspecies of piRNA.
Mechanism
The most basic mechanistic flow for RNA Silencing is as follows:
(For a more detailed explanation of the mechanism, refer to the RNAi:Cellular mechanism article.)
1: RNA with inverted repeats hairpin/panhandle constructs --> 2: dsRNA --> 3: miRNAs/siRNAs --> 4: RISC --> 5: Destruction of target mRNA
It has been discovered that the best precursor for effective RNA silencing is single-stranded antisense RNA with inverted repeats, which in turn folds into small hairpin RNA and panhandle constructs. The hairpin or panhandle constructs exist so that the RNA can remain independent and not anneal with other RNA strands.
These small hairpin RNAs and/or panhandles are then transported from the nucleus to the cytosol through the nuclear export receptor exportin-5, and then take the form of dsRNA, double-stranded RNA, which, like DNA, is a double-stranded series of nucleotides. If the mechanism used single strands rather than dsRNA, there would be a higher chance of them hybridizing to other, "good" mRNAs. As a double strand, the RNA can be kept on call for when it is needed.
The dsRNA is then cut by Dicer into small strands (21–28 nucleotides long) of miRNAs (microRNAs) or siRNAs (short interfering RNAs). Dicer is an endoribonuclease of the RNase III family.
Lastly, the double-stranded miRNAs/siRNAs separate into single strands; the antisense strand of the two is loaded into another endoribonuclease enzyme complex called RISC (the RNA-induced silencing complex), which includes the catalytic component Argonaute, and guides RISC to the "perfectly complementary" target mRNA or viral genomic RNA, which is then cleaved and destroyed.
This means that the corresponding mRNA is recognized and cut on the basis of a short, sequence-specific region, and it may be cut at several other sites as well. (If the mechanism required matching over a long stretch, there would be a higher chance that it would not have time to pair with its complementary mRNA.) It has also been shown that repeat-associated short interfering RNAs (rasiRNAs) have a role in guiding chromatin modification.
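The cleavage step described above is, at its core, a search for perfect sequence complementarity between the RISC-loaded guide strand and candidate transcripts. The following is a minimal Python sketch of just that base-pairing logic; the sequences and function names are hypothetical, and the biochemistry of Dicer processing and Argonaute-mediated cleavage is not modelled.

```python
# Toy illustration of sequence-specific target recognition: a RISC-loaded
# guide strand pairs with mRNA regions that are perfectly complementary.
COMPLEMENT = str.maketrans("AUGC", "UACG")

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence (A-U and G-C pairing)."""
    return rna.translate(COMPLEMENT)[::-1]

def find_cleavage_sites(guide: str, mrna: str) -> list[int]:
    """Return start positions in the mRNA that are perfectly complementary
    to the siRNA guide strand, the condition described above for cleavage."""
    target = reverse_complement(guide)  # the mRNA stretch the guide can base-pair with
    return [i for i in range(len(mrna) - len(target) + 1)
            if mrna[i:i + len(target)] == target]

guide = "UGAGGUAGUAGGUUGUAUAGU"            # hypothetical 21-nt guide strand
mrna = "AAGCACUAUACAACCUACUACCUCAGGAUUU"   # hypothetical target mRNA fragment
print(find_cleavage_sites(guide, mrna))    # -> [4]: one perfect-complement site
```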
Biological functions
Immunity against viruses or transposons
RNA silencing is the mechanism that our cells (and cells from all kingdoms) use to fight RNA viruses and transposons (which originate from our own cells as well as from outside vehicles). In the case of RNA viruses, these are destroyed immediately by the mechanism described above. In the case of transposons, the process is a little more indirect. Since transposons are located in different parts of the genome, transcription from different promoters produces complementary mRNAs that can hybridize with each other. When this happens, the RNAi machinery goes into action, degrading the mRNAs of the proteins that would be required to move the transposons themselves.
Down-regulation of genes
For a detailed explanation of the down-regulation of genes, see RNAi:downregulation of genes
Up-regulation of genes
For a detailed explanation of the up-regulation of genes, see RNAi:upregulation of genes
RNA silencing also gets regulated
In the same way that RNA silencing regulates downstream target mRNAs, RNA silencing itself is regulated. For example, silencing signals are spread between cells by a group of enzymes called RdRPs (RNA-dependent RNA polymerases) or RDRs.
Practical applications
Growing understanding of small RNA gene-silencing mechanisms involving dsRNA-mediated sequence-specific mRNA degradation has directly impacted the fields of functional genomics, biomedicine, and experimental biology. The following section describes various applications involving the effects of RNA silencing. These include uses in biotechnology, therapeutics, and laboratory research. Bioinformatics techniques are also being applied to identify and characterize large numbers of small RNAs and their targets.
Biotechnology
Artificial introduction of long dsRNAs or siRNAs has been adopted as a tool to inactivate gene expression, both in cultured cells and in living organisms. Structural and functional resolution of small RNAs as the effectors of RNA silencing has had a direct impact on experimental biology. For example, dsRNA may be synthesized to have a specific sequence complementary to a gene of interest. Once introduced into a cell or biological system, it is recognized as exogenous genetic material and activates the corresponding RNA silencing pathway. This mechanism can be used to effect decreases in gene expression with respect to the target, useful for investigating loss of function for genes relative to a phenotype. That is, studying the phenotypic and/or physiologic effects of expression decreases can reveal the role of a gene product. The observable effects can be nuanced, such that some methods can distinguish between “knockdown” (decrease expression) and “knockout” (eliminate expression) of a gene. RNA interference technologies have been noted recently as one of the most widely utilized techniques in functional genomics. Screens developed using small RNAs have been used to identify genes involved in fundamental processes such as cell division, apoptosis and fat regulation.
Biomedicine
Since at least the mid-2000s, there has been intensifying interest in developing short interfering RNAs for biomedical and therapeutic applications. Bolstering this interest is a growing number of experiments that have demonstrated the clinical potential and safety of small RNAs for combatting diseases ranging from viral infections to cancer and neurodegenerative disorders. In 2004, the first Investigational New Drug applications for siRNA were filed in the United States with the Food and Drug Administration; the compound was intended as a therapy for age-related macular degeneration. RNA silencing in vitro and in vivo has been accomplished by creating triggers (nucleic acids that induce RNAi) either via expression in viruses or synthesis of oligonucleotides. Many studies indicate that small RNA-based therapies may offer novel and potent weapons against pathogens and diseases where small-molecule/pharmacologic and vaccine/biologic treatments have failed or proved less effective in the past. However, researchers caution that the design and delivery of small RNA effector molecules must be carefully considered in order to ensure safety and efficacy.
The role of RNA silencing in therapeutics, clinical medicine, and diagnostics is a fast-developing area, and it is expected that in the next few years some of the compounds using this technology will reach market approval. A report has been summarized below to highlight the many clinical domains in which RNA silencing is playing an increasingly important role, chief among them ocular and retinal disorders, cancer, kidney disorders, LDL lowering, and antiviral therapy. The following table displays a listing of RNAi-based therapies currently in various phases of clinical trials. The status of these trials can be monitored on the ClinicalTrials.gov website, a service of the National Institutes of Health (NIH). Of note are treatments in development for ocular and retinal disorders, which were among the first compounds to reach clinical development. AGN211745 (sirna027) (Allergan) and bevasiranib (Cand5) (Opko) underwent clinical development for the treatment of age-related macular degeneration, but trials were terminated before the compounds reached the market. Other compounds in development for ocular conditions include SYL040012 (Sylentis) and QPI-007 (Quark). SYL040012 (bamosinan) is a drug candidate under clinical development for glaucoma, a progressive optic neurodegeneration frequently associated with increased intraocular pressure; QPI-007 is a candidate for the treatment of angle-closure glaucoma and non-arteritic anterior ischaemic optic neuropathy; both compounds are currently undergoing phase II clinical trials. Several compounds are also under development for conditions such as cancer and rare diseases.
Main challenge
As with conventional manufactured drugs, the main challenge in developing successful offshoots of RNAi-based drugs is the precise delivery of the RNAi triggers to where they are needed in the body. The reason that the therapy for ocular macular degeneration succeeded sooner than therapies for other diseases is that the eyeball is almost a closed system, and the agent can be injected with a needle exactly where it needs to act. The future successful drugs will be the ones that are able to land where needed, probably with the help of nanobots. Below is a rendition of a table that shows the existing means of delivery of the RNAi triggers.
Laboratory
The scientific community has been quick to harness RNA silencing as a research tool. The strategic targeting of mRNA can provide a large amount of information about gene function and its ability to be turned on and off. Induced RNA silencing can serve as a controlled method for suppressing gene expression. Since the machinery is conserved across most eukaryotes, these experiments scale well to a range of model organisms. In practice, expressing synthetic short hairpin RNAs can be used to reach stable knock-down. If promoters can be made to express these designer short hairpin RNAs, the result is often potent, stable, and controlled gene knock-down in both in vitro and in vivo contexts. Short hairpin RNA vector systems can be seen as roughly analogous in scope to using cDNA overexpression systems. Overall, synthetic and natural small RNAs have proven to be an important tool for studying gene function in cells as well as animals.
Bioinformatics approaches to identify small RNAs and their targets have returned several hundred, if not thousands of, small RNA candidates predicted to affect gene expression in plants, C. elegans, D. melanogaster, zebrafish, mouse, rat, and human. These methods are largely directed to identifying small RNA candidates for knock-out experiments but may have broader applications. One bioinformatics approach evaluated sequence conservation criteria by filtering seed-complementary target-binding sites. The cited study predicted that approximately one third of mammalian genes are regulated by, in this case, miRNAs.
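A minimal sketch of the seed-complementarity filter mentioned above is shown below, assuming the common convention that the seed comprises nucleotides 2–8 of the miRNA. Real prediction pipelines add conservation filtering, site-context scoring, and thermodynamic checks; the sequences here are hypothetical and the code is illustrative only.

```python
# Simplified seed-complementarity scan: report 3'UTR positions carrying a
# perfect Watson-Crick complement of the miRNA seed region.
COMPLEMENT = str.maketrans("AUGC", "UACG")

def seed_match_sites(mirna: str, utr3: str) -> list[int]:
    """Return 3'UTR positions matching the reverse complement of the miRNA
    seed (nucleotides 2-8 from the 5' end, i.e. the 0-based slice [1:8])."""
    seed = mirna[1:8]
    site = seed.translate(COMPLEMENT)[::-1]  # the 7-mer the UTR must contain
    return [i for i in range(len(utr3) - len(site) + 1)
            if utr3[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"        # hypothetical miRNA sequence
utr3 = "GGCCAUCUACCUCAAUGCUACCUCAGCA"   # hypothetical 3'UTR fragment
print(seed_match_sites(mirna, utr3))    # -> [6, 17]: two candidate seed-match sites
```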
Ethics & Risk-Benefit Analysis
One aspect of RNA silencing to consider is its possible off-target effects, toxicity, and delivery methods. If RNA silencing is to become a conventional drug, it must first address the typical ethical issues of biomedicine. Using risk-benefit analysis, researchers can determine whether RNA silencing conforms to ethical principles such as nonmaleficence, beneficence, and autonomy.
There is a risk of creating infection-competent viruses that could infect non-consenting people, and a risk of affecting future generations on the basis of these treatments. With respect to autonomy, these two scenarios are possibly unethical. At the moment, unsafe delivery methods and unintended effects of vector viruses add to the argument against RNA silencing.
In terms of off-target effects, siRNA can induce innate interferon responses, inhibit endogenous miRNAs through saturation, and may have complementary sequences to other, non-target mRNAs. Off-target effects can also up-regulate unintended genes such as oncogenes and antiapoptotic genes. The toxicity of RNA silencing is still under review, as there are conflicting reports.
Because RNA silencing is developing quickly, its ethical issues need to be discussed further. With knowledge of general ethical principles, we must continuously perform risk-benefit analysis.
See also
RNAi
siRNA
miRNA
piwiRNA
rasiRNA
References
RNA
Gene expression | RNA silencing | [
"Chemistry",
"Biology"
] | 4,483 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
14,609,741 | https://en.wikipedia.org/wiki/Alexandrov%20space | In geometry, Alexandrov spaces with curvature ≥ k form a generalization of Riemannian manifolds with sectional curvature ≥ k, where k is some real number. By definition, these spaces are locally compact complete length spaces where the lower curvature bound is defined via comparison of geodesic triangles in the space to geodesic triangles in standard constant-curvature Riemannian surfaces.
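One common way to state this comparison condition explicitly is the following; this is a sketch of one standard formulation, and several equivalent local and global variants appear in the literature.

```latex
% One standard statement of "curvature >= k" via triangle comparison.
For every geodesic triangle $\triangle pqr$ in the space $X$ (of perimeter less than
$2\pi/\sqrt{k}$ when $k > 0$) and every point $x$ on the side $[qr]$,
\[
  d(p, x) \;\ge\; d(\tilde{p}, \tilde{x}),
\]
where $\triangle \tilde{p}\tilde{q}\tilde{r}$ is a comparison triangle with the same
side lengths in the model surface $M_k^2$ of constant curvature $k$, and $\tilde{x}$ is
the point on $[\tilde{q}\tilde{r}]$ with $d(\tilde{q}, \tilde{x}) = d(q, x)$.
```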
One can show that the Hausdorff dimension of an Alexandrov space with curvature ≥ k is either a non-negative integer or infinite. One can define a notion of "angle" (see Comparison triangle#Alexandrov angles) and "tangent cone" in these spaces.
Alexandrov spaces with curvature ≥ k are important as they form the limits (in the Gromov–Hausdorff metric) of sequences of Riemannian manifolds with sectional curvature ≥ k, as described by Gromov's compactness theorem.
Alexandrov spaces with curvature ≥ k were introduced by the Russian mathematician Aleksandr Danilovich Aleksandrov in 1948 and should not be confused with Alexandrov-discrete spaces named after the Russian topologist Pavel Alexandrov. They were studied in detail by Burago, Gromov and Perelman in 1992 and were later used in Perelman's proof of the Poincaré conjecture.
References
Metric geometry
Differential geometry
Riemannian manifolds | Alexandrov space | [
"Mathematics"
] | 279 | [
"Metric spaces",
"Riemannian manifolds",
"Space (mathematics)",
"Geometry",
"Geometry stubs"
] |
14,609,763 | https://en.wikipedia.org/wiki/Normal%20shock%20tables | In aerodynamics, the normal shock tables are a series of tabulated data listing the various properties before and after the occurrence of a normal shock wave. With a given upstream Mach number, the post-shock Mach number can be calculated along with the pressure, density, temperature, and stagnation pressure ratios. Such tables are useful since the equations used to calculate the properties after a normal shock are cumbersome.
The tables below have been calculated using a heat capacity ratio, γ, equal to 1.4. The upstream Mach number, M1, begins at 1 and ends at 5. Although the tables could be extended over any range of Mach numbers, stopping at Mach 5 is typical, since assuming γ to be 1.4 over the entire Mach number range leads to errors over 10% beyond Mach 5.
Normal shock table equations
Given an upstream Mach number, M1, and the ratio of specific heats, γ, the post normal shock Mach number, M2, can be calculated using the equation below.

M2² = ((γ − 1)M1² + 2) / (2γM1² − (γ − 1))

The next equation shows the relationship between the post normal shock pressure, p2, and the upstream ambient pressure, p1.

p2/p1 = 1 + (2γ/(γ + 1))(M1² − 1)

The relationship between the post normal shock density, ρ2, and the upstream ambient density, ρ1, is shown next in the tables.

ρ2/ρ1 = ((γ + 1)M1²) / ((γ − 1)M1² + 2)

Next, the equation below shows the relationship between the post normal shock temperature, T2, and the upstream ambient temperature, T1.

T2/T1 = ((2γM1² − (γ − 1))((γ − 1)M1² + 2)) / ((γ + 1)²M1²)

Finally, the ratio of stagnation pressures is shown below, where p01 is the upstream stagnation pressure and p02 is the stagnation pressure after the normal shock. The ratio of stagnation temperatures remains constant across a normal shock since the process is adiabatic.

p02/p01 = (((γ + 1)M1²) / ((γ − 1)M1² + 2))^(γ/(γ − 1)) · ((γ + 1) / (2γM1² − (γ − 1)))^(1/(γ − 1))
Note that before and after the shock the isentropic relations are valid and connect static and total quantities. That means p0 ≠ p + (1/2)ρv² (which comes from Bernoulli and assumes incompressible flow), because for Mach numbers greater than unity the flow is always compressible.
The normal shock tables (for γ = 1.4)
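The full numerical tables are not reproduced in this copy, but every entry can be regenerated from the standard relations given above. A minimal Python sketch follows; it is an illustration rather than the original tabulation, and its output can be checked against any published normal shock table.

```python
def normal_shock(M1, gamma=1.4):
    """Return (M2, p2/p1, rho2/rho1, T2/T1, p02/p01) for an upstream Mach number M1 >= 1."""
    M2 = (((gamma - 1.0) * M1**2 + 2.0) / (2.0 * gamma * M1**2 - (gamma - 1.0))) ** 0.5
    p = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)            # p2/p1
    rho = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)      # rho2/rho1
    T = p / rho                                                      # T2/T1 (ideal gas)
    p0 = (rho ** (gamma / (gamma - 1.0))                             # p02/p01
          * ((gamma + 1.0) / (2.0 * gamma * M1**2 - (gamma - 1.0))) ** (1.0 / (gamma - 1.0)))
    return M2, p, rho, T, p0

print("M1     M2      p2/p1   rho2/rho1  T2/T1   p02/p01")
for M1 in (1.0, 1.5, 2.0, 3.0, 5.0):
    M2, p, rho, T, p0 = normal_shock(M1)
    print(f"{M1:4.2f}  {M2:6.4f}  {p:6.3f}  {rho:9.3f}  {T:6.3f}  {p0:7.4f}")
# For example, M1 = 2.00 gives M2 = 0.5774, p2/p1 = 4.500, rho2/rho1 = 2.667,
# T2/T1 = 1.688, and p02/p01 = 0.7209, matching standard tables.
```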
See also
Normal shock
Mach number
Compressible flow
References
External links
University of Cincinnati shock relations calculator
Parkin Research Normal shock calculator
Aerospace engineering
Aerodynamics | Normal shock tables | [
"Chemistry",
"Engineering"
] | 409 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
14,610,483 | https://en.wikipedia.org/wiki/Autotransporter%20family | In molecular biology, an autotransporter domain is a structural domain found in some bacterial outer membrane proteins. The domain is always located at the C-terminal end of the protein and forms a beta-barrel structure. The barrel is oriented in the membrane such that the N-terminal portion of the protein, termed the passenger domain, is presented on the cell surface. These proteins are typically virulence factors, associated with infection or virulence in pathogenic bacteria.
The name autotransporter derives from an initial understanding that the protein was self-sufficient in transporting the passenger domain through the outer membrane. This view has since been challenged by Benz and Schmidt.
Secretion of polypeptide chains through the outer membrane of Gram-negative bacteria can occur via a number of different pathways. The type V(a), or autotransporter, secretion pathway accounts for the largest number of secreted virulence factors of any of the seven known types of secretion in Gram-negative bacteria. This secretion pathway is exemplified by the prototypical IgA1 protease of Neisseria gonorrhoeae. The protein is directed to the inner membrane by a signal peptide and is transported across the inner membrane via the Sec machinery. Once in the periplasm, the autotransporter domain inserts into the outer membrane. The passenger domain is passed through the center of the autotransporter domain to be presented on the outside of the cell; however, the mechanism by which this occurs remains unclear.
The C-terminal translocator domain corresponds to an outer membrane beta-barrel domain. The N-terminal passenger domain is translocated across the membrane, and may or may not be cleaved from the translocator domain. In those proteins where the cleavage is auto-catalytic, the peptidase domains belong to MEROPS peptidase families S6 and S8. Passenger domains structurally characterized to date have been shown to be dominated by a protein fold known as a beta helix, typified by pertactin. The folding of this domain is thought to be intrinsically linked to its method of outer membrane translocation.
See also
Trimeric Autotransporter Adhesins (TAA)
References
Further reading
Protein domains
Outer membrane proteins | Autotransporter family | [
"Biology"
] | 476 | [
"Protein domains",
"Protein classification"
] |