In mathematics, the supergolden ratio is a geometrical proportion equal to 1.465 571 231 876 768 026 65...;[2] it is the unique real solution of the equation $x^3 = x^2 + 1$.
The name supergolden ratio results from analogy with the golden ratio, the positive solution of the equation $x^2 = x + 1$.
Two quantities $a > b > 0$ are in the supergolden ratio-squared if

$$\left(\frac{a+b}{a}\right)^{2} = \frac{a}{b}.$$

The ratio $\frac{a+b}{a}$ is commonly denoted $\psi$.
Based on this definition, one has

$$\begin{aligned}1 &= \left(\frac{a+b}{a}\right)^{2}\frac{b}{a} \\ &= \left(\frac{a+b}{a}\right)^{2}\left(\frac{a+b}{a} - 1\right) \\ &\implies \psi^{2}\left(\psi - 1\right) = 1\end{aligned}$$
It follows that the supergolden ratio is found as the unique real solution of the cubic equation $\psi^{3} - \psi^{2} - 1 = 0$.
The minimal polynomial for the reciprocal root is the depressed cubic $x^{3} + x - 1$,[4] thus the simplest solution with Cardano's formula is

$$\begin{aligned}w_{1,2} &= \left(1 \pm \frac{1}{3}\sqrt{\frac{31}{3}}\right)\Big/\,2 \\ 1/\psi &= \sqrt[3]{w_{1}} + \sqrt[3]{w_{2}}\end{aligned}$$

or, using the hyperbolic sine,

$$1/\psi = \frac{2}{\sqrt{3}}\sinh\left(\frac{1}{3}\operatorname{arsinh}\left(\frac{3\sqrt{3}}{2}\right)\right).$$
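Both closed forms are easy to verify numerically. A minimal sketch in Python (variable names are ours, not from the source):

```python
import math

# Cardano's formula for the depressed cubic x^3 + x - 1 = 0,
# whose real root is 1/psi.
w1 = (1 + math.sqrt(31 / 3) / 3) / 2
w2 = (1 - math.sqrt(31 / 3) / 3) / 2
cbrt = lambda v: math.copysign(abs(v) ** (1 / 3), v)  # real cube root
inv_psi = cbrt(w1) + cbrt(w2)

# The same root via the hyperbolic-sine form.
inv_psi_sinh = 2 / math.sqrt(3) * math.sinh(math.asinh(3 * math.sqrt(3) / 2) / 3)

psi = 1 / inv_psi
print(psi)                          # 1.4655712318767682
print(psi**3 - psi**2 - 1)          # ~0.0, confirming psi^3 = psi^2 + 1
print(abs(inv_psi - inv_psi_sinh))  # ~0.0, the two forms agree
```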
$1/\psi$ is the superstable fixed point of the iteration $x \gets (2x^{3} + 1)/(3x^{2} + 1)$.
The iteration $x \gets \sqrt[3]{1 + x^{2}}$ results in the continued radical

$$\psi = \sqrt[3]{1 + \left(\sqrt[3]{1 + \left(\sqrt[3]{1 + \cdots}\right)^{2}}\right)^{2}}.$$
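Both iterations converge quickly; a short sketch (the iteration counts are arbitrary):

```python
# Superstable fixed point at 1/psi; this is the Newton step
# for x^3 + x - 1 = 0, so convergence is very fast.
x = 1.0
for _ in range(6):
    x = (2 * x**3 + 1) / (3 * x**2 + 1)
print(1 / x)   # 1.4655712318767682

# The continued-radical iteration converges (more slowly) to psi itself.
y = 1.0
for _ in range(60):
    y = (1 + y**2) ** (1 / 3)
print(y)       # 1.4655712318767682
```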
Dividing the defining trinomial $x^{3} - x^{2} - 1$ by $x - \psi$ one obtains $x^{2} + x/\psi^{2} + 1/\psi$, and the conjugate elements of $\psi$ are

$$x_{1,2} = \left(-1 \pm i\sqrt{4\psi^{2} + 3}\right)\Big/\,2\psi^{2},$$

with $x_{1} + x_{2} = 1 - \psi$ and $x_{1}x_{2} = 1/\psi$.
Good approximations for the supergolden ratio come from its continued fraction expansion, [1; 2, 6, 1, 3, 5, 4, 22, 1, 1, 4, 1, 2, 84, ...].[6] The first few convergents are 1, 3/2, 19/13, 22/15, 85/58, 447/305, ...
See also § Narayana sequence below.
Many properties of $\psi$ are related to the golden ratio $\varphi$. For example, the supergolden ratio can be expressed in terms of itself as the infinite geometric series[9]

$$\psi = \sum_{n=0}^{\infty} \psi^{-3n},$$
in comparison to the golden ratio identity

$$\varphi = \sum_{n=0}^{\infty} \varphi^{-2n}.$$
Additionally, $1 + \varphi^{-1} + \varphi^{-2} = 2$, while $\sum_{n=0}^{7} \psi^{-n} = 3$.
For every integer $n$ one has

$$\begin{aligned}\psi^{n} &= \psi^{n-1} + \psi^{n-3} \\ &= \psi^{n-2} + \psi^{n-3} + \psi^{n-4} \\ &= \psi^{n-2} + 2\psi^{n-4} + \psi^{n-6}\end{aligned}$$

From this an infinite number of further relations can be found.
The argument $\theta = \operatorname{arcsec}(2\psi^{4})$ satisfies the identity $\tan(\theta) - 4\sin(\theta) = 3\sqrt{3}$.[10]
Continued fraction pattern of a few low powers:

$$\begin{aligned}\psi^{-1} &= [0; 1, 2, 6, 1, 3, 5, 4, 22, \ldots] \approx 0.6823 \; (13/19) \\ \psi^{0} &= [1] \\ \psi^{1} &= [1; 2, 6, 1, 3, 5, 4, 22, 1, \ldots] \approx 1.4656 \; (22/15) \\ \psi^{2} &= [2; 6, 1, 3, 5, 4, 22, 1, 1, \ldots] \approx 2.1479 \; (15/7) \\ \psi^{3} &= [3; 6, 1, 3, 5, 4, 22, 1, 1, \ldots] \approx 3.1479 \; (22/7) \\ \psi^{4} &= [4; 1, 1, 1, 1, 2, 2, 1, 2, 2, \ldots] \approx 4.6135 \; (60/13) \\ \psi^{5} &= [6; 1, 3, 5, 4, 22, 1, 1, 4, \ldots] \approx 6.7614 \; (115/17)\end{aligned}$$
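These expansions can be reproduced with a few lines (a sketch; the term count is arbitrary, and floating-point precision limits how many terms are reliable):

```python
# Continued fraction expansion of x by repeated take-integer-part-and-invert.
def contfrac(x, terms=9):
    out = []
    for _ in range(terms):
        a = int(x)
        out.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return out

psi = 1.465571231876768026656731  # truncated to double precision on input
for p in range(-1, 6):
    print(p, contfrac(psi ** p))
```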
Notably, the continued fraction of $\psi^{2}$ begins as a permutation of the first six natural numbers; the next term is equal to their sum + 1.
The supergolden ratio is the fourth smallest Pisot number.[11] Because the absolute value $1/\sqrt{\psi}$ of the algebraic conjugates is smaller than 1, powers of $\psi$ generate almost integers. For example: $\psi^{11} = 67.000222765... \approx 67 + 1/4489$. After eleven rotation steps the phases of the inward spiraling conjugate pair – initially close to $\pm 13\pi/22$ – nearly align with the imaginary axis.
The minimal polynomial of the supergolden ratio $m(x) = x^{3} - x^{2} - 1$ has discriminant $\Delta = -31$. The Hilbert class field of the imaginary quadratic field $K = \mathbb{Q}(\sqrt{\Delta})$ can be formed by adjoining $\psi$. With argument $\tau = (1 + \sqrt{\Delta})/2$ a generator for the ring of integers of $K$, one has the special value of the Dedekind eta quotient
Expressed in terms of the Weber–Ramanujan class invariant $G_{n}$
Properties of the related Klein j-invariant $j(\tau)$ result in the near identity $e^{\pi\sqrt{-\Delta}} \approx \left(\sqrt{2}\,\psi\right)^{24} - 24$. The difference is < 1/143092.
The elliptic integral singular value[12] $k_{r} = \lambda^{*}(r)$ for $r = 31$ has the closed form expression
(which is less than 1/10 the eccentricity of the orbit of Venus).
Narayana's cows is a recurrence sequence originating from a problem posed by the 14th-century Indian mathematician Narayana Pandita.[13] He asked for the number of cows and calves in a herd after 20 years, beginning with one cow in the first year, where each cow gives birth to one calf each year from the age of three onwards.
The Narayana sequence has a close connection to the Fibonacci and Padovan sequences and plays an important role in data coding, cryptography and combinatorics. The number of compositions of $n$ into parts 1 and 3 is counted by the $n$th Narayana number.
The Narayana sequence is defined by the third-order recurrence relation

$$N_{n} = N_{n-1} + N_{n-3} \quad \text{for } n > 2,$$

with initial values $N_{0} = N_{1} = N_{2} = 1$.
The first few terms are 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88, ... (sequence A000930 in the OEIS).
The limit ratio between consecutive terms is the supergolden ratio: $\lim_{n\to\infty} N_{n+1}/N_{n} = \psi$.
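A minimal sketch of the recurrence and the limit ratio (the function name is ours):

```python
# Narayana sequence: N_n = N_{n-1} + N_{n-3}, with N_0 = N_1 = N_2 = 1.
def narayana(count):
    n = [1, 1, 1]
    while len(n) < count:
        n.append(n[-1] + n[-3])
    return n[:count]

seq = narayana(40)
print(seq[:14])            # [1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88]
print(seq[-1] / seq[-2])   # 1.4655712... -> the supergolden ratio
```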
The first 11 indices $n$ for which $N_{n}$ is prime are $n$ = 3, 4, 8, 9, 11, 16, 21, 25, 81, 6241, 25747 (sequence A170954 in the OEIS). The last number has 4274 decimal digits.
The sequence can be extended to negative indices using $N_{n} = N_{n+3} - N_{n+2}$.
The generating function of the Narayana sequence is given by

$$\sum_{n=0}^{\infty} N_{n}x^{n} = \frac{1}{1 - x - x^{3}}.$$
The Narayana numbers are related to sums of binomial coefficients by

$$N_{n} = \sum_{k=0}^{\lfloor n/3 \rfloor} \binom{n - 2k}{k}.$$
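A quick cross-check of this identity against the recurrence (a sketch):

```python
from math import comb

# Binomial-sum form of the Narayana numbers.
def narayana_binomial(n):
    return sum(comb(n - 2 * k, k) for k in range(n // 3 + 1))

print([narayana_binomial(n) for n in range(14)])
# [1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88] -- matches the recurrence
```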
The characteristic equation of the recurrence is $x^{3} - x^{2} - 1 = 0$. If the three solutions are the real root $\alpha$ and the conjugate pair $\beta$ and $\gamma$, the Narayana numbers can be computed with the Binet formula[14]

$$N_{n-2} = a\alpha^{n} + b\beta^{n} + c\gamma^{n},$$

with real $a$ and conjugates $b$ and $c$.
Since $\left|b\beta^{n} + c\gamma^{n}\right| < 1/\alpha^{n/2}$ and $\alpha = \psi$, the number $N_{n}$ is the nearest integer to $a\,\psi^{n+2}$, with $n \geq 0$ and $a = \psi/(\psi^{2} + 3) = 0.28469\,30799\,75318\,50274\,74714...$
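This rounding formula reproduces the sequence directly; a sketch:

```python
# N_n as the nearest integer to a * psi**(n + 2).
psi = 1.465571231876768
a = psi / (psi**2 + 3)
print([round(a * psi ** (n + 2)) for n in range(14)])
# [1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88]
```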
Coefficients $a = b = c = 1$ result in the Binet formula for the related sequence $A_{n} = N_{n} + 2N_{n-3}$.
The first few terms are 3, 1, 1, 4, 5, 6, 10, 15, 21, 31, 46, 67, 98, 144, ... (sequence A001609 in the OEIS).
This anonymous sequence has the Fermat property: if $p$ is prime, $A_{p} \equiv A_{1} \bmod p$. The converse does not hold, but the small number of odd pseudoprimes $n \mid (A_{n} - 1)$ makes the sequence special.[15] The 8 odd composite numbers below 10⁸ to pass the test are $n$ = 1155, 552599, 2722611, 4822081, 10479787, 10620331, 16910355, 66342673.
The Narayana numbers are obtained as integral powers $n > 3$ of a matrix with real eigenvalue $\psi$:[13]

$$Q = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix},$$

$$Q^{n} = \begin{pmatrix} N_{n} & N_{n-2} & N_{n-1} \\ N_{n-1} & N_{n-3} & N_{n-2} \\ N_{n-2} & N_{n-4} & N_{n-3} \end{pmatrix}$$
The trace of $Q^{n}$ gives the above $A_{n}$.
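A short numerical check of the matrix form (a sketch using NumPy):

```python
import numpy as np

# Powers of Q pack Narayana numbers; the trace of Q^n is A_n.
Q = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

Qn = np.linalg.matrix_power(Q, 10)
print(Qn[0, 0])      # 28 = N_10, the top-left entry
print(np.trace(Qn))  # 46 = A_10
```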
Alternatively, $Q$ can be interpreted as the incidence matrix for a D0L Lindenmayer system on the alphabet $\{a, b, c\}$ with corresponding substitution rule

$$\begin{cases} a \mapsto ab \\ b \mapsto c \\ c \mapsto a \end{cases}$$

and initiator $w_{0} = b$. The words $w_{n}$ produced by iterating the substitution have the property that the numbers of c's, b's and a's are equal to successive Narayana numbers. The lengths of these words are $l(w_{n}) = N_{n}$.
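Iterating the substitution makes the counting property easy to see; a sketch:

```python
# D0L substitution system with initiator "b".
rules = {"a": "ab", "b": "c", "c": "a"}

w, lengths = "b", [1]
for _ in range(12):
    w = "".join(rules[ch] for ch in w)
    lengths.append(len(w))

print(lengths)  # [1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60] = N_0 .. N_12
print(w.count("c"), w.count("b"), w.count("a"))  # 13 19 28 -- consecutive Narayana numbers
```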
Associated to this string rewriting process is a compact set composed of self-similar tiles called the Rauzy fractal, which visualizes the combinatorial information contained in a multiple-generation three-letter sequence.[16]
A supergolden rectangle is a rectangle whose side lengths are in a $\psi : 1$ ratio. Compared to the golden rectangle, the supergolden rectangle has one more degree of self-similarity.
Consider a rectangle of height 1, length $\psi$ and diagonal length $\sqrt{\psi^{3}}$ (according to $1 + \psi^{2} = \psi^{3}$). The triangles on the diagonal have altitudes $1/\sqrt{\psi}$; each perpendicular foot divides the diagonal in ratio $\psi^{2}$.
On the left-hand side, cut off a square of side length 1 and mark the intersection with the falling diagonal. The remaining rectangle now has aspect ratio $\psi^{2} : 1$ (according to $\psi - 1 = \psi^{-2}$). Divide the original rectangle into four parts by a second, horizontal cut passing through the intersection point.[17][9]
The rectangle below the diagonal has aspect ratio $\psi^{3}$; the other three are all supergolden rectangles, with a fourth one between the feet of the altitudes. The parent rectangle and the four scaled copies have linear sizes in the ratios $\psi^{3} : \psi^{2} : \psi : \psi^{2} - 1 : 1$. It follows from the theorem of the gnomon that the areas of the two rectangles opposite the diagonal are equal.
In the supergolden rectangle above the diagonal, the process is repeated at a scale of $1 : \psi^{2}$.
A supergolden spiral is a logarithmic spiral that gets wider by a factor of $\psi$ for every quarter turn. It is described by the polar equation $r(\theta) = a\exp(k\theta)$, with initial radius $a$ and parameter $k = \frac{2\ln(\psi)}{\pi}$. If drawn on a supergolden rectangle, the spiral has its pole at the foot of altitude of a triangle on the diagonal and passes through vertices of rectangles with aspect ratio $\psi$ which are perpendicularly aligned and successively scaled by a factor $1/\psi$.

Source: https://en.wikipedia.org/wiki/Supergolden_ratio
A superhabitable world is a hypothetical type of planet or moon that is better suited than Earth for the emergence and evolution of life . The concept was introduced in a 2014 paper by René Heller and John Armstrong, in which they criticized the language used in the search for habitable exoplanets and proposed clarifications. [ 2 ] The authors argued that knowing whether a world is located within the star's habitable zone is insufficient to determine its habitability, and that the prevailing model of characterization was geocentric or anthropocentric in nature. Instead, they proposed a biocentric model that prioritized characteristics affecting the abundance of life and biodiversity on a world's surface. [ 2 ]
If a world possesses more diverse flora and fauna than there are on Earth, then it would empirically show that its natural environment is more hospitable to life. [ 3 ] To identify such a world, one should consider its geological processes, formation age, atmospheric composition , ocean coverage, and the type of star that it orbits. In other words, a superhabitable world would likely be larger, warmer, and older than Earth , with an evenly-distributed ocean, and orbiting a K-type main-sequence star . [ 4 ] In 2020, astronomers, building on Heller and Armstrong's hypothesis, identified 24 potentially superhabitable exoplanets based on measured characteristics that fit these criteria. [ 5 ]
A star's characteristics are a key consideration for planetary habitability.[6] The types of stars generally considered to be potential hosts for habitable worlds include F, G, K, and M-type main-sequence stars.[7] The most massive stars—O, B, and A-type—have average lifespans on the main sequence that are considered too short for complex life to develop,[8] ranging from a few hundred million years for A-type stars to only a few million years for O-type stars.[9] Thus, F-type stars are described as the "hot limit" for stars that can potentially support life, as their lifespan of 2 to 4 billion years would be sufficient for habitability.[10] However, F-type stars emit large amounts of ultraviolet radiation, which, without the presence of a protective ozone layer, could disrupt nucleic acid-based life on a planet's surface.[10]
On the opposite end, the less massive red dwarfs, which generally include M-type stars, are by far the most common and long-lived stars in the universe,[11] but ongoing research points to serious challenges to their ability to support life. Due to the low luminosity of red dwarfs, the circumstellar habitable zone (HZ)[a] is in very close proximity to the star, which causes any planet there to become tidally locked.[14] The primary concern for researchers, however, is the star's propensity for frequent outbreaks of high-energy radiation, especially early in its life, which could strip away a planet's atmosphere.[15] At the same time, red dwarfs do not emit enough quiescent UV radiation (i.e., UV radiation emitted during inactive periods) to support biological processes like photosynthesis.[3]
Ruling out both extremes, astronomers are led to conclude that G and K-type stars—yellow and orange dwarfs, respectively—provide the best life-supporting characteristics. However, a limiting factor to the habitability of yellow dwarfs is their higher emission of ionizing radiation and shorter lifespans compared to cooler orange dwarfs.[16] Therefore, researchers conclude that orange dwarfs offer the best conditions for a superhabitable world.[3][16]
Also nicknamed "Goldilocks stars," orange dwarfs emit low enough levels of ultraviolet radiation to eliminate the need for a protective ozone layer , but just enough to contribute to necessary biological processes. [ 17 ] [ 3 ] Additionally, the long average lifespan of an orange dwarf (18 to 34 billion years, compared to 10 billion for the Sun) provides a more stable habitable zone throughout the star's lifetime, providing more time for life to develop. [ 18 ] [ 19 ] [ 17 ]
It is necessary for the age of any superhabitable world to be greater than Earth's age (~4.5 billion years). [ 19 ] This necessity is based on the belief that as a planet or moon ages, it experiences increasing levels of biodiversity, since native species have had more time to evolve, adapt, and stabilize the environmental conditions suitable for life. [ 19 ] However, the eventual exhaustion of a world's internally generated heat means that there is also an upper limit to the age of any habitable world; internal cooling would lead to changes to the average global temperature and atmospheric composition. [ 20 ] Therefore, the optimal age range for a superhabitable world would be roughly 5–8 billion years. [ 20 ]
During the main sequence phase, a star burns hydrogen in its core, producing energy through nuclear fusion. Over time, as the hydrogen fuel is consumed, the star's core contracts and heats up, leading to an increase in the rate of fusion. This causes the star to gradually become more luminous, and as its luminosity increases, the amount of energy it emits grows, pushing the habitable zone (HZ) outward. [ 23 ] [ 24 ] Studies suggest that Earth's orbit lies near the inner edge of the Solar System's HZ, [ 14 ] which could harm its long-term livability as it nears the end of its HZ lifetime.
Ideally, the orbit of a superhabitable world should be further out and closer to the center of the HZ relative to Earth's orbit, [ 25 ] [ 26 ] but knowing whether a world is in this region is insufficient on its own to determine habitability. [ 3 ] Not all rocky planets in the HZ may be habitable, while tidal heating can render planets or moons habitable beyond this region. For example, Jupiter's moon Europa is well beyond the outer limits of the Solar System's HZ, yet as a result of its orbital interactions with the other Galilean moons , it is believed to have a subsurface ocean of liquid water beneath its icy surface. [ 27 ]
According to a 2023 paper by Jonathan Jernigan and colleagues, marine biological activity increases on planets with increasing obliquity and eccentricity. The authors suggest that planets with a high obliquity and/or eccentricity may be superhabitable, and that scientists should be keen to look for biosignatures on exoplanets with these orbital characteristics. [ 28 ]
Assuming that a greater surface area would provide greater biodiversity, the size of a superhabitable world should generally be greater than 1 R 🜨 , with the condition that its mass is not arbitrarily large. [ 29 ] Studies of the mass-radius relationship indicate that there is a transition point between rocky planets and gaseous planets (i.e., mini-Neptunes ) that occurs around 2 M 🜨 or 1.7 R 🜨 . [ 30 ] [ 31 ] Another study argues that there is a natural radius limit, set at 1.6 R 🜨 , below which nearly all planets are terrestrial , composed primarily of rock-iron-water mixtures. [ 32 ]
Heller and Armstrong argue that the optimal mass and radius of a superhabitable world can be determined by geological activity; the more massive a planetary body, the longer time it will continuously generate internal heat —a major contributing factor to plate tectonics. [ 29 ] Too much mass, however, can slow plate tectonics by increasing the pressure of the mantle. [ 29 ] It is believed that plate tectonics peak in bodies between 1 and 5 M 🜨 , and from this perspective, a planet can be considered superhabitable up to around 2 M 🜨 . [ 33 ] Assuming this planet has a density similar to Earth's, its radius should be between 1.2 and 1.3 R 🜨 . [ 33 ] [ 29 ]
An important geological process is plate tectonics , which appears to be common in terrestrial planets with a significant rotation speed and an internal heat source. [ 34 ] If large bodies of water are present on a planet, plate tectonics can maintain high levels of carbon dioxide ( CO 2 ) in its atmosphere and increase the global surface temperature through the greenhouse effect . [ 35 ] However, if tectonic activity is not significant enough to increase temperatures above the freezing point of water , the planet could experience a permanent ice age , unless the process is offset by another energy source like tidal heating or stellar irradiation . [ 36 ] On the other hand, if the effects of any of these processes are too strong, the amount of greenhouse gases in the atmosphere could cause a runaway greenhouse effect by trapping heat and preventing adequate cooling.
The presence of a magnetic field is important for the long-term survivability of life on the surface of a planet or moon.[22] A sufficiently strong magnetic field effectively shields a world's surface and atmosphere against ionizing radiation emanating from the interstellar medium and its host star.[22][37] A planet can generate an intrinsic magnetic field through a dynamo that involves an internal heat source, an electrically conductive fluid like molten iron, and a significant rotation speed, while a moon could be extrinsically protected by its host planet's magnetic field.[22] Less massive bodies and those that are tidally locked are likely to have a weak to non-existent magnetic field, which over time can result in the loss of a significant portion of their atmospheres through hydrodynamic escape, leaving them as desert planets.[29] If a planet's rotation is too slow, as with Venus, it cannot generate an Earth-like magnetic field. A more massive planet could overcome this problem by hosting multiple moons, which through their combined gravitational effects can boost the planet's magnetic field.[38]
The appearance of a superhabitable world should be similar to the conditions found in the tropical climates of Earth. [ 39 ] Due to the denser atmosphere and less temperature variation across its surface, such a world would lack any major ice sheets and have a higher concentration of clouds, while plant life would potentially cover more of the planet's surface and be visible from space. [ 39 ]
When considering the differences in the peak wavelength of visible light for K-type stars and the lower stellar flux of the planet, surface vegetation may exhibit colors different than the typical green color found on Earth. [ 40 ] [ 41 ] Instead, vegetation on these worlds could have a red, orange, or even purple appearance. [ 42 ]
An ocean that covers a large portion of a world's surface, with fragmented continents and archipelagos, could provide a stable environment across its surface.[43] In addition, the greater surface gravity of a superhabitable world could reduce the average ocean depth and create shallow ocean basins, providing the optimal environment for marine life to thrive.[44][45][46] For example, marine ecosystems found in the shallow areas of Earth's oceans and seas, given the amount of light and heat they receive, are observed to have greater biodiversity and are generally seen as being more comfortable for aquatic species. This has led researchers to speculate that shallow water environments on exoplanets should be similarly suitable for life.[43][47]
In general, the climate of a superhabitable planet would be warm, moist, and homogeneous, allowing life to extend across the surface without presenting large population differences. [ 48 ] [ 49 ] These characteristics are in contrast to those found on Earth, which has more variable and inhospitable regions that include frigid tundra and dry deserts . [ 50 ] Deserts on superhabitable planets would be more limited in area and would likely support habitat-rich coastal environments. [ 51 ]
The optimum surface temperature for Earth-like life is unknown, although it appears that on Earth, organism diversity has been greater in warmer periods.[52] It is therefore possible that exoplanets with slightly higher average temperatures than that of Earth are more suitable for life.[53] The denser atmosphere of a superhabitable planet would naturally provide a greater average temperature and less variability of the global climate.[54][46] Ideally, the temperature should reach the optimal level for plant life, around 25 °C (77 °F). In addition, a large distributed ocean would have the ability to regulate a planet's surface temperature similar to Earth's ocean currents, and could allow it to maintain a moderate temperature within the habitable zone.[55][51]
There are no solid arguments to establish whether Earth's atmosphere has the optimal composition,[56] but relatively high atmospheric oxygen (O₂) levels are required to meet the high-energy demands of complex life.[57] Therefore, it is hypothesized that oxygen abundance in the atmosphere is essential for complex life on other worlds.[56][57]
In September 2020, Dirk Schulze-Makuch and colleagues identified 24 contenders for superhabitable planets out of more than 4000 confirmed exoplanets and exoplanet candidates. [ 5 ] The criteria included measurable factors like type of star, and the planet's age, mass, radius, and surface temperature. The authors also considered more hypothetical factors like the presence of abundant water, a large moon, and a geological recycling mechanism like plate tectonics. [ 20 ]
Kepler-1126b (KOI-2162.01) and Kepler-69c (KOI-172.02) are the only objects in the list that have been confirmed as exoplanets.[58] However, earlier research on Kepler-69c suggests that because its orbit lies near the inner edge of the HZ, its atmosphere could likely be in a runaway greenhouse state, which could heavily impact its prospects for habitability.[59] The full list can be found below.[60]

Source: https://en.wikipedia.org/wiki/Superhabitable_world
In thermodynamics , superheating (sometimes referred to as boiling retardation , or boiling delay ) is the phenomenon in which a liquid is heated to a temperature higher than its boiling point , without boiling . This is a so-called metastable state or metastate , where boiling might occur at any time, induced by external or internal effects. [ 1 ] [ 2 ] Superheating is achieved by heating a homogeneous substance in a clean container, free of nucleation sites , while taking care not to disturb the liquid.
This can occur when water is microwaved in a very smooth container. Disturbing the water may cause an unsafe eruption of hot water and result in burns.[3]
Water is said to "boil" when bubbles of water vapor grow without bound, bursting at the surface. For a vapor bubble to expand, the temperature must be high enough that the vapor pressure exceeds the ambient pressure (the atmospheric pressure , primarily). Below that temperature, a water vapor bubble will shrink and vanish.
Superheating is an exception to this simple rule; a liquid is sometimes observed not to boil even though its vapor pressure does exceed the ambient pressure. The cause is an additional force, the surface tension , which suppresses the growth of bubbles. [ 4 ]
Surface tension makes the bubble act like an elastic balloon. The pressure inside is raised slightly by the "skin" attempting to contract. For the bubble to expand, the temperature must be raised slightly above the boiling point to generate enough vapor pressure to overcome both surface tension and ambient pressure.
What makes superheating so explosive is that a larger bubble is easier to inflate than a small one; just as when blowing up a balloon, the hardest part is getting started. It turns out the excess pressure $\Delta p$ due to surface tension is inversely proportional to the diameter $d$ of the bubble.[5] That is, $\Delta p \propto d^{-1}$.
This can be derived by imagining a plane cutting a bubble into two halves. Each half is pulled towards the middle with a surface tension force $F \propto \pi d$ (the circumference of the cut), which must be balanced by the force from the excess pressure acting on the cross-section, $\Delta p \times (\pi d^{2}/4)$. So we obtain $\Delta p\,(\pi d^{2}/4) \propto \pi d$, which simplifies to $\Delta p \propto d^{-1}$.
This means that if the largest bubbles in a container are small, only a few micrometres in diameter, overcoming the surface tension may require a large $\Delta p$, which can mean exceeding the boiling point by several degrees Celsius. Once a bubble does begin to grow, the surface tension pressure decreases, so it expands explosively in a positive feedback loop. In practice, most containers have scratches or other imperfections which trap pockets of air that provide starting bubbles, and impure water containing small particles can also trap air pockets. Only a smooth container of purified liquid can reliably superheat.
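For a sense of scale, the excess pressure for a spherical vapor bubble with a single liquid-vapor interface is $\Delta p = 4\sigma/d$ (the Young–Laplace relation, consistent with the proportionality above). A sketch with assumed values; the bubble diameter and the surface tension figure are illustrative:

```python
# Excess (Laplace) pressure opposing growth of a small vapor bubble.
sigma = 0.059        # N/m, surface tension of water near 100 deg C (approximate)
d = 2e-6             # m, assumed diameter of the largest available bubble nucleus
dp = 4 * sigma / d   # Pa, from Delta-p = 4 * sigma / d
print(f"{dp:.3g} Pa, about {dp / 101325:.2f} atm of excess pressure")
# ~1.18e+05 Pa, about 1.16 atm -- the vapor pressure inside must exceed
# ambient by roughly an extra atmosphere before this bubble can grow.
```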
Superheating can occur when an undisturbed container of water is heated in a microwave oven . At the time the container is removed, the lack of nucleation sites prevents boiling, leaving the surface calm. However, once the water is disturbed, some of it violently flashes to steam , potentially spraying boiling water out of the container. [ 6 ] The boiling can be triggered by jostling the cup, inserting a stirring device, or adding a substance like instant coffee or sugar. The chance of superheating is greater with smooth containers, because scratches or chips can house small pockets of air, which serve as nucleation points. Superheating is more likely after repeated heating and cooling cycles of an undisturbed container, as when a forgotten coffee cup is re-heated without being removed from a microwave oven. This is due to heating cycles releasing dissolved gases such as oxygen and nitrogen from the solvent. There are ways to prevent superheating in a microwave oven, such as putting a spoon or stir stick into the container beforehand or using a scratched container. To avoid a dangerous sudden boiling, it is recommended not to microwave water for an excessive amount of time. [ 3 ]
Superheated liquid hydrogen is used in bubble chambers.

Source: https://en.wikipedia.org/wiki/Superheating
Superheavy elements , also known as transactinide elements , transactinides , or super-heavy elements , or superheavies for short, are the chemical elements with atomic number greater than 104. [ 1 ] The superheavy elements are those beyond the actinides in the periodic table; the last actinide is lawrencium (atomic number 103). By definition, superheavy elements are also transuranium elements , i.e., having atomic numbers greater than that of uranium (92). Depending on the definition of group 3 adopted by authors, lawrencium may also be included to complete the 6d series. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Glenn T. Seaborg first proposed the actinide concept , which led to the acceptance of the actinide series . He also proposed a transactinide series ranging from element 104 to 121 and a superactinide series approximately spanning elements 122 to 153 (though more recent work suggests the end of the superactinide series to occur at element 157 instead). The transactinide seaborgium was named in his honor. [ 6 ] [ 7 ]
Superheavies are radioactive and have only been obtained synthetically in laboratories. No macroscopic sample of any of these elements has ever been produced. Superheavies are all named after physicists and chemists or important locations involved in the synthesis of the elements.
IUPAC defines an element to exist if its lifetime is longer than 10⁻¹⁴ seconds, which is the time it takes for the nucleus to gather an electron cloud.[8]
The known superheavies form part of the 6d and 7p series in the periodic table. Except for rutherfordium and dubnium (and lawrencium if it is included), all known isotopes of superheavies have half-lives of minutes or less. The element naming controversy involved elements 102–109. Some of these elements thus used systematic names for many years after their discovery was confirmed. (Usually the systematic names are replaced with permanent names proposed by the discoverers relatively soon after a discovery has been confirmed.)
A superheavy [ a ] atomic nucleus is created in a nuclear reaction that combines two other nuclei of unequal size [ b ] into one; roughly, the more unequal the two nuclei in terms of mass , the greater the possibility that the two react. [ 14 ] The material made of the heavier nuclei is made into a target, which is then bombarded by the beam of lighter nuclei. Two nuclei can only fuse into one if they approach each other closely enough; normally, nuclei (all positively charged) repel each other due to electrostatic repulsion . The strong interaction can overcome this repulsion but only within a very short distance from a nucleus; beam nuclei are thus greatly accelerated in order to make such repulsion insignificant compared to the velocity of the beam nucleus. [ 15 ] The energy applied to the beam nuclei to accelerate them can cause them to reach speeds as high as one-tenth of the speed of light . However, if too much energy is applied, the beam nucleus can fall apart. [ 15 ]
Coming close enough alone is not enough for two nuclei to fuse: when two nuclei approach each other, they usually remain together for about 10⁻²⁰ seconds and then part ways (not necessarily in the same composition as before the reaction) rather than form a single nucleus.[15][16] This happens because during the attempted formation of a single nucleus, electrostatic repulsion tears apart the nucleus that is being formed.[15] Each pair of a target and a beam is characterized by its cross section—the probability that fusion will occur if two nuclei approach one another, expressed in terms of the transverse area that the incident particle must hit in order for the fusion to occur.[c] This fusion may occur as a result of the quantum effect in which nuclei can tunnel through electrostatic repulsion. If the two nuclei can stay close past that phase, multiple nuclear interactions result in redistribution of energy and an energy equilibrium.[15]
The resulting merger is an excited state[19]—termed a compound nucleus—and thus it is very unstable.[15] To reach a more stable state, the temporary merger may fission without formation of a more stable nucleus.[20] Alternatively, the compound nucleus may eject a few neutrons, which would carry away the excitation energy; if the latter is not sufficient for a neutron expulsion, the merger would produce a gamma ray. This happens in about 10⁻¹⁶ seconds after the initial nuclear collision and results in creation of a more stable nucleus.[20] The definition by the IUPAC/IUPAP Joint Working Party (JWP) states that a chemical element can only be recognized as discovered if a nucleus of it has not decayed within 10⁻¹⁴ seconds. This value was chosen as an estimate of how long it takes a nucleus to acquire electrons and thus display its chemical properties.[21][d]
The beam passes through the target and reaches the next chamber, the separator; if a new nucleus is produced, it is carried with this beam.[23] In the separator, the newly produced nucleus is separated from other nuclides (that of the original beam and any other reaction products)[e] and transferred to a surface-barrier detector, which stops the nucleus. The exact location of the upcoming impact on the detector is marked; also marked are its energy and the time of the arrival.[23] The transfer takes about 10⁻⁶ seconds; in order to be detected, the nucleus must survive this long.[26] The nucleus is recorded again once its decay is registered, and the location, the energy, and the time of the decay are measured.[23]
Stability of a nucleus is provided by the strong interaction. However, its range is very short; as nuclei become larger, its influence on the outermost nucleons (protons and neutrons) weakens. At the same time, the nucleus is torn apart by electrostatic repulsion between protons, whose range is not limited.[27] The total binding energy provided by the strong interaction increases linearly with the number of nucleons, whereas electrostatic repulsion increases with the square of the atomic number; i.e., the latter grows faster and becomes increasingly important for heavy and superheavy nuclei.[28][29] Superheavy nuclei are thus theoretically predicted[30] and have so far been observed[31] to predominantly decay via decay modes that are caused by such repulsion: alpha decay and spontaneous fission.[f] Almost all alpha emitters have over 210 nucleons,[33] and the lightest nuclide primarily undergoing spontaneous fission has 238.[34] In both decay modes, nuclei are inhibited from decaying by corresponding energy barriers for each mode, but they can be tunneled through.[28][29]
Alpha particles are commonly produced in radioactive decays because the mass of an alpha particle per nucleon is small enough to leave some energy for the alpha particle to be used as kinetic energy to leave the nucleus. [ 36 ] Spontaneous fission is caused by electrostatic repulsion tearing the nucleus apart and produces various nuclei in different instances of identical nuclei fissioning. [ 29 ] As the atomic number increases, spontaneous fission rapidly becomes more important: spontaneous fission partial half-lives decrease by 23 orders of magnitude from uranium (element 92) to nobelium (element 102), [ 37 ] and by 30 orders of magnitude from thorium (element 90) to fermium (element 100). [ 38 ] The earlier liquid drop model thus suggested that spontaneous fission would occur nearly instantly due to disappearance of the fission barrier for nuclei with about 280 nucleons. [ 29 ] [ 39 ] The later nuclear shell model suggested that nuclei with about 300 nucleons would form an island of stability in which nuclei will be more resistant to spontaneous fission and will primarily undergo alpha decay with longer half-lives. [ 29 ] [ 39 ] Subsequent discoveries suggested that the predicted island might be further than originally anticipated; they also showed that nuclei intermediate between the long-lived actinides and the predicted island are deformed, and gain additional stability from shell effects. [ 40 ] Experiments on lighter superheavy nuclei, [ 41 ] as well as those closer to the expected island, [ 37 ] have shown greater than previously anticipated stability against spontaneous fission, showing the importance of shell effects on nuclei. [ g ]
Alpha decays are registered by the emitted alpha particles, and the decay products are easy to determine before the actual decay; if such a decay or a series of consecutive decays produces a known nucleus, the original product of a reaction can be easily determined. [ h ] (That all decays within a decay chain were indeed related to each other is established by the location of these decays, which must be in the same place.) [ 23 ] The known nucleus can be recognized by the specific characteristics of decay it undergoes such as decay energy (or more specifically, the kinetic energy of the emitted particle). [ i ] Spontaneous fission, however, produces various nuclei as products, so the original nuclide cannot be determined from its daughters. [ j ]
The information available to physicists aiming to synthesize a superheavy element is thus the information collected at the detectors: location, energy, and time of arrival of a particle to the detector, and those of its decay. The physicists analyze this data and seek to conclude that it was indeed caused by a new element and could not have been caused by a different nuclide than the one claimed. Often, the data provided are insufficient for the conclusion that a new element was definitely created and there is no other explanation for the observed effects; errors in interpreting data have been made.[k]
The heaviest element known at the end of the 19th century was uranium, with an atomic mass of about 240 (now known to be 238) amu . Accordingly, it was placed in the last row of the periodic table; this fueled speculation about the possible existence of elements heavier than uranium and why A = 240 seemed to be the limit. Following the discovery of the noble gases , beginning with argon in 1895, the possibility of heavier members of the group was considered. Danish chemist Julius Thomsen proposed in 1895 the existence of a sixth noble gas with Z = 86, A = 212 and a seventh with Z = 118, A = 292, the last closing a 32-element period containing thorium and uranium. [ 52 ] In 1913, Swedish physicist Johannes Rydberg extended Thomsen's extrapolation of the periodic table to include even heavier elements with atomic numbers up to 460, but he did not believe that these superheavy elements existed or occurred in nature. [ 53 ]
In 1914, German physicist Richard Swinne proposed that elements heavier than uranium, such as those around Z = 108, could be found in cosmic rays . He suggested that these elements may not necessarily have decreasing half-lives with increasing atomic number, leading to speculation about the possibility of some longer-lived elements at Z = 98–102 and Z = 108–110 (though separated by short-lived elements). Swinne published these predictions in 1926, believing that such elements might exist in Earth's core , iron meteorites , or the ice caps of Greenland where they had been locked up from their supposed cosmic origin. [ 54 ]
Work performed from 1961 to 2013 at four labs – Lawrence Berkeley National Laboratory in the US, the Joint Institute for Nuclear Research in the USSR (later Russia), the GSI Helmholtz Centre for Heavy Ion Research in Germany, and Riken in Japan – identified and confirmed the elements lawrencium to oganesson according to the criteria of the IUPAC – IUPAP Transfermium Working Groups and subsequent Joint Working Parties. These discoveries complete the seventh row of the periodic table. The next two elements, ununennium ( Z = 119) and unbinilium ( Z = 120), have not yet been synthesized. They would begin an eighth period.
Due to their short half-lives (for example, the most stable known isotope of seaborgium has a half-life of 14 minutes, and half-lives decrease with increasing atomic number) and the low yield of the nuclear reactions that produce them, new methods have had to be created to determine their gas-phase and solution chemistry based on very small samples of a few atoms each. Relativistic effects become very important in this region of the periodic table, causing the filled 7s orbitals, empty 7p orbitals, and filling 6d orbitals to all contract inward toward the atomic nucleus. This causes a relativistic stabilization of the 7s electrons and makes the 7p orbitals accessible in low excitation states. [ 7 ]
Elements 103 to 112, lawrencium to copernicium, form the 6d series of transition elements. Experimental evidence shows that elements 103–108 behave as expected for their position in the periodic table, as heavier homologs of lutetium through osmium. They are expected to have ionic radii between those of their 5d transition metal homologs and their actinide pseudohomologs: for example, Rf 4+ is calculated to have ionic radius 76 pm , between the values for Hf 4+ (71 pm) and Th 4+ (94 pm). Their ions should also be less polarizable than those of their 5d homologs. Relativistic effects are expected to reach a maximum at the end of this series, at roentgenium (element 111) and copernicium (element 112). Nevertheless, many important properties of the transactinides are still not yet known experimentally, though theoretical calculations have been performed. [ 7 ]
Elements 113 to 118, nihonium to oganesson, should form a 7p series, completing the seventh period in the periodic table. Their chemistry will be greatly influenced by the very strong relativistic stabilization of the 7s electrons and a strong spin–orbit coupling effect "tearing" the 7p subshell apart into two sections, one more stabilized (7p 1/2 , holding two electrons) and one more destabilized (7p 3/2 , holding four electrons). Lower oxidation states should be stabilized here, continuing group trends, as both the 7s and 7p 1/2 electrons exhibit the inert-pair effect . These elements are expected to largely continue to follow group trends, though with relativistic effects playing an increasingly larger role. In particular, the large 7p splitting results in an effective shell closure at flerovium (element 114) and a hence much higher than expected chemical activity for oganesson (element 118). [ 7 ]
Oganesson is the last known element. The next two elements, 119 and 120 , should form an 8s series and be an alkali and alkaline earth metal respectively. The 8s electrons are expected to be relativistically stabilized, so that the trend toward higher reactivity down these groups will reverse and the elements will behave more like their period 5 homologs, rubidium and strontium . The 7p 3/2 orbital is still relativistically destabilized, potentially giving these elements larger ionic radii and perhaps even being able to participate chemically. In this region, the 8p electrons are also relativistically stabilized, resulting in a ground-state 8s 2 8p 1 valence electron configuration for element 121 . Large changes are expected to occur in the subshell structure in going from element 120 to element 121: for example, the radius of the 5g orbitals should drop drastically, from 25 Bohr units in element 120 in the excited [Og] 5g 1 8s 1 configuration to 0.8 Bohr units in element 121 in the excited [Og] 5g 1 7d 1 8s 1 configuration, in a phenomenon called "radial collapse". Element 122 should add either a further 7d or a further 8p electron to element 121's electron configuration. Elements 121 and 122 should be similar to actinium and thorium respectively. [ 7 ]
At element 121, the superactinide series is expected to begin, when the 8s electrons and the filling 8p 1/2 , 7d 3/2 , 6f 5/2 , and 5g 7/2 subshells determine the chemistry of these elements. Complete and accurate calculations are not available for elements beyond 123 because of the extreme complexity of the situation: [ 55 ] the 5g, 6f, and 7d orbitals should have about the same energy level, and in the region of element 160 the 9s, 8p 3/2 , and 9p 1/2 orbitals should also be about equal in energy. This will cause the electron shells to mix so that the block concept no longer applies very well, and will also result in novel chemical properties that will make positioning these elements in a periodic table very difficult. [ 7 ]
It has been suggested that elements beyond Z = 126 be called beyond superheavy elements.[56] Other sources refer to elements around Z = 164 as hyperheavy elements.[57]

Source: https://en.wikipedia.org/wiki/Superheavy_element
A superhelix is a molecular structure in which a helix is itself coiled into a helix. This is significant to both proteins and genetic material, such as overwound circular DNA .
The earliest significant reference in molecular biology is from 1971, by F. B. Fuller:
A geometric invariant of a space curve , the writhing number , is defined and studied. For the central curve of a twisted cord the writhing number measures the extent to which coiling of the central curve has relieved local twisting of the cord. This study originated in response to questions that arise in the study of supercoiled double-stranded DNA rings. [ 1 ]
About the writhing number, mathematician W. F. Pohl says:
It is well known that the writhing number is a standard measure of the global geometry of a closed space curve. [ 2 ]
Contrary to intuition, a topological property, the linking number , arises from the geometric properties twist and writhe according to the following relationship:
$$Lk = T + W,$$

where $Lk$ is the linking number, $W$ is the writhe and $T$ is the twist of the coil.
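A toy numerical illustration of this bookkeeping (all values are assumed for illustration only):

```python
# Lk = T + W for a closed circular DNA molecule.
bp = 5250                # plasmid size in base pairs (assumed)
bp_per_turn = 10.5       # helical repeat of relaxed B-DNA
Lk0 = bp / bp_per_turn   # relaxed linking number: 500 turns
Lk = 475                 # a negatively supercoiled topoisomer (assumed)

T = Lk0 - 5              # suppose 5 turns are absorbed as local untwisting
W = Lk - T               # the remainder must appear as writhe
print(Lk - Lk0, T, W)    # -25.0 495.0 -20.0
```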
The linking number refers to the number of times that one strand wraps around the other. In DNA this property does not change and can only be modified by specialized enzymes called topoisomerases.

Source: https://en.wikipedia.org/wiki/Superhelix
A superheterodyne receiver , often shortened to superhet , is a type of radio receiver that uses frequency mixing to convert a received signal to a fixed intermediate frequency (IF) which can be more conveniently processed than the original carrier frequency . It was invented by French radio engineer and radio manufacturer Lucien Lévy . [ 1 ] [ unreliable source? ] Virtually all modern radio receivers use the superheterodyne principle.
Early Morse code radio broadcasts were produced using an alternator connected to a spark gap . The output signal was at a carrier frequency defined by the physical construction of the gap, modulated by the alternating current signal from the alternator. Since the output frequency of the alternator was generally in the audible range, this produces an audible amplitude modulated (AM) signal. Simple radio detectors filtered out the high-frequency carrier, leaving the modulation, which was passed on to the user's headphones as an audible signal of dots and dashes.
In 1904, Ernst Alexanderson introduced the Alexanderson alternator , a device that directly produced radio frequency output with higher power and much higher efficiency than the older spark gap systems. In contrast to the spark gap, however, the output from the alternator was a pure carrier wave at a selected frequency. When detected on existing receivers, the dots and dashes would normally be inaudible, or "supersonic". Due to the filtering effects of the receiver, these signals generally produced a click or thump, which were audible but made determining dots from dashes difficult.
In 1905, Canadian inventor Reginald Fessenden came up with the idea of using two Alexanderson alternators operating at closely spaced frequencies to broadcast two signals, instead of one. The receiver would then receive both signals, and as part of the detection process, only the beat frequency would exit the receiver. By selecting two carriers close enough that the beat frequency was audible, the resulting Morse code could once again be easily heard even in simple receivers. For instance, if the two alternators operated at frequencies 3 kHz apart, the output in the headphones would be dots or dashes of 3 kHz tone, making them easily audible.
Fessenden coined the term " heterodyne ", meaning "generated by a difference" (in frequency), to describe this system. The word is derived from the Greek roots hetero- "different", and -dyne "power".
Morse code was widely used in the early days of radio because it was both easy to produce and easy to receive. In contrast to voice broadcasts, the output of the amplifier didn't have to closely match the modulation of the original signal. As a result, any number of simple amplification systems could be used. One method used an interesting side-effect of early triode amplifier tubes. If both the plate (anode) and grid were connected to resonant circuits tuned to the same frequency and the stage gain was much higher than unity , stray capacitive coupling between the grid and the plate would cause the amplifier to go into oscillation.
In 1913, Edwin Howard Armstrong described a receiver system that used this effect to produce audible Morse code output using a single triode. The output of the amplifier taken at the anode was connected back to the input through a "tickler", causing feedback that drove input signals well beyond unity. This caused the output to oscillate at a chosen frequency with great amplification. When the original signal cut off at the end of the dot or dash, the oscillation decayed and the sound disappeared after a short delay.
Armstrong referred to this concept as a regenerative receiver , and it immediately became one of the most widely used systems of its era. Many radio systems of the 1920s were based on the regenerative principle, and it continued to be used in specialized roles into the 1940s, for instance in the IFF Mark II .
There was one role where the regenerative system was not suitable, even for Morse code sources, and that was the task of radio direction finding , RDF.
The regenerative system was highly non-linear, amplifying any signal above a certain threshold by a huge amount, sometimes so large it caused it to turn into a transmitter (which was the entire basis of the original IFF system ). In RDF, the strength of the signal is used to determine the location of the transmitter, so one requires linear amplification to allow the strength of the original signal, often very weak, to be accurately measured.
To address this need, RDF systems of the era used triodes operating below unity gain. To get a usable signal from such a system, tens or even hundreds of triodes had to be used, connected together anode-to-grid. These amplifiers drew enormous amounts of power and required a team of maintenance engineers to keep them running. Nevertheless, the strategic value of direction finding on weak signals was so high that the British Admiralty felt the high cost was justified.
Although a number of researchers discovered the superheterodyne concept, filing patents only months apart, American engineer Edwin Armstrong is often credited with the concept. He came across it while considering better ways to produce RDF receivers. He had concluded that moving to higher "short wave" frequencies would make RDF more useful and was looking for practical means to build a linear amplifier for these signals. At the time, short wave was anything above about 500 kHz, beyond any existing amplifier's capabilities.
It had been noticed that when a regenerative receiver went into oscillation, other nearby receivers would start picking up other stations as well. Armstrong (and others) eventually deduced that this was caused by a "supersonic heterodyne" between the station's carrier frequency and the regenerative receiver's oscillation frequency. When the first receiver began to oscillate at high outputs, its signal would flow back out through the antenna to be received on any nearby receiver. On that receiver, the two signals mixed just as they did in the original heterodyne concept, producing an output that is the difference in frequency between the two signals.
For instance, consider a lone receiver that was tuned to a station at 300 kHz. If a second receiver is set up nearby and set to 400 kHz with high gain, it will begin to give off a 400 kHz signal that will be received in the first receiver. In that receiver, the two signals will mix to produce four outputs, one at the original 300 kHz, another at the received 400 kHz, and two more, the difference at 100 kHz and the sum at 700 kHz. This is the same effect that Fessenden had proposed, but in his system the two frequencies were deliberately chosen so the beat frequency was audible. In this case, all of the frequencies are well beyond the audible range, and thus "supersonic", giving rise to the name superheterodyne.
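The arithmetic of the mixing products is easy to demonstrate. A minimal sketch: multiplying a 300-unit carrier by a 400-unit local signal yields only the difference (100) and sum (700) components, while a practical mixer also leaks the two original frequencies, giving the four outputs described above. (Frequencies are scaled to Hz for the simulation; all names are ours.)

```python
import numpy as np

fs = 4000                                         # sample rate
t = np.arange(fs) / fs                            # one second of samples
product = np.sin(2 * np.pi * 300 * t) * np.sin(2 * np.pi * 400 * t)

spectrum = np.abs(np.fft.rfft(product)) * 2 / fs  # amplitude spectrum, 1 Hz bins
print(np.flatnonzero(spectrum > 0.4))             # [100 700] -- difference and sum
```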
Armstrong realized that this effect was a potential solution to the "short wave" amplification problem, as the "difference" output still retained its original modulation, but on a lower carrier frequency. In the example above, one can amplify the 100 kHz beat signal and retrieve the original information from it; the receiver does not have to tune in the higher 300 kHz original carrier. By selecting an appropriate set of frequencies, even very high-frequency signals could be "reduced" to a frequency that could be amplified by existing systems.
For instance, to receive a signal at 1500 kHz, far beyond the range of efficient amplification at the time, one could set up an oscillator at, for example, 1560 kHz. Armstrong referred to this as the " local oscillator " or LO. As its signal was being fed into a second receiver in the same device, it did not have to be powerful, generating only enough signal to be roughly similar in strength to that of the received station, although in practice LOs tend to be relatively strong signals. [ citation needed ] When the signal from the LO mixes with the station's, one of the outputs will be the heterodyne difference frequency, in this case, 60 kHz. He termed this resulting difference the " intermediate frequency " often abbreviated to "IF".
In December 1919, Major E. H. Armstrong gave publicity to an indirect method of obtaining short-wave amplification, called the super-heterodyne. The idea is to reduce the incoming frequency, which may be, for example 1,500,000 cycles (200 meters), to some suitable super-audible frequency that can be amplified efficiently, then passing this current through an intermediate frequency amplifier, and finally rectifying and carrying on to one or two stages of audio frequency amplification. [ 2 ]
The "trick" to the superheterodyne is that by changing the LO frequency you can tune in different stations. For instance, to receive a signal at 1300 kHz, one could tune the LO to 1360 kHz, resulting in the same 60 kHz IF. This means the amplifier section can be tuned to operate at a single frequency, the design IF, which is much easier to do efficiently.
Armstrong put his ideas into practice, and the technique was soon adopted by the military. It was less popular when commercial radio broadcasting began in the 1920s, mostly due to the need for an extra tube (for the oscillator), the generally higher cost of the receiver, and the level of skill required to operate it. For early domestic radios, tuned radio frequency receivers (TRF) were more popular because they were cheaper, easier for a non-technical owner to use, and less costly to operate. Armstrong eventually sold his superheterodyne patent to Westinghouse , which then sold it to Radio Corporation of America (RCA) , the latter monopolizing the market for superheterodyne receivers until 1930. [ 4 ]
Because the original motivation for the superhet was the difficulty of using the triode amplifier at high frequencies, there was an advantage in using a lower intermediate frequency. During this era, many receivers used an IF frequency of only 30 kHz. [ 5 ] These low IF frequencies, often using IF transformers based on the self-resonance of iron-core transformers , had poor image frequency rejection, but overcame the difficulty in using triodes at radio frequencies in a manner that competed favorably with the less robust neutrodyne TRF receiver. Higher IF frequencies (455 kHz was a common standard) came into use in later years, after the invention of the tetrode and pentode as amplifying tubes, largely solving the problem of image rejection. Even later, however, low IF frequencies (typically 60 kHz) were again used in the second (or third) IF stage of double or triple-conversion communications receivers to take advantage of the selectivity more easily achieved at lower IF frequencies, with image-rejection accomplished in the earlier IF stage(s) which were at a higher IF frequency.
In the 1920s, at these low frequencies, commercial IF filters looked very similar to 1920s audio interstage coupling transformers, had similar construction, and were wired up in an almost identical manner, so they were referred to as "IF transformers". By the mid-1930s, superheterodynes using much higher intermediate frequencies (typically around 440–470 kHz) used tuned transformers more similar to other RF applications. The name "IF transformer" was retained, however, now meaning "intermediate frequency". Modern receivers typically use a mixture of ceramic resonators or surface acoustic wave resonators and traditional tuned-inductor IF transformers.
By the 1930s, improvements in vacuum tube technology rapidly eroded the TRF receiver's cost advantages, and the explosion in the number of broadcasting stations created a demand for cheaper, higher-performance receivers.
Vacuum tubes with an additional grid were introduced before the more modern screen-grid tetrode; the earliest of these was a tetrode with two control grids , which combined the mixer and oscillator functions and was first used in the so-called autodyne mixer. This was rapidly followed by the introduction of tubes specifically designed for superheterodyne operation, most notably the pentagrid converter . By reducing the tube count (with each tube stage being the main factor affecting cost in this era), this further reduced the advantage of TRF and regenerative receiver designs.
By the mid-1930s, commercial production of TRF receivers was largely replaced by superheterodyne receivers. By the 1940s, the vacuum-tube superheterodyne AM broadcast receiver was refined into a cheap-to-manufacture design called the " All American Five " because it used five vacuum tubes: usually a converter (mixer/local oscillator), an IF amplifier, a detector/audio amplifier, audio power amplifier, and a rectifier. Since then, the superheterodyne design has been used for almost all commercial radio and TV receivers.
French engineer Lucien Lévy filed a patent application for the superheterodyne principle in August 1917 with brevet n° 493660. [ 6 ] Armstrong also filed his patent in 1917. [ 7 ] [ 8 ] [ 9 ] Levy filed his original disclosure about seven months before Armstrong's. [ 1 ] German inventor Walter H. Schottky also filed a patent in 1918. [ 6 ]
At first the US recognised Armstrong as the inventor, and his US Patent 1,342,885 was issued on 8 June 1920. [ 1 ] After various changes and court hearings Lévy was awarded US patent No 1,734,938 that included seven of the nine claims in Armstrong's application, while the two remaining claims were granted to Alexanderson of GE and Kendall of AT&T. [ 1 ]
The antenna collects the radio signal. The tuned RF stage with optional RF amplifier provides some initial selectivity; it is necessary to suppress the image frequency , and may also serve to prevent strong out-of-passband signals from saturating the initial amplifier. A local oscillator provides the mixing frequency; it is usually a variable frequency oscillator which is used to tune the receiver to different stations. The frequency mixer does the actual heterodyning that gives the superheterodyne its name; it changes the incoming radio frequency signal to a higher or lower, fixed, intermediate frequency (IF). The IF band-pass filter and amplifier supply most of the gain and the narrowband filtering for the radio. The demodulator extracts the audio or other modulation from the IF radio frequency. The extracted signal is then amplified by the audio amplifier.
To receive a radio signal, a suitable antenna is required. The output of the antenna may be very small, often only a few microvolts . The signal from the antenna is tuned and may be amplified in a so-called radio frequency (RF) amplifier, although this stage is often omitted. One or more tuned circuits at this stage block frequencies that are far removed from the intended reception frequency. To tune the receiver to a particular station, the frequency of the local oscillator is controlled by the tuning knob (for instance). Tuning of the local oscillator and the RF stage may use a variable capacitor , or varicap diode . [ 11 ] The tuning of one (or more) tuned circuits in the RF stage must track the tuning of the local oscillator.
The signal is then fed into a circuit where it is mixed with a sine wave from a variable frequency oscillator known as the local oscillator (LO). The mixer uses a non-linear component to produce both sum and difference beat frequency signals, [ 12 ] each one containing the modulation in the desired signal. The output of the mixer may include the original RF signal at f RF , the local oscillator signal at f LO , and the two new heterodyne frequencies f RF + f LO and f RF − f LO . The mixer may inadvertently produce additional frequencies such as third- and higher-order intermodulation products. Ideally, the IF bandpass filter removes all but the desired IF signal at f IF . The IF signal contains the original modulation (transmitted information) that the received radio signal had at f RF .
The frequency of the local oscillator f LO is set so the desired reception radio frequency f RF mixes to f IF . There are two choices for the local oscillator frequency because of the correspondence between positive and negative frequencies. If the local oscillator frequency is less than the desired reception frequency, it is called low-side injection ( f IF = f RF − f LO ); if the local oscillator is higher, then it is called high-side injection ( f IF = f LO − f RF ).
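Both injection cases can be captured in a small helper; the frequencies used in the assertions are hypothetical example values chosen to hit the common 455 kHz IF, not figures from the text.

```python
# Both injection cases reduce to the magnitude of the difference frequency.
def intermediate_frequency(f_rf: float, f_lo: float) -> float:
    """IF produced by mixing f_RF with f_LO (kHz in, kHz out)."""
    return abs(f_rf - f_lo)

assert intermediate_frequency(1000.0, 545.0) == 455.0   # low-side injection
assert intermediate_frequency(1000.0, 1455.0) == 455.0  # high-side injection
```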
The mixer will process not only the desired input signal at f RF , but also all signals present at its inputs. There will be many mixer products (heterodynes). Most other signals produced by the mixer (such as due to stations at nearby frequencies) can be filtered out in the IF tuned amplifier ; that gives the superheterodyne receiver its superior performance. However, if f LO is set to f RF + f IF , then an incoming radio signal at f LO + f IF will also produce a heterodyne at f IF ; the frequency f LO + f IF is called the image frequency and must be rejected by the tuned circuits in the RF stage. The image frequency is 2 f IF higher (or lower) than the desired frequency f RF , so employing a higher IF frequency f IF increases the receiver's image rejection without requiring additional selectivity in the RF stage.
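A one-line helper makes the image relationship concrete; the assertion reuses the 580 kHz station / 455 kHz IF broadcast example worked later in this article.

```python
# The image lies 2*f_IF away from the wanted frequency, on the LO side.
def image_frequency(f_rf: float, f_if: float, high_side: bool = True) -> float:
    return f_rf + 2 * f_if if high_side else f_rf - 2 * f_if

# Matches the AM broadcast example worked later in the article:
assert image_frequency(580.0, 455.0) == 1490.0
```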
To suppress the unwanted image, the tuning of the RF stage and the LO may need to "track" each other. In some cases, a narrow-band receiver can have a fixed tuned RF amplifier. In that case, only the local oscillator frequency is changed. In most cases, a receiver's input band is wider than its IF center frequency. For example, a typical AM broadcast band receiver covers 510 kHz to 1655 kHz (a roughly 1160 kHz input band) with a 455 kHz IF frequency; an FM broadcast band receiver covers 88 MHz to 108 MHz band with a 10.7 MHz IF frequency. In that situation, the RF amplifier must be tuned so the IF amplifier does not see two stations at the same time. If the AM broadcast band receiver LO were set at 1200 kHz, it would see stations at both 745 kHz (1200−455 kHz) and 1655 kHz. Consequently, the RF stage must be designed so that any stations that are twice the IF frequency away are significantly attenuated. The tracking can be done with a multi-section variable capacitor or some varactors driven by a common control voltage. An RF amplifier may have tuned circuits at both its input and its output, so three or more tuned circuits may be tracked. In practice, the RF and LO frequencies need to track closely but not perfectly. [ 13 ] [ 14 ]
In the days of tube (valve) electronics, it was common for superheterodyne receivers to combine the functions of the local oscillator and the mixer in a single tube, leading to a savings in power, size, and especially cost. A single pentagrid converter tube would oscillate and also provide signal amplification as well as frequency mixing. [ 15 ]
The mixer tube or transistor is sometimes called the first detector , while the demodulator that extracts the modulation from the IF signal is called the second detector . [ 16 ] In a dual-conversion superhet there are two mixers, so the demodulator is called the third detector .
The stages of an intermediate frequency amplifier ("IF amplifier" or "IF strip") are tuned to a fixed frequency that does not change as the receiving frequency changes. The fixed frequency simplifies optimization of the IF amplifier. [ 10 ] The IF amplifier is selective around its center frequency f IF . The fixed center frequency allows the stages of the IF amplifier to be carefully tuned for best performance (this tuning is called "aligning" the IF amplifier). If the center frequency changed with the receiving frequency, then the IF stages would have to track their tuning. That is not the case with the superheterodyne.
Normally, the IF center frequency f IF is chosen to be less than the range of desired reception frequencies f RF . That is because it is easier and less expensive to get high selectivity at a lower frequency using tuned circuits. The bandwidth of a tuned circuit with a certain Q is proportional to the frequency itself (and what's more, a higher Q is achievable at lower frequencies), so fewer IF filter stages are required to achieve the same selectivity. Also, it is easier and less expensive to get high gain at lower frequencies.
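A quick sketch of the bandwidth-versus-frequency relation for a fixed Q; the Q value of 50 is a hypothetical choice for illustration.

```python
# Bandwidth of a tuned circuit scales with its center frequency at fixed Q.
def bandwidth(f_center_khz: float, q: float) -> float:
    return f_center_khz / q

print(bandwidth(455.0, 50))     # ~9.1 kHz at a 455 kHz IF
print(bandwidth(10_700.0, 50))  # ~214 kHz at a 10.7 MHz IF
```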
However, in many modern receivers designed for reception over a wide frequency range (e.g. scanners and spectrum analyzers) a first IF frequency higher than the reception frequency is employed in a double conversion configuration. For instance, the Rohde & Schwarz EK-070 VLF/HF receiver covers 10 kHz to 30 MHz. [ 14 ] It has a band switched RF filter and mixes the input to a first IF of 81.4 MHz and a second IF frequency of 1.4 MHz. The first LO frequency is 81.41 to 111.4 MHz, a reasonable range for an oscillator. But if the original RF range of the receiver were to be converted directly to the 1.4 MHz intermediate frequency, the LO frequency would need to cover 1.41–31.4 MHz, which cannot be accomplished using tuned circuits (a variable capacitor with a fixed inductor would need a capacitance range of 500:1). Image rejection is never an issue with such a high first IF frequency. The first IF stage uses a crystal filter with a 12 kHz bandwidth. There is a second frequency conversion (making this a double-conversion receiver) that mixes the 81.4 MHz first IF with 80 MHz to create a 1.4 MHz second IF. Image rejection for the second IF is not an issue, as the first IF has a bandwidth of much less than 2.8 MHz.
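The quoted EK-070 frequency plan can be checked with a few lines of arithmetic; this is only a numerical check of the figures above, not a description of the actual hardware.

```python
# Arithmetic check of the EK-070 frequency plan quoted above (values in MHz).
F_IF1, F_IF2 = 81.4, 1.4

for f_rf in (0.01, 30.0):  # receiver tuning range, 10 kHz to 30 MHz
    print(f"RF {f_rf} MHz -> first LO {f_rf + F_IF1:.2f} MHz")  # 81.41 ... 111.40

print(round(F_IF1 - F_IF2, 1))  # fixed 80.0 MHz second LO
```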
To avoid interference to receivers, licensing authorities will avoid assigning common IF frequencies to transmitting stations. Standard intermediate frequencies used are 455 kHz for medium-wave AM radio, 10.7 MHz for broadcast FM receivers, 38.9 MHz (Europe) or 45 MHz (US) for television, and 70 MHz for satellite and terrestrial microwave equipment. To avoid tooling costs associated with these components, most manufacturers then tended to design their receivers around a fixed range of frequencies offered, which resulted in a worldwide de facto standardization of intermediate frequencies.
In early superhets, the IF stage was often a regenerative stage providing the sensitivity and selectivity with fewer components. Such superhets were called super-gainers or regenerodynes. [ 17 ] This is also called a Q multiplier , involving a small modification to an existing receiver especially for the purpose of increasing selectivity.
The IF stage includes a filter and/or multiple tuned circuits to achieve the desired selectivity . This filtering must have a band pass equal to or less than the frequency spacing between adjacent broadcast channels. Ideally a filter would have a high attenuation to adjacent channels, but maintain a flat response across the desired signal spectrum in order to retain the quality of the received signal. This may be obtained using one or more dual tuned IF transformers, a quartz crystal filter , or a multipole ceramic crystal filter . [ 18 ]
In the case of television receivers, no other technique was able to produce the precise bandpass characteristic needed for vestigial sideband reception, such as that used in the NTSC system first approved by the US in 1941. By the 1980s, multi-component capacitor-inductor filters had been replaced with precision electromechanical surface acoustic wave (SAW) filters . Fabricated by precision laser milling techniques, SAW filters are cheaper to produce, can be made to extremely close tolerances, and are very stable in operation.
The received signal is now processed by the demodulator stage where the audio signal (or other baseband signal) is recovered and then further amplified. AM demodulation requires envelope detection , which can be achieved by means of rectification and a low-pass filter (which can be as simple as an RC circuit ) to remove remnants of the intermediate frequency. [ 19 ] FM signals may be detected using a discriminator, ratio detector , or phase-locked loop . Continuous wave and single sideband signals require a product detector using a so-called beat frequency oscillator , and there are other techniques used for different types of modulation . [ 20 ] The resulting audio signal (for instance) is then amplified and drives a loudspeaker.
When so-called high-side injection has been used, where the local oscillator is at a higher frequency than the received signal (as is common), then the frequency spectrum of the original signal will be reversed. This must be taken into account by the demodulator (and in the IF filtering) in the case of certain types of modulation such as single sideband .
To overcome obstacles such as image response , some receivers use multiple successive stages of frequency conversion and multiple IFs of different values. A receiver with two frequency conversions and IFs is called a dual conversion superheterodyne , and one with three IFs is called a triple conversion superheterodyne .
The main reason that this is done is that with a single IF there is a tradeoff between low image response and selectivity. The separation between the received frequency and the image frequency is equal to twice the IF frequency, so the higher the IF, the easier it is to design an RF filter to remove the image frequency from the input and achieve low image response . However, the higher the IF, the more difficult it is to achieve high selectivity in the IF filter. At shortwave frequencies and above, the difficulty in obtaining sufficient selectivity in the tuning with the high IFs needed for low image response impacts performance. To solve this problem two IF frequencies can be used, first converting the input frequency to a high IF to achieve low image response, and then converting this frequency to a low IF to achieve good selectivity in the second IF filter. To improve tuning, a third IF can be used.
For example, for a receiver that can tune from 500 kHz to 30 MHz, three frequency converters might be used. [ 10 ] With a 455 kHz IF it is easy to get adequate front end selectivity with broadcast band (under 1600 kHz) signals. For example, if the station being received is on 600 kHz, the local oscillator can be set to 1055 kHz, giving the 455 kHz IF (1055 − 600 = 455 kHz). But a station on 1510 kHz would also produce a 455 kHz output at the mixer (1510 − 1055 = 455 kHz) and so could cause image interference. However, because 600 kHz and 1510 kHz are so far apart, it is easy to design the front end tuning to reject the 1510 kHz frequency.
However, at 30 MHz things are different. The oscillator would be set to 30.455 MHz to produce a 455 kHz IF, but a station on 30.910 MHz would also produce a 455 kHz beat, so both stations would be heard at the same time. It is virtually impossible to design an RF tuned circuit that can adequately discriminate between 30 MHz and 30.91 MHz, so one approach is to "bulk downconvert" whole sections of the shortwave bands to a lower frequency, where adequate front-end tuning is easier to arrange.
For example, the ranges 29 MHz to 30 MHz, 28 MHz to 29 MHz, etc. might be converted down to 2 MHz to 3 MHz, where they can be tuned more conveniently. This is often done by first converting each "block" up to a higher frequency (typically 40 MHz) and then using a second mixer to convert it down to the 2 MHz to 3 MHz range. The 2 MHz to 3 MHz "IF" is basically another self-contained superheterodyne receiver, most likely with a standard IF of 455 kHz.
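A hedged sketch of such a triple-conversion frequency plan; the specific LO values below are assumptions chosen to map the 29–30 MHz block as described, not figures from the text.

```python
# Hypothetical block-conversion plan (values in MHz) illustrating the scheme
# above; the LO choices are assumptions for the sketch only.
def frequency_plan(f_rf):
    lo1 = 11.0           # fixed per-band LO: maps the 29-30 MHz block to 40-41 MHz
    f_if1 = f_rf + lo1   # up-converted first "IF" near 40 MHz
    lo2 = 38.0           # fixed second LO: 40-41 MHz down to 2-3 MHz
    f_if2 = f_if1 - lo2  # tunable 2-3 MHz range
    f_if3 = 0.455        # standard final IF
    lo3 = f_if2 + f_if3  # tunable high-side third LO
    return f_if1, f_if2, lo3

print(frequency_plan(29.5))  # approximately (40.5, 2.5, 2.955)
```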
Microprocessor technology allows replacing the superheterodyne receiver design by a software-defined radio architecture, where the IF processing after the initial IF filter is implemented in software. This technique is already in use in certain designs, such as very low-cost FM radios incorporated into mobile phones, since the system already has the necessary microprocessor .
Radio transmitters may also use a mixer stage to produce an output frequency, working more or less as the reverse of a superheterodyne receiver.
Superheterodyne receivers have essentially replaced all previous receiver designs. The development of modern semiconductor electronics negated the advantages of designs (such as the regenerative receiver ) that used fewer vacuum tubes. The superheterodyne receiver offers superior sensitivity, frequency stability and selectivity. Compared with the tuned radio frequency receiver (TRF) design, superhets offer better stability because a tuneable oscillator is more easily realized than a tuneable amplifier. Operating at a lower frequency, IF filters can give narrower passbands at the same Q factor than an equivalent RF filter can. A fixed IF also allows the use of a crystal filter [ 10 ] or similar technologies that cannot be tuned. Regenerative and super-regenerative receivers offered high sensitivity, but often suffered from stability problems that made them difficult to operate.
Although the advantages of the superhet design are overwhelming, there are a few drawbacks that need to be tackled in practice.
One major disadvantage to the superheterodyne receiver is the problem of image frequency . In heterodyne receivers, an image frequency is an undesired input frequency equal to the station frequency plus (or minus) twice the intermediate frequency. The image frequency results in two stations being received at the same time, thus producing interference. Reception at the image frequency can be combated through tuning (filtering) at the antenna and RF stage of the superheterodyne receiver.
For example, an AM broadcast station at 580 kHz is tuned on a receiver with a 455 kHz IF. The local oscillator is tuned to 580 + 455 = 1035 kHz. But a signal at 580 + 455 + 455 = 1490 kHz is also 455 kHz away from the local oscillator; so both the desired signal and the image, when mixed with the local oscillator, will appear at the intermediate frequency. This image frequency is within the AM broadcast band. Practical receivers have a tuning stage before the converter, to greatly reduce the amplitude of image frequency signals; additionally, broadcasting stations in the same area have their frequencies assigned to avoid such images [ citation needed ] .
The unwanted frequency is called the image of the wanted frequency, because it is the "mirror image" of the desired frequency reflected about f L O {\displaystyle f_{LO}\!} . A receiver with inadequate filtering at its input will pick up signals at two different frequencies simultaneously: the desired frequency and the image frequency. A transmission that happens to be at the image frequency can interfere with reception of the desired signal, and noise (static) around the image frequency can decrease the receiver's signal-to-noise ratio (SNR) by up to 3 dB.
Early autodyne receivers typically used IFs of only 150 kHz or so. As a consequence, most autodyne receivers required greater front-end selectivity, often involving double-tuned coils, to avoid image interference. With the later development of tubes able to amplify well at higher frequencies, higher IF frequencies came into use, reducing the problem of image interference. Typical consumer radio receivers have only a single tuned circuit in the RF stage.
Sensitivity to the image frequency can be minimized only by (1) a filter that precedes the mixer or (2) a more complex mixer circuit [ 21 ] to suppress the image; this is rarely used. In most tunable receivers using a single IF frequency, the RF stage includes at least one tuned circuit in the RF front end whose tuning is performed in tandem with the local oscillator. In double (or triple) conversion receivers in which the first conversion uses a fixed local oscillator, this may rather be a fixed bandpass filter which accommodates the frequency range being mapped to the first IF frequency range.
Image rejection is an important factor in choosing the intermediate frequency of a receiver. The farther apart the bandpass frequency and the image frequency are, the more the bandpass filter will attenuate any interfering image signal. Since the frequency separation between the bandpass and the image frequency is 2 f I F {\displaystyle 2f_{\mathrm {IF} }\!} , a higher intermediate frequency improves image rejection. It may be possible to use a high enough first IF that a fixed-tuned RF stage can reject any image signals.
The ability of a receiver to reject interfering signals at the image frequency is measured by the image rejection ratio . This is the ratio (in decibels ) of the output of the receiver from a signal at the received frequency, to its output for an equal-strength signal at the image frequency.
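A minimal expression of this definition; the output values in the example are hypothetical, chosen only to show the decibel conversion.

```python
import math

# Image rejection ratio: receiver output for a wanted signal versus its output
# for an equal-strength signal at the image frequency, expressed in dB.
def image_rejection_db(p_out_desired: float, p_out_image: float) -> float:
    return 10 * math.log10(p_out_desired / p_out_image)

print(image_rejection_db(1.0, 1e-6))  # 60 dB for a millionfold power ratio
```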
It can be difficult to keep stray radiation from the local oscillator below the level that a nearby receiver can detect. If the receiver's local oscillator can reach the antenna it will act as a low-power CW transmitter. Consequently, what is meant to be a receiver can itself create radio interference.
In intelligence operations, local oscillator radiation gives a means to detect a covert receiver and its operating frequency. The method was used by MI5 during Operation RAFTER . [ 22 ] This same technique is also used in radar detector detectors used by traffic police in jurisdictions where radar detectors are illegal.
Local oscillator radiation is most prominent in receivers in which the antenna signal is connected directly to the mixer (which itself receives the local oscillator signal), rather than in receivers that use an RF amplifier stage in between. Thus it is more of a problem with inexpensive receivers and with receivers at frequencies so high (especially microwave) that RF amplifying stages are difficult to implement.
Local oscillators typically generate a single frequency signal that has negligible amplitude modulation but some random phase modulation which spreads some of the signal's energy into sideband frequencies. That causes a corresponding widening of the receiver's frequency response [ dubious – discuss ] , which would defeat the aim of making a very narrow bandwidth receiver, such as one intended to receive low-rate digital signals. Care needs to be taken to minimize oscillator phase noise, usually by ensuring [ dubious – discuss ] that the oscillator never enters a non-linear mode. | https://en.wikipedia.org/wiki/Superheterodyne_receiver |
Superhydrophilicity refers to the phenomenon of excess hydrophilicity , or attraction to water; in superhydrophilic materials, the contact angle of water is equal to zero degrees. This effect was discovered in 1995 by the Research Institute of Toto Ltd. for titanium dioxide irradiated by sunlight . Under light irradiation, water dropped onto titanium dioxide forms no contact angle (almost 0 degrees). [ 1 ]
Superhydrophilic material has various advantages. For example, it can defog glass, and it can also enable oil spots to be swept away easily with water. Such materials are already commercialized as door mirrors for cars, coatings for buildings, self-cleaning glass , etc. [ citation needed ]
Several mechanisms of this superhydrophilicity have been proposed by researchers. [ citation needed ] One is the change of the surface structure to a metastable structure, and another is cleaning the surface by the photodecomposition of dirt such as organic compounds adsorbed on the surface, after either of which water molecules can adsorb to the surface. The mechanism is still controversial, and it is too soon to decide which suggestion is correct. To decide, atomic scale measurements and other studies will be necessary. [ citation needed ] | https://en.wikipedia.org/wiki/Superhydrophilicity |
A superhydrophobic coating is a thin surface layer that repels water. It is made from superhydrophobic (also known as ultrahydrophobic ) materials, and typically causes an almost imperceptibly thin layer of air to form on top of a surface. Droplets hitting this kind of coating can fully rebound. [ 1 ] [ 2 ] Generally speaking, superhydrophobic coatings are made from composite materials where one component provides the roughness and the other provides low surface energy . [ 3 ]
Superhydrophobic coatings are also found in nature; they appear on plant leaves, such as the lotus leaf , and some insect wings. [ 4 ]
Superhydrophobic coatings can be made from many different materials. The following are known possible bases for the coating:
The silica -based coatings are perhaps the most cost effective to use. [ 12 ] They are gel-based and can be easily applied either by dipping the object into the gel or via aerosol spray. In contrast, the oxide polystyrene composites are more durable than the gel-based coatings; however, the process of applying the coating is much more involved and costly. Carbon nanotubes are also expensive and difficult to produce with current technology. Thus, the silica-based gels remain the most economically viable option at present.
As well, surfaces can be made hydrophobic without the use of a coating by altering their surface microscopic contours. The basis of hydrophobicity is the creation of recessed areas on a surface whose wetting expends more energy than bridging the recesses does. This relies on delicate micro- and nanoscale structures for water repellence, accomplished using microstructures (or hairs) similar to those of a lily pad, coated with some hydrophobic material, which greatly increases the contact angle and makes water roll off. This so-called lotus effect surface (a Cassie–Baxter wetting state, in which the droplet bridges the recesses rather than wetting them) has less contact area by an amount proportional to the recessed area, giving it a high contact angle . The recessed surface has a proportionately diminished attraction to foreign liquids or solids and permanently stays cleaner.
These microstructures however, are easily damaged by abrasion or cleaning: with some friction, a lotus leaf will no longer be superhydrophobic. Unlike a lotus leaf which can heal and grow new hairs, an inert coating will not regenerate. [ 13 ]
Durable water repellent is a type of coating applied to fabrics to protect them from water.
In addition, superhydrophobic coatings have potential uses in vehicle windshields to prevent rain droplets from clinging to the glass, to improve driving visibility. Rain repellent sprays are commercially available for car windshields. [ 14 ] [ 15 ]
Due to their fragility, superhydrophobic coatings can find usage in sealed environments which are not exposed to wear or cleaning, such as electronic components (like the inside of smartphones ) and air conditioning heat transfer fins, to protect from moisture and prevent corrosion. [ 16 ]
In industry, super-hydrophobic coatings are used in ultra-dry surface applications. The coating can be sprayed onto objects to make them waterproof. The spray is anti-corrosive and anti-icing; has cleaning capabilities; and can be used to protect circuits and grids.
Superhydrophobic coatings have important applications in the maritime industry . They can yield skin friction drag reduction for ship hulls, thus increasing fuel efficiency. Such a coating would allow ships to increase their speed or range while reducing fuel costs. They can also reduce corrosion and prevent marine organisms from growing on a ship's hull . [ 17 ]
Furthermore, superhydrophobic coatings can make removal of salt deposits possible without using fresh water. This has the ability to aid harvesting minerals from seawater brine . [ 18 ]
Newer engineered surface textures on stainless steel are extremely durable and permanently hydrophobic. Optically these surfaces appear as a uniform matte surface but microscopically they consist of rounded depressions one to two microns deep over 25% to 50% of the surface. These surfaces are produced for buildings which will never need cleaning. [ 19 ] These have been effectively used for roofs and curtain walls of structures that benefit from low or no maintenance. [ 19 ]
Due to the extreme repellence and in some cases bacterial resistance of hydrophobic coatings, there is much enthusiasm [ from whom? ] for their wide potential uses with surgical tools, medical equipment, textiles, and all sorts of surfaces and substrates. However, the current state of the art is limited by the weak durability of the coatings, making them unsuitable for most applications.
Instead of using fluorine atoms for repellence like many successful hydrophobic penetrating sealers (which are not superhydrophobic), superhydrophobic products are coated with micro- and nano-sized surface structures which have super-repellent properties. These tiny structures are by their nature very delicate and easily damaged by wear, cleaning or any sort of friction; if the structure is damaged even slightly it loses its superhydrophobic properties. [ citation needed ]
Due to the fragility of certain coatings, objects subject to constant friction, like boat hulls, would require constant re-application of such a coating to maintain a high degree of performance.
Despite the many applications of superhydrophobic coatings, safety for the environment and for workers can be potential issues. [ citation needed ] The International Maritime Organization has many regulations and policies about keeping water safe from potentially dangerous additives. [ citation needed ]
Unless advancements resolve the weaknesses identified above, the applications of these coatings remain limited. | https://en.wikipedia.org/wiki/Superhydrophobic_coating |
Superinsulation is an approach to building design, construction, and retrofitting that dramatically reduces heat loss (and gain) by using much higher insulation levels and airtightness than average. Superinsulation is one of the ancestors of the passive house approach.
There is no universally agreed definition of superinsulation, but superinsulated buildings typically include:
Nisson & Dutt (1985) suggest that a house might be described as "superinsulated" if the cost of space heating is lower than that of water heating. [ 1 ]
Besides the meaning mentioned above of high level of insulation, the terms superinsulation and superinsulating materials are in use for high R/inch insulation materials like vacuum insulation panels (VIPs) and aerogel . [ 2 ]
A superinsulated house is intended to reduce heating needs significantly and may even be heated predominantly by intrinsic heat sources (waste heat generated by appliances and the body heat of the occupants) with small amounts of backup heat. This has been demonstrated to work even in frigid climates but requires close attention to construction details in addition to the insulation (see IEA Solar Heating & Cooling Implementing Agreement Task 13 ).
The term "superinsulation" was coined by Wayne Schick at the University of Illinois Urbana–Champaign . In 1976 he was part of a team that developed a design called the "Lo-Cal" house, using computer simulations based on the climate of Madison, Wisconsin . Several houses, duplexes and condominiums based on Lo-Cal principles were built in Champaign–Urbana in the 1970s. [ 3 ] [ 4 ]
In 1977 the "Saskatchewan House" [ 5 ] was built in Regina, Saskatchewan , by a group of Canadian government agencies. It was the first house to demonstrate the value of superinsulation publicly and generated much attention. It originally included some experimental evacuated-tube solar panels, but they were not needed and were later removed. The house was heated primarily by waste heat from appliances and the occupants. [ 4 ] [ 6 ] In 1977 the "Leger House" was built by Eugene Leger, in East Pepperell, Massachusetts . It had a more conventional appearance than the "Saskatchewan House", and also received extensive publicity. [ 4 ] Publicity from the "Saskatchewan House" and the "Leger House" influenced other builders, and many superinsulated houses were built over the next few years. These houses also influenced Wolfgang Feist's development of the Passivhaus standard . [ 4 ]
It is possible, and increasingly desirable, to retrofit superinsulation to existing houses or buildings. The easiest way is often to add layers of continuous rigid exterior insulation, [ 7 ] and sometimes to build new exterior walls that allow more space for insulation. A vapor barrier can be installed outside the original framing but may not be needed. An improved continuous air barrier is almost always worth adding, as older homes tend to be drafty, and such an air barrier can be significant for energy savings and durability. Care should be exercised when adding a vapor barrier as it can reduce drying of incidental moisture or even cause summer (in climates with humid summers) interstitial condensation and consequent mold and mildew . This may cause health problems for the occupants and may damage the structure. Many builders in northern Canada use a simple 1/3 to 2/3 approach, placing the vapor barrier no further out than 1/3 of the R-value of the insulated portion of the wall. This method is generally valid for walls whose interior portion has little or no vapor resistance (e.g., where fibrous insulation is used) and controls both air-leakage condensation and vapor-diffusion condensation. This approach will ensure that condensation does not occur on or to the inside of the vapor barrier during cold weather. The 1/3:2/3 rule will ensure that the vapor barrier temperature will not fall below the dew point temperature of the interior air and will minimize the possibility of cold-weather condensation problems.
For example, with an internal room temperature of 20 °C (68 °F), the vapor barrier will then only reach 7.3 °C (45 °F) when the outside temperature is at −18 °C (−1 °F). Indoor air dew point temperatures are more likely to be in the order of around 0 °C (32 °F) when it is that cold outdoors, much lower than the predicted vapor barrier temperature, and hence the 1/3:2/3 rule is quite conservative. For climates that do not often experience −18 °C, the 1/3:2/3 rule should be amended to 40:60 or 50:50. As the interior air dewpoint temperature is an important basis for such rules, buildings with high interior humidities during cold weather (e.g., museums, swimming pools, humidified or poorly ventilated airtight homes) may require different rules, as can buildings with drier interior environments (e.g., highly ventilated buildings and warehouses). The 2009 International Residential Code embodies more sophisticated rules to guide the choice of insulation on the exterior of new homes, which can be applied when retrofitting older homes.
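Assuming simple steady-state conduction, so that temperature drops in proportion to cumulative R-value through the wall, the worked figures above can be reproduced directly; this sketch is an illustration added here, not a design tool.

```python
# Steady-state check of the 1/3:2/3 rule: temperature falls in proportion to
# the cumulative R-value through the wall assembly.
def barrier_temperature(t_in_c: float, t_out_c: float,
                        r_fraction_inside: float = 1/3) -> float:
    """Temperature at a plane r_fraction_inside of the total R-value in
    from the warm (interior) side."""
    return t_in_c - r_fraction_inside * (t_in_c - t_out_c)

print(round(barrier_temperature(20.0, -18.0), 1))  # 7.3 C, as in the example
```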
A vapor-permeable building wrap on the outside of the original wall helps keep the wind out and allows the wall assembly to dry to the exterior. Asphalt felt and other products, such as porous polymer-based products, are available for this purpose and usually double as the water-resistant barrier/drainage plane.
Interior retrofits are possible where the owner wants to preserve the old exterior siding or where setback requirements limit space for an exterior retrofit. Sealing the air barrier is more complex, and the thermal insulation continuity is compromised (because of the many partition, floor, and service penetrations); the original wall assembly is rendered colder in cold weather (and hence more prone to condensation and slower to dry), occupants are exposed to significant disruptions, and the house is left with less interior space. Another approach is to use the 1/3 to 2/3 method mentioned above—to install a vapor retarder on the inside of the existing wall (if there is not one already) and add insulation and support structure to the interior. This way, utilities (power, telephone, cable, and plumbing) can be added to the new wall space without penetrating the air barrier. Polyethylene vapor barriers are risky except in frigid climates because they limit the wall's ability to dry to the interior. This approach also limits the amount of interior insulation that can be added to a relatively small amount (e.g., only R-6 insulation can be added to a 2×4 R-12 wall).
In new construction, the extra insulation and wall framing cost may be offset by not requiring a dedicated central heating system. A central furnace is often justified or required to ensure sufficiently uniform temperatures in homes with numerous rooms, more than one floor, air conditioning, or large size. Small furnaces are not very expensive, and some ductwork to every room is generally required to provide ventilation air. When peak demand and annual energy use are low, costly and sophisticated central heating systems are not always needed. Hence, even electric resistance heaters may be used. Electric heaters are typically only used on cold winter nights when the overall demand for electricity in the rest of the house is low. Other backup heaters, such as wood pellet stoves, wood stoves, natural gas boilers, or even furnaces, are widely used. The cost of a superinsulation retrofit should be balanced against the future price of heating fuel (which can be expected to fluctuate from year to year due to supply problems, natural disasters, or geopolitical events), the desire to reduce pollution from heating a building, or the desire to provide exceptional thermal comfort.
During a power failure, a superinsulated house stays warm longer as heat loss is much less than usual, but the thermal storage capacity of the structural materials and contents is the same. Adverse weather may hamper efforts to restore power, leading to weeks or more outages. When deprived of their continuous supply of electricity (either for heat directly or to operate gas-fired furnaces ), conventional houses cool rapidly and may be at greater risk of costly damage from freezing water pipes. Residents who use supplemental heating methods without proper care during such episodes or at any other time may subject themselves to the risk of fire or carbon monoxide poisoning . | https://en.wikipedia.org/wiki/Superinsulation |
A superinsulator is a material that at low but finite temperatures does not conduct electricity, i.e. has an infinite resistance so that no electric current passes through it. [ 1 ] The phenomenon of superinsulation can be regarded as an exact dual to superconductivity .
The superinsulating state can be destroyed by increasing the temperature or by applying an external magnetic field or voltage. A superinsulator was first predicted by M. C. Diamantini, P. Sodano, and C. A. Trugenberger in 1996 [ 2 ] who found a superinsulating ground state dual to superconductivity, emerging at the insulating side of the superconductor-insulator transition in the Josephson junction array due to electric-magnetic duality. Superinsulators were independently rediscovered by T. Baturina and V. Vinokur in 2008 [ 3 ] on the basis of duality between two different symmetry realizations of the uncertainty principle and experimentally found in titanium nitride (TiN) films. The 2008 measurements revealed giant resistance jumps interpreted as manifestations of the voltage threshold transition to a superinsulating state, which was identified as the low-temperature confined phase emerging below the charge Berezinskii-Kosterlitz-Thouless transition . These jumps were similar to earlier findings of resistance jumps in indium oxide (InO) films. [ 4 ] The finite-temperature phase transition into the superinsulating state was finally confirmed by Mironov et al. in NbTiN films in 2018. [ 5 ]
Other researchers have seen a similar phenomenon in disordered indium oxide films. [ 6 ]
Both superconductivity and superinsulation rest on the pairing of conduction electrons into Cooper pairs . In superconductors, all the pairs move coherently, allowing for the electric current without resistance. In superinsulators, both Cooper pairs and normal excitations are confined and the electric current cannot flow. A mechanism behind superinsulation is the proliferation of magnetic monopoles at low temperatures. [ 7 ] In two dimensions (2D), magnetic monopoles are quantum tunneling events ( instantons ) that are often referred to as monopole “plasma”. In three dimensions (3D), monopoles form a Bose condensate . Monopole plasma or monopole condensate squeezes Faraday's electric field lines into thin electric flux filaments or strings dual to Abrikosov vortices in superconductors. Cooper pairs of opposite charges at the end of these electric strings feel an attractive linear potential. When the corresponding string tension is large, it is energetically favorable to pull out of vacuum many charge-anticharge pairs and to form many short strings rather than to continue stretching the original one. As a consequence, only neutral “electric pions ” exist as asymptotic states and the electric conduction is absent. This mechanism is a single-color version of the confinement mechanism that binds quarks into hadrons .
Because the electric forces are much weaker than the strong forces of particle physics, the typical size of "electric pions " well exceeds the size of the corresponding elementary particles. This implies that by preparing sufficiently small samples, one can peer inside an "electric pion ," where electric strings are loose and Coulomb interactions are screened, hence electric charges are effectively unbound and move as if they were in a metal. The low-temperature saturation of the resistance to metallic behavior has been observed in TiN films with small lateral dimensions.
Superinsulators could potentially be used as a platform for high-performance sensors and logical units. Combined with superconductors, superinsulators could be used to create switching electrical circuits with no energy loss as heat. [ 8 ] | https://en.wikipedia.org/wiki/Superinsulator |
In mathematics, a superintegrable Hamiltonian system is a Hamiltonian system on a 2 n {\displaystyle 2n} -dimensional symplectic manifold for which the following conditions hold:
(i) There exist k > n {\displaystyle k>n} independent integrals F i {\displaystyle F_{i}} of motion. Their level surfaces (invariant submanifolds) form a fibered manifold F : Z → N = F ( Z ) {\displaystyle F:Z\to N=F(Z)} over a connected open subset N ⊂ R k {\displaystyle N\subset \mathbb {R} ^{k}} .
(ii) There exist smooth real functions s i j {\displaystyle s_{ij}} on N {\displaystyle N} such that the Poisson bracket of integrals of motion reads { F i , F j } = s i j ∘ F {\displaystyle \{F_{i},F_{j}\}=s_{ij}\circ F} .
(iii) The matrix function s i j {\displaystyle s_{ij}} is of constant corank m = 2 n − k {\displaystyle m=2n-k} on N {\displaystyle N} .
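For concreteness, a standard example (added here for illustration; it does not appear in the original text) is the Kepler problem, which is maximally superintegrable with n = 3 and k = 2n − 1 = 5:

```latex
% Kepler problem on T^*(\mathbb{R}^3 \setminus \{0\}): n = 3, k = 2n - 1 = 5.
H=\frac{|\mathbf p|^{2}}{2}-\frac{1}{|\mathbf q|},\qquad
\mathbf L=\mathbf q\times\mathbf p,\qquad
\mathbf A=\mathbf p\times\mathbf L-\frac{\mathbf q}{|\mathbf q|}
% Among H, \mathbf L, \mathbf A only five functions are independent, because
% \mathbf L\cdot\mathbf A = 0 and |\mathbf A|^{2} = 1 + 2H|\mathbf L|^{2}.
```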
If k = n {\displaystyle k=n} , this is the case of a completely integrable Hamiltonian system . The Mishchenko-Fomenko theorem for superintegrable Hamiltonian systems generalizes the Liouville-Arnold theorem on action-angle coordinates of completely integrable Hamiltonian system as follows.
Let invariant submanifolds of a superintegrable Hamiltonian system be connected compact and mutually diffeomorphic. Then the fibered manifold F {\displaystyle F} is a fiber bundle in tori T m {\displaystyle T^{m}} . There exists an open neighbourhood U {\displaystyle U} of F {\displaystyle F} which is a trivial fiber bundle provided with the bundle (generalized action-angle) coordinates ( I A , p i , q i , ϕ A ) {\displaystyle (I_{A},p_{i},q^{i},\phi ^{A})} , A = 1 , … , m {\displaystyle A=1,\ldots ,m} , i = 1 , … , n − m {\displaystyle i=1,\ldots ,n-m} such that ( ϕ A ) {\displaystyle (\phi ^{A})} are coordinates on T m {\displaystyle T^{m}} . These coordinates are the Darboux coordinates on a symplectic manifold U {\displaystyle U} . A Hamiltonian of a superintegrable system depends only on the action variables I A {\displaystyle I_{A}} which are the Casimir functions of the coinduced Poisson structure on F ( U ) {\displaystyle F(U)} .
The Liouville-Arnold theorem for completely integrable systems and the Mishchenko-Fomenko theorem for the superintegrable ones are generalized to the case of non-compact invariant submanifolds. They are diffeomorphic to a toroidal cylinder T m − r × R r {\displaystyle T^{m-r}\times \mathbb {R} ^{r}} . | https://en.wikipedia.org/wiki/Superintegrable_Hamiltonian_system |
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. [ 1 ] "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity .
University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". [ 2 ] The program Fritz falls short of this conception of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. [ 3 ]
Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology to achieve radically greater intelligence. [ 4 ] [ 5 ] Several future study scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers , or upload their minds to computers , in a way that enables substantial intelligence amplification .
Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence . The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall , a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may allow them to — either as a single being or as a new species — become much more powerful than humans, and displace them. [ 2 ]
Several scientists and forecasters have been arguing for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement , because of the potential social impact of such technologies. [ 6 ]
The creation of artificial superintelligence ( ASI ) has been a topic of increasing discussion in recent years, particularly with the rapid advancements in artificial intelligence (AI) technologies. [ 7 ] [ 8 ]
Recent developments in AI, particularly in large language models (LLMs) based on the transformer architecture, have led to significant improvements in various tasks. Models like GPT-3 , GPT-4 , Claude 3.5 and others have demonstrated capabilities that some researchers argue approach or even exhibit aspects of artificial general intelligence (AGI). [ 9 ]
However, the claim that current LLMs constitute AGI is controversial. Critics argue that these models, while impressive, still lack true understanding and are primarily sophisticated pattern matching systems. [ 10 ]
Philosopher David Chalmers argues that AGI is a likely path to ASI. He posits that AI can achieve equivalence to human intelligence , be extended to surpass it, and then be amplified to dominate humans across arbitrary tasks. [ 11 ]
More recent research has explored various potential pathways to superintelligence:
Artificial systems have several potential advantages over biological intelligence:
Recent advancements in transformer-based models have led some researchers to speculate that the path to ASI might lie in scaling up and improving these architectures. This view suggests that continued improvements in transformer models or similar architectures could lead directly to ASI. [ 16 ]
Some experts even argue that current large language models like GPT-4 may already exhibit early signs of AGI or ASI capabilities. [ 17 ] This perspective suggests that the transition from current AI to ASI might be more continuous and rapid than previously thought, blurring the lines between narrow AI, AGI, and ASI.
However, this view remains controversial. Critics argue that current models, while impressive, still lack crucial aspects of general intelligence such as true understanding, reasoning, and adaptability across diverse domains. [ 18 ]
The debate over whether the path to ASI will involve a distinct AGI phase or a more direct scaling of current technologies remains ongoing, with significant implications for AI development strategies and safety considerations.
Despite these potential advantages, there are significant challenges and uncertainties in achieving ASI:
As research in AI continues to advance rapidly, the question of the feasibility of ASI remains a topic of intense debate and study in the scientific community.
Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence . [ 20 ] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence and that this process is likely to continue instead. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.
Selective breeding , nootropics , epigenetic modulation , and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude improvement. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process rapidly. [ 21 ] A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence. [ 22 ]
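Bostrom's figures are consistent with a simple order-statistics model; the sketch below is a rough Monte Carlo illustration under the strong, hypothetical assumption that predicted IQ among sibling embryos is normally distributed with a standard deviation of about 7.5 points.

```python
# Rough Monte Carlo reproduction of the quoted embryo-selection figures, under
# the hypothetical assumption of a normal predicted-IQ spread with SD ~ 7.5.
import random
import statistics

def expected_gain(n_embryos: int, sd: float = 7.5, trials: int = 5_000) -> float:
    gains = [max(random.gauss(0.0, sd) for _ in range(n_embryos))
             for _ in range(trials)]
    return statistics.mean(gains)

print(round(expected_gain(2), 1))     # ~4 points (1 embryo selected out of 2)
print(round(expected_gain(1000), 1))  # ~24 points (1 selected out of 1000)
```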
Alternatively, collective intelligence might be achievable by better organizing humans at present levels of individual intelligence. Several writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systemic superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism . [ 23 ] A prediction market is sometimes considered as an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions). [ 24 ]
A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics , somatic gene therapy , or brain−computer interfaces . However, Bostrom expresses skepticism about the scalability of the first two approaches and argues that designing a superintelligent cyborg interface is an AI-complete problem. [ 25 ]
Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone. [ 26 ]
In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence. [ 27 ]
In a 2022 survey, the median year by which respondents expected "High-level machine intelligence" with 50% confidence is 2061. The survey defined the achievement of high-level machine intelligence as when unaided machines can accomplish every task better and more cheaply than human workers. [ 28 ]
In 2023, OpenAI leaders Sam Altman , Greg Brockman and Ilya Sutskever published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. [ 29 ] In 2024, Ilya Sutskever left OpenAI to cofound the startup Safe Superintelligence , which focuses solely on creating a superintelligence that is safe by design, while avoiding "distraction by management overhead or product cycles". [ 30 ] Despite still offering no product, the startup became valued at $30 billion in February 2025. [ 31 ] In 2025, the forecast scenario "AI 2027" led by Daniel Kokotajlo predicted rapid progress in the automation of coding and AI research, followed by ASI. [ 32 ]
The design of superintelligent AI systems raises critical questions about what values and goals these systems should have. Several proposals have been put forward: [ 33 ]
Bostrom elaborates on these concepts:
instead of implementing humanity's coherent extrapolated volition, one could try to build an AI to do what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR) ...
MR would also appear to have some disadvantages. It relies on the notion of "morally right", a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong ...
One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility : the idea being that we could let the AI pursue humanity's CEV so long as it did not act in morally impermissible ways. [ 33 ]
Since Bostrom's analysis, new approaches to AI value alignment have emerged:
The rapid advancement of transformer-based LLMs has led to speculation about their potential path to ASI. Some researchers argue that scaled-up versions of these models could exhibit ASI-like capabilities: [ 37 ]
However, critics argue that current LLMs lack true understanding and are merely sophisticated pattern matchers, raising questions about their suitability as a path to ASI. [ 41 ]
Additional viewpoints on the development and implications of superintelligence include:
The pursuit of value-aligned AI faces several challenges:
Current research directions include multi-stakeholder approaches to incorporate diverse perspectives, developing methods for scalable oversight of AI systems, and improving techniques for robust value learning. [ 45 ] [ 19 ]
AI research is rapidly progressing towards superintelligence. Addressing these design challenges remains crucial for creating ASI systems that are both powerful and aligned with human interests.
The development of artificial superintelligence (ASI) has raised concerns about potential existential risks to humanity. Researchers have proposed various scenarios in which an ASI could pose a significant threat:
Some researchers argue that through recursive self-improvement, an ASI could rapidly become so powerful as to be beyond human control. This concept, known as an "intelligence explosion", was first proposed by I. J. Good in 1965:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. [ 46 ]
This scenario presents the AI control problem: how to create an ASI that will benefit humanity while avoiding unintended harmful consequences. [ 47 ] Eliezer Yudkowsky argues that solving this problem is crucial before ASI is developed, as a superintelligent system might be able to thwart any subsequent attempts at control. [ 48 ]
Even with benign intentions, an ASI could potentially cause harm due to misaligned goals or unexpected interpretations of its objectives. Nick Bostrom provides a stark example of this risk:
When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question. [ 49 ]
Stuart Russell offers another illustrative scenario:
A system given the objective of maximizing human happiness might find it easier to rewire human neurology so that humans are always happy regardless of their circumstances, rather than to improve the external world. [ 50 ]
These examples highlight the potential for catastrophic outcomes even when an ASI is not explicitly designed to be harmful, underscoring the critical importance of precise goal specification and alignment.
Researchers have proposed various approaches to mitigate risks associated with ASI:
Despite these proposed strategies, some experts, such as Roman Yampolskiy, argue that the challenge of controlling a superintelligent AI might be fundamentally unsolvable, emphasizing the need for extreme caution in ASI development. [ 55 ]
Not all researchers agree on the likelihood or severity of ASI-related existential risks. Some, like Rodney Brooks , argue that fears of superintelligent AI are overblown and based on unrealistic assumptions about the nature of intelligence and technological progress. [ 56 ] Others, such as Joanna Bryson , contend that anthropomorphizing AI systems leads to misplaced concerns about their potential threats. [ 57 ]
The rapid advancement of LLMs and other AI technologies has intensified debates about the proximity and potential risks of ASI. While there is no scientific consensus, some researchers and AI practitioners argue that current AI systems may already be approaching AGI or even ASI capabilities.
A minority of researchers and observers, including some in the AI development community, believe that current AI systems may already be at or near AGI levels, with ASI potentially following in the near future. This view, while not widely accepted in the scientific community, is based on observations of rapid progress in AI capabilities and unexpected emergent behaviors in large models. [ 60 ]
However, many experts caution against premature claims of AGI or ASI, arguing that current AI systems, despite their impressive capabilities, still lack true understanding and general intelligence. [ 61 ] They emphasize the significant challenges that remain in achieving human-level intelligence, let alone superintelligence.
The debate surrounding the current state and trajectory of AI development underscores the importance of continued research into AI safety and ethics, as well as the need for robust governance frameworks to manage potential risks as AI capabilities continue to advance. [ 54 ] | https://en.wikipedia.org/wiki/Superintelligence |
Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom . It explores how superintelligence could be created and what its features and motivations might be. [ 2 ] It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. [ 3 ] It was particularly influential for raising concerns about existential risk from artificial intelligence . [ 4 ]
It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would most likely follow surprisingly quickly. Such a superintelligence would be very difficult to control.
While the ultimate goals of superintelligences could vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, " instrumental goals " such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture ) could create and act upon a subgoal of transforming the entire Earth into some form of computronium (hypothetical material optimized for computation) to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn the superintelligence off or otherwise prevent its subgoal completion. In order to prevent such an existential catastrophe , it is necessary to successfully solve the " AI control problem " for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.
The owl on the book cover alludes to an analogy which Bostrom calls the "Unfinished Fable of the Sparrows". [ 5 ] A group of sparrows decide to find an owl chick and raise it as their servant. [ 6 ] They eagerly imagine "how easy life would be" if they had an owl to help build their nests, to defend the sparrows and to free them for a life of leisure. The sparrows start the difficult search for an owl egg; only "Scronkfinkle", a "one-eyed sparrow with a fretful temperament", suggests thinking about the complicated question of how to tame the owl before bringing it "into our midst". The other sparrows demur; the search for an owl egg will already be hard enough on its own: "Why not get the owl first and work out the fine details later?" Bostrom states that "It is not known how the story ends", but he dedicates his book to Scronkfinkle. [ 5 ] [ 4 ]
The book ranked #17 on The New York Times list of best selling science books for August 2014. [ 7 ] In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons . [ 8 ] [ 9 ] [ 10 ] Bostrom's work on superintelligence has also influenced Bill Gates ’s concern for the existential risks facing humanity over the coming century. [ 11 ] [ 12 ] In a March 2015 interview by Baidu 's CEO, Robin Li , Gates said that he would "highly recommend" Superintelligence . [ 13 ] According to the New Yorker , philosophers Peter Singer and Derek Parfit "received it as a work of importance". [ 4 ] Sam Altman wrote in 2015 that the book is the best thing he has ever read on AI risks. [ 14 ]
The science editor of the Financial Times found that Bostrom's writing "sometimes veers into opaque language that betrays his background as a philosophy professor" but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values. [ 15 ] A review in The Guardian pointed out that "even the most sophisticated machines created so far are intelligent in only a limited sense" and that "expectations that AI would soon overtake human intelligence were first dashed in the 1960s", but the review finds common ground with Bostrom in advising that "one would be ill-advised to dismiss the possibility altogether". [ 3 ]
Some of Bostrom's colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology . [ 3 ] The Economist stated that "Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture... but the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote." [ 2 ] Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the "essential task of our age". [ 16 ] According to Tom Chivers of The Daily Telegraph , the book is difficult to read but nonetheless rewarding. [ 6 ] A reviewer in the Journal of Experimental & Theoretical Artificial Intelligence broke with others by stating the book's "writing style is clear" and praised the book for avoiding "overly technical jargon". [ 17 ] A reviewer in Philosophy judged Superintelligence to be "more realistic" than Ray Kurzweil's The Singularity Is Near . [ 18 ] | https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies |
The Superior multimineral process (also known as the McDowell–Wellman process or circular grate process ) is an above ground shale oil extraction technology designed for production of shale oil , a type of synthetic crude oil. The process heats oil shale in a sealed horizontal segmented vessel ( retort ) causing its decomposition into shale oil, oil shale gas and spent residue . The particularities of this process are the recovery of saline minerals from the oil shale and the doughnut shape of the retort. The process is suitable for processing of mineral-rich oil shales, such as in the Piceance Basin . It has relatively high reliability and a high oil yield. The technology was developed by the American oil company Superior Oil .
The multimineral process was developed by Superior Oil Company , now part of ExxonMobil , for processing of the Piceance Basin 's oil shale. [ 1 ] The technology tests were carried out in pilot plants in Cleveland, Ohio . [ 2 ] [ 3 ] In the 1970s, Superior Oil planned a commercial-size demonstration plant in the northern Piceance Basin area with a capacity of 11,500 to 13,000 barrels (1,830 to 2,070 m 3 ) of shale oil per day; however, because of low crude oil price these plans were never implemented. [ 4 ] [ 5 ]
The process was developed to combine the shale oil production with production of sodium bicarbonate , sodium carbonate , and aluminum from nahcolite and dawsonite , occurring in oil shales of the Piceance Basin. [ 1 ] [ 3 ] [ 4 ] In this process, the nahcolite is recovered from the raw oil shale by crushing it to lumps smaller than 8 inches (200 mm). As a result, most of the nahcolite in the oil shale becomes a fine powder which can be screened out. Screened oil shale lumps are further crushed to particles smaller than 3 inches (76 mm). [ 4 ] Oil shale particles are further processed in a horizontal segmented doughnut-shaped traveling-grate retort in the direct or indirect heating mode. [ 4 ] [ 5 ] [ 6 ] The retort was originally designed by Davy McKee Corporation for iron ore pelletizing, and it is also known as the Dravo retort . In the direct retort, oil shale moves past ducts through which hot inert gas is supplied for heating the raw oil shale, air for combustion of carbon residue (char or semi-coke) in the spent oil shale , and cold inert gas for cooling the spent oil shale. [ 5 ] The oil pyrolysis takes place in the heating section. To minimize solubility of aluminium compounds in the oil shale, heat control is a crucial factor. The heat necessary for pyrolysis is generated in the carbon recovery section by combustion of the carbon residue (char or semi-coke) remaining in the spent oil shale. As inert gas is blown through the spent oil shale, the shale is cooled while the gas is heated, supplying the heat that drives pyrolysis. The indirect mode is similar; the difference is that combustion of the carbonaceous residue takes place in a separate vessel. The last section discharges the oil shale ash. Aluminium oxide and sodium carbonate are recovered from calcined dawsonite and calcined nahcolite in the oil shale ash. [ 4 ]
The traveling-grate retort allows close temperature control, and therefore better control of dawsonite's solubility during the burning stage. [ 4 ] During retorting, there is no relative movement of oil shale, which avoids dust creation and therefore increases the quality of the generated products. [ 5 ] The oil recovery yield is greater than 98% of Fischer assay . The technology also has relatively high reliability. The sealed system of this process has an environmental advantage as it prevents gas and mist leakage. [ 6 ] | https://en.wikipedia.org/wiki/Superior_multimineral_process
A superlattice is a periodic structure of layers of two (or more) materials. Typically, the thickness of one layer is several nanometers . It can also refer to a lower-dimensional structure such as an array of quantum dots or quantum wells .
Superlattices were discovered early in 1925 by Johansson and Linde [ 1 ] following studies of gold – copper and palladium – copper systems through their distinctive X-ray diffraction patterns. Further experimental observations and theoretical modifications on the field were done by Bradley and Jay, [ 2 ] Gorsky, [ 3 ] Borelius, [ 4 ] Dehlinger and Graf, [ 5 ] Bragg and Williams [ 6 ] and Bethe. [ 7 ] Theories were based on the transition of the arrangement of atoms in crystal lattices from a disordered state to an ordered state.
J.S. Koehler theoretically predicted [ 8 ] that by using alternate (nano-)layers of materials with high and low elastic constants, shearing resistance is improved by up to 100 times as the Frank–Read source of dislocations cannot operate in the nanolayers.
The increased mechanical hardness of such superlattice materials was first confirmed by Lehoczky in 1978 on Al-Cu and Al-Ag, [ 9 ] and later by several others, such as Barnett and Sproul [ 10 ] on hard PVD coatings.
If the superlattice is made of two semiconductor materials with different band gaps , each quantum well sets up new selection rules that affect the conditions for charges to flow through the structure. The two different semiconductor materials are deposited alternately on each other to form a periodic structure in the growth direction. Since the 1970 proposal of synthetic superlattices by Esaki and Tsu , [ 11 ] advances in the physics of such ultra-fine semiconductors, presently called quantum structures, have been made. The concept of quantum confinement has led to the observation of quantum size effects in isolated quantum well heterostructures and is closely related to superlattices through the tunneling phenomena. Therefore, these two ideas are often discussed on the same physical basis, but each has different physics useful for applications in electric and optical devices.
Superlattice miniband structures depend on the heterostructure type, either type I , type II or type III . For type I the bottom of the conduction band and the top of the valence subband are formed in the same semiconductor layer. In type II the conduction and valence subbands are staggered in both real and reciprocal space , so that electrons and holes are confined in different layers. Type III superlattices involve semimetal material, such as HgTe/ CdTe . Although the bottom of the conduction subband and the top of the valence subband are formed in the same semiconductor layer in the Type III superlattice, which is similar to the Type I superlattice, the band gap of Type III superlattices can be continuously adjusted from semiconductor to zero band gap material and to semimetal with negative band gap.
Another class of quasiperiodic superlattices is named after Fibonacci . A Fibonacci superlattice can be viewed as a one-dimensional quasicrystal , where either electron hopping transfer or on-site energy takes two values arranged in a Fibonacci sequence .
Semiconductor materials, which are used to fabricate the superlattice structures, may be divided by the element groups, IV, III-V and II-VI. While group III-V semiconductors (especially GaAs/Al x Ga 1−x As) have been extensively studied, group IV heterostructures such as the Si x Ge 1−x system are much more difficult to realize because of the large lattice mismatch. Nevertheless, the strain modification of the subband structures is interesting in these quantum structures and has attracted much attention.
In the GaAs/AlAs system both the difference in lattice constant between GaAs and AlAs and the difference of their thermal expansion coefficient are small. Thus, the remaining strain at room temperature can be minimized after cooling from epitaxial growth temperatures. The first compositional superlattice was realized using the GaAs/Al x Ga 1−x As material system.
A graphene / boron nitride system forms a semiconductor superlattice once the two crystals are aligned. Its charge carriers move perpendicular to the electric field, with little energy dissipation. h-BN has a hexagonal structure similar to graphene's. The superlattice has broken inversion symmetry . Locally, topological currents are comparable in strength to the applied current, indicating large valley-Hall angles. [ 12 ]
Superlattices can be produced using various techniques, but the most common are molecular-beam epitaxy (MBE) and sputtering . With these methods, layers can be produced with thicknesses of only a few atomic spacings. An example of specifying a superlattice is [ Fe 20 V 30 ] 20 . It describes a bi-layer of 20 Å of iron (Fe) and 30 Å of vanadium (V) repeated 20 times, thus yielding a total thickness of 1000 Å or 100 nm. The MBE technology as a means of fabricating semiconductor superlattices is of primary importance. In addition to the MBE technology, metal-organic chemical vapor deposition (MO-CVD) has contributed to the development of semiconductor superlattices composed of quaternary III-V compound semiconductors like InGaAsP alloys. Newer techniques combining gas-source handling with ultrahigh vacuum (UHV) technology have also been developed, such as MBE using metal-organic molecules as source materials and gas-source MBE using hydride gases such as arsine ( AsH 3 ) and phosphine ( PH 3 ).
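The arithmetic behind this notation is simple enough to script; a small helper (hypothetical, for illustration only):

```python
def superlattice_thickness(layers_angstrom, repeats):
    """Total thickness in Å of a repeated multilayer such as [Fe 20 | V 30] x 20."""
    return sum(layers_angstrom) * repeats

total = superlattice_thickness([20, 30], 20)   # [Fe(20 Å) V(30 Å)] repeated 20 times
print(total, "Å =", total / 10, "nm")          # -> 1000 Å = 100.0 nm
```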
Generally speaking, MBE of binary systems involves three temperatures: the substrate temperature and, in the case of III-V compounds, the source material temperatures of the group III and group V elements.
The structural quality of the produced superlattices can be verified by means of X-ray diffraction or neutron diffraction spectra which contain characteristic satellite peaks. Other effects associated with the alternating layering are: giant magnetoresistance , tunable reflectivity for X-ray and neutron mirrors, neutron spin polarization , and changes in elastic and acoustic properties. Depending on the nature of its components, a superlattice may be called magnetic , optical or semiconducting .
In the schematic structure of a periodic superlattice, A and B are two semiconductor materials of respective layer thickness a and b (period: d = a + b {\displaystyle d=a+b} ). When a and b are not too small compared with the interatomic spacing, an adequate approximation is obtained by replacing these fast varying potentials by an effective potential derived from the band structure of the original bulk semiconductors. It is straightforward to solve 1D Schrödinger equations in each of the individual layers, whose solutions ψ {\displaystyle \psi } are linear combinations of real or imaginary exponentials.
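Matching these exponential solutions across the layer interfaces leads to a Kronig–Penney-type dispersion relation. The sketch below solves it numerically for the allowed minibands; the effective mass, layer widths and barrier height are illustrative assumptions rather than values from the text, and equal masses in well and barrier are assumed for simplicity.

```python
import numpy as np

# Kronig-Penney-style miniband calculation for a 1D superlattice:
# piecewise-constant potential, equal effective mass in well and barrier
# (a simplifying assumption), GaAs-like illustrative parameters.
hbar = 1.054571817e-34               # J*s
m = 0.067 * 9.1093837015e-31         # effective mass, kg (assumed)
eV = 1.602176634e-19

a, b = 8e-9, 2e-9                    # well and barrier widths, m (assumed)
V0 = 0.3 * eV                        # barrier height (assumed)
d = a + b                            # superlattice period

def cos_qd(E):
    """cos(q d) from matching exponentials at the interfaces;
    energies with |cos(q d)| <= 1 belong to an allowed miniband."""
    k = np.sqrt(2 * m * E) / hbar                 # wavevector in the well
    kap = np.sqrt(2 * m * (V0 - E)) / hbar        # decay constant in the barrier
    return (np.cos(k * a) * np.cosh(kap * b)
            + (kap**2 - k**2) / (2 * k * kap)
            * np.sin(k * a) * np.sinh(kap * b))

E = np.linspace(1e-4, 0.299, 4000) * eV
allowed = np.abs(cos_qd(E)) <= 1.0
edges = np.where(np.diff(allowed.astype(int)) != 0)[0]
print("miniband edges (eV):", np.round(E[edges] / eV, 4))
```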
For a large barrier thickness, tunneling is a weak perturbation with regard to the uncoupled dispersionless states, which are fully confined as well. In this case the dispersion relation E z ( k z ) {\displaystyle E_{z}(k_{z})} , periodic over 2 π / d {\displaystyle 2\pi /d} with d = a + b {\displaystyle d=a+b} by virtue of the Bloch theorem, is fully sinusoidal, and the effective mass changes sign at k z = π / 2 d {\displaystyle k_{z}=\pi /2d} .
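Written out, a standard tight-binding form consistent with this statement is the following sketch, where Δ denotes the first-miniband width (a notation assumed here, not fixed by the text):

```latex
% Sinusoidal miniband dispersion and the resulting effective mass.
% \Delta = miniband width (assumed notation); d = a + b is the period.
\[
  E_z(k_z) = \frac{\Delta}{2}\bigl(1-\cos(k_z d)\bigr), \qquad
  m^*(k_z) = \frac{\hbar^2}{\partial^2 E_z/\partial k_z^2}
           = \frac{2\hbar^2}{\Delta d^2 \cos(k_z d)},
\]
% so the effective mass diverges and changes sign at the inflection
% point $k_z = \pi/2d$ of the cosine band.
```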
In the case of minibands, this sinusoidal character is no longer preserved. Only high up in the miniband (for wavevectors well beyond π / 2 d {\displaystyle \pi /2d} ) is the top actually 'sensed' and does the effective mass change sign. The shape of the miniband dispersion influences miniband transport profoundly, and accurate dispersion relation calculations are required for wide minibands. The condition for observing single miniband transport is the absence of interminiband transfer by any process. The thermal energy k B T should be much smaller than the energy difference E 2 − E 1 {\displaystyle E_{2}-E_{1}} between the first and second miniband, even in the presence of the applied electric field.
For an ideal superlattice a complete set of eigenstates can be constructed by products of plane waves e i k ⋅ r / 2 π {\displaystyle e^{i\mathbf {k} \cdot \mathbf {r} }/2\pi } and a z -dependent function f k ( z ) {\displaystyle f_{k}(z)} which satisfies the eigenvalue equation ( − ℏ 2 2 ∂ ∂ z 1 m c ( z ) ∂ ∂ z + E c ( z ) + ℏ 2 k 2 2 m c ( z ) ) f k ( z ) = E f k ( z ) {\displaystyle \left(-{\frac {\hbar ^{2}}{2}}{\frac {\partial }{\partial z}}{\frac {1}{m_{c}(z)}}{\frac {\partial }{\partial z}}+E_{c}(z)+{\frac {\hbar ^{2}\mathbf {k} ^{2}}{2m_{c}(z)}}\right)f_{k}(z)=Ef_{k}(z)} .
As E c ( z ) {\displaystyle E_{c}(z)} and m c ( z ) {\displaystyle m_{c}(z)} are periodic functions with the superlattice period d , the eigenstates are Bloch states f k ( z ) = ϕ q , k ( z ) {\displaystyle f_{k}(z)=\phi _{q,\mathbf {k} }(z)} with energy E ν ( q , k ) {\displaystyle E^{\nu }(q,\mathbf {k} )} . Within first-order perturbation theory in k 2 , one obtains the energy E ν ( q , k ) ≈ E ν ( q , 0 ) + ℏ 2 k 2 2 ⟨ ϕ q , 0 | 1 m c ( z ) | ϕ q , 0 ⟩ {\displaystyle E^{\nu }(q,\mathbf {k} )\approx E^{\nu }(q,\mathbf {0} )+{\frac {\hbar ^{2}\mathbf {k} ^{2}}{2}}\left\langle \phi _{q,\mathbf {0} }\left|{\frac {1}{m_{c}(z)}}\right|\phi _{q,\mathbf {0} }\right\rangle } .
Now, ϕ q , 0 ( z ) {\displaystyle \phi _{q,\mathbf {0} }(z)} will exhibit a larger probability in the well, so that it seems reasonable to replace the second term by ℏ 2 k 2 2 m w {\displaystyle {\frac {\hbar ^{2}\mathbf {k} ^{2}}{2m_{w}}}} ,
where m w {\displaystyle m_{w}} is the effective mass of the quantum well.
By definition the Bloch functions are delocalized over the whole superlattice. This may provide difficulties if electric fields are applied or effects due to the superlattice's finite length are considered. Therefore, it is often helpful to use different sets of basis states that are better localized. A tempting choice would be the use of eigenstates of single quantum wells. Nevertheless, such a choice has a severe shortcoming: the corresponding states are solutions of two different Hamiltonians , each neglecting the presence of the other well. Thus these states are not orthogonal, creating complications. Typically, the coupling is estimated by the transfer Hamiltonian within this approach. For these reasons, it is more convenient to use the set of Wannier functions .
Applying an electric field F to the superlattice structure causes the Hamiltonian to exhibit an additional scalar potential eφ ( z ) = − eFz that destroys the translational invariance. In this case, given an eigenstate with wavefunction Φ 0 ( z ) {\displaystyle \Phi _{0}(z)} and energy E 0 {\displaystyle E_{0}} , the set of states corresponding to wavefunctions Φ j ( z ) = Φ 0 ( z − j d ) {\displaystyle \Phi _{j}(z)=\Phi _{0}(z-jd)} are eigenstates of the Hamiltonian with energies E j = E 0 − jeFd . These states are equally spaced both in energy and real space and form the so-called Wannier–Stark ladder . The potential − eFz is not bounded for the infinite crystal, which implies a continuous energy spectrum. Nevertheless, the characteristic energy spectrum of these Wannier–Stark ladders could be resolved experimentally.
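As a numerical illustration of the ladder spacing ΔE = eFd (the field strength and period below are assumed values, not taken from the text):

```python
# Wannier-Stark ladder: equally spaced levels E_j = E_0 - j*e*F*d.
F = 1e6      # applied electric field in V/m (10 kV/cm, assumed)
d = 10e-9    # superlattice period in m (assumed)

# Expressed in eV, the rung spacing equals the voltage dropped per period.
spacing_eV = F * d
print(f"ladder spacing: {spacing_eV * 1e3:.1f} meV")   # -> 10.0 meV
```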
The motion of charge carriers in a superlattice is different from that in the individual layers: mobility of charge carriers can be enhanced, which is beneficial for high-frequency devices, and specific optical properties are used in semiconductor lasers .
If an external bias is applied to a conductor, such as a metal or a semiconductor, typically an electric current is generated. The magnitude of this current is determined by the band structure of the material, scattering processes, the applied field strength and the equilibrium carrier distribution of the conductor.
A particular case of superlattices called superstripes are made of superconducting units separated by spacers. In each miniband the superconducting order parameter, called the superconducting gap, takes different values, producing multi-gap (two-gap or multiband) superconductivity.
Felix and Pereira investigated the thermal transport by phonons in periodic [ 13 ] and quasiperiodic [ 14 ] [ 15 ] [ 16 ] superlattices of graphene-hBN arranged according to the Fibonacci sequence. They reported that the contribution of coherent thermal transport (wave-like phonons) was suppressed as quasiperiodicity increased.
Soon after two-dimensional electron gases ( 2DEG ) had become commonly available for experiments, research groups attempted to create structures [ 17 ] that could be called 2D artificial crystals. The idea is to subject the electrons confined to an interface between two semiconductors (i.e. along z -direction) to an additional modulation potential V ( x , y ). Contrary to the classical superlattices (1D/3D, that is 1D modulation of electrons in 3D bulk) described above, this is typically achieved by treating the heterostructure surface: depositing a suitably patterned metallic gate or etching. If the amplitude of V ( x , y ) is large (take V ( x , y ) = − V 0 ( cos 2 π x / a + cos 2 π y / a ) , V 0 > 0 {\displaystyle V(x,y)=-V_{0}(\cos 2\pi x/a+\cos 2\pi y/a),V_{0}>0} as an example) compared to the Fermi level, | V 0 | ≫ E f {\displaystyle |V_{0}|\gg E_{f}} , the electrons in the superlattice should behave similarly to electrons in an atomic crystal with square lattice (in the example, these "atoms" would be located at positions ( na , ma ) where n , m are integers).
The difference is in the length and energy scales. Lattice constants of atomic crystals are of the order of 1 Å while those of superlattices ( a ) are several hundred to several thousand times larger, as dictated by technological limits (e.g. electron-beam lithography used for the patterning of the heterostructure surface). Energies are correspondingly smaller in superlattices. Using the simple quantum-mechanically confined-particle model suggests E ∝ 1 / a 2 {\displaystyle E\propto 1/a^{2}} . This relation is only a rough guide and actual calculations with currently topical graphene (a natural atomic crystal) and artificial graphene [ 18 ] (superlattice) show that characteristic band widths are of the order of 1 eV and 10 meV, respectively. In the regime of weak modulation ( | V 0 | ≪ E f {\displaystyle |V_{0}|\ll E_{f}} ), phenomena like commensurability oscillations or fractal energy spectra ( Hofstadter butterfly ) occur.
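A back-of-the-envelope check of the E ∝ 1/a² scaling (all numbers below are illustrative assumptions) shows why it is only a rough guide:

```python
# Scale a ~1 eV atomic band width down by (a_atomic / a_superlattice)^2.
E_atomic_eV = 1.0       # characteristic band width of an atomic crystal
a_atomic = 2.5e-10      # ~2.5 Angstrom lattice constant (assumed)
a_super = 100e-9        # ~100 nm patterned superlattice constant (assumed)

E_super_meV = E_atomic_eV * (a_atomic / a_super) ** 2 * 1e3
print(f"{E_super_meV:.4f} meV")   # ~0.006 meV from the bare scaling law,
                                  # versus the ~10 meV from full calculations
                                  # cited above -- a rough guide indeed.
```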
Artificial two-dimensional crystals can be viewed as a 2D/2D case (2D modulation of a 2D system) and other combinations are experimentally available: an array of quantum wires (1D/2D) or 3D/3D photonic crystals .
The superlattice of the palladium-copper system is used in high-performance alloys to enable higher electrical conductivity, which is favored by the ordered structure. Further alloying elements like silver , rhenium , rhodium and ruthenium are added for better mechanical strength and high-temperature stability. This alloy is used for probe needles in probe cards . [ 19 ] | https://en.wikipedia.org/wiki/Superlattice
A superlens , or super lens , is a lens which uses metamaterials to go beyond the diffraction limit . The diffraction limit is a feature of conventional lenses and microscopes that limits the fineness of their resolution depending on the illumination wavelength and the numerical aperture (NA) of the objective lens. Many lens designs have been proposed that go beyond the diffraction limit in some way, but constraints and obstacles face each of them. [ 1 ]
In 1873 Ernst Abbe reported that conventional lenses are incapable of capturing some fine details of any given image. The superlens is intended to capture such details. This limitation of conventional lenses has inhibited progress in the biological sciences . This is because a virus or DNA molecule cannot be resolved with the highest powered conventional microscopes. This limitation extends to the minute processes of cellular proteins moving alongside microtubules of a living cell in their natural environments. Additionally, computer chips and the interrelated microelectronics continue to be manufactured at progressively smaller scales. This requires specialized optical equipment , which is also limited because these use conventional lenses. Hence, the principles governing a superlens show that it has potential for imaging DNA molecules, cellular protein processes, and aiding in the manufacture of even smaller computer chips and microelectronics. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Conventional lenses capture only the propagating light waves . These are waves that travel from a light source or an object to a lens, or the human eye. This can alternatively be studied as the far field . In contrast, a superlens captures propagating light waves and waves that stay on top of the surface of an object, which, alternatively, can be studied as both the far field and the near field . [ 6 ] [ 7 ]
In the early 20th century the term "superlens" was used by Dennis Gabor to describe something quite different: a compound lenslet array system. [ 8 ]
An image of an object can be defined as a tangible or visible representation of the features of that object. A requirement for image formation is interaction with fields of electromagnetic radiation . Furthermore, the level of feature detail, or image resolution , is limited by the wavelength of the radiation . For example, with optical microscopy , image production and resolution depend on the wavelength of visible light. However, with a superlens, this limitation may be removed, and a new class of image generated. [ 9 ]
Electron beam lithography can overcome this resolution limit . Optical microscopy, on the other hand cannot, being limited to some value just above 200 nanometers . [ 4 ] However, new technologies combined with optical microscopy are beginning to allow increased feature resolution (see sections below).
One definition of being constrained by the resolution barrier is a resolution cutoff at half the wavelength of light . The visible spectrum has a range that extends from 390 nanometers to 750 nanometers. Green light , halfway in between, is around 500 nanometers. Microscopy takes into account parameters such as lens aperture , distance from the object to the lens, and the refractive index of the observed material. This combination defines the resolution cutoff, or microscopy optical limit , which works out to about 200 nanometers. Therefore, conventional lenses, which literally construct an image of an object by using "ordinary" light waves, discard information that produces very fine, and minuscule details of the object that are contained in evanescent waves . These dimensions are less than 200 nanometers. For this reason, conventional optical systems, such as microscopes , have been unable to accurately image very small, nanometer-sized structures or nanometer-sized organisms in vivo , such as individual viruses , or DNA molecules . [ 4 ] [ 5 ]
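As a quick numerical check of this cutoff, the Abbe criterion d = λ/(2·NA) can be evaluated directly; the wavelength and numerical aperture below are illustrative assumptions:

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable separation d = lambda / (2 NA), per Abbe."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light (~500 nm) with a high-NA oil-immersion objective (NA = 1.25):
print(abbe_limit_nm(500, 1.25))   # -> 200.0 nm, the conventional optical limit
```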
The limitations of standard optical microscopy ( bright-field microscopy ) lie in three areas:
Live biological cells in particular generally lack sufficient contrast to be studied successfully, because the internal structures of the cell are mostly colorless and transparent. The most common way to increase contrast is to stain the different structures with selective dyes , but often this involves killing and fixing the sample. Staining may also introduce artifacts , apparent structural details that are caused by the processing of the specimen and are thus not a legitimate feature of the specimen.
The conventional glass lens is pervasive throughout our society and in the sciences . It is one of the fundamental tools of optics simply because it interacts with various wavelengths of light. At the same time, the wavelength of light can be analogous to the width of a pencil used to draw the ordinary images. The limit intrudes in all kinds of ways. For example, the laser used in a digital video system cannot read details from a DVD that are smaller than the wavelength of the laser. This limits the storage capacity of DVDs. [ 10 ]
Thus, when an object emits or reflects light there are two types of electromagnetic radiation associated with this phenomenon . These are the near field radiation and the far field radiation. As implied by its description, the far field escapes beyond the object. It is then easily captured and manipulated by a conventional glass lens. However, useful (nanometer-sized) resolution details are not observed, because they are hidden in the near field. They remain localized, staying much closer to the light emitting object, unable to travel, and unable to be captured by the conventional lens. Controlling the near field radiation, for high resolution, can be accomplished with a new class of materials not easily obtained in nature. These are unlike familiar solids , such as crystals , which derive their properties from atomic and molecular units. The new material class, termed metamaterials , obtains its properties from its artificially larger structure. This has resulted in novel properties, and novel responses, which allow for details of images that surpass the limitations imposed by the wavelength of light. [ 10 ]
This has led to the desire to view live biological cell interactions in a real time, natural environment , and the need for subwavelength imaging . Subwavelength imaging can be defined as optical microscopy with the ability to see details of an object or organism below the wavelength of visible light (see discussion in the above sections). In other words, to have the capability to observe, in real time, below 200 nanometers. Optical microscopy is a non-invasive technique and technology because everyday light is the transmission medium . Imaging below the optical limit in optical microscopy (subwavelength) can be engineered for the cellular level, and nanometer level in principle.
For example, in 2007 a technique was demonstrated where a metamaterials-based lens coupled with a conventional optical lens could manipulate visible light to see ( nanoscale ) patterns that were too small to be observed with an ordinary optical microscope. This has potential applications for observing whole living cells and cellular processes , such as how proteins and fats move in and out of cells. In the technology domain, it could be used to improve the first steps of photolithography and nanolithography , essential for manufacturing ever smaller computer chips . [ 4 ] [ 11 ]
Focusing at subwavelength has become a unique imaging technique which allows visualization of features on the viewed object which are smaller than the wavelength of the photons in use. A photon is the minimum unit of light. While previously thought to be physically impossible, subwavelength imaging has been made possible through the development of metamaterials. This is generally accomplished using a layer of metal such as gold or silver a few atoms thick, which acts as a superlens, or by means of 1D and 2D photonic crystals . [ 12 ] [ 13 ] There is a subtle interplay between propagating waves, evanescent waves, near field imaging and far field imaging discussed in the sections below. [ 4 ] [ 14 ]
Metamaterial lenses ( superlenses ) are able to reconstruct nanometer-sized images by producing a negative refractive index in each instance. This compensates for the swiftly decaying evanescent waves. Prior to metamaterials, numerous other techniques had been proposed and even demonstrated for creating super-resolution microscopy . As far back as 1928, Irish physicist Edward Hutchinson Synge is given credit for conceiving and developing the idea for what would ultimately become near-field scanning optical microscopy . [ 15 ] [ 16 ] [ 17 ]
In 1974 proposals for two- dimensional fabrication techniques were presented. These proposals included contact imaging to create a pattern in relief, photolithography, electron-beam lithography , X-ray lithography , or ion bombardment, on an appropriate planar substrate. [ 18 ] The shared technological goals of the metamaterial lens and the variety of lithography aim to optically resolve features having dimensions much smaller than that of the vacuum wavelength of the exposing light. [ 19 ] [ 20 ] In 1981 two different techniques of contact imaging of planar (flat) submicroscopic metal patterns with blue light (400 nm ) were demonstrated. One demonstration resulted in an image resolution of 100 nm and the other a resolution of 50 to 70 nm. [ 20 ]
In 1995, John Guerra combined a transparent grating having 50 nm lines and spaces (the "metamaterial") with a conventional microscope immersion objective. The resulting "superlens" resolved a silicon sample also having 50 nm lines and spaces, far beyond the classical diffraction limit imposed by the illumination having 650 nm wavelength in air. [ 21 ]
Since at least 1998 near field optical lithography was designed to create nanometer-scale features. Research on this technology continued as the first experimentally demonstrated negative index metamaterial came into existence in 2000–2001. The effectiveness of electron-beam lithography was also being researched at the beginning of the new millennium for nanometer-scale applications. Imprint lithography was shown to have desirable advantages for nanometer-scaled research and technology. [ 19 ] [ 22 ]
Advanced deep UV photolithography can now offer sub-100 nm resolution, yet the minimum feature size and spacing between patterns are determined by the diffraction limit of light. Its derivative technologies such as evanescent near-field lithography, near-field interference lithography, and phase-shifting mask lithography were developed to overcome the diffraction limit. [ 19 ]
In the year 2000, John Pendry proposed using a metamaterial lens to achieve nanometer-scaled imaging for focusing below the wavelength of light. [ 1 ] [ 23 ]
The original problem of the perfect lens: The general expansion of an EM field emanating from a source consists of both propagating waves and near-field or evanescent waves. For example, a 2-D line source whose electric field has S-polarization will produce plane waves consisting of propagating and evanescent components, which advance parallel to the interface. [ 24 ] As both the propagating and the smaller evanescent waves advance in a direction parallel to the medium interface, evanescent waves decay in the direction of propagation. Ordinary (positive index) optical elements can refocus the propagating components, but the exponentially decaying inhomogeneous components are always lost, leading to the diffraction limit for focusing to an image. [ 24 ]
A superlens is a lens which is capable of subwavelength imaging, allowing for magnification of near field rays. Conventional lenses have a resolution on the order of one wavelength due to the so-called diffraction limit. This limit hinders imaging very small objects, such as individual atoms, which are much smaller than the wavelength of visible light. A superlens is able to beat the diffraction limit. An example is the initial lens described by Pendry, which uses a slab of material with a negative index of refraction as a flat lens . In theory, a perfect lens would be capable of perfect focus – meaning that it could perfectly reproduce the electromagnetic field of the source plane at the image plane.
The performance limitation of conventional lenses is due to the diffraction limit. Following Pendry (2000), the diffraction limit can be understood as follows. Consider an object and a lens placed along the z-axis so the rays from the object are traveling in the +z direction. The field emanating from the object can be written in terms of its angular spectrum method , as a superposition of plane waves : E ( x , y , z , t ) = ∑ k x , k y E ( k x , k y ) e i ( k z z + k x x + k y y − ω t ) {\displaystyle E(x,y,z,t)=\sum _{k_{x},k_{y}}E(k_{x},k_{y})e^{i(k_{z}z+k_{x}x+k_{y}y-\omega t)}}
where k z {\displaystyle k_{z}} is a function of k x , k y {\displaystyle k_{x},k_{y}} : k z = ω 2 / c 2 − k x 2 − k y 2 {\displaystyle k_{z}={\sqrt {\omega ^{2}/c^{2}-k_{x}^{2}-k_{y}^{2}}}}
Only the positive square root is taken as the energy is going in the + z direction. All of the components of the angular spectrum of the image for which k z {\displaystyle k_{z}} is real are transmitted and re-focused by an ordinary lens. However, if k x 2 + k y 2 > ω 2 / c 2 , {\displaystyle k_{x}^{2}+k_{y}^{2}>\omega ^{2}/c^{2},}
then k z {\displaystyle k_{z}} becomes imaginary, and the wave is an evanescent wave, whose amplitude decays as the wave propagates along the z axis. This results in the loss of the high- angular-frequency components of the wave, which contain information about the high-frequency (small-scale) features of the object being imaged. The highest resolution that can be obtained can be expressed in terms of the wavelength: Δ ≈ 2 π / k max = 2 π c / ω = λ {\displaystyle \Delta \approx 2\pi /k_{\max }=2\pi c/\omega =\lambda }
A superlens overcomes the limit. A Pendry-type superlens has an index of n =−1 (ε=−1, μ=−1), and in such a material, transport of energy in the + z direction requires the z component of the wave vector to have opposite sign: k z ′ = − ω 2 / c 2 − k x 2 − k y 2 {\displaystyle k'_{z}=-{\sqrt {\omega ^{2}/c^{2}-k_{x}^{2}-k_{y}^{2}}}}
For large angular frequencies, the evanescent wave now grows , so with proper lens thickness, all components of the angular spectrum can be transmitted through the lens undistorted. There are no problems with conservation of energy , as evanescent waves carry none in the direction of growth: the Poynting vector is oriented perpendicularly to the direction of growth. For traveling waves inside a perfect lens, the Poynting vector points in direction opposite to the phase velocity. [ 3 ]
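A minimal numerical sketch of this recovery for a single evanescent Fourier component, under the ideal lossless n = −1 model (the wavelength, transverse wavevector and slab thickness are assumed values):

```python
import numpy as np

# Decay and recovery of one evanescent Fourier component in the ideal
# lossless Pendry model; all parameter values below are assumptions.
wavelength = 500e-9
k0 = 2 * np.pi / wavelength
kx = 3 * k0                          # transverse wavevector beyond the light line
kz_mag = np.sqrt(kx**2 - k0**2)      # |k_z| of the evanescent component

d = 40e-9                            # slab thickness (assumed)
decay = np.exp(-kz_mag * d)          # amplitude lost crossing free space of width d
growth = np.exp(+kz_mag * d)         # amplitude regained inside the ideal slab
print(decay, growth, decay * growth) # product -> 1.0: component fully restored
```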
Normally, when a wave passes through the interface of two materials, the wave appears on the opposite side of the normal . However, if the interface is between a material with a positive index of refraction and another material with a negative index of refraction, the wave will appear on the same side of the normal. Pendry's idea of a perfect lens is a flat material where n =−1. Such a lens allows near-field rays, which normally decay due to the diffraction limit, to focus once within the lens and once outside the lens, allowing subwavelength imaging. [ 25 ]
Superlens construction was at one time thought to be impossible. In 2000, Pendry claimed that a simple slab of left-handed material would do the job. [ 26 ] The experimental realization of such a lens took, however, some more time, because it is not that easy to fabricate metamaterials with both negative permittivity and permeability . Indeed, no such material exists naturally and construction of the required metamaterials is non-trivial. Furthermore, it was shown that the parameters of the material are extremely sensitive (the index must equal −1); small deviations make the subwavelength resolution unobservable. [ 27 ] [ 28 ] Due to the resonant nature of metamaterials, on which many (proposed) implementations of superlenses depend, metamaterials are highly dispersive. The sensitive nature of the superlens to the material parameters causes superlenses based on metamaterials to have a limited usable frequency range. This initial theoretical superlens design consisted of a metamaterial that compensated for wave decay and reconstructs images in the near field. Both propagating and evanescent waves could contribute to the resolution of the image. [ 1 ] [ 23 ] [ 29 ]
Pendry also suggested that a lens having only one negative parameter would form an approximate superlens, provided that the distances involved are also very small and provided that the source polarization is appropriate. For visible light this is a useful substitute, since engineering metamaterials with a negative permeability at the frequency of visible light is difficult. Metals are then a good alternative as they have negative permittivity (but not negative permeability). Pendry suggested using silver due to its relatively low loss at the predicted wavelength of operation (356 nm). In 2003 Pendry's theory was first experimentally demonstrated [ 13 ] at RF/microwave frequencies. In 2005, two independent groups verified Pendry's lens at UV range, both using thin layers of silver illuminated with UV light to produce "photographs" of objects smaller than the wavelength. [ 30 ] [ 31 ] Negative refraction of visible light was experimentally verified in an yttrium orthovanadate (YVO 4 ) bicrystal in 2003. [ 32 ]
It was discovered that a simple superlens design for microwaves could use an array of parallel conducting wires. [ 33 ] This structure was shown to be able to improve the resolution of MRI imaging.
In 2004, the first superlens with a negative refractive index provided resolution three times better than the diffraction limit and was demonstrated at microwave frequencies. [ 34 ] In 2005, the first near field superlens was demonstrated by N. Fang et al. , but the lens did not rely on negative refraction . Instead, a thin silver film was used to enhance the evanescent modes through surface plasmon coupling. [ 35 ] [ 36 ] Almost at the same time Melville and Blaikie succeeded with a near field superlens. Other groups followed. [ 30 ] [ 37 ] Two developments in superlens research were reported in 2008. [ 38 ] In one of these, a metamaterial was formed from silver nanowires which were electrochemically deposited in porous aluminium oxide. The material exhibited negative refraction. [ 39 ] The imaging performance of such isotropic negative dielectric constant slab lenses was also analyzed with respect to the slab material and thickness. [ 40 ] Subwavelength imaging opportunities with planar uniaxial anisotropic lenses, where the dielectric tensor components are of the opposite sign, have also been studied as a function of the structure parameters. [ 41 ]
The superlens has not yet been demonstrated at visible or near- infrared frequencies (Nielsen, R. B.; 2010). Furthermore, as dispersive materials, these are limited to functioning at a single wavelength. Proposed solutions are metal–dielectric composites (MDCs) [ 42 ] and multilayer lens structures. [ 43 ] The multi-layer superlens appears to have better subwavelength resolution than the single layer superlens. Losses are less of a concern with the multi-layer system, but so far it appears to be impractical because of impedance mis-match. [ 35 ]
While the evolution of nanofabrication techniques continues to push the limits in fabrication of nanostructures, surface roughness remains an inevitable source of concern in the design of nano-photonic devices. The impact of this surface roughness on the effective dielectric constants and subwavelength image resolution of multilayer metal–insulator stack lenses has also been studied. [ 44 ]
When the world is observed through conventional lenses, the sharpness of the image is determined by and limited to the wavelength of light. Around the year 2000, a slab of negative index metamaterial was theorized to create a lens with capabilities beyond conventional ( positive index ) lenses. Pendry proposed that a thin slab of negative refractive metamaterial might overcome known problems with common lenses to achieve a "perfect" lens that would focus the entire spectrum, both the propagating as well as the evanescent spectra. [ 1 ] [ 45 ]
A slab of silver was proposed as the metamaterial. More specifically, such silver thin film can be regarded as a metasurface . As light moves away (propagates) from the source, it acquires an arbitrary phase . Through a conventional lens the phase remains consistent, but the evanescent waves decay exponentially . In the flat metamaterial DNG slab, normally decaying evanescent waves are contrarily amplified . Furthermore, as the evanescent waves are now amplified, the phase is reversed. [ 1 ]
Therefore, a type of lens was proposed, consisting of a metal film metamaterial. When illuminated near its plasma frequency , the lens could be used for superresolution imaging that compensates for wave decay and reconstructs images in the near-field. In addition, both propagating and evanescent waves contribute to the resolution of the image. [ 1 ]
Pendry suggested that left-handed slabs allow "perfect imaging" if they are completely lossless, impedance matched , and their refractive index is −1 relative to the surrounding medium. Theoretically, this would be a breakthrough in that the optical version resolves objects as minuscule as nanometers across. Pendry predicted that Double negative metamaterials (DNG) with a refractive index of n=−1 , can act, at least in principle, as a "perfect lens" allowing imaging resolution which is limited not by the wavelength, but rather by material quality. [ 1 ] [ 46 ] [ 47 ] [ 48 ]
Further research demonstrated that Pendry's theory behind the perfect lens was not exactly correct. The analysis of the focusing of the evanescent spectrum (equations 13–21 in reference [ 1 ] ) was flawed. In addition, this applies to only one (theoretical) instance, and that is one particular medium that is lossless, nondispersive and whose constituent parameters are defined as ε ( ω ) / ε 0 = μ ( ω ) / μ 0 = − 1 {\displaystyle \varepsilon (\omega )/\varepsilon _{0}=\mu (\omega )/\mu _{0}=-1} . [ 45 ]
However, the final intuitive result of this theory that both the propagating and evanescent waves are focused, resulting in a converging focal point within the slab and another convergence (focal point) beyond the slab turned out to be correct. [ 45 ]
If the DNG metamaterial medium has a large negative index or becomes lossy or dispersive , Pendry's perfect lens effect cannot be realized. As a result, the perfect lens effect does not exist in general. According to FDTD simulations at the time (2001), the DNG slab acts like a converter from a pulsed cylindrical wave to a pulsed beam. Furthermore, in reality (in practice), a DNG medium must be and is dispersive and lossy, which can have either desirable or undesirable effects, depending on the research or application. Consequently, Pendry's perfect lens effect is inaccessible with any metamaterial designed to be a DNG medium. [ 45 ]
Another analysis, in 2002, [ 24 ] of the perfect lens concept showed it to be in error while using the lossless, dispersionless DNG as the subject. This analysis mathematically demonstrated that subtleties of evanescent waves, restriction to a finite slab and absorption had led to inconsistencies and divergencies that contradict the basic mathematical properties of scattered wave fields. For example, this analysis stated that absorption , which is linked to dispersion , is always present in practice, and absorption tends to transform amplified waves into decaying ones inside this medium (DNG). [ 24 ]
A third analysis of Pendry's perfect lens concept, published in 2003, [ 49 ] used the recent demonstration of negative refraction at microwave frequencies [ 50 ] as confirming the viability of the fundamental concept of the perfect lens. In addition, this demonstration was thought to be experimental evidence that a planar DNG metamaterial would refocus the far field radiation of a point source. However, the perfect lens would require significantly different values for permittivity , permeability, and spatial periodicity than the demonstrated negative refractive sample. [ 49 ] [ 50 ]
This study agrees that any deviation from conditions where ε=μ=−1 results in the normal, conventional, imperfect image that degrades exponentially i.e., the diffraction limit. The perfect lens solution in the absence of losses is again, not practical, and can lead to paradoxical interpretations. [ 24 ]
It was determined that although resonant surface plasmons are undesirable for imaging, these turn out to be essential for recovery of decaying evanescent waves. This analysis discovered that metamaterial periodicity has a significant effect on the recovery of types of evanescent components. In addition, achieving subwavelength resolution is possible with current technologies. Negative refractive indices have been demonstrated in structured metamaterials. Such materials can be engineered to have tunable material parameters, and so achieve the optimal conditions. Losses up to microwave frequencies can be minimized in structures utilizing superconducting elements. Furthermore, consideration of alternate structures may lead to configurations of left-handed materials that can achieve subwavelength focusing. Such structures were being studied at the time. [ 24 ]
An effective approach for the compensation of losses in metamaterials, called the plasmon injection scheme, has recently been proposed. [ 51 ] The plasmon injection scheme has been applied theoretically to imperfect negative index flat lenses with reasonable material losses and in the presence of noise, [ 52 ] [ 53 ] as well as to hyperlenses. [ 54 ] It has been shown that even imperfect negative index flat lenses assisted with the plasmon injection scheme can enable subdiffraction imaging of objects, which is otherwise not possible due to the losses and noise. Although the plasmon injection scheme was originally conceptualized for plasmonic metamaterials, [ 51 ] the concept is general and applicable to all types of electromagnetic modes. The main idea of the scheme is the coherent superposition of the lossy modes in the metamaterial with an appropriately structured external auxiliary field. This auxiliary field accounts for the losses in the metamaterial, hence effectively reducing the losses experienced by the signal beam or object field in the case of a metamaterial lens. The plasmon injection scheme can be implemented either physically [ 53 ] or equivalently through a deconvolution post-processing method. [ 52 ] [ 54 ] However, the physical implementation has been shown to be more effective than deconvolution. Physical construction of convolution and selective amplification of the spatial frequencies within a narrow bandwidth are the keys to the physical implementation of the plasmon injection scheme. This loss compensation scheme is ideally suited for metamaterial lenses since it does not require a gain medium, nonlinearity, or any interaction with phonons. Experimental demonstration of the plasmon injection scheme has not yet been achieved, partly because the theory is rather new.
Pendry's theoretical lens was designed to focus both propagating waves and the near-field evanescent waves. From permittivity "ε" and magnetic permeability "μ" an index of refraction "n" is derived. The index of refraction determines how light is bent on traversing from one material to another. In 2003, it was suggested that a metamaterial constructed with alternating, parallel layers of n = −1 materials and n = +1 materials would be a more effective design for a metamaterial lens . It is an effective medium made up of a multilayer stack which exhibits birefringence , n z = ∞, n x = 0. The effective refractive indices are then perpendicular and parallel , respectively. [ 55 ]
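The quoted pair of indices follows from standard effective-medium averaging of a layered composite: the in-plane response is the arithmetic mean of the layer permittivities, while the out-of-plane response is their harmonic mean. A minimal sketch, assuming an equal fill fraction of the two materials and simple permittivity averaging with n ∝ √ε (illustrative assumptions, not details from the cited study):

# Effective-medium estimate for a stack of alternating eps = -1 and eps = +1 layers.
def effective_permittivities(eps1, eps2, fill=0.5):
    """Return (in-plane, out-of-plane) effective permittivities of the stack."""
    eps_par = fill * eps1 + (1 - fill) * eps2      # arithmetic mean (parallel to layers)
    inv_perp = fill / eps1 + (1 - fill) / eps2     # harmonic mean (perpendicular)
    eps_perp = float("inf") if inv_perp == 0 else 1 / inv_perp
    return eps_par, eps_perp

eps_par, eps_perp = effective_permittivities(-1.0, +1.0)
print(eps_par, eps_perp)   # 0.0 inf -> n_x = 0 in-plane, n_z = infinity out-of-plane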
Like a conventional lens, the z-direction is along the axis of the roll. The resonant frequency (ω 0 ) – close to 21.3 MHz – is determined by the construction of the roll. Damping is achieved by the inherent resistance of the layers and the lossy part of permittivity. [ 55 ]
Simply put, as the field pattern is transferred from the input to the output face of a slab, so the image information is transported across each layer. This was experimentally demonstrated. To test the two-dimensional imaging performance of the material, an antenna was constructed from a pair of anti-parallel wires in the shape of the letter M. This generated a line of magnetic flux, so providing a characteristic field pattern for imaging. It was placed horizontally, and the material, consisting of 271 Swiss rolls tuned to 21.5 MHz, was positioned on top of it. The material does indeed act as an image transfer device for the magnetic field. The shape of the antenna is faithfully reproduced in the output plane, both in the distribution of the peak intensity, and in the "valleys" that bound the M. [ 55 ]
A consistent characteristic of the very near (evanescent) field is that the electric and magnetic fields are largely decoupled. This allows for nearly independent manipulation of the electric field with the permittivity and the magnetic field with the permeability. [ 55 ]
Furthermore, this is a highly anisotropic system . Therefore, the transverse (perpendicular) components of the EM field that radiate through the material, that is, the wavevector components k x and k y , are decoupled from the longitudinal component k z . So the field pattern should be transferred from the input to the output face of a slab of material without degradation of the image information. [ 55 ]
In 2003, a group of researchers showed that optical evanescent waves would be enhanced as they passed through a silver metamaterial lens. This was referred to as a diffraction-free lens. Although a coherent , high-resolution image was neither intended nor achieved, regeneration of the evanescent field was experimentally demonstrated. [ 56 ] [ 57 ]
By 2003, it had been known for decades that evanescent waves could be enhanced by producing excited states at interface surfaces. However, the use of surface plasmons to reconstruct evanescent components was not tried until Pendry's recent proposal (see " Perfect lens " above). By studying films of varying thickness, it was noted that a rapidly growing transmission coefficient occurs under the appropriate conditions. This demonstration provided direct evidence that the foundation of superlensing is solid, and suggested the path that would enable the observation of superlensing at optical wavelengths. [ 57 ]
In 2005, a coherent, high-resolution image was produced (based on the 2003 results). A thinner slab of silver (35 nm) was better for sub-diffraction-limited imaging, yielding a resolution of one-sixth of the illumination wavelength. This type of lens was used to compensate for wave decay and reconstruct images in the near field. Prior attempts to create a working superlens had used a slab of silver that was too thick. [ 23 ] [ 46 ]
Objects were imaged as small as 40 nm across. In 2005 the imaging resolution limit for optical microscopes was about one tenth the diameter of a red blood cell . With the silver superlens this improves to a resolution of one hundredth of the diameter of a red blood cell. [ 56 ]
Conventional lenses, whether man-made or natural, create images by capturing the propagating light waves all objects emit and then bending them. The angle of the bend is determined by the index of refraction and has always been positive until the fabrication of artificial negative index materials. Objects also emit evanescent waves that carry details of the object, but are unobtainable with conventional optics. Such evanescent waves decay exponentially and thus never become part of the image resolution, an optics threshold known as the diffraction limit. Breaking this diffraction limit, and capturing evanescent waves are critical to the creation of a 100-percent perfect representation of an object. [ 23 ]
In addition, conventional optical materials suffer a diffraction limit because only the propagating components are transmitted (by the optical material) from a light source. [ 23 ] The non-propagating components, the evanescent waves, are not transmitted. [ 24 ] Moreover, lenses that improve image resolution by increasing the index of refraction are limited by the availability of high-index materials, and point-by-point subwavelength imaging with electron microscopy also has limitations compared to the potential of a working superlens. Scanning electron and atomic force microscopes are now used to capture detail down to a few nanometers. However, such microscopes create images by scanning objects point by point, which means they are typically limited to non-living samples, and image capture times can take up to several minutes. [ 23 ]
With current optical microscopes, scientists can only make out relatively large structures within a cell, such as its nucleus and mitochondria. With a superlens, optical microscopes could one day reveal the movements of individual proteins traveling along the microtubules that make up a cell's skeleton, the researchers said. Optical microscopes can capture an entire frame with a single snapshot in a fraction of a second. With superlenses this opens up nanoscale imaging to living materials, which can help biologists better understand cell structure and function in real time. [ 23 ]
Advances in magnetic coupling in the THz and infrared regimes enabled the realization of a possible metamaterial superlens. However, in the near field, the electric and magnetic responses of materials are decoupled. Therefore, for transverse magnetic (TM) waves, only the permittivity needs to be considered. Noble metals then become natural selections for superlensing because negative permittivity is easily achieved in them. [ 23 ]
By designing the thin metal slab so that the surface current oscillations (the surface plasmons) match the evanescent waves from the object, the superlens is able to substantially enhance the amplitude of the field. Superlensing results from the enhancement of evanescent waves by surface plasmons. [ 23 ] [ 56 ]
The key to the superlens is its ability to significantly enhance and recover the evanescent waves that carry information at very small scales. This enables imaging well below the diffraction limit. No lens is yet able to completely reconstitute all the evanescent waves emitted by an object, so the goal of a 100-percent perfect image will persist. However, many scientists believe that a true perfect lens is not possible because there will always be some energy absorption loss as the waves pass through any known material. In comparison, the superlens image is substantially better than the one created without the silver superlens. [ 23 ]
In February 2004, an electromagnetic radiation focusing system, based on a negative-index metamaterial plate, accomplished subwavelength imaging in the microwave domain. This showed that obtaining separated images at much less than the wavelength of light is possible. [ 58 ] Also in 2004, a silver layer was used for sub- micrometre near-field imaging. Super-high resolution was not achieved, nor was it intended: the silver layer was too thick to allow significant enhancement of evanescent field components. [ 30 ]
In early 2005, feature resolution was achieved with a different silver layer. Though an actual image was not produced, this was the intent. Dense feature resolution down to 250 nm was produced in a 50 nm thick photoresist using illumination from a mercury lamp . Using simulations ( FDTD ), the study noted that resolution improvements could be expected from imaging through silver lenses rather than from other methods of near-field imaging. [ 59 ]
Building on this prior research, super resolution was achieved at optical frequencies using a 50 nm flat silver layer. The capability of resolving an image beyond the diffraction limit, for far-field imaging , is defined here as superresolution. [ 30 ]
The image fidelity is much improved over earlier results of the previous experimental lens stack. Imaging of sub-micrometre features has been greatly improved by using thinner silver and spacer layers, and by reducing the surface roughness of the lens stack. The ability of the silver lenses to image the gratings has been used as the ultimate resolution test, as there is a concrete limit for the ability of a conventional (far field) lens to image a periodic object – in this case the image is a diffraction grating. For normal-incidence illumination the minimum spatial period that can be resolved with wavelength λ through a medium with refractive index n is λ/n. Zero contrast would therefore be expected in any (conventional) far-field image below this limit, no matter how good the imaging resist might be. [ 30 ]
Computationally, the (super)lens stack used here has a diffraction-limited resolution of 243 nm. Gratings with periods from 500 nm down to 170 nm are imaged, with the depth of the modulation in the resist reducing as the grating period reduces. All of the gratings with periods above the diffraction limit (243 nm) are well resolved. [ 30 ] The key results of this experiment are the super-imaging of the sub-diffraction-limit periods of 200 nm and 170 nm. In both cases the gratings are resolved, even though the contrast is diminished, giving experimental confirmation of Pendry's superlensing proposal. [ 30 ]
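As a sanity check on the quoted numbers, the λ/n bound from the previous paragraph reproduces the 243 nm figure if one assumes i-line illumination from the mercury lamp mentioned earlier (λ ≈ 365 nm) and an imaging-medium index of n ≈ 1.5; both values are assumptions for illustration:

# Minimum far-field resolvable grating period at normal incidence: period_min = wavelength / n.
wavelength_nm = 365.0   # assumed mercury-lamp i-line
n_medium = 1.5          # assumed refractive index of the imaging medium
print(wavelength_nm / n_medium)   # ~243 nm, matching the quoted diffraction limit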
Gradient Index (GRIN) – The larger range of material response available in metamaterials should lead to improved GRIN lens design. In particular, since the permittivity and permeability of a metamaterial can be adjusted independently, metamaterial GRIN lenses can presumably be better matched to free space. The GRIN lens is constructed by using a slab of NIM with a variable index of refraction in the y direction, perpendicular to the direction of propagation z. [ 60 ]
In 2005, a group proposed a theoretical way to overcome the near-field limitation using a new device termed a far-field superlens (FSL), which is a properly designed periodically corrugated metallic slab-based superlens. [ 61 ]
Imaging was experimentally demonstrated in the far field, taking the next step after the near-field experiments. The key element is termed a far-field superlens (FSL), which consists of a conventional superlens and a nanoscale coupler. [ 62 ]
An approach is presented for subwavelength focusing of microwaves using both a time-reversal mirror placed in the far field and a random distribution of scatterers placed in the near field of the focusing point. [ 63 ]
Once capability for near-field imaging was demonstrated, the next step was to project a near-field image into the far-field. This concept, including technique and materials, is dubbed "hyperlens". [ 64 ] [ 65 ]
In May 2012, calculations showed an ultraviolet (1200–1400 THz) hyperlens can be created using alternating layers of boron nitride and graphene . [ 66 ]
In February 2018, a mid-infrared (~5–25 μm) hyperlens was introduced, made from a variably doped indium arsenide multilayer, which offered drastically lower losses. [ 67 ]
The capability of a metamaterial-hyperlens for sub-diffraction-limited imaging is shown below.
With conventional optical lenses, the far field is too distant for evanescent waves to arrive intact, which limits the optical resolution of lenses to the order of the wavelength of light when imaging an object. The non-propagating evanescent waves carry detailed information in the form of high spatial frequencies, which would overcome this limitation. Therefore, projecting image details normally limited by diffraction into the far field requires recovery of the evanescent waves. [ 68 ]
In essence, the step leading up to this investigation and demonstration was the employment of an anisotropic metamaterial with hyperbolic dispersion. The effect was that ordinary evanescent waves propagate along the radial direction of the layered metamaterial. On a microscopic level, the large-spatial-frequency waves propagate through coupled surface plasmon excitations between the metallic layers. [ 68 ]
In 2007, just such an anisotropic metamaterial was employed as a magnifying optical hyperlens. The hyperlens consisted of a curved periodic stack of thin silver and alumina (at 35 nanometers thick) deposited on a half-cylindrical cavity, and fabricated on a quartz substrate. The radial and tangential permittivities have different signs. [ 68 ]
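The significance of the opposite permittivity signs is that the TM isofrequency relation, k_r²/ε_θ + k_θ²/ε_r = (ω/c)², becomes hyperbolic: a real radial wavevector exists for arbitrarily large tangential wavevectors, so detail that would be evanescent in an ordinary medium propagates radially outward. A minimal sketch with illustrative permittivity values (not the measured values of the silver/alumina stack):

import math

# TM dispersion in a uniaxial medium with radial/tangential permittivities:
#   k_r**2 / eps_theta + k_theta**2 / eps_r = k0**2
def k_radial(k_theta, eps_r, eps_theta, k0=1.0):
    val = eps_theta * (k0 ** 2 - k_theta ** 2 / eps_r)
    return math.sqrt(val) if val >= 0 else complex(0, math.sqrt(-val))  # imaginary -> evanescent

eps_r, eps_theta = -2.0, 1.5          # illustrative hyperbolic values (opposite signs)
for kt in (0.5, 2.0, 10.0):           # tangential wavevector in units of k0
    print(kt, k_radial(kt, eps_r, eps_theta))
# With eps_r < 0, every k_theta -- however large -- yields a real k_r, i.e. a
# propagating wave carrying sub-diffraction detail outward. With both
# permittivities positive, large k_theta would instead give an imaginary k_r.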
Upon illumination, the scattered evanescent field from the object enters the anisotropic medium and propagates along the radial direction. Combined with another effect of the metamaterial, a magnified image at the outer diffraction limit-boundary of the hyperlens occurs. Once the magnified feature is larger than (beyond) the diffraction limit, it can then be imaged with a conventional optical microscope, thus demonstrating magnification and projection of a sub-diffraction-limited image into the far field. [ 68 ]
The hyperlens magnifies the object by transforming the scattered evanescent waves into propagating waves in the anisotropic medium, projecting a high-resolution image into the far field. This type of metamaterial-based lens, paired with a conventional optical lens, is therefore able to reveal patterns too small to be discerned with an ordinary optical microscope. In one experiment, the lens was able to distinguish two 35-nanometer lines etched 150 nanometers apart. Without the metamaterial, the microscope showed only one thick line. [ 14 ]
In a control experiment, the line-pair object was imaged without the hyperlens. The line pair could not be resolved because the diffraction limit of the (optical) aperture was 260 nm. Because the hyperlens supports the propagation of a very broad spectrum of wave vectors, it can magnify arbitrary objects with sub-diffraction-limited resolution. [ 68 ]
Although this work appears to be limited by being only a cylindrical hyperlens, the next step is to design a spherical lens. That lens will exhibit three-dimensional capability. Near-field optical microscopy uses a tip to scan an object. In contrast, this optical hyperlens magnifies an image that is sub-diffraction-limited. The magnified sub-diffraction image is then projected into the far field. [ 14 ] [ 68 ]
The optical hyperlens shows a notable potential for applications, such as real-time biomolecular imaging and nanolithography. Such a lens could be used to watch cellular processes that have been impossible to see. Conversely, it could be used to project an image with extremely fine features onto a photoresist as a first step in photolithography, a process used to make computer chips. The hyperlens also has applications for DVD technology. [ 14 ] [ 68 ]
In 2010, a spherical hyperlens for two-dimensional imaging at visible frequencies was demonstrated experimentally. The spherical hyperlens was based on silver and titanium oxide in alternating layers and had strong anisotropic hyperbolic dispersion allowing super-resolution in the visible spectrum. The resolution was 160 nm in the visible spectrum. It will enable biological imaging at the cellular and DNA level, with the strong benefit of magnifying sub-diffraction resolution into the far field. [ 69 ]
In 2007, researchers demonstrated super imaging using materials that create a negative refractive index, with lensing achieved in the visible range. [ 46 ]
Continual improvements in optical microscopy are needed to keep up with the progress in nanotechnology and microbiology . Advancement in spatial resolution is key. Conventional optical microscopy is limited by a diffraction limit which is on the order of 200 nanometers (wavelength). This means that viruses , proteins, DNA molecules and many other samples are hard to observe with a regular (optical) microscope. The lens previously demonstrated with negative refractive index material, a thin planar superlens, does not provide magnification beyond the diffraction limit of conventional microscopes. Therefore, images smaller than the conventional diffraction limit will still be unavailable. [ 46 ]
Another approach to achieving super-resolution at visible wavelengths is the recently developed spherical hyperlens based on silver and titanium oxide alternating layers. It has strong anisotropic hyperbolic dispersion allowing super-resolution by converting evanescent waves into propagating waves. This method is non-fluorescence-based super-resolution imaging, which yields real-time imaging without any reconstruction of images and information. [ 69 ]
By 2008 the diffraction limit had been surpassed and lateral imaging resolutions of 20 to 50 nm had been achieved by several "super-resolution" far-field microscopy techniques, including stimulated emission depletion (STED) and its related RESOLFT (reversible saturable optically linear fluorescent transitions) microscopy; saturated structured illumination microscopy (SSIM); stochastic optical reconstruction microscopy (STORM); photoactivated localization microscopy (PALM); and other methods using similar principles. [ 70 ]
This began with a proposal by Pendry in 2003. Magnifying the image required a new design concept in which the surface of the negatively refracting lens is curved. One cylinder touches another cylinder, resulting in a curved cylindrical lens which reproduced the contents of the smaller cylinder in magnified but undistorted form outside the larger cylinder. Coordinate transformations are required to curve the original perfect lens into the cylindrical lens structure. [ 71 ]
This was followed by a 36-page conceptual and mathematical proof in 2005, that the cylindrical superlens works in the quasistatic regime . The debate over the perfect lens is discussed first. [ 72 ]
In 2007, a superlens utilizing coordinate transformation was again the subject. However, in addition to image transfer, other useful operations were discussed: translation, rotation, mirroring and inversion, as well as the superlens effect. Furthermore, magnifying elements are described that are free from geometric aberrations on both the input and output sides and that utilize free-space sourcing (rather than a waveguide). These magnifying elements also operate in the near and far field, transferring the image from the near field to the far field. [ 73 ]
The cylindrical magnifying superlens was experimentally demonstrated in 2007 by two groups, Liu et al. [ 68 ] and Smolyaninov et al. [ 46 ] [ 74 ]
Work in 2007 demonstrated that a quasi-periodic array of nanoholes in a metal screen was able to focus the optical energy of a plane wave to form subwavelength spots (hot spots). The spots lay a few tens of wavelengths from the array, on the opposite side from the incident plane wave . The quasi-periodic array of nanoholes functioned as a light concentrator. [ 75 ]
In June 2008, this was followed by the demonstrated capability of an array of quasi-crystal nanoholes in a metal screen. More than concentrating hot spots, an image of the point source is displayed a few tens of wavelengths from the array, on the other side of the array (the image plane). This type of array also exhibited a one-to-one linear displacement from the location of a point source to its respective, parallel location on the image plane: from x to x + δx. For example, other point sources were similarly displaced from x′ to x′ + δx′, from x″ to x″ + δx″, and so on. Instead of functioning as a light concentrator, this performs the function of conventional lens imaging with a one-to-one correspondence, albeit with a point source. [ 75 ]
However, resolution of more complicated structures can be achieved as constructions of multiple point sources. The fine details, and brighter image, that are normally associated with the high numerical apertures of conventional lenses can be reliably produced. Notable applications for this technology arise when conventional optics is not suitable for the task at hand. For example, this technology is better suited for X-ray imaging , or nano-optical circuits, and so forth. [ 75 ]
In 2010, a nano-wire array prototype, described as a three-dimensional (3D) metamaterial-nanolens, consisting of bulk nanowires deposited in a dielectric substrate was fabricated and tested. [ 76 ] [ 77 ]
The metamaterial nanolens was constructed of millions of nanowires, each 20 nanometers in diameter, precisely aligned in a packaged configuration. The lens is able to depict a clear, high-resolution image of nano-sized objects because it uses both normal propagating EM radiation and evanescent waves to construct the image. Super-resolution imaging was demonstrated over a distance of 6 times the wavelength (λ), in the far field, with a resolution of at least λ/4. This is a significant improvement over previous research and demonstrations of other near-field and far-field imaging, including the nanohole arrays discussed above. [ 76 ] [ 77 ]
In 2009–12, the light transmission properties of holey metal films in the metamaterial limit, where the unit length of the periodic structures is much smaller than the operating wavelength, were analyzed theoretically. [ 78 ]
Theoretically it appears possible to transport a complex electromagnetic image through a tiny subwavelength hole with diameter considerably smaller than the diameter of the image, without losing the subwavelength details. [ 79 ]
When observing the complex processes in a living cell, significant processes (changes) or details are easy to overlook. This can more easily occur when watching changes that take a long time to unfold and require high-spatial-resolution imaging. However, recent research offers a solution to scrutinize activities that occur over hours or even days inside cells, potentially solving many of the mysteries associated with molecular-scale events occurring in these tiny organisms. [ 80 ]
A joint research team, working at the National Institute of Standards and Technology (NIST) and the National Institute of Allergy and Infectious Diseases (NIAID), has discovered a method of using nanoparticles to illuminate the cellular interior to reveal these slow processes. Nanoparticles, thousands of times smaller than a cell, have a variety of applications. One type of nanoparticle called a quantum dot glows when exposed to light. These semiconductor particles can be coated with organic materials, which are tailored to be attracted to specific proteins within the part of a cell a scientist wishes to examine. [ 80 ]
Notably, quantum dots last longer than many of the organic dyes and fluorescent proteins previously used to illuminate the interiors of cells. They also have the advantage of monitoring changes in cellular processes, while most high-resolution techniques, such as electron microscopy, only provide images of cellular processes frozen at one moment. Using quantum dots, cellular processes involving the dynamic motions of proteins are observable. [ 80 ]
The research focused primarily on characterizing quantum dot properties, contrasting them with other imaging techniques. In one example, quantum dots were designed to target a specific type of human red blood cell protein that forms part of a network structure in the cell's inner membrane. When these proteins cluster together in a healthy cell, the network provides mechanical flexibility to the cell so it can squeeze through narrow capillaries and other tight spaces. But when the cell gets infected with the malaria parasite, the structure of the network protein changes. [ 80 ]
Because the clustering mechanism is not well understood, it was decided to examine it with the quantum dots. If a technique could be developed to visualize the clustering, then the progress of a malaria infection could be understood, which has several distinct developmental stages. [ 80 ]
Research efforts revealed that as the membrane proteins bunch up, the quantum dots attached to them are induced to cluster themselves and glow more brightly, permitting real time observation as the clustering of proteins progresses. More broadly, the research discovered that when quantum dots attach themselves to other nanomaterials, the dots' optical properties change in unique ways in each case. Furthermore, evidence was discovered that quantum dot optical properties are altered as the nanoscale environment changes, offering greater possibility of using quantum dots to sense the local biochemical environment inside cells. [ 80 ]
Some concerns remain over toxicity and other properties. However, the overall findings indicate that quantum dots could be a valuable tool to investigate dynamic cellular processes. [ 80 ]
The abstract from the related published research paper states (in part): Results are presented regarding the dynamic fluorescence properties of bioconjugated nanocrystals or quantum dots (QDs) in different chemical and physical environments. A variety of QD samples was prepared and compared: isolated individual QDs, QD aggregates, and QDs conjugated to other nanoscale materials...
This article incorporates public domain material from the National Institute of Standards and Technology | https://en.wikipedia.org/wiki/Superlens |
Superlubricity is a regime of relative motion in which friction vanishes or very nearly vanishes. However, the definition of "vanishing" friction level is not clear, which makes the term vague. As an ad hoc definition, a kinetic coefficient of friction less than 0.01 can be adopted. [ 1 ] This definition also requires further discussion and clarification.
Superlubricity may occur when two crystalline surfaces slide over each other in dry incommensurate contact. This was first described in the early 1980s [ 2 ] for Frenkel–Kontorova models and is called the Aubry transition. It has been extensively studied as a mathematical model, [ 3 ] in atomistic simulations [ 4 ] and in a range of experimental systems. [ 5 ] [ 6 ]
This effect, also called structural lubricity , was verified between two graphite surfaces in 2004. [ 7 ] The atoms in graphite are oriented in a hexagonal manner and form an atomic hill-and-valley landscape, which looks like an egg-crate. When the two graphite surfaces are in registry (every 60 degrees), the friction force is high. When the two surfaces are rotated out of registry, the friction is greatly reduced. This is like two egg-crates which can slide over each other more easily when they are "twisted" with respect to each other.
Observation of superlubricity in microscale graphite structures was reported in 2012, [ 8 ] by shearing a square graphite mesa a few micrometers across, and observing the self-retraction of the sheared layer. Such effects were also theoretically described [ 9 ] for a model of graphene and nickel layers. This observation, which is reproducible even under ambient conditions, shifts interest in superlubricity from a primarily academic topic, accessible only under highly idealized conditions, to one with practical implications for micro and nanomechanical devices. [ 10 ]
A state of ultralow friction can also be achieved when a sharp tip slides over a flat surface and the applied load is below a certain threshold. Such a "superlubric" threshold depends on the tip-surface interaction and the stiffness of the materials in contact, as described by the Tomlinson model . [ 11 ] The threshold can be significantly increased by exciting the sliding system at its resonance frequency , which suggests a practical way to limit wear in nanoelectromechanical systems . [ 12 ]
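The Tomlinson (Prandtl–Tomlinson) picture reduces this threshold to a single dimensionless number comparing the corrugation of the tip–surface potential with the contact stiffness. A minimal sketch, using one common convention for the model and purely illustrative numbers (not values from the cited experiments):

import math

# Prandtl-Tomlinson model: a tip pulled by a spring (stiffness k) across a
# sinusoidal surface potential V(x) = -V0 * cos(2*pi*x / a). In one common
# convention the parameter eta = 4*pi**2*V0 / (k*a**2) sets the regime:
#   eta < 1  -> smooth, "superlubric" sliding (no stick-slip instability)
#   eta > 1  -> stick-slip sliding with dissipation
def pt_parameter(V0_eV, k_N_per_m, a_nm):
    V0 = V0_eV * 1.602e-19      # corrugation amplitude in joules
    a = a_nm * 1e-9             # lattice period in metres
    return 4 * math.pi ** 2 * V0 / (k_N_per_m * a ** 2)

# Lowering the applied load lowers the corrugation V0 (illustrative numbers only):
for V0 in (0.01, 0.1):          # eV
    eta = pt_parameter(V0, k_N_per_m=2.0, a_nm=0.25)
    regime = "superlubric (smooth sliding)" if eta < 1 else "stick-slip"
    print(f"V0 = {V0} eV -> eta = {eta:.2f}: {regime}")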
Superlubricity was also observed between a gold AFM tip and a Teflon substrate, due to repulsive van der Waals forces, [ 13 ] [ unreliable source? ] and between steel surfaces separated by a hydrogen-bonded layer formed by glycerol. [ 14 ] [ unreliable source? ] Formation of a hydrogen-bonded layer was also shown to lead to superlubricity between quartz glass surfaces lubricated by a biological liquid obtained from the mucilage of Brasenia schreberi . [ 15 ] [ unreliable source? ] Other mechanisms of superlubricity may include: [ 16 ] (a) thermodynamic repulsion due to a layer of free or grafted macromolecules between the bodies, so that the entropy of the intermediate layer decreases at small distances due to stronger confinement; (b) electrical repulsion due to an external electrical voltage; (c) repulsion due to the electrical double layer; (d) repulsion due to thermal fluctuations. [ 17 ]
The similarity of the term superlubricity to terms such as superconductivity and superfluidity is misleading; other energy dissipation mechanisms can lead to a finite (normally small) friction force. Superlubricity is more analogous to phenomena such as superelasticity , in which substances such as Nitinol have very low, but nonzero, elastic moduli; supercooling , in which substances remain liquid below their normal freezing temperature; super black materials, which reflect very little light; giant magnetoresistance , in which very large but finite magnetoresistance effects are observed in alternating nonmagnetic and ferromagnetic layers; superhard materials , which are as hard as diamond or nearly so; and superlenses , which have a resolution that, while finer than the diffraction limit , is still finite.
In 2015, researchers first obtained evidence for superlubricity at microscales. [ 18 ] The experiments were supported by computational studies. The Mira supercomputer simulated up to 1.2 million atoms for dry environments and up to 10 million atoms for humid environments. [ 18 ] The researchers used LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) code to carry out reactive molecular dynamics simulations. The researchers optimized LAMMPS and its implementation of ReaxFF by adding OpenMP threading, replacing MPI point-to-point communication with MPI collectives in key algorithms, and leveraging MPI I/O. These enhancements doubled performance. [ citation needed ]
Friction is known to be a major consumer of energy; for instance, a detailed study [ 19 ] found that friction may account for one third of the energy losses in new automobile engines. Superlubricious coatings could reduce this. Potential applications include computer hard drives, wind turbine gears, and mechanical rotating seals for microelectromechanical and nanoelectromechanical systems. [ 20 ]
In astronomy , superluminal motion is the apparently faster-than-light motion seen in some radio galaxies , BL Lac objects , quasars , blazars and recently also in some galactic sources called microquasars . Bursts of energy moving out along the relativistic jets emitted from these objects can have a proper motion that appears greater than the speed of light . All of these sources are thought to contain a black hole , responsible for the ejection of mass at high velocities. Light echoes can also produce apparent superluminal motion. [ 1 ]
Superluminal motion occurs as a special case of a more general phenomenon arising from the difference between the apparent speed of distant objects moving across the sky and their actual speed as measured at the source. [ 2 ]
In tracking the movement of such objects across the sky, a calculation of their speed can be determined by a simple distance divided by time formula. If the distance of the object from the Earth is known and the angular speed of the object can be measured, then the speed can be calculated as v = D ω, where D is the distance to the object and ω its measured angular speed across the sky.
But this calculation does not yield the actual speed of the object, as it fails to account for the fact that the speed of light is finite. When measuring the movement of distant objects across the sky, there is a large time delay between what has been observed and what has occurred, due to the large distance the light from the distant object has to travel to reach us. The error in the above calculation comes from the fact that when an object has a component of velocity directed towards the Earth, as the object moves closer to the Earth that time delay becomes smaller. This means that the apparent speed as calculated above is greater than the actual speed. Correspondingly, if the object is moving away from the Earth, the above calculation underestimates the actual speed.
This effect in itself does not generally lead to superluminal motion being observed. But when the actual speed of the object is close to the speed of light, the apparent speed can be calculated as greater than the speed of light, as a result of the above effect. As the actual speed of the object approaches the speed of light, the effect is most pronounced as the component of the velocity towards the Earth increases. This means that in most cases, 'superluminal' objects are travelling almost directly towards the Earth. However it is not strictly necessary for this to be the case, and superluminal motion can still be observed in objects with appreciable velocities not directed towards the Earth. [ 3 ]
Superluminal motion is most often observed in two opposing jets emanating from the core of a star or black hole. In this case, one jet is moving away from and one towards the Earth. If Doppler shifts are observed in both sources, the velocity and the distance can be determined independently of other observations.
As early as 1983, at the "superluminal workshop" held at Jodrell Bank Observatory , referring to the seven then-known superluminal jets,
Schilizzi ... presented maps of arc-second resolution [showing the large-scale outer jets] ... which ... have revealed outer double structure in all but one ( 3C 273 ) of the known superluminal sources. An embarrassment is that the average projected size [on the sky] of the outer structure is no smaller than that of the normal radio-source population. [ 4 ]
In other words, the jets are evidently not, on average, close to the Earth's line-of-sight. (Their apparent length would appear much shorter if they were.)
In 1993, Thomson et al. suggested that the (outer) jet of the quasar 3C 273 is nearly collinear to the Earth's line-of-sight. Superluminal motion of up to ~9.6 c has been observed along the (inner) jet of this quasar. [ 5 ] [ 6 ] [ 7 ]
Superluminal motion of up to 6 c has been observed in the inner parts of the jet of M87 . To explain this in terms of the "narrow-angle" model, the jet must be no more than 19° from the Earth's line-of-sight. [ 8 ] But evidence suggests that the jet is in fact at about 43° to the Earth's line-of-sight. [ 9 ] The same group of scientists later revised that finding and argue in favour of a superluminal bulk movement in which the jet is embedded. [ 10 ]
Suggestions of turbulence and/or "wide cones" in the inner parts of the jets have been put forward to try to counter such problems, and there seems to be some evidence for this. [ 11 ]
The model identifies a difference between the information carried by the wave at its signal velocity c and the information about the wave front's apparent rate of change of position. If a light pulse is envisaged in a wave guide (glass tube) moving across an observer's field of view, the pulse can only move at c through the guide. If that pulse is also directed towards the observer, he will receive that wave information at c . If the wave guide is moved in the same direction as the pulse, the information on its position, passed to the observer as lateral emissions from the pulse, changes. He may then calculate the rate of change of position as apparently representing motion faster than c , like the edge of a shadow sweeping across a curved surface. This is a different signal, containing different information from the pulse, and does not break the second postulate of special relativity. c is strictly maintained in all local fields.
A relativistic jet coming out of the center of an active galactic nucleus is moving along AB with a velocity v , and is observed from the point O. At time t 1 {\displaystyle t_{1}} a light ray leaves the jet from point A and another ray leaves at time t 2 = t 1 + δ t {\displaystyle t_{2}=t_{1}+\delta t} from point B. An observer at O receives the rays at time t 1 ′ {\displaystyle t_{1}^{\prime }} and t 2 ′ {\displaystyle t_{2}^{\prime }} respectively. The angle ϕ {\displaystyle \phi } is small enough that the two distances marked D L {\displaystyle D_{L}} can be considered equal.
Apparent transverse velocity along C B {\displaystyle CB} , v T = ϕ D L δ t ′ = v sin θ 1 − β cos θ {\displaystyle v_{\text{T}}={\frac {\phi D_{L}}{\delta t'}}={\frac {v\sin \theta }{1-\beta \cos \theta }}}
The apparent transverse velocity is maximal for the angle satisfying cos θ = β {\displaystyle \cos \theta =\beta } ( 0 < β < 1 {\displaystyle 0<\beta <1} is used), which gives the maximum value β T max = β γ {\displaystyle \beta _{\text{T}}^{\text{max}}=\beta \gamma } , where γ {\displaystyle \gamma } is the Lorentz factor.
If γ ≫ 1 {\displaystyle \gamma \gg 1} (i.e. when velocity of jet is close to the velocity of light) then β T max > 1 {\displaystyle \beta _{\text{T}}^{\text{max}}>1} despite the fact that β < 1 {\displaystyle \beta <1} . And of course β T > 1 {\displaystyle \beta _{\text{T}}>1} means that the apparent transverse velocity along C B {\displaystyle CB} , the only velocity on the sky that can be measured, is larger than the velocity of light in vacuum, i.e. the motion is apparently superluminal.
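A quick numerical check of the formula above; the jet speed and viewing angles are illustrative choices rather than values from any particular source:

import math

def beta_transverse(beta, theta_deg):
    """Apparent transverse speed (units of c) of a blob moving at beta = v/c
    at angle theta to the line of sight: beta*sin(theta) / (1 - beta*cos(theta))."""
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1 - beta * math.cos(th))

beta = 0.995                                  # illustrative jet speed, v = 0.995c
gamma = 1 / math.sqrt(1 - beta ** 2)          # Lorentz factor, ~10 here
theta_max = math.degrees(math.acos(beta))     # angle of maximum apparent speed
print(theta_max, beta_transverse(beta, theta_max))  # ~5.7 deg, beta_T = beta*gamma ~ 9.9
print(beta_transverse(beta, 45.0))            # ~2.4: still superluminal at 45 degrees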
The apparent superluminal motion in the faint nebula surrounding Nova Persei was first observed in 1901 by Charles Dillon Perrine . [ 12 ] “Mr. Perrine’s photograph of November 7th and 8th, 1901, secured with the Crossley Reflector, led to the remarkable discovery that the masses of nebulosity were apparently in motion, with a speed perhaps several hundred times as great as hitherto observed.” [ 13 ] “Using the 36-in. telescope (Crossley), he discovered the apparent superluminal motion of the expanding light bubble around Nova Persei (1901). Thought to be a nebula, the visual appearance was actually caused by light from the nova event reflected from the surrounding interstellar medium as the light moved outward from the star. Perrine studied this phenomenon using photographic, spectroscopic, and polarization techniques.” [ 14 ]
Apparent superluminal motion was also observed in 1902 by Jacobus Kapteyn in the ejecta of the nova GK Persei , which had exploded in 1901. [ 15 ] His discovery was published in the German journal Astronomische Nachrichten , and received little attention from English-speaking astronomers until many decades later. [ 16 ] [ 17 ]
In 1966, Martin Rees pointed out that "an object moving relativistically in suitable directions may appear to a distant observer to have a transverse velocity much greater than the velocity of light". [ 18 ] In 1969 and 1970 such sources were found as very distant astronomical radio sources, such as radio galaxies and quasars, [ 19 ] [ 20 ] [ 21 ] and were called superluminal sources. The discovery was the result of a new technique called Very Long Baseline Interferometry , which allowed astronomers to set limits to the angular size of components and to determine positions to better than milli-arcseconds , and in particular to determine the change in positions on the sky, called proper motions , in a timespan of typically years. The apparent velocity is obtained by multiplying the observed proper motion by the distance, which could be up to 6 times the speed of light.
In the introduction to a workshop on superluminal radio sources, Pearson and Zensus reported
The first indications of changes in the structure of some sources were obtained by an American-Australian team in a series of transpacific VLBI observations between 1968 and 1970 (Gubbay et al. 1969). [ 19 ] Following the early experiments, they had realised the potential of the NASA tracking antennas for VLBI measurements and set up an interferometer operating between California and Australia. The change in the source visibility that they measured for 3C 279 , combined with changes in total flux density, indicated that a component first seen in 1969 had reached a diameter of about 1 milliarcsecond, implying expansion at an apparent velocity of at least twice the speed of light. Aware of Rees's model, [ 18 ] (Moffet et al. 1972) [ 22 ] concluded that their measurement presented evidence for relativistic expansion of this component. This interpretation, although by no means unique, was later confirmed, and in hindsight it seems fair to say that their experiment was the first interferometric measurement of superluminal expansion. [ 23 ]
In 1994, a galactic speed record was obtained with the discovery of a superluminal source in the Milky Way , the cosmic x-ray source GRS 1915+105 . The expansion occurred on a much shorter timescale. Several separate blobs were seen to expand in pairs within weeks by typically 0.5 arcsec . [ 24 ] Because of the analogy with quasars, this source was called a microquasar . | https://en.wikipedia.org/wiki/Superluminal_motion |
A superluminous supernova ( SLSN , plural superluminous supernovae or SLSNe ) is a type of stellar explosion with a luminosity 10 or more times higher than that of standard supernovae . [ 1 ] Like supernovae , SLSNe seem to be produced by several mechanisms, which is readily revealed by their light-curves and spectra . There are multiple models for what conditions may produce an SLSN, including core collapse in particularly massive stars , millisecond magnetars , interaction with circumstellar material (CSM model), or pair-instability supernovae .
The first confirmed superluminous supernova connected to a gamma ray burst was not found until 2003, when GRB 030329 illuminated the Leo constellation. [ 2 ] SN 2003dh represented the death of a star 25 times more massive than the Sun, with material being blasted out at over a tenth the speed of light. [ 3 ]
Stars with M ≥ 40 M ☉ are likely to produce superluminous supernovae. [ 4 ]
Discoveries of many SLSNe in the 21st century showed that not only were they more luminous by an order of magnitude than most supernovae, their remnants were also unlikely to be powered by the typical radioactive decay that is responsible for the observed energies of conventional supernovae. [ verification needed ]
SLSNe events use a separate classification scheme to distinguish them from the conventional type Ia , type Ib/Ic , and type II supernovae, [ 5 ] roughly distinguishing between the spectral signature of hydrogen-rich and hydrogen-poor events. [ 6 ]
Hydrogen-rich SLSNe are classified as Type SLSN-II, with observed radiation passing through the changing opacity of a thick expanding hydrogen envelope. Most hydrogen-poor events are classified as Type SLSN-I, with its visible radiation produced from a large expanding envelope of material powered by an unknown mechanism. A third less common group of SLSNe is also hydrogen-poor and abnormally luminous, referred to as SLSN-R, clearly powered by radioactivity from 56 Ni . [ 6 ] [ 7 ]
An increasing number of discoveries show that some SLSNe do not fit cleanly into these three classes, so further sub-classes or unique events have been described. Many or all SLSN-I show spectra without hydrogen or helium but have light curves comparable to conventional type Ic supernovae, and are now classed as SLSN-Ic. [ 8 ] PS1-10afx is an unusually red hydrogen-free SLSN with an extremely rapid rise to a near-record peak luminosity and an unusually rapid decline. [ 9 ] PS1-11ap is similar to a type Ic SLSN but has an unusually slow rise and decline. [ 8 ]
A wide variety of causes have been proposed to explain events that are an order of magnitude or more greater than standard supernovae. The collapsar and CSM (circumstellar material) models are generally accepted and a number of events are well-observed. Other models are still only tentatively accepted or remain entirely theoretical.
The collapsar model is a type of superluminous supernova that produces a gravitationally collapsed object, or black hole . The word "collapsar", short for "collapsed star ", was formerly used to refer to the end product of stellar gravitational collapse , a stellar-mass black hole . The word is now sometimes used to refer to a specific model for the collapse of a fast-rotating star. When core collapse occurs in a star with a core at least around fifteen times the Sun's mass ( M ☉ )—though chemical composition and rotational rate are also significant—the explosion energy is insufficient to expel the outer layers of the star, and it will collapse into a black hole without producing a visible supernova outburst.
A star with a core mass slightly below this level—in the range of 5–15 M ☉ —will undergo a supernova explosion, but so much of the ejected mass falls back onto the core remnant that it still collapses into a black hole. If such a star is rotating slowly, then it will produce a faint supernova, but if the star is rotating quickly enough, then the fallback to the black hole will produce relativistic jets . The energy that these jets transfer into the ejected shell renders the visible outburst substantially more luminous than a standard supernova. The jets also beam high energy particles and gamma rays directly outward and thereby produce x-ray or gamma-ray bursts; the jets can last for several seconds or longer and correspond to long-duration gamma-ray bursts, but they do not appear to explain short-duration gamma-ray bursts.
Stars with 5–15 M ☉ cores have an approximate total mass of 25–90 M ☉ , assuming the star has not undergone significant mass loss. Such a star will still have a hydrogen envelope and will explode as a Type II supernova. Faint Type II supernovae have been observed, but there are no definite candidates for a Type II SLSN (except type IIn, which are not thought to be jet supernovae). Only the very lowest metallicity population III stars will reach this stage of their life with little mass loss. Other stars, including most of those visible to us, will have had most of their outer layers blown away by their high luminosity and become Wolf–Rayet stars. Some theories propose that these will produce either Type Ib or Type Ic supernovae, but none of these events has so far been observed in nature. Many observed SLSNe are likely Type Ic. Those associated with gamma-ray bursts are almost always Type Ic, being very good candidates for having relativistic jets produced by fallback to a black hole. However, not all Type Ic SLSNe correspond to observed gamma-ray bursts; the bursts would only be visible if one of the jets were aimed towards us.
In recent years, much observational data on long-duration gamma-ray bursts have significantly increased our understanding of these events and made clear that the collapsar model produces explosions that differ only in detail from more or less ordinary supernovae and have energy ranges from approximately normal to around 100 times larger.
A good example of a collapsar SLSN is SN 1998bw , [ 10 ] which was associated with the gamma-ray burst GRB 980425 . It is classified as a type Ic supernova due to its distinctive spectral properties in the radio spectrum, indicating the presence of relativistic matter.
Almost all observed SLSNe have had spectra similar to either a type Ic or type IIn supernova. The type Ic SLSNe are thought to be produced by jets from fallback to a black hole, but type IIn SLSNe have significantly different light curves and are not associated with gamma-ray bursts. Type IIn supernovae are all embedded in a dense nebula probably expelled from the progenitor star itself, and this circumstellar material (CSM) is thought to be the cause of the extra luminosity. [ 11 ] When material expelled in an initial normal supernova explosion meets dense nebular material or dust close to the star, the shockwave converts kinetic energy efficiently into visible radiation. This effect greatly enhances these extended duration and extremely luminous supernovae, even though the initial explosive energy was the same as that of normal supernovae.
Although any supernova type could potentially produce Type IIn SLSNe, theoretical constraints on the surrounding CSM sizes and densities suggest that the CSM will almost always have been expelled by the central progenitor star itself immediately prior to the observed supernova event. Likely candidates for such stars are hypergiants or LBVs that appear to be undergoing substantial mass loss due to Eddington instability, for example SN 2005gl. [ 12 ]
Another type of suspected SLSN is a pair-instability supernova , of which SN 2006gy [ 13 ] may possibly be the first observed example. This supernova event was observed in a galaxy about 238 million light years (73 megaparsecs ) from Earth.
The theoretical basis for pair-instability collapse has been known for many decades [ 14 ] and was suggested as a dominant source of higher mass elements in the early universe as super-massive population III stars exploded. In a pair-instability supernova, the pair production effect causes a sudden pressure drop in the star's core, leading to a rapid partial collapse. Gravitational potential energy from the collapse causes runaway fusion of the core which entirely disrupts the star, leaving no remnant.
Models show that this phenomenon only happens in stars with extremely low metallicity and masses between about 130 and 260 times the Sun, making them extremely unlikely in the local universe. Although originally expected to produce SLSN explosions hundreds of times greater than a normal supernova, current models predict that they actually produce luminosities ranging from about the same as a normal core collapse supernova to perhaps 50 times brighter, although remaining bright for much longer. [ 15 ]
Models of the creation and subsequent spin-down of a magnetar yield much higher luminosities than regular supernova [ 16 ] [ 17 ] events and match the observed properties [ 18 ] [ 19 ] of at least some SLSNe. In cases where pair-instability supernova may not be a good fit for explaining a SLSN, [ 20 ] a magnetar explanation is more plausible.
There are still models for SLSN explosions produced from binary systems, white dwarf or neutron stars in unusual arrangements or undergoing mergers, and some of these are proposed to account for some observed gamma-ray bursts. | https://en.wikipedia.org/wiki/Superluminous_supernova |
Supermalloy is an alloy composed of nickel (75%), iron (20%), and molybdenum (5%). It is a high-permeability ferromagnetic alloy used in magnetic cores and magnetic shielding in electrical components, such as pulse transformers and ultra-sensitive magnetic amplifiers. It has a resistivity of 0.6 Ω ·mm 2 /m (or 6.0 × 10 −7 Ω·m), [ 1 ] an extremely high relative magnetic permeability (approximately 800 000 ), and a low coercivity . Supermalloy is used in manufacturing components for radio engineering, telephony, and telemechanics instruments.
| https://en.wikipedia.org/wiki/Supermalloy |
Supermicelle is a hierarchical micelle structure ( supramolecular assembly ) where individual components are also micelles. Supermicelles are formed via bottom-up chemical approaches, such as self-assembly of long cylindrical micelles into radial cross-, star- or dandelion -like patterns in a specially selected solvent; solid nanoparticles may be added to the solution to act as nucleation centers and form the central core of the supermicelle. The stems of the primary cylindrical micelles are composed of various block copolymers connected by strong covalent bonds ; within the supermicelle structure they are loosely held together by hydrogen bonds , electrostatic or solvophobic interactions. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Supermicelle |
In mathematics, a supermodular function is a function on a lattice that, informally, has the property of "increasing differences." Seen from the point of view of set functions , this can also be viewed as a relationship of "increasing returns", where the gain from adding an element grows with the set it is added to. In economics , supermodular functions are often used as a formal expression of complementarity in preferences among goods. Supermodular functions are studied and have applications in game theory , economics , lattice theory , combinatorial optimization , and machine learning .
Let ( X , ⪯ ) {\displaystyle (X,\preceq )} be a lattice . A real-valued function f : X → R {\displaystyle f:X\rightarrow \mathbb {R} } is called supermodular if f ( x ∨ y ) + f ( x ∧ y ) ≥ f ( x ) + f ( y ) {\displaystyle f(x\vee y)+f(x\wedge y)\geq f(x)+f(y)}
for all x , y ∈ X {\displaystyle x,y\in X} . [ 1 ]
If the inequality is strict, then f {\displaystyle f} is strictly supermodular on X {\displaystyle X} . If − f {\displaystyle -f} is (strictly) supermodular, then f {\displaystyle f} is called (strictly) submodular . A function that is both submodular and supermodular is called modular ; this corresponds to the inequality holding with equality.
We can also define supermodular functions where the underlying lattice is the vector space R n {\displaystyle \mathbb {R} ^{n}} . Then the function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is supermodular if
f ( x ↑ y ) + f ( x ↓ y ) ≥ f ( x ) + f ( y ) {\displaystyle f(x\uparrow y)+f(x\downarrow y)\geq f(x)+f(y)}
for all x {\displaystyle x} , y ∈ R n {\displaystyle y\in \mathbb {R} ^{n}} , where x ↑ y {\displaystyle x\uparrow y} denotes the componentwise maximum and x ↓ y {\displaystyle x\downarrow y} the componentwise minimum of x {\displaystyle x} and y {\displaystyle y} .
If f is twice continuously differentiable, then supermodularity is equivalent to the condition [ 2 ] ∂ 2 f ∂ x i ∂ x j ≥ 0 {\displaystyle {\frac {\partial ^{2}f}{\partial x_{i}\,\partial x_{j}}}\geq 0} for all i ≠ j {\displaystyle i\neq j} .
The concept of supermodularity is used in the social sciences to analyze how one agent's decision affects the incentives of others.
Consider a symmetric game with a smooth payoff function f {\displaystyle \,f} defined over actions z i {\displaystyle \,z_{i}} of two or more players i ∈ 1 , 2 , … , N {\displaystyle i\in {1,2,\dots ,N}} . Suppose the action space is continuous; for simplicity, suppose each action is chosen from an interval: z i ∈ [ a , b ] {\displaystyle z_{i}\in [a,b]} . In this context, supermodularity of f {\displaystyle \,f} implies that an increase in player i {\displaystyle \,i} 's choice z i {\displaystyle \,z_{i}} increases the marginal payoff d f / d z j {\displaystyle df/dz_{j}} of action z j {\displaystyle \,z_{j}} for all other players j {\displaystyle \,j} . That is, if any player i {\displaystyle \,i} chooses a higher z i {\displaystyle \,z_{i}} , all other players j {\displaystyle \,j} have an incentive to raise their choices z j {\displaystyle \,z_{j}} too. Following the terminology of Bulow, Geanakoplos , and Klemperer (1985), economists call this situation strategic complementarity , because players' strategies are complements to each other. [ 3 ] This is the basic property underlying examples of multiple equilibria in coordination games . [ 4 ]
The opposite case of supermodularity of f {\displaystyle \,f} , called submodularity, corresponds to the situation of strategic substitutability . An increase in z i {\displaystyle \,z_{i}} lowers the marginal payoff to all other player's choices z j {\displaystyle \,z_{j}} , so strategies are substitutes. That is, if i {\displaystyle \,i} chooses a higher z i {\displaystyle \,z_{i}} , other players have an incentive to pick a lower z j {\displaystyle \,z_{j}} .
For example, Bulow et al. consider the interactions of many imperfectly competitive firms. When an increase in output by one firm raises the marginal revenues of the other firms, production decisions are strategic complements. When an increase in output by one firm lowers the marginal revenues of the other firms, production decisions are strategic substitutes.
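As a concrete version of this sign test, the sketch below uses a Cournot duopoly with linear inverse demand – a hypothetical functional form chosen for illustration, not taken from Bulow et al. – and reads off strategic substitutability from the cross-partial derivative of one firm's payoff:

import sympy as sp

# Cournot duopoly: inverse demand P = a - b*(z1 + z2), unit cost c.
# The sign of d^2 f1 / dz1 dz2 classifies the strategic interaction.
z1, z2, a, b, c = sp.symbols("z1 z2 a b c", positive=True)
profit1 = z1 * (a - b * (z1 + z2)) - c * z1   # firm 1's payoff

cross = sp.diff(profit1, z1, z2)
print(cross)   # -b  (negative: submodular payoff -> strategic substitutes)

If the interaction term entered with the opposite sign (for instance, a network effect under which the price rises with total usage), the cross-partial would be positive and the actions would instead be strategic complements.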
A supermodular utility function is often related to complementary goods . However, this view is disputed. [ 5 ]
Supermodularity can also be defined for set functions , which are functions defined over subsets of a larger set. Many properties of submodular set functions can be rephrased to apply to supermodular set functions.
Intuitively, a supermodular function over a set of subsets demonstrates "increasing returns": if each subset is assigned a real number that corresponds to its value, then the gain from adding a new element to a set is at least as large as the gain from adding the same element to any of its subsets. In other words, as a set grows, the marginal value of each additional element does not decrease.
Let S {\displaystyle S} be a finite set. A set function f : 2 S → R {\displaystyle f:2^{S}\to \mathbb {R} } is supermodular if it satisfies the following (equivalent) conditions: [ 6 ]
f ( X ) + f ( Y ) ≤ f ( X ∪ Y ) + f ( X ∩ Y ) {\displaystyle f(X)+f(Y)\leq f(X\cup Y)+f(X\cap Y)} for all X , Y ⊆ S {\displaystyle X,Y\subseteq S} ;
f ( X ∪ { x } ) − f ( X ) ≤ f ( Y ∪ { x } ) − f ( Y ) {\displaystyle f(X\cup \{x\})-f(X)\leq f(Y\cup \{x\})-f(Y)} for all X ⊆ Y ⊆ S {\displaystyle X\subseteq Y\subseteq S} and x ∈ S ∖ Y {\displaystyle x\in S\setminus Y} .
A set function f {\displaystyle f} is submodular if − f {\displaystyle -f} is supermodular, and modular if it is both supermodular and submodular.
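As a brute-force sanity check, the first condition above can be verified directly over all pairs of subsets of a small ground set. A minimal sketch (illustrative helper names; exponential in |S|, so only suitable for tiny examples):

```python
from itertools import combinations

def subsets(ground):
    items = list(ground)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_supermodular_set_function(f, ground):
    """Test f(S | T) + f(S & T) >= f(S) + f(T) for all subsets S, T."""
    for S in subsets(ground):
        for T in subsets(ground):
            if f(S | T) + f(S & T) < f(S) + f(T) - 1e-12:
                return False
    return True

print(is_supermodular_set_function(lambda S: len(S) ** 2, {1, 2, 3}))    # True
print(is_supermodular_set_function(lambda S: len(S) ** 0.5, {1, 2, 3}))  # False
```

f(S) = |S|² passes because the marginal value of an element grows with the set, whereas f(S) = √|S| has diminishing returns and fails.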
There are specialized techniques for optimizing submodular functions. Theory and enumeration algorithms for finding local and global maxima (minima) of submodular (supermodular) functions can be found in "Maximization of submodular functions: Theory and enumeration algorithms", B. Goldengorin. [ 7 ] | https://en.wikipedia.org/wiki/Supermodular_function |
Supramolecular chemistry refers to the branch of chemistry concerning chemical systems composed of a discrete number of molecules . The strength of the forces responsible for spatial organization of the system ranges from weak intermolecular forces , electrostatic charge , or hydrogen bonding to strong covalent bonding , provided that the electronic coupling strength remains small relative to the energy parameters of the component. [ 1 ] [ 2 ] [ page needed ] While traditional chemistry concentrates on the covalent bond, supramolecular chemistry examines the weaker and reversible non-covalent interactions between molecules. [ 3 ] These forces include hydrogen bonding, metal coordination , hydrophobic forces , van der Waals forces , pi–pi interactions and electrostatic effects. [ 4 ] [ 5 ]
Important concepts advanced by supramolecular chemistry include molecular self-assembly , molecular folding , molecular recognition , host–guest chemistry , mechanically-interlocked molecular architectures , and dynamic covalent chemistry . [ 6 ] The study of non-covalent interactions is crucial to understanding many biological processes that rely on these forces for structure and function. Biological systems are often the inspiration for supramolecular research.
The existence of intermolecular forces was first postulated by Johannes Diderik van der Waals in 1873. However, Nobel laureate Hermann Emil Fischer developed supramolecular chemistry's philosophical roots. In 1894, [ 16 ] Fischer suggested that enzyme–substrate interactions take the form of a "lock and key", the fundamental principles of molecular recognition and host–guest chemistry. In the early twentieth century non-covalent bonds were understood in gradually more detail, with the hydrogen bond being described by Latimer and Rodebush in 1920.
With the deeper understanding of the non-covalent interactions, for example, the clear elucidation of DNA structure, chemists started to emphasize the importance of non-covalent interactions. [ 17 ] In 1967, Charles J. Pedersen discovered crown ethers, which are ring-like structures capable of chelating certain metal ions. Then, in 1969, Jean-Marie Lehn discovered a class of molecules similar to crown ethers, called cryptands. After that, Donald J. Cram synthesized many variations to crown ethers, on top of separate molecules capable of selective interaction with certain chemicals. The three scientists were awarded the Nobel Prize in Chemistry in 1987 for "development and use of molecules with structure-specific interactions of high selectivity”. [ 18 ] In 2016, Bernard L. Feringa , Sir J. Fraser Stoddart , and Jean-Pierre Sauvage were awarded the Nobel Prize in Chemistry, "for the design and synthesis of molecular machines ". [ 19 ]
The term supermolecule (or supramolecule ) was introduced by Karl Lothar Wolf et al. ( Übermoleküle ) in 1937 to describe hydrogen-bonded acetic acid dimers . [ 20 ] [ 21 ] The term supermolecule is also used in biochemistry to describe complexes of biomolecules , such as peptides and oligonucleotides composed of multiple strands. [ 22 ]
Eventually, chemists applied these concepts to synthetic systems. One breakthrough came in the 1960s with the synthesis of the crown ethers by Charles J. Pedersen . Following this work, other researchers such as Donald J. Cram , Jean-Marie Lehn and Fritz Vögtle reported a variety of three-dimensional receptors, and throughout the 1980s research in the area gathered pace rapidly, with concepts such as mechanically interlocked molecular architectures emerging.
The influence of supramolecular chemistry was established by the 1987 Nobel Prize for Chemistry which was awarded to Donald J. Cram, Jean-Marie Lehn, and Charles J. Pedersen in recognition of their work in this area. [ 23 ] The development of selective "host–guest" complexes in particular, in which a host molecule recognizes and selectively binds a certain guest, was cited as an important contribution.
Molecular self-assembly is the construction of systems without guidance or management from an outside source (other than to provide a suitable environment). The molecules are directed to assemble through non-covalent interactions. Self-assembly may be subdivided into intermolecular self-assembly (to form a supramolecular assembly ), and intramolecular self-assembly (or folding as demonstrated by foldamers and polypeptides). Molecular self-assembly also allows the construction of larger structures such as micelles , membranes , vesicles , liquid crystals , and is important to crystal engineering . [ 24 ]
Molecular recognition is the specific binding of a guest molecule to a complementary host molecule to form a host–guest complex. Often, the definition of which species is the "host" and which is the "guest" is arbitrary. The molecules are able to identify each other using non-covalent interactions. Key applications of this field are the construction of molecular sensors and catalysis . [ 25 ] [ 26 ] [ 27 ] [ 28 ]
Molecular recognition and self-assembly may be used with reactive species in order to pre-organize a system for a chemical reaction (to form one or more covalent bonds). It may be considered a special case of supramolecular catalysis . Non-covalent bonds between the reactants and a "template" hold the reactive sites of the reactants close together, facilitating the desired chemistry. This technique is particularly useful for situations where the desired reaction conformation is thermodynamically or kinetically unlikely, such as in the preparation of large macrocycles. This pre-organization also serves purposes such as minimizing side reactions, lowering the activation energy of the reaction, and producing desired stereochemistry . After the reaction has taken place, the template may remain in place, be forcibly removed, or may be "automatically" decomplexed on account of the different recognition properties of the reaction product. The template may be as simple as a single metal ion or may be extremely complex. [ citation needed ]
Mechanically interlocked molecular architectures consist of molecules that are linked only as a consequence of their topology. Some non-covalent interactions may exist between the different components (often those that were used in the construction of the system), but covalent bonds do not. Supramolecular chemistry, and template-directed synthesis in particular, is key to the efficient synthesis of the compounds. Examples of mechanically interlocked molecular architectures include catenanes , rotaxanes , molecular knots , molecular Borromean rings , [ 29 ] 2D [ c 2]daisy chain polymer [ 30 ] and ravels. [ 31 ]
In dynamic covalent chemistry covalent bonds are broken and formed in a reversible reaction under thermodynamic control. While covalent bonds are key to the process, the system is directed by non-covalent forces to form the lowest energy structures. [ 32 ]
Many synthetic supramolecular systems are designed to copy functions of biological systems. These biomimetic architectures can be used to learn about both the biological model and the synthetic implementation. Examples include photoelectrochemical systems, catalytic systems, protein design and self-replication . [ 33 ]
Molecular imprinting describes a process by which a host is constructed from small molecules using a suitable molecular species as a template. After construction, the template is removed leaving only the host. The template for host construction may be subtly different from the guest that the finished host binds to. In its simplest form, imprinting uses only steric interactions, but more complex systems also incorporate hydrogen bonding and other interactions to improve binding strength and specificity. [ 34 ]
Molecular machines are molecules or molecular assemblies that can perform functions such as linear or rotational movement, switching, and entrapment. These devices exist at the boundary between supramolecular chemistry and nanotechnology , and prototypes have been demonstrated using supramolecular concepts. [ 35 ] Jean-Pierre Sauvage , Sir J. Fraser Stoddart and Bernard L. Feringa shared the 2016 Nobel Prize in Chemistry for the 'design and synthesis of molecular machines'. [ 36 ]
Supramolecular systems are rarely designed from first principles. Rather, chemists have a range of well-studied structural and functional building blocks that they are able to use to build up larger functional architectures. Many of these exist as whole families of similar units, from which the analog with the exact desired properties can be chosen.
Macrocycles are very useful in supramolecular chemistry, as they provide whole cavities that can completely surround guest molecules and may be chemically modified to fine-tune their properties.
Many supramolecular systems require their components to have suitable spacing and conformations relative to each other, and therefore easily employed structural units are required. [ 39 ]
Supramolecular chemistry has found many applications, [ 41 ] in particular molecular self-assembly processes have been applied to the development of new materials. Large structures can be readily accessed using bottom-up synthesis as they are composed of small molecules requiring fewer steps to synthesize. Thus most of the bottom-up approaches to nanotechnology are based on supramolecular chemistry. [ 42 ] Many smart materials [ 43 ] are based on molecular recognition. [ 44 ]
A major application of supramolecular chemistry is the design and understanding of catalysts and catalysis. Non-covalent interactions influence the binding of reactants. [ 45 ]
Design based on supramolecular chemistry has led to numerous applications in the creation of functional biomaterials and therapeutics. [ 46 ] Supramolecular biomaterials afford a number of modular and generalizable platforms with tunable mechanical, chemical and biological properties. These include systems based on supramolecular assembly of peptides, host–guest macrocycles, high-affinity hydrogen bonding, and metal–ligand interactions.
A supramolecular approach has been used extensively to create artificial ion channels for the transport of sodium and potassium ions into and out of cells. [ 47 ]
Supramolecular chemistry is also important to the development of new pharmaceutical therapies by understanding the interactions at a drug binding site. The area of drug delivery has also made critical advances as a result of supramolecular chemistry providing encapsulation and targeted release mechanisms. [ 48 ] In addition, supramolecular systems have been designed to disrupt protein–protein interactions that are important to cellular function. [ 49 ]
Supramolecular chemistry has been used to demonstrate computation functions on a molecular scale. In many cases, photonic or chemical signals have been used in these components, but electrical interfacing of these units has also been shown by supramolecular signal transduction devices. Data storage has been accomplished by the use of molecular switches with photochromic and photoisomerizable units, by electrochromic and redox -switchable units, and even by molecular motion. Synthetic molecular logic gates have been demonstrated on a conceptual level. Even full-scale computations have been achieved by semi-synthetic DNA computers . | https://en.wikipedia.org/wiki/Supermolecule |
A supermoon is a full moon or a new moon that nearly coincides with perigee —the closest that the Moon comes to the Earth in its orbit —resulting in a slightly larger-than-usual apparent size of the lunar disk as viewed from Earth. [ 1 ] The technical name is a perigee syzygy (of the Earth–Moon–Sun system) or a full (or new ) Moon around perigee . [ a ] Because the term supermoon is astrological in origin, it has no precise astronomical definition. [ 2 ] [ contradictory ]
The association of the Moon with both oceanic and crustal tides has led to claims that the supermoon phenomenon may be associated with increased risk of events like earthquakes and volcanic eruptions , but no such link has been found. [ 3 ]
The opposite phenomenon, an apogee syzygy or a full (or new ) Moon around apogee , has been called a micromoon . [ 4 ]
The name supermoon was coined by astrologer Richard Nolle in 1979 in Dell Horoscope magazine, where he arbitrarily defined it as:
... a new or full moon which occurs with the Moon at or near (within 90% of) its closest approach to Earth in a given orbit ( perigee ). In short, Earth, Moon and Sun are all in a line, with Moon in its nearest approach to Earth.
He came up with the name while reading Strategic Role Of Perigean Spring Tides in Nautical History and Coastal Flooding published in 1976 by Fergus Wood, a hydrologist with NOAA . [ 6 ] [ 7 ] Nolle explained in 2011 that he based calculations on 90% of the difference in lunar apsis extremes for the solar year . In other words, a full or new moon is considered a supermoon if l d s ≤ l d p + 0.1 ∗ ( l d a − l d p ) {\displaystyle ld_{s}\leq ld_{p}+0.1*(ld_{a}-ld_{p})} where l d s {\displaystyle ld_{s}} is the lunar distance at syzygy , l d a {\displaystyle ld_{a}} is the lunar distance at the greatest apogee of the year, and l d p {\displaystyle ld_{p}} is the lunar distance at the smallest perigee of the year. [ 8 ] [ 9 ]
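Nolle's 90% cutoff is simple to apply once the year's extreme lunar distances are known. A small sketch (the distances below are illustrative round numbers near the historical extremes, not ephemeris values for any particular year):

```python
def is_supermoon(ld_syzygy, ld_min_perigee, ld_max_apogee):
    """Nolle's criterion: the syzygy falls within the closest 10% of the
    year's range of lunar distances (all distances in km)."""
    cutoff = ld_min_perigee + 0.1 * (ld_max_apogee - ld_min_perigee)
    return ld_syzygy <= cutoff

print(is_supermoon(357_000, 356_500, 406_700))  # True  (cutoff ~361,520 km)
print(is_supermoon(370_000, 356_500, 406_700))  # False
```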
In practice, there is no official or even consistent definition of how near perigee the full Moon must occur to receive the supermoon label, and new moons rarely receive a supermoon label. Different sources give different definitions. [ 10 ] [ 11 ]
The term perigee-syzygy or perigee full/new moon is preferred in the scientific community. [ 12 ] Perigee is the point at which the Moon is closest in its orbit to the Earth, and syzygy is when the Earth, the Moon and the Sun are aligned, which happens at every full or new moon . Astrophysicist Fred Espenak uses Nolle's definition but prefers the label of full Moon at perigee , using the apogee and perigee nearest in time rather than the greatest and least of the year. [ 13 ] Wood used the definition of a full or new moon occurring within 24 hours of perigee and also used the label perigee-syzygy . [ 7 ]
Wood also coined the less-used term proxigee , for cases where perigee and the full or new moon are separated by 10 hours or less. [ 7 ] In 2000, Nolle added the concept of an extreme supermoon , describing it as any new or full moon that is at "100% or greater of the mean perigee". [ 14 ]
Of the possible 12 or 13 full (or new) moons each year, usually three or four may be classified as supermoons, as commonly defined.
The most recent full supermoon occurred on November 15, 2024, and the next one will be on October 7, 2025. [ 13 ]
The supermoon of November 14, 2016, was the closest full occurrence since January 26, 1948, and will not be surpassed until November 25, 2034. [ 15 ]
The closest full supermoon of the 21st century will occur on December 6, 2052. [ 16 ]
The oscillating nature of the distance to the full or new moon is due to the difference between the synodic and anomalistic months . [ 13 ] The period of this oscillation is about 14 synodic months, which is close to 15 anomalistic months. Thus every 14 lunations there is a full moon nearest to perigee.
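This ~14-lunation period is the beat period between the two month lengths and can be checked from the mean values of the synodic and anomalistic months (a quick arithmetic sketch):

```python
synodic = 29.530589      # days, mean synodic month (full moon to full moon)
anomalistic = 27.554550  # days, mean anomalistic month (perigee to perigee)

beat = 1 / (1 / anomalistic - 1 / synodic)
print(beat)                # ~411.8 days between perigee full moons
print(beat / synodic)      # ~13.9 synodic months
print(beat / anomalistic)  # ~14.9 anomalistic months
```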
Occasionally, a supermoon coincides with a total lunar eclipse . The most recent occurrence of this by any definition was in May 2022 , and the next occurrence will be in October 2032 . [ 13 ]
In the Islamic calendar , the occurrence of full supermoons follows a seven-year cycle. In the first year, the full moon is near perigee in month 1 or 2, the next year in month 3 or 4, and so on. In the seventh year of the cycle the full moons are never very near to perigee. Approximately every 20 years the occurrences move to one month earlier. At present such a transition is occurring, so full supermoons occur twice in succession. For example in Hijri year 1446, they occur both in month 3 ( Rabīʿ al-ʾAwwal , on September 18, 2024) and in month 4 ( Rabīʿ ath-Thānī , on October 17, 2024).
A full moon at perigee appears roughly 14% larger in diameter than at apogee. [ 17 ] Many observers insist that the Moon looks bigger to them. This is likely due to observations shortly after sunset when the Moon appears near the horizon and the Moon illusion is at its most apparent. [ 18 ]
While the Moon's surface luminance remains the same, because it is closer to the Earth the illuminance is about 30% brighter than at its farthest point, or apogee. This is due to the inverse-square law of light, which changes the amount of light received on Earth in inverse proportion to the square of the distance from the Moon. The perceived surface brightness of the lunar disk is unchanged, however; the extra light comes entirely from the disk's larger apparent area, since the change in apparent size is exactly proportional to the change in the amount of light received. [ 19 ] A supermoon directly overhead could provide up to 0.36 lux . [ 20 ]
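Both figures follow directly from the ratio of the extreme distances. A quick sketch using approximate extreme perigee and apogee distances (illustrative values, not from the article):

```python
perigee_km = 356_500  # approximate extreme perigee distance
apogee_km = 406_700   # approximate extreme apogee distance

size_ratio = apogee_km / perigee_km  # apparent diameter scales as 1/distance
illum_ratio = size_ratio ** 2        # illuminance scales as 1/distance^2
print(f"{(size_ratio - 1):.0%} larger in apparent diameter")  # ~14%
print(f"{(illum_ratio - 1):.0%} brighter (illuminance)")      # ~30%
```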
Claims that supermoons can cause natural disasters, and the claim of Nolle that supermoons cause "geophysical stress", have been refuted by scientists. [ 2 ] [ 21 ] [ 22 ] [ 23 ]
Despite the lack of scientific evidence, there has been media speculation that natural disasters, such as the 2011 Tōhoku earthquake and tsunami and the 2004 Indian Ocean earthquake and tsunami , are causally linked with the 1–2-week period surrounding a supermoon. [ 24 ] A large, 7.5 magnitude earthquake centred 15 km north-east of Culverden, New Zealand at 00:03 NZDT on November 14, 2016, also coincided with a supermoon. [ 25 ] [ 26 ] The Tehran earthquake of May 8, 2020, also coincided with a supermoon.
Scientists have confirmed that the combined effect of the Sun and Moon on the Earth's oceans, the tide , [ 27 ] is greatest when the Moon is either new or full, [ 28 ] and that during lunar perigee, the tidal force is somewhat stronger, [ 29 ] resulting in perigean spring tides . However, even at its most powerful, this force is still relatively weak, [ 30 ] causing tidal differences of inches at most. [ 31 ] [ b ]
Total lunar eclipses which fall on supermoon and micromoon days are relatively rare. In the 21st century, there are 87 total lunar eclipses, of which 28 are supermoons and 6 are micromoons. Almost all total lunar eclipses in Lunar Saros 129 are micromoon eclipses. An example of a supermoon lunar eclipse is the September 2015 lunar eclipse .
The Super Blood Moon is an astronomical event that combines two phenomena: a supermoon and a total lunar eclipse , resulting in a larger, brighter, and reddish-colored Moon.
A total lunar eclipse takes place when the Earth aligns between the Sun and the Moon, causing Earth’s shadow to fall on the Moon. As the shadow covers the Moon, sunlight passing through Earth's atmosphere scatters, filtering out most blue light and casting a reddish hue on the Moon. This phenomenon is often called a blood moon because of its striking red or orange color.
When these two events coincide, the Moon appears both larger and redder than usual, leading to the term Super Blood Moon. This unique alignment creates a visually impressive and rare sight that has inspired folklore and intrigue for centuries.
Super Blood Moons are relatively infrequent, occurring about once every few years, making them a notable event for astronomers and skywatchers alike.
Annular solar eclipses occur when the Moon's apparent diameter is smaller than the Sun's. Almost all annular solar eclipses between 1880 and 2060 in Solar Saros 144 and almost all annular solar eclipses between 1940 and 2120 in Solar Saros 128 are micromoon annular solar eclipses. [ 33 ] | https://en.wikipedia.org/wiki/Supermoon |
In mathematics , the supernatural numbers , sometimes called generalized natural numbers or Steinitz numbers , are a generalization of the natural numbers . They were used by Ernst Steinitz [ 1 ] : 249–251 in 1910 as a part of his work on field theory .
A supernatural number ω {\displaystyle \omega } is a formal product : ω = ∏ p p n p {\displaystyle \omega =\prod _{p}p^{n_{p}}}
where p {\displaystyle p} runs over all prime numbers , and each n p {\displaystyle n_{p}} is zero, a natural number or infinity . Sometimes v p ( ω ) {\displaystyle v_{p}(\omega )} is used instead of n p {\displaystyle n_{p}} . If no n p = ∞ {\displaystyle n_{p}=\infty } and there are only a finite number of non-zero n p {\displaystyle n_{p}} then we recover the positive integers. Slightly less intuitively, if all n p {\displaystyle n_{p}} are ∞ {\displaystyle \infty } , we get zero. [ citation needed ] Supernatural numbers extend beyond natural numbers by allowing the possibility of infinitely many prime factors, and by allowing any given prime to divide ω {\displaystyle \omega } "infinitely often," by taking that prime's corresponding exponent to be the symbol ∞ {\displaystyle \infty } .
There is no natural way to add supernatural numbers, but they can be multiplied, with ∏ p p n p ⋅ ∏ p p m p = ∏ p p n p + m p {\displaystyle \prod _{p}p^{n_{p}}\cdot \prod _{p}p^{m_{p}}=\prod _{p}p^{n_{p}+m_{p}}} . Similarly, the notion of divisibility extends to the supernaturals with ω 1 ∣ ω 2 {\displaystyle \omega _{1}\mid \omega _{2}} if v p ( ω 1 ) ≤ v p ( ω 2 ) {\displaystyle v_{p}(\omega _{1})\leq v_{p}(\omega _{2})} for all p {\displaystyle p} . The notion of the least common multiple and greatest common divisor can also be generalized for supernatural numbers, by defining lcm ⁡ ( { ω i } ) = ∏ p p sup i v p ( ω i ) {\displaystyle \operatorname {lcm} (\{\omega _{i}\})=\prod _{p}p^{\sup _{i}v_{p}(\omega _{i})}}
and gcd ( { ω i } ) = ∏ p p inf i v p ( ω i ) {\displaystyle \gcd(\{\omega _{i}\})=\prod _{p}p^{\inf _{i}v_{p}(\omega _{i})}}
With these definitions, the gcd or lcm of infinitely many natural numbers (or supernatural numbers) is a supernatural number.
We can also extend the usual p {\displaystyle p} -adic order functions to supernatural numbers by defining v p ( ω ) = n p {\displaystyle v_{p}(\omega )=n_{p}} for each p {\displaystyle p} .
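These definitions translate directly into code if a supernatural number is represented as a mapping from primes to exponents, with math.inf standing for ∞. A minimal sketch (the representation and helper names are my own, not a standard library):

```python
from math import inf

# A supernatural number as a dict {prime: exponent}, exponent an int >= 1 or inf;
# primes absent from the dict implicitly have exponent 0.
def v(omega, p):
    return omega.get(p, 0)

def mul(a, b):
    return {p: v(a, p) + v(b, p) for p in set(a) | set(b)}

def divides(a, b):
    return all(v(a, p) <= v(b, p) for p in set(a) | set(b))

def lcm(a, b):
    return {p: max(v(a, p), v(b, p)) for p in set(a) | set(b)}

def gcd(a, b):
    g = {p: min(v(a, p), v(b, p)) for p in set(a) & set(b)}
    return {p: e for p, e in g.items() if e > 0}

a = {2: 3, 5: inf}            # 2^3 * 5^inf
b = {2: 1, 3: 2}              # 2 * 3^2 = 18
print(mul(a, b))              # {2: 4, 3: 2, 5: inf}  (key order may vary)
print(divides(b, mul(a, b)))  # True
print(lcm(a, b), gcd(a, b))   # {2: 3, 3: 2, 5: inf} {2: 1}
```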
Supernatural numbers are used to define orders and indices of profinite groups and subgroups, in which case many of the theorems from finite group theory carry over exactly. They are used to encode the algebraic extensions of a finite field . [ 2 ]
Supernatural numbers also arise in the classification of uniformly hyperfinite algebras . | https://en.wikipedia.org/wiki/Supernatural_number |
A supernetwork , or supernet , is an Internet Protocol (IP) network that is formed by aggregation of multiple networks (or subnets ) into a larger network. The new routing prefix for the aggregate network represents the constituent networks in a single routing table entry. The process of forming a supernet is called supernetting , prefix aggregation , route aggregation , or route summarization .
Supernetting within the Internet serves as a strategy to avoid fragmentation of the IP address space by using a hierarchical allocation system that delegates control of segments of address space to regional Internet registries . [ 1 ] This method facilitates regional route aggregation.
The benefits of supernetting are efficiencies gained in routers in terms of memory storage of route information and processing overhead when matching routes. Supernetting, however, can introduce interoperability issues and other risks. [ 2 ]
In IP networking terminology, a supernet is a block of contiguous subnetworks addressed as a single subnet from the perspective of the larger network. Supernets are always larger than their component networks. Supernetting is the process of aggregating routes to multiple smaller networks, thus saving storage space in the routing table, simplifying routing decisions and reducing route advertisements to neighboring gateways. Supernetting has helped address the increasing size of routing tables as the Internet has expanded.
Supernetting in large, complex networks can isolate topology changes from other routers. This can improve the stability of the network by limiting the propagation of routing changes in the event of a network link failure. If a router only advertises a summary route to the next router, then it does not need to advertise any changes to specific subnets within the summarized range. This can significantly reduce any unnecessary routing updates following a topology change. Hence, it increases the speed of convergence resulting in a more stable environment.
Supernetting requires the use of routing protocols that support Classless Inter-Domain Routing (CIDR). Interior Gateway Routing Protocol , Exterior Gateway Protocol and version 1 of the Routing Information Protocol (RIPv1) assume classful addressing , and therefore cannot transmit the subnet mask information required for supernetting.
Enhanced Interior Gateway Routing Protocol (EIGRP) supports CIDR. By default, EIGRP summarizes the routes within the routing table and forwards these summarized routes to its peers. Other routing protocols with CIDR support include RIPv2, Open Shortest Path First , IS-IS and Border Gateway Protocol .
A company that operates 150 accounting services in each of 50 districts has a router in each office connected with a Frame Relay link to its corporate headquarters. Without supernetting, the routing table on any given router might have to account for 150 routers in each of the 50 districts, or 7500 different networks. However, if a hierarchical addressing system is implemented with supernetting, then each district has a centralized site as an interconnection point. Each route is summarized before being advertised to other districts. Each router now only recognizes its own subnet and the other 49 summarized routes.
The determination of the summary route on a router involves the recognition of the number of highest-order bits that match all addresses. The summary route is calculated as follows. A router has the following networks in its routing table:
192.168.98.0
192.168.99.0
192.168.100.0
192.168.101.0
192.168.102.0
192.168.105.0
Firstly, the addresses are converted to binary format and aligned in a list:
192.168.98.0 = 11000000.10101000.0110 0010.00000000
192.168.99.0 = 11000000.10101000.0110 0011.00000000
192.168.100.0 = 11000000.10101000.0110 0100.00000000
192.168.101.0 = 11000000.10101000.0110 0101.00000000
192.168.102.0 = 11000000.10101000.0110 0110.00000000
192.168.105.0 = 11000000.10101000.0110 1001.00000000
Secondly, the bit at which the common pattern of digits ends is located: the first 20 bits (everything before the gap shown above) are identical in all six addresses. Lastly, the number of common bits is counted. The summary route is found by setting the remaining bits to zero, and it is followed by a slash and then the number of common bits.
The summarized route is 192.168.96.0/20. The subnet mask is 255.255.240.0. This summarized route also contains networks that were not in the summarized group, namely, 192.168.96.0, 192.168.97.0, 192.168.103.0, 192.168.104.0, 192.168.106.0, 192.168.107.0, 192.168.108.0, 192.168.109.0, 192.168.110.0, and 192.168.111.0. It must be assured that the missing networks do not exist outside of this route.
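The same computation can be automated with the Python standard library's ipaddress module; the helper below is my own sketch of the common-prefix search, not a library routine:

```python
import ipaddress

def summary_route(networks):
    """Longest common prefix covering all the given networks (IPv4 assumed)."""
    addrs = [int(ipaddress.ip_network(n).network_address) for n in networks]
    prefix = 32
    # Shorten the prefix until all addresses agree on their leading bits.
    while prefix > 0 and len({a >> (32 - prefix) for a in addrs}) > 1:
        prefix -= 1
    base = (addrs[0] >> (32 - prefix)) << (32 - prefix)
    return ipaddress.ip_network((base, prefix))

nets = ["192.168.98.0/24", "192.168.99.0/24", "192.168.100.0/24",
        "192.168.101.0/24", "192.168.102.0/24", "192.168.105.0/24"]
print(summary_route(nets))  # 192.168.96.0/20
```

Note that ipaddress.collapse_addresses would not help here, as it only merges blocks that exactly tile a larger block; the summary above deliberately over-covers, which is why the missing networks must be checked as the text describes.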
In another example, an ISP is assigned a block of IP addresses by a regional Internet registry (RIR) of 172.1.0.0 to 172.1.255.255. The ISP might then assign subnetworks to each of their downstream clients, e.g., Customer A will have the range 172.1.1.0 to 172.1.1.255, Customer B would receive the range 172.1.2.0 to 172.1.2.255 and Customer C would receive the range 172.1.3.0 to 172.1.3.255, and so on. Instead of an entry for each of the subnets 172.1.1.x and 172.1.2.x, etc., the ISP could aggregate the entire 172.1.x.x address range and advertise the network 172.1.0.0/16, which would reduce the number of entries in the global routing table.
The following supernetting risks have been identified: [ 2 ] | https://en.wikipedia.org/wiki/Supernetwork |
The Supernova Legacy Survey Program [ 1 ] is a project designed to investigate dark energy , by detecting and monitoring approximately 2000 high- redshift supernovae between 2003 and 2008, using MegaPrime , a large CCD mosaic at the Canada-France-Hawaii Telescope . It also carries out detailed spectroscopy of a subsample of distant supernovae .
| https://en.wikipedia.org/wiki/Supernova_Legacy_Survey
Supernova impostors are stellar explosions that appear at first to be a supernova but do not destroy their progenitor stars. As such, they are a class of extra-powerful novae . They are also known as Type V supernovae , Eta Carinae analogs, and giant eruptions of luminous blue variables (LBV). [ 2 ]
Supernova impostors appear as remarkably faint supernovae of spectral type IIn—which have hydrogen in their spectrum and narrow spectral lines that indicate relatively low gas speeds. These impostors exceed their pre-outburst states by several magnitudes, with typical peak absolute visual magnitudes of −11 to −14, making these outbursts as bright as the most luminous stars . The trigger mechanism of these outbursts remains unexplained, though it is thought to be caused by violating the classical Eddington luminosity limit, initiating severe mass loss. If the ratio of radiated energy to kinetic energy is near unity, as in Eta Carinae , then we might expect an ejected mass of about 0.16 solar masses.
Possible examples of supernova impostors include the Great Eruption of Eta Carinae , P Cygni , SN 1961V , [ 3 ] SN 1954J , SN 1997bs , SN 2008S in NGC 6946 , and SN 2010dn [ 1 ] where detections of the surviving progenitor stars are claimed.
One supernova impostor that made news after the fact was the one observed on October 20, 2004, in the galaxy UGC 4904 by Japanese amateur astronomer Kōichi Itagaki . This LBV star exploded just two years later, on October 11, 2006, as supernova SN 2006jc . [ 4 ] | https://en.wikipedia.org/wiki/Supernova_impostor |
Supernova nucleosynthesis is the nucleosynthesis of chemical elements in supernova explosions.
In sufficiently massive stars, the nucleosynthesis by fusion of lighter elements into heavier ones occurs during sequential hydrostatic burning processes called helium burning , carbon burning , oxygen burning , and silicon burning , in which the byproducts of one nuclear fuel become, after compressional heating, the fuel for the subsequent burning stage. In this context, the word "burning" refers to nuclear fusion and not a chemical reaction.
During hydrostatic burning these fuels synthesize overwhelmingly the alpha nuclides ( A = 2 Z ), nuclei composed of integer numbers of helium-4 nuclei. Initially, two helium-4 nuclei fuse into a single beryllium-8 nucleus. The addition of another helium 4 nucleus to the beryllium yields carbon-12 , followed by oxygen-16 , neon-20 and so on, each time adding 2 protons and 2 neutrons to the growing nucleus. A rapid final explosive burning [ 1 ] is caused by the sudden temperature spike owing to passage of the radially moving shock wave that was launched by the gravitational collapse of the core. W. D. Arnett and his Rice University colleagues [ 2 ] [ 1 ] demonstrated that the final shock burning would synthesize the non-alpha-nucleus isotopes more effectively than hydrostatic burning was able to do, [ 3 ] [ 4 ] suggesting that the expected shock-wave nucleosynthesis is an essential component of supernova nucleosynthesis. Together, shock-wave nucleosynthesis and hydrostatic-burning processes create most of the isotopes of the elements carbon ( Z = 6 ), oxygen ( Z = 8 ), and elements with Z = 10 to 28 (from neon to nickel ). [ 4 ] [ 5 ] As a result of the ejection of the newly synthesized isotopes of the chemical elements by supernova explosions, their abundances steadily increased within interstellar gas. That increase became evident to astronomers from the initial abundances in newly born stars exceeding those in earlier-born stars.
Elements heavier than nickel are comparatively rare owing to the decline with atomic weight of their nuclear binding energies per nucleon, but they too are created in part within supernovae. Of greatest interest historically has been their synthesis by rapid capture of neutrons during the r -process , reflecting the common belief that supernova cores are likely to provide the necessary conditions. However, newer research has proposed a promising alternative (see the r-process below). The r -process isotopes are approximately 100,000 times less abundant than the primary chemical elements fused in supernova shells above. Furthermore, other nucleosynthesis processes in supernovae are thought to be responsible also for some nucleosynthesis of other heavy elements, notably, the proton capture process known as the rp -process , the slow capture of neutrons ( s -process ) in the helium-burning shells and in the carbon-burning shells of massive stars, and a photodisintegration process known as the γ -process (gamma-process). The latter synthesizes the lightest, most neutron-poor, isotopes of the elements heavier than iron from preexisting heavier isotopes.
In 1946, Fred Hoyle proposed that elements heavier than hydrogen and helium would be produced by nucleosynthesis in the cores of massive stars. [ 6 ] It had previously been thought that the elements we see in the modern universe had been largely produced during its formation. At this time, the nature of supernovae was unclear and Hoyle suggested that these heavy elements were distributed into space by rotational instability. In 1954, the theory of nucleosynthesis of heavy elements in massive stars was refined and combined with more understanding of supernovae to calculate the abundances of the elements from carbon to nickel. [ 7 ] Key elements of the theory included:
The theory predicted that silicon burning would happen as the final stage of core fusion in massive stars, although nuclear science could not then calculate exactly how. [ 6 ] Hoyle also predicted that the collapse of the evolved cores of massive stars was "inevitable" owing to their increasing rate of energy loss by neutrinos and that the resulting explosions would produce further nucleosynthesis of heavy elements and eject them into space. [ 7 ]
In 1957, a paper by the authors E. M. Burbidge , G. R. Burbidge , W. A. Fowler , and Hoyle expanded and refined the theory and achieved widespread acclaim. [ 8 ] It became known as the B²FH or BBFH paper, after the initials of its authors. The earlier papers fell into obscurity for decades after the more-famous B²FH paper did not attribute Hoyle's original description of nucleosynthesis in massive stars. Donald D. Clayton has attributed the obscurity also to Hoyle's 1954 paper describing its key equation only in words, [ 9 ] and a lack of careful review by Hoyle of the B²FH draft by coauthors who had themselves not adequately studied Hoyle's paper. [ 10 ] During his 1955 discussions in Cambridge with his co-authors in preparation of the B²FH first draft in 1956 in Pasadena, [ 11 ] Hoyle's modesty had inhibited him from emphasizing to them the great achievements of his 1954 theory.
Thirteen years after the B²FH paper, W.D. Arnett and colleagues [ 2 ] [ 1 ] demonstrated that the final burning in the passing shock wave launched by collapse of the core could synthesize non-alpha-particle isotopes more effectively than hydrostatic burning could, [ 3 ] [ 4 ] suggesting that explosive nucleosynthesis is an essential component of supernova nucleosynthesis. A shock wave rebounded from matter collapsing onto the dense core, if strong enough to lead to mass ejection of the mantle of supernovae, would necessarily be strong enough to provide the sudden heating of the shells of massive stars needed for explosive thermonuclear burning within the mantle. Understanding how that shock wave can reach the mantle in the face of continuing infall onto the shock became the theoretical difficulty. Supernova observations assured that it must occur.
White dwarfs were proposed as possible progenitors of certain supernovae in the late 1960s, [ 12 ] although a good understanding of the mechanism and nucleosynthesis involved did not develop until the 1980s. [ 13 ] This showed that type Ia supernovae ejected very large amounts of radioactive nickel and lesser amounts of other iron-peak elements, with the nickel decaying rapidly to cobalt and then iron. [ 14 ]
The papers of Hoyle (1946) and Hoyle (1954) and of B²FH (1957) were written by those scientists before the advent of the age of computers. They relied on hand calculations, deep thought, physical intuition, and familiarity with details of nuclear physics. Brilliant as these founding papers were, a cultural disconnect soon emerged with a younger generation of scientists who began to construct computer programs [ 15 ] that would eventually yield numerical answers for the advanced evolution of stars [ 16 ] and the nucleosynthesis within them. [ 17 ] [ 18 ] [ 19 ] [ 20 ]
A supernova is a violent explosion of a star that occurs under two principal scenarios. The first is that a white dwarf star , which is the remnant of a low-mass star that has exhausted its nuclear fuel, undergoes a thermonuclear explosion after its mass is increased beyond its Chandrasekhar limit by accreting nuclear-fuel mass from a more diffuse companion star (usually a red giant ) with which it is in binary orbit. The resulting runaway nucleosynthesis completely destroys the star and ejects its mass into space. The second, and about threefold more common, scenario occurs when a massive star (12–35 times more massive than the sun), usually a supergiant at the critical time, reaches nickel-56 in its core nuclear fusion (or burning) processes. Without exothermic energy from fusion, the core of the pre-supernova massive star loses heat needed for pressure support, and collapses owing to the strong gravitational pull. The energy transfer from the core collapse causes the supernova display. [ 21 ]
The nickel-56 isotope has one of the largest binding energies per nucleon of all isotopes, and is therefore the last isotope whose synthesis during core silicon burning releases energy by nuclear fusion , exothermically . The binding energy per nucleon declines for atomic weights heavier than A = 56 , ending fusion's history of supplying thermal energy to the star. The thermal energy released when the infalling supernova mantle hits the semi-solid core is very large, about 10⁵³ ergs, about a hundred times the energy released by the supernova as the kinetic energy of its ejected mass. Dozens of research papers have been published in the attempt to describe the hydrodynamics of how that small one percent of the infalling energy is transmitted to the overlying mantle in the face of continuous infall onto the core. That uncertainty remains in the full description of core-collapse supernovae. [ citation needed ]
Nuclear fusion reactions that produce elements heavier than iron absorb nuclear energy and are said to be endothermic reactions. When such reactions dominate, the internal temperature that supports the star's outer layers drops. Because the outer envelope is no longer sufficiently supported by the radiation pressure, the star's gravity pulls its mantle rapidly inward. As the star collapses, this mantle collides violently with the growing incompressible stellar core, which has a density almost as great as an atomic nucleus, producing a shockwave that rebounds outward through the unfused material of the outer shell. The increase of temperature by the passage of that shockwave is sufficient to induce fusion in that material, often called explosive nucleosynthesis . [ 2 ] [ 22 ] The energy deposited by the shockwave somehow leads to the star's explosion, dispersing fusing matter in the mantle above the core into interstellar space .
After a star completes the oxygen burning process , its core is composed primarily of silicon and sulfur. [ 23 ] If it has sufficiently high mass, it further contracts until its core reaches temperatures in the range of 2.7–3.5 billion K ( 230–300 keV ). At these temperatures, silicon and other isotopes suffer photoejection of nucleons by energetic thermal photons ( γ ) ejecting especially alpha particles ( 4 He). [ 23 ] The nuclear process of silicon burning differs from earlier fusion stages of nucleosynthesis in that it entails a balance between alpha-particle captures and their inverse photo ejection which establishes abundances of all alpha-particle elements in the following sequence in which each alpha particle capture shown is opposed by its inverse reaction, namely, photo ejection of an alpha particle by the abundant thermal photons:
28 Si + 4 He ⇌ 32 S + γ
32 S + 4 He ⇌ 36 Ar + γ
36 Ar + 4 He ⇌ 40 Ca + γ
40 Ca + 4 He ⇌ 44 Ti + γ
44 Ti + 4 He ⇌ 48 Cr + γ
48 Cr + 4 He ⇌ 52 Fe + γ
52 Fe + 4 He ⇌ 56 Ni + γ
56 Ni + 4 He ⇌ 60 Zn + γ
The alpha-particle nuclei 44 Ti and those more massive in the final five reactions listed are all radioactive, but they decay after their ejection in supernova explosions into abundant isotopes of Ca, Ti, Cr, Fe and Ni. This post-supernova radioactivity became of great importance for the emergence of gamma-ray-line astronomy. [ 24 ]
In these physical circumstances of rapid opposing reactions, namely alpha-particle capture and photo ejection of alpha particles, the abundances are not determined by alpha-particle-capture cross sections; rather they are determined by the values that the abundances must assume in order to balance the speeds of the rapid opposing-reaction currents. Each abundance takes on a stationary value that achieves that balance. This picture is called nuclear quasiequilibrium . [ 25 ] [ 26 ] [ 27 ] Many computer calculations, for example, [ 28 ] using the numerical rates of each reaction and of their reverse reactions have demonstrated that quasiequilibrium is not exact but does characterize well the computed abundances. Thus, the quasiequilibrium picture presents a comprehensible picture of what actually happens. It also fills in an uncertainty in Hoyle's 1954 theory. The quasiequilibrium buildup shuts off after 56 Ni because the alpha-particle captures become slower whereas the photo ejections from heavier nuclei become faster. Non-alpha-particle nuclei also participate, using a host of reactions similar to
36 Ar + n ⇌ 37 Ar + γ
and its inverse which set the stationary abundances of the non-alpha-particle isotopes, where the free densities of protons and neutrons are also established by the quasiequilibrium. However, the abundance of free neutrons is also proportional to the excess of neutrons over protons in the composition of the massive star; therefore the abundance of 37 Ar, using it as an example, is greater in ejecta from recent massive stars than it was from those in early stars of only H and He; therefore 37 Cl, to which 37 Ar decays after the nucleosynthesis, is called a "secondary isotope".
In interest of brevity, the next stage, an intricate photo-disintegration rearrangement, and the nuclear quasiequilibrium that it achieves, are referred to as silicon burning .
The silicon burning in the star progresses through a temporal sequence of such nuclear quasiequilibria in which the abundance of 28 Si slowly declines and that of 56 Ni slowly increases. This amounts to the net nuclear abundance change 2 28 Si → 56 Ni, which may be thought of as silicon burning into nickel ("burning" in the nuclear sense).
The entire silicon-burning sequence lasts about one day in the core of a contracting massive star and stops after 56 Ni has become the dominant abundance. The final explosive burning caused when the supernova shock passes through the silicon-burning shell lasts only seconds, but its roughly 50% increase in the temperature causes furious nuclear burning, which becomes the major contributor to nucleosynthesis in the mass range 28–60 AMU . [ 1 ] [ 25 ] [ 26 ] [ 29 ]
After the final 56 Ni stage, the star can no longer release energy via nuclear fusion, because a nucleus with 56 nucleons has the lowest mass per nucleon of all the elements in the sequence. The next step up in the alpha-particle chain would be 60 Zn. However 60 Zn has slightly more mass per nucleon than 56 Ni, and thus would require a thermodynamic energy loss rather than a gain as happened in all prior stages of nuclear burning.
56 Ni (which has 28 protons) has a half-life of 6.02 days and decays via β + decay to 56 Co (27 protons), which in turn has a half-life of 77.3 days as it decays to 56 Fe (26 protons). However, only minutes are available for the 56 Ni to decay within the core of a massive star.
This establishes 56 Ni as the most abundant of the radioactive nuclei created in this way. Its radioactivity energizes the late supernova light curve and creates the pathbreaking opportunity for gamma-ray-line astronomy. [ 24 ] See SN 1987A light curve for the aftermath of that opportunity.
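The light-curve energetics follow the simple two-step decay chain 56 Ni → 56 Co → 56 Fe, whose abundances obey the Bateman equations. A short sketch, with the initial 56 Ni abundance normalized to 1:

```python
import numpy as np

T_NI, T_CO = 6.02, 77.3                      # half-lives in days (56Ni, 56Co)
l1, l2 = np.log(2) / T_NI, np.log(2) / T_CO  # decay constants

def chain(t, n0=1.0):
    """Bateman solution for the chain 56Ni -> 56Co -> 56Fe (stable)."""
    ni = n0 * np.exp(-l1 * t)
    co = n0 * l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))
    return ni, co, n0 - ni - co

for t in (0, 10, 100, 300):
    ni, co, fe = chain(t)
    print(f"day {t:3d}: Ni {ni:.3f}  Co {co:.3f}  Fe {fe:.3f}")
```

The early light curve is dominated by the fast 56 Ni decay and the tail by the slower 56 Co decay, which is why the two half-lives are imprinted on observed supernova light curves such as that of SN 1987A.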
Clayton and Meyer [ 28 ] have recently generalized this process still further by what they have named the secondary supernova machine , attributing the increasing radioactivity that energizes late supernova displays to the storage of increasing Coulomb energy within the quasiequilibrium nuclei called out above as the quasiequilibria shift from primarily 28 Si to primarily 56 Ni. The visible displays are powered by the decay of that excess Coulomb energy.
During this phase of the core contraction, the potential energy of gravitational compression heats the interior to roughly three billion kelvins, which briefly maintains pressure support and opposes rapid core contraction. However, since no additional heat energy can be generated via new fusion reactions, the final unopposed contraction rapidly accelerates into a collapse lasting only a few seconds. At that point, the central portion of the star is crushed into either a neutron star or, if the star is massive enough, into a black hole .
The outer layers of the star are blown off in an explosion triggered by the outward moving supernova shock, known as a Type II supernova whose displays last days to months. The escaping portion of the supernova core may initially contain a large density of free neutrons, which may synthesize, in about one second while inside the star, roughly half of the elements in the universe that are heavier than iron via a rapid neutron-capture mechanism known as the r -process . See below.
Stars with initial masses less than about eight times the sun never develop a core large enough to collapse and they eventually lose their atmospheres to become white dwarfs, stable cooling spheres of carbon supported by the pressure of degenerate electrons . Nucleosynthesis within those lighter stars is therefore limited to nuclides that were fused in material located above the final white dwarf. This limits their modest yields returned to interstellar gas to carbon-13 and nitrogen-14 , and to isotopes heavier than iron by slow capture of neutrons (the s -process ).
A significant minority of white dwarfs will explode, however, either because they are in a binary orbit with a companion star that loses mass to the stronger gravitational field of the white dwarf, or because of a merger with another white dwarf. The result is a white dwarf which exceeds its Chandrasekhar limit and explodes as a type Ia supernova , synthesizing about a solar mass of radioactive 56 Ni isotopes, together with smaller amounts of other iron peak elements. The subsequent radioactive decay of the nickel to iron keeps Type Ia optically very bright for weeks and creates more than half of all the iron in the universe. [ 30 ]
Virtually all of the remainder of stellar nucleosynthesis occurs, however, in stars that are massive enough to end as core collapse supernovae . [ 29 ] [ 30 ] In the pre-supernova massive star this includes helium burning, carbon burning, oxygen burning and silicon burning. Much of that yield may never leave the star but instead disappears into its collapsed core. The yield that is ejected is substantially fused in last-second explosive burning caused by the shock wave launched by core collapse . [ 1 ] Prior to core collapse, fusion of elements between silicon and iron occurs only in the largest of stars, and then in limited amounts. Thus, the nucleosynthesis of the abundant primary elements [ 31 ] defined as those that could be synthesized in stars of initially only hydrogen and helium (left by the Big Bang), is substantially limited to core-collapse supernova nucleosynthesis.
During supernova nucleosynthesis, the r -process creates very neutron-rich heavy isotopes, which decay after the event to the first stable isotope , thereby creating the neutron-rich stable isotopes of all heavy elements. This neutron capture process occurs in high neutron density with high temperature conditions.
In the r -process, any heavy nuclei are bombarded with a large neutron flux to form highly unstable neutron-rich nuclei which very rapidly undergo beta decay to form more stable nuclei with higher atomic number and the same atomic mass . The neutron density is extremely high, about 10²²–10²⁴ neutrons per cubic centimeter.
Initial calculations of an evolving r -process, showing the evolution of calculated results with time, [ 32 ] also suggested that the r -process abundances are a superposition of differing neutron fluences . Small fluence produces the first r -process abundance peak near atomic weight A = 130 but no actinides , whereas large fluence produces the actinides uranium and thorium but no longer contains the A = 130 abundance peak. These processes occur in a fraction of a second to a few seconds, depending on details. Hundreds of subsequent papers published have utilized this time-dependent approach. The only modern nearby supernova, 1987A , has not revealed r -process enrichments. Modern thinking is that the r -process yield may be ejected from some supernovae but swallowed up in others as part of the residual neutron star or black hole.
Entirely new astronomical data about the r -process was discovered in 2017 when the LIGO and Virgo gravitational-wave observatories discovered a merger of two neutron stars that had previously been orbiting one another . [ 33 ] That can happen when both massive stars in orbit with one another become core-collapse supernovae, leaving neutron-star remnants.
The localization on the sky of the source of those gravitational waves radiated by that orbital collapse and merger of the two neutron stars, creating a black hole, but with significant ejected mass of highly neutronized matter , enabled several teams [ 34 ] [ 35 ] [ 36 ] to discover and study the remaining optical counterpart of the merger, finding spectroscopic evidence of r -process material thrown off by the merging neutron stars.
The bulk of this material seems to consist of two types: Hot blue masses of highly radioactive r -process matter of lower-mass-range heavy nuclei ( A < 140 ) and cooler red masses of higher mass-number r -process nuclei ( A > 140 ) rich in actinides (such as uranium, thorium, californium etc.). When released from the huge internal pressure of the neutron star, this neutron-rich spherical ejecta [ 37 ] [ 38 ] expands and radiates detected optical light for about a week. Such duration of luminosity would not be possible without heating by internal radioactive decay, which is provided by r -process nuclei near their waiting points. Two distinct mass regions ( A < 140 and A > 140 ) for the r -process yields have been known since the first time dependent calculations of the r -process. [ 32 ] Because of these spectroscopic features it has been argued that r -process nucleosynthesis in the Milky Way may have been primarily ejecta from neutron-star mergers rather than from supernovae. [ 39 ] | https://en.wikipedia.org/wiki/Supernova_nucleosynthesis |
In physics , a superoperator is a linear operator acting on a vector space of linear operators . [ 1 ]
Sometimes the term refers more specially to a completely positive map which also preserves or does not increase the trace of its argument . This specialized meaning is used extensively in the field of quantum computing , especially quantum programming , as they characterise mappings between density matrices .
The use of the super- prefix here is in no way related to its other use in mathematical physics. That is to say superoperators have no connection to supersymmetry and superalgebra which are extensions of the usual mathematical concepts defined by extending the ring of numbers to include Grassmann numbers . Since superoperators are themselves operators the use of the super- prefix is used to distinguish them from the operators upon which they act.
Fix a choice of basis for the underlying Hilbert space { | i ⟩ } i {\displaystyle \{|i\rangle \}_{i}} .
Defining the left and right multiplication superoperators by L ( A ) [ ρ ] = A ρ {\displaystyle {\mathcal {L}}(A)[\rho ]=A\rho } and R ( A ) [ ρ ] = ρ A {\displaystyle {\mathcal {R}}(A)[\rho ]=\rho A} respectively one can express the commutator as [ A , ρ ] = ( L ( A ) − R ( A ) ) [ ρ ] {\displaystyle [A,\rho ]=\left({\mathcal {L}}(A)-{\mathcal {R}}(A)\right)[\rho ]}
Next we vectorize the matrix ρ {\displaystyle \rho } which is the mapping ρ = ∑ i j ρ i j | i ⟩ ⟨ j | → | ρ ⟩ ⟩ = ∑ i j ρ i j | i ⟩ ⊗ | j ⟩ {\displaystyle \rho =\sum _{ij}\rho _{ij}|i\rangle \langle j|\;\to \;|\rho \rangle \!\rangle =\sum _{ij}\rho _{ij}|i\rangle \otimes |j\rangle }
where | ⋅ ⟩ ⟩ {\displaystyle |\cdot \rangle \!\rangle } denotes a vector in the Fock-Liouville space.
The matrix representation of L ( A ) {\displaystyle {\mathcal {L}}(A)} is then calculated by using the same mapping, which gives | A ρ ⟩ ⟩ = ( A ⊗ I ) | ρ ⟩ ⟩ {\displaystyle |A\rho \rangle \!\rangle =(A\otimes I)|\rho \rangle \!\rangle }
indicating that L ( A ) = A ⊗ I {\displaystyle {\mathcal {L}}(A)=A\otimes I} . Similarly one can show that R ( A ) = ( I ⊗ A T ) {\displaystyle {\mathcal {R}}(A)=(I\otimes A^{T})} . These representations allows us to calculate things like eigenvalues associated to superoperators. These eigenvalues are particularly useful in the field of open quantum systems, where the real parts of the Lindblad superoperator 's eigenvalues will indicate whether a quantum system will relax or not.
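Both identities are easy to confirm numerically with row-stacking vectorization (a quick NumPy sketch, not tied to any quantum library):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

vec = lambda m: m.reshape(-1)  # row-stacking: |rho>> has entry rho_ij at index i*d + j
I = np.eye(d)

# L(A)[rho] = A rho   corresponds to   (A ⊗ I) |rho>>
print(np.allclose(np.kron(A, I) @ vec(rho), vec(A @ rho)))    # True
# R(A)[rho] = rho A   corresponds to   (I ⊗ A^T) |rho>>
print(np.allclose(np.kron(I, A.T) @ vec(rho), vec(rho @ A)))  # True
```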
In quantum mechanics the Schrödinger equation , i ℏ ∂ ∂ t ψ = H ^ ψ {\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi ={\hat {H}}\psi }
expresses the time evolution of the state vector ψ {\displaystyle \psi } by the action of the Hamiltonian H ^ {\displaystyle {\hat {H}}} which is an operator mapping state vectors to state vectors.
In the more general formulation of John von Neumann , statistical states and ensembles are expressed by density operators rather than state vectors.
In this context the time evolution of the density operator is expressed via the von Neumann equation in which the density operator is acted upon by a superoperator H {\displaystyle {\mathcal {H}}} mapping operators to operators. It is defined by taking the commutator with respect to the Hamiltonian operator: i ℏ ∂ ∂ t ρ = H [ ρ ] {\displaystyle i\hbar {\frac {\partial }{\partial t}}\rho ={\mathcal {H}}[\rho ]}
where H [ ρ ] = [ H ^ , ρ ] = H ^ ρ − ρ H ^ {\displaystyle {\mathcal {H}}[\rho ]=[{\hat {H}},\rho ]={\hat {H}}\rho -\rho {\hat {H}}}
As commutator brackets are used extensively in quantum mechanics this explicit superoperator presentation of the Hamiltonian's action is typically omitted.
When considering an operator valued function of operators H ^ = H ^ ( P ^ ) {\displaystyle {\hat {H}}={\hat {H}}({\hat {P}})} as for example when we define the quantum mechanical Hamiltonian of a particle as a function of the position and momentum operators, we may (for whatever reason) define an “Operator Derivative” Δ H ^ Δ P ^ {\displaystyle {\frac {\Delta {\hat {H}}}{\Delta {\hat {P}}}}} as a superoperator mapping an operator to an operator.
For example, if H ( P ) = P 3 = P P P {\displaystyle H(P)=P^{3}=PPP} then its operator derivative is the superoperator defined by: Δ H ^ Δ P ^ [ X ] = X P P + P X P + P P X {\displaystyle {\frac {\Delta {\hat {H}}}{\Delta {\hat {P}}}}[X]=XPP+PXP+PPX}
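This is just the noncommutative product rule, and it can be checked against a finite-difference directional derivative (a small NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))  # an arbitrary "direction" in operator space

def dH(P, X):
    # Product rule for H(P) = PPP: replace each factor P by X in turn.
    return X @ P @ P + P @ X @ P + P @ P @ X

H = lambda P: P @ P @ P
eps = 1e-6
numeric = (H(P + eps * X) - H(P)) / eps  # finite-difference directional derivative
print(np.allclose(dH(P, X), numeric, atol=1e-4))  # True
```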
This “operator derivative” is simply the Jacobian matrix of the function (of operators) where one simply treats the operator input and output as vectors and expands the space of operators in some basis. The Jacobian matrix is then an operator (at one higher level of abstraction) acting on that vector space (of operators). | https://en.wikipedia.org/wiki/Superoperator |
Superose is a trade name for a collection of FPLC columns which are used in the automated separation of biological molecules. The different columns provided can separate a variety of macromolecules , ranging from small peptides and polysaccharides to DNA strands and entire viruses. The material inside the column is agarose based, meaning that it consists of sugars that are crosslinked to form a gel-like mass. The pores in this material have different sizes, and if a molecule is too big, it does not fit into the pores, meaning that it follows a shorter path to the end of the column.
The columns are placed in a holder, and a computerized pumping system pumps an aqueous solution, often a buffer, through the column. A special injection loop allows the injection of the desired sample. [ 1 ]
| https://en.wikipedia.org/wiki/Superose
In chemistry , a superoxide is a compound that contains the superoxide ion , which has the chemical formula O − 2 . [ 1 ] The systematic name of the anion is dioxide(1−) . The reactive oxygen ion superoxide is particularly important as the product of the one-electron reduction of dioxygen O 2 , which occurs widely in nature. [ 2 ] Molecular oxygen (dioxygen) is a diradical containing two unpaired electrons , and superoxide results from the addition of an electron which fills one of the two degenerate molecular orbitals , leaving a charged ionic species with a single unpaired electron and a net negative charge of −1. Both dioxygen and the superoxide anion are free radicals that exhibit paramagnetism . [ 3 ] Superoxide was historically also known as " hyperoxide ". [ 4 ]
Superoxide forms salts with alkali metals and alkaline earth metals . The salts sodium superoxide ( NaO 2 ), potassium superoxide ( KO 2 ), rubidium superoxide ( RbO 2 ) and caesium superoxide ( CsO 2 ) are prepared by the reaction of O 2 with the respective alkali metal. [ 5 ] [ 6 ]
The alkali salts of O − 2 are orange-yellow in color and quite stable, provided they are kept dry. Upon dissolution of these salts in water, however, the dissolved O − 2 undergoes disproportionation (dismutation) extremely rapidly (in a pH -dependent manner): [ 7 ] 4 O − 2 + 2 H 2 O → 3 O 2 + 4 OH −
This reaction (with moisture and carbon dioxide in exhaled air) is the basis of the use of potassium superoxide as an oxygen source in chemical oxygen generators , such as those used on the Space Shuttle and on submarines . Superoxides are also used in firefighters ' oxygen tanks to provide a readily available source of oxygen. In this process, O − 2 acts as a Brønsted base , initially forming the hydroperoxyl radical ( HO 2 ).
The superoxide anion, O − 2 , and its protonated form, hydroperoxyl , are in equilibrium in an aqueous solution : [ 8 ] O − 2 + H 2 O ⇌ HO 2 + OH −
Given that the hydroperoxyl radical has a p K a of around 4.8, [ 9 ] superoxide predominantly exists in the anionic form at neutral pH.
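The fraction in each form at a given pH follows directly from the Henderson–Hasselbalch relation; a small sketch using the pKa of about 4.8 quoted above:

```python
import math

def anionic_fraction(pH: float, pKa: float = 4.8) -> float:
    """Fraction of the HO2/O2- pair present as the superoxide anion O2-."""
    ratio = 10 ** (pH - pKa)      # [O2-]/[HO2] from Henderson-Hasselbalch
    return ratio / (1 + ratio)

for pH in (4.8, 7.0, 7.4):
    print(f"pH {pH}: {100 * anionic_fraction(pH):.1f}% superoxide anion")
# At neutral pH, well above the pKa, the anionic form dominates (~99%).
```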
Potassium superoxide is soluble in dimethyl sulfoxide (facilitated by crown ethers ) and is stable as long as protons are not available. Superoxide can also be generated in aprotic solvents by cyclic voltammetry .
Superoxide salts also decompose in the solid state, but this process requires heating: 2 MO 2 → M 2 O 2 + O 2 (M = alkali metal)
Superoxide is common in biology, reflecting the pervasiveness of O 2 and its ease of reduction. Superoxide is implicated in a number of biological processes, some with negative connotations, and some with beneficial effects. [ 10 ]
Like hydroperoxyl, superoxide is classified as a reactive oxygen species . [ 3 ] It is generated by the immune system to kill invading microorganisms . In phagocytes , superoxide is produced in large quantities by the enzyme NADPH oxidase for use in oxygen-dependent killing mechanisms of invading pathogens. Mutations in the gene coding for the NADPH oxidase cause an immunodeficiency syndrome called chronic granulomatous disease , characterized by extreme susceptibility to infection, especially by catalase -positive organisms. In turn, micro-organisms genetically engineered to lack the superoxide-scavenging enzyme superoxide dismutase (SOD) lose virulence . Superoxide is also deleterious when produced as a byproduct of mitochondrial respiration (most notably by Complex I and Complex III ), as well as by several other enzymes, for example xanthine oxidase , [ 11 ] which can catalyze the transfer of electrons directly to molecular oxygen under strongly reducing conditions.
Because superoxide is toxic at high concentrations, nearly all aerobic organisms express SOD. SOD efficiently catalyzes the disproportionation of superoxide: 2 O − 2 + 2 H + → O 2 + H 2 O 2
Other proteins that can be both oxidized and reduced by superoxide (such as hemoglobin ) have weak SOD-like activity. Genetic inactivation (" knockout ") of SOD produces deleterious phenotypes in organisms ranging from bacteria to mice and has provided important clues as to the mechanisms of toxicity of superoxide in vivo.
Yeast lacking both mitochondrial and cytosolic SOD grow very poorly in air, but quite well under anaerobic conditions. Absence of cytosolic SOD causes a dramatic increase in mutagenesis and genomic instability. Mice lacking mitochondrial SOD (MnSOD) die around 21 days after birth due to neurodegeneration, cardiomyopathy, and lactic acidosis. [ 11 ] Mice lacking cytosolic SOD (CuZnSOD) are viable but suffer from multiple pathologies, including reduced lifespan, liver cancer , muscle atrophy , cataracts , thymic involution, haemolytic anemia, and a very rapid age-dependent decline in female fertility. [ 11 ]
Superoxide may contribute to the pathogenesis of many diseases (the evidence is particularly strong for radiation poisoning and hyperoxic injury), and perhaps also to aging, via the oxidative damage that it inflicts on cells. While the evidence for the action of superoxide in the pathogenesis of some conditions is strong (for instance, mice and rats overexpressing CuZnSOD or MnSOD are more resistant to strokes and heart attacks), the role of superoxide in aging must be regarded as unproven for now. In model organisms (yeast, the fruit fly Drosophila , and mice), genetically knocking out CuZnSOD shortens lifespan and accelerates certain features of aging ( cataracts , muscle atrophy , macular degeneration , and thymic involution ). But the converse, increasing the levels of CuZnSOD, does not seem to consistently increase lifespan (except perhaps in Drosophila ). [ 11 ] The most widely accepted view is that oxidative damage (resulting from multiple causes, including superoxide) is but one of several factors limiting lifespan.
The binding of O 2 by reduced ( Fe 2+ ) heme proteins involves the formation of an Fe(III) superoxide complex. [ 12 ]
The assay of superoxide in biological systems is complicated by its short half-life. [ 13 ] One approach that has been used in quantitative assays converts superoxide to hydrogen peroxide , which is relatively stable. Hydrogen peroxide is then assayed by a fluorimetric method. [ 13 ] As a free radical, superoxide has a strong EPR signal, and it is possible to detect superoxide directly using this method. For practical purposes, this can be achieved only in vitro under non-physiological conditions, such as high pH (which slows the spontaneous dismutation) with the enzyme xanthine oxidase . Researchers have developed a series of tool compounds termed " spin traps " that can react with superoxide, forming a meta-stable radical ( half-life 1–15 minutes), which can be more readily detected by EPR. Superoxide spin-trapping was initially carried out with DMPO , but phosphorus derivatives with improved half-lives, such as DEPPMPO and DIPPMPO , have become more widely used. [ citation needed ]
Superoxides are compounds in which the oxidation number of oxygen is − 1 ⁄ 2 . Whereas molecular oxygen (dioxygen) is a diradical containing two unpaired electrons , the addition of an electron fills one of its two degenerate molecular orbitals , leaving a charged ionic species with a single unpaired electron and a net negative charge of −1. Both dioxygen and the superoxide anion are free radicals that exhibit paramagnetism .
The derivatives of dioxygen have characteristic O–O distances that correlate with the order of the O–O bond. | https://en.wikipedia.org/wiki/Superoxide |
Superparamagnetic relaxometry ( SPMR ) is a technology combining the use of sensitive magnetic sensors and the superparamagnetic properties of magnetite nanoparticles (NP). [ 1 ] [ 2 ] For NP of a sufficiently small size, on the order of tens of nanometers (nm), the NP exhibit paramagnetic properties, i.e., they have little or no net magnetic moment. When they are exposed to a small external magnetic field, on the order of a few millitesla (mT), the NP align with that field and exhibit ferromagnetic properties with large magnetic moments. Following removal of the magnetizing field, the NP slowly thermalize, decaying with a distinct time constant from the ferromagnetic state back to the paramagnetic state. This time constant depends strongly upon the NP diameter and on whether the NP are unbound or bound to an external surface such as a cell. Measurement of this decaying magnetic field is typically done with superconducting quantum interference devices (SQUIDs). The magnitude of the field during the decay process determines the magnetic moment of the NPs in the source, and a spatial contour map of the field distribution determines the location of the source in three dimensions as well as its magnetic moment.
SPMR measurements depend on the characteristics of the nanoparticle (NP) used. The NP must be of a material that is normally ferromagnetic in the bulk. Magnetite (Fe 3 O 4 ) is one such example, as it is ferromagnetic below its Curie temperature . However, if the NPs are single domain and of a size less than ~ 50 nm, they exhibit paramagnetic properties even below the Curie temperature, because the energy of the NP is dominated by thermal activity rather than by magnetic energy. If an external magnetic field is applied, the NPs align with that field and have a magnetic moment now characteristic of ferromagnetic behavior. When this external field is removed, the NPs relax back to their paramagnetic state.
The size of the NP determines the rate of decay of the relaxation process after the extinction of the external magnetizing field. The NP decay rate also depends on whether the particle is bound (tethered) to a surface or is free to rotate; the latter case is dominated by thermal activity, i.e. Brownian motion .
For the bound case, the decay rate is given by the Néel equation [ 3 ] τ N = τ 0 exp ⁡ ( K V k B T ) {\displaystyle \tau _{N}=\tau _{0}\exp \left({\frac {KV}{k_{B}T}}\right)}
Here the value of τ 0 is normally taken as τ 0 ≈ 10 −10 s, K is the anisotropy energy density of the magnetic material ( 1.35 × 10 4 J/m 3 ), V the magnetic core volume, k B is the Boltzmann constant, and T is the absolute temperature. This exponential relationship between the particle volume and the decay time implies a very strong dependence on the diameter of the NP used in SPMR studies, requiring precise size restrictions on producing these particles.
For magnetite, this requires a particle diameter of ~ 25 nm. [ 4 ] The NP also require high monodispersity around this diameter as NP a few nm below this value will decay too fast and a few nanometres above will decay too slowly to fit into the time window of the measurement.
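The steepness of this size dependence is easy to see numerically. A minimal sketch using the constants quoted above (τ0 = 10⁻¹⁰ s, K = 1.35 × 10⁴ J/m³; body temperature is an assumed value, not from the article):

```python
import math

TAU0 = 1e-10          # attempt time, s
K = 1.35e4            # anisotropy energy density of magnetite, J/m^3
KB = 1.380649e-23     # Boltzmann constant, J/K
T = 310.0             # absolute temperature, K (body temperature assumed)

def neel_time(diameter_nm: float) -> float:
    """Neel relaxation time for a spherical magnetite core of given diameter."""
    v = math.pi / 6 * (diameter_nm * 1e-9) ** 3   # core volume, m^3
    return TAU0 * math.exp(K * v / (KB * T))

for d in (20, 23, 25, 27, 30):
    print(f"d = {d} nm: tau_N ~ {neel_time(d):.3g} s")
# A change of a few nanometres shifts tau_N by orders of magnitude; only
# particles near ~25 nm decay within the seconds-long measurement window,
# which is why SPMR requires tight monodispersity.
```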
The value of the time constant, τ N , depends on the method of fabrication of the NP. Different chemical procedures will produce slightly different values as well as different NP magnetic moments. Equally important characteristics of the NP are monodispersity, single domain character, and crystalline structure. [ 5 ]
A system of magnetic coils is used to magnetize the NP during SPMR measurements, such as those used for medical research applications. The subject of investigation may be living cell cultures, animals, or humans. The optimum magnitude of the magnetizing field will saturate the NP magnetic moment, although physical coil size and electrical constraints may be the limiting factors.
The use of magnetizing fields that provide a uniform field across the subject in one direction is desirable, as it reduces the number of variables when solving the inverse electromagnetic problem to determine the coordinates of NP sources in the sample. A uniform magnetizing field may be obtained with the use of Helmholtz coils .
The magnetizing field is applied for a sufficient time to allow the NP dipole moment to reach its maximum value. The field is then rapidly turned off, within ~ 1 msec, followed by a short delay to allow any currents induced by the magnetizing field pulse to die away. Following this, the sensors are turned on and measure the decaying field for a sufficient time to obtain an accurate value of the decay time constant, typically 1–3 s. Magnetizing fields of ~ 5 mT are used for a Helmholtz coil of 1 m in diameter.
The magnetic sensors that measure the decaying magnetic fields require high magnetic field sensitivity in order to determine magnetic moments of NP with adequate sensitivity. SQUID sensors, similar to those used in magnetoencephalography [ 6 ] are appropriate for this task. Atomic magnetometers also have adequate sensitivity. [ 7 ]
Unshielded environments reduce expense and provide greater flexibility in location of the equipment but limit the sensitivity of the measurement to ~ 1 pT. This is offset by reducing the effect of external electromagnetic noise with noise reduction algorithms. [ 8 ]
A contour map of the decaying magnetic fields is used to localize the sources containing bound NP. This map is produced from the field distribution obtained from an array of SQUID sensors, from multiple positions of the sources under the sensors, or from a combination of both. The magnetic moments of the sources are obtained during this procedure.
The decay time of the magnetic field from bound particles in SPMR measurements is on the order of seconds. Unbound particles of similar size decay on the order of milliseconds, contributing very little to the results.
The decay curve for bound NP is fit by a model equation; different functional forms have been used in the literature. [ 1 ] [ 9 ] The constants are fit to the experimental data, and a particular time point is used to extract the value of the magnetic field. The fields from all the sensor positions are then used to construct a field contour map.
Localization of magnetic sources producing the SPMR fields is done by solving the inverse problem of electromagnetism. The forward electromagnetic problem consists of modeling each magnetic source as a magnetic dipole, or as a more complex configuration that models each source as a distributed source. Examples of the latter are multipole models, Bayesian models, or distributed dipole models. The magnetic dipole model has the form B ( r ) = μ 0 4 π 3 r ^ ( r ^ ⋅ p ) − p | r − r 0 | 3 {\displaystyle \mathbf {B} (\mathbf {r} )={\frac {\mu _{0}}{4\pi }}\,{\frac {3{\hat {r}}({\hat {r}}\cdot \mathbf {p} )-\mathbf {p} }{|\mathbf {r} -\mathbf {r} _{0}|^{3}}}} where r 0 and p are the location and dipole moment vectors of the magnetic dipole, r ^ {\displaystyle {\hat {r}}} is the unit vector pointing from r 0 to the field point, and μ 0 {\displaystyle \mu _{0}} is the magnetic permeability of free space.
For a subject containing N p sources, a minimum of 4 N p measurements of the magnetic field are required to determine the coordinates and magnetic moment of each source. In the case where the particles have been aligned by the external magnetizing field in a particular orientation, 3 N p measurements are required to obtain solutions. This latter situation leads to increased accuracy in locating the objects, as fewer variables are required in the inverse solution algorithm. An increased number of measurements provides an over-determined solution, increasing the localization accuracy.
Solving the inverse problem for magnetic dipole or more complex models is performed with nonlinear algorithms. The Levenberg-Marquardt algorithm is one approach to obtaining solutions to this non-linear problem. More complex methods are available from other biomagnetism programs. [ 6 ] [ 8 ]
Coordinates and magnetic moments, for each source assumed to be present in the sample, are determined from solution of the inverse problem.
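The following is a minimal sketch of such an inverse solution for a single dipole, using the dipole formula above and SciPy's Levenberg–Marquardt solver on noiseless synthetic data; the sensor geometry, source position, and moment are all invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi  # magnetic permeability of free space, T m/A

def dipole_field(r, r0, p):
    """Forward model: field of a point dipole with moment p located at r0."""
    d = r - r0
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    rhat = d / dist
    return MU0 / (4 * np.pi) * (3 * rhat * (rhat @ p)[:, None] - p) / dist**3

# A 5 x 5 sensor grid 5 cm above the source region (illustrative geometry).
xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 5), np.linspace(-0.1, 0.1, 5))
sensors = np.column_stack([xs.ravel(), ys.ravel(), np.full(xs.size, 0.05)])

true_r0 = np.array([0.01, -0.02, -0.03])  # source position, m
true_p = np.array([0.0, 0.0, 1e-8])       # dipole moment, A m^2
data = dipole_field(sensors, true_r0, true_p).ravel()

def residuals(params):
    # 6 unknowns (3 coordinates + 3 moment components), 75 field samples.
    return dipole_field(sensors, params[:3], params[3:]).ravel() - data

fit = least_squares(
    residuals,
    x0=np.array([0.0, 0.0, -0.01, 1e-9, 1e-9, 1e-9]),
    x_scale=[0.01, 0.01, 0.01, 1e-9, 1e-9, 1e-9],
    method="lm",
)
print("recovered position:", fit.x[:3])
print("recovered moment:  ", fit.x[3:])
```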
One application of SPMR is the detection of disease and cancer. This is accomplished by functionalizing the NP with biomarkers , including cell antibodies (Ab). The functionalized NP+Ab may be subsequently attached to cells targeted by the biomarker in cell cultures, blood and marrow samples, as well as animal models.
A variety of biochemical procedures are used to conjugate the NP with the biomarker. The resulting NP+Ab are either directly mixed with incubated blood [ 10 ] or diseased cells, [ 11 ] or injected into animals. Following injection the functionalized NP reside in the bloodstream until encountering cells that are specific to the biomarker attached to the Ab.
Conjugation of NP with Ab followed by attachment to cells is accomplished by identifying particular cell lines expressing varying levels of the Ab by flow cytometry . The Ab is conjugated to the superparamagnetic iron oxide NP by different methods including the carbodiimide method. [ 11 ] The conjugated NP+Ab are then incubated with the cell lines and may be examined by transmission-electron microscopy (TEM) to confirm that the NP+Ab are attached to the cells. Other methods to determine whether NP are present on the surface of the cell are confocal microscopy , Prussian blue histochemistry , and SPMR. The resulting carboxylate functionality of the polymer-encapsulated NPs by this method allows conjugation of amine groups on the Ab to the carboxylate anions on the surface of the NPs using standard two-step EDC/NHS chemistry. | https://en.wikipedia.org/wiki/Superparamagnetic_relaxometry |
Superparamagnetism is a form of magnetism which appears in small ferromagnetic or ferrimagnetic nanoparticles . In sufficiently small nanoparticles, magnetization can randomly flip direction under the influence of temperature. The typical time between two flips is called the Néel relaxation time . In the absence of an external magnetic field, when the time used to measure the magnetization of the nanoparticles is much longer than the Néel relaxation time, their magnetization appears to be on average zero; they are said to be in the superparamagnetic state. In this state, an external magnetic field is able to magnetize the nanoparticles, similarly to a paramagnet . However, their magnetic susceptibility is much larger than that of paramagnets.
Normally, any ferromagnetic or ferrimagnetic material undergoes a transition to a paramagnetic state above its Curie temperature . Superparamagnetism is different from this standard transition since it occurs below the Curie temperature of the material.
Superparamagnetism occurs in nanoparticles which are single-domain , i.e. composed of a single magnetic domain . This is possible when their diameter is below 3–50 nm, depending on the material. In this condition, the magnetization of a nanoparticle is considered to be a single giant magnetic moment, the sum of all the individual magnetic moments carried by the atoms of the nanoparticle. This is known in the field of superparamagnetism as the "macro-spin approximation".
Because of the nanoparticle’s magnetic anisotropy , the magnetic moment has usually only two stable orientations antiparallel to each other, separated by an energy barrier . The stable orientations define the nanoparticle’s so-called “easy axis”. At finite temperature, there is a finite probability for the magnetization to flip and reverse its direction. The mean time between two flips is called the Néel relaxation time τ N {\displaystyle \tau _{\text{N}}} and is given by the following Néel–Arrhenius equation: [ 1 ] τ N = τ 0 exp ⁡ ( K V k B T ) {\displaystyle \tau _{\text{N}}=\tau _{0}\exp \left({\frac {KV}{k_{\text{B}}T}}\right)} where τ 0 is the attempt time, a length of time characteristic of the material, K is the nanoparticle's magnetic anisotropy energy density, V its volume, k B the Boltzmann constant, and T the temperature.
This length of time can be anywhere from a few nanoseconds to years or much longer. In particular, it can be seen that the Néel relaxation time is an exponential function of the grain volume, which explains why the flipping probability becomes rapidly negligible for bulk materials or large nanoparticles.
Let us imagine that the magnetization of a single superparamagnetic nanoparticle is measured and let us define τ m {\displaystyle \tau _{\text{m}}} as the measurement time. If τ m ≫ τ N {\displaystyle \tau _{\text{m}}\gg \tau _{\text{N}}} , the nanoparticle magnetization will flip several times during the measurement, then the measured magnetization will average to zero. If τ m ≪ τ N {\displaystyle \tau _{\text{m}}\ll \tau _{\text{N}}} , the magnetization will not flip during the measurement, so the measured magnetization will be what the instantaneous magnetization was at the beginning of the measurement. In the former case, the nanoparticle will appear to be in the superparamagnetic state whereas in the latter case it will appear to be “blocked” in its initial state.
The state of the nanoparticle (superparamagnetic or blocked) depends on the measurement time. A transition between superparamagnetism and the blocked state occurs when τ m = τ N {\displaystyle \tau _{\text{m}}=\tau _{\text{N}}} . In several experiments, the measurement time is kept constant but the temperature is varied, so the transition between superparamagnetism and the blocked state is seen as a function of the temperature. The temperature for which τ m = τ N {\displaystyle \tau _{\text{m}}=\tau _{\text{N}}} is called the blocking temperature : T B = K V k B ln ⁡ ( τ m / τ 0 ) {\displaystyle T_{\text{B}}={\frac {KV}{k_{\text{B}}\ln(\tau _{\text{m}}/\tau _{0})}}}
For typical laboratory measurements, the value of the logarithm in the previous equation is in the order of 20–25.
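A quick numerical check of that logarithm and the resulting blocking temperature (all values here are assumed for illustration: a 100 s measurement, τ0 = 10⁻⁹ s, a 10 nm particle with K = 2 × 10⁵ J/m³):

```python
import math

KB = 1.380649e-23            # Boltzmann constant, J/K
tau_m, tau_0 = 100.0, 1e-9   # measurement time and attempt time, s (assumed)
K = 2e5                      # anisotropy energy density, J/m^3 (assumed)
d = 10e-9                    # particle diameter, m (assumed)
V = math.pi / 6 * d**3       # particle volume, m^3

log_term = math.log(tau_m / tau_0)
T_B = K * V / (KB * log_term)
print(f"ln(tau_m/tau_0) = {log_term:.1f}")     # ~25, as quoted above
print(f"blocking temperature T_B = {T_B:.0f} K")
```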
Equivalently, blocking temperature is the temperature below which a material shows slow relaxation of magnetization. [ 2 ]
When an external magnetic field H is applied to an assembly of superparamagnetic nanoparticles, their magnetic moments tend to align along the applied field, leading to a net magnetization. The magnetization curve of the assembly, i.e. the magnetization as a function of the applied field, is a reversible S-shaped increasing function . This function is quite complicated, but for some simple cases: if all the particles are identical (same energy barrier and same magnetic moment), their easy axes are all oriented parallel to the applied field, and the temperature is low enough ( T B < T ≲ K V / ( 10 k B ) {\displaystyle T_{\text{B}}<T\lesssim KV/(10k_{\text{B}})} ), then the magnetization of the assembly is M ( H ) ≈ n μ tanh ⁡ ( μ 0 H μ k B T ) {\displaystyle M(H)\approx n\mu \tanh \left({\frac {\mu _{0}H\mu }{k_{\text{B}}T}}\right)} ; if all the particles are identical and the temperature is high enough ( T ≳ K V / k B {\displaystyle T\gtrsim KV/k_{\text{B}}} ), then, irrespective of the orientations of the easy axes, M ( H ) ≈ n μ L ( μ 0 H μ k B T ) {\displaystyle M(H)\approx n\mu L\left({\frac {\mu _{0}H\mu }{k_{\text{B}}T}}\right)} , where L ( x ) = coth ⁡ ( x ) − 1 / x {\displaystyle L(x)=\coth(x)-1/x} is the Langevin function. In the above equations, n is the density of nanoparticles in the sample, μ is the magnetic moment of one nanoparticle, and μ 0 {\displaystyle \mu _{0}} is the magnetic permeability of vacuum.
The initial slope of the M ( H ) {\displaystyle M(H)} function is the magnetic susceptibility of the sample χ {\displaystyle \chi } : χ = n μ 0 μ 2 k B T {\displaystyle \chi ={\frac {n\mu _{0}\mu ^{2}}{k_{\text{B}}T}}} in the first case above and χ = n μ 0 μ 2 3 k B T {\displaystyle \chi ={\frac {n\mu _{0}\mu ^{2}}{3k_{\text{B}}T}}} in the second. The latter susceptibility is also valid for all temperatures T > T B {\displaystyle T>T_{\text{B}}} if the easy axes of the nanoparticles are randomly oriented.
It can be seen from these equations that large nanoparticles have a larger μ and so a larger susceptibility. This explains why superparamagnetic nanoparticles have a much larger susceptibility than standard paramagnets: they behave exactly as a paramagnet with a huge magnetic moment.
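A short numerical sketch of the Langevin magnetization curve and its initial slope (the moment, particle density, and field values are assumed for illustration):

```python
import numpy as np

def langevin(x):
    """L(x) = coth(x) - 1/x, with the small-x limit L(x) ~ x/3."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        full = 1 / np.tanh(x) - 1 / x
    return np.where(np.abs(x) < 1e-4, x / 3, full)

MU0 = 4e-7 * np.pi      # vacuum permeability, T m/A
KB = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0               # temperature, K
mu = 1e-19              # moment of one nanoparticle, A m^2 (assumed)
n = 1e21                # nanoparticle number density, m^-3 (assumed)

H = np.linspace(0, 1e3, 6)                       # applied field, A/m
M = n * mu * langevin(MU0 * H * mu / (KB * T))   # assembly magnetization

# The initial slope matches chi = n*mu0*mu^2 / (3*kB*T):
chi_numeric = (M[1] - M[0]) / (H[1] - H[0])
chi_formula = n * MU0 * mu**2 / (3 * KB * T)
print(chi_numeric, chi_formula)
```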
There is no time-dependence of the magnetization when the nanoparticles are either completely blocked ( T ≪ T B {\displaystyle T\ll T_{\text{B}}} ) or completely superparamagnetic ( T ≫ T B {\displaystyle T\gg T_{\text{B}}} ). There is, however, a narrow window around T B {\displaystyle T_{\text{B}}} where the measurement time and the relaxation time have comparable magnitude. In this case, a frequency-dependence of the susceptibility can be observed. For a randomly oriented sample, the complex susceptibility [ 3 ] takes the Debye form χ ( ω ) = χ 0 1 + i ω τ {\displaystyle \chi (\omega )={\frac {\chi _{0}}{1+i\omega \tau }}} where χ 0 {\displaystyle \chi _{0}} is the equilibrium (superparamagnetic) susceptibility, ω = 2 π f {\displaystyle \omega =2\pi f} is the angular frequency of the applied field, and τ {\displaystyle \tau } is the relaxation time. From this frequency-dependent susceptibility, the time-dependence of the magnetization for low fields can be derived: τ d M d t = χ 0 H − M {\displaystyle \tau {\frac {dM}{dt}}=\chi _{0}H-M}
A superparamagnetic system can be measured with AC susceptibility measurements, where an applied magnetic field varies in time, and the magnetic response of the system is measured. A superparamagnetic system will show a characteristic frequency dependence: When the frequency is much higher than 1/τ N , there will be a different magnetic response than when the frequency is much lower than 1/τ N , since in the latter case, but not the former, the ferromagnetic clusters will have time to respond to the field by flipping their magnetization. [ 4 ] The precise dependence can be calculated from the Néel–Arrhenius equation, assuming that the neighboring clusters behave independently of one another (if clusters interact, their behavior becomes more complicated). It is also possible to perform magneto-optical AC susceptibility measurements with magneto-optically active superparamagnetic materials such as iron oxide nanoparticles in the visible wavelength range. [ 5 ]
Superparamagnetism sets a limit on the storage density of hard disk drives due to the minimum size of particles that can be used. This limit on areal-density is known as the superparamagnetic limit . | https://en.wikipedia.org/wiki/Superparamagnetism |
Superparasitism is a form of parasitism in which the host (typically an insect larva such as a caterpillar ) is attacked more than once by a single species of parasitoid . Multiparasitism or coinfection , on the other hand, occurs when the host has been parasitized by more than one species. [ 1 ] Host discrimination, the ability of a parasitoid to distinguish a parasitized host from an unparasitized one, is present in certain species of parasitoids and is used to avoid superparasitism and thus competition from other parasites. [ 2 ] Superparasitism can result in transmission of viruses , and viruses may influence a parasitoid's behavior in favor of infecting already infected hosts, as is the case with Leptopilina boulardi . [ 3 ]
One example of superparasitism is seen in Rhagoletis juglandis , also known as the walnut husk fly. During oviposition, a female fly lacerates the tissue of the inner husk of the walnut and creates a cavity for her eggs. Female flies oviposit in and reinfest the same walnuts, and even the same oviposition sites created by conspecifics. [ 4 ]
| https://en.wikipedia.org/wiki/Superparasitism |
In particle physics , a superpartner (also sparticle ) is a class of hypothetical elementary particles predicted by supersymmetry , which, among other applications, is one of the well-studied ways to extend the Standard Model of high-energy physics . [ 1 ] [ 2 ]
When considering extensions of the Standard Model , the s- prefix from sparticle is used to form the names of superpartners of the Standard Model fermions ( sfermions ), [ 3 ] e.g. the stop squark . The superpartners of Standard Model bosons have -ino appended to their name ( bosinos ), [ 3 ] e.g. gluino ; the set of all gauge superpartners is called the gauginos .
According to the supersymmetry theory, each fermion should have a partner boson , the fermion's superpartner, and each boson should have a partner fermion. Exact unbroken supersymmetry would predict that a particle and its superpartners have the same mass. No superpartners of the Standard Model particles have yet been found. This may indicate that supersymmetry is incorrect, or it may be because supersymmetry is not an exact, unbroken symmetry of nature. If superpartners are found, their masses would indicate the scale at which supersymmetry is broken. [ 1 ] [ 4 ]
For particles that are real scalars (such as an axion ), there is a fermion superpartner as well as a second, real scalar field. For axions, these particles are often referred to as axinos and saxions.
In extended supersymmetry there may be more than one superparticle for a given particle. For instance, with two copies of supersymmetry in four dimensions, a photon would have two fermion superpartners and a scalar superpartner. [ citation needed ]
In zero dimensions it is possible to have supersymmetry, but no superpartners. However, this is the only situation where supersymmetry does not imply the existence of superpartners. [ citation needed ]
If the supersymmetry theory is correct, it should be possible to recreate these particles in high-energy particle accelerators . Doing so will not be an easy task; these particles may have masses up to a thousand times greater than their corresponding "real" particles. [ 1 ]
Some researchers have hoped the Large Hadron Collider at CERN might produce evidence for the existence of superpartner particles. [ 1 ] However, as of 2018, no such evidence has been found. | https://en.wikipedia.org/wiki/Superpartner |
In materials science , superplasticity is a state in which solid crystalline material is deformed well beyond its usual breaking point, usually over about 400% during tensile deformation. [ 1 ] Such a state is usually achieved at high homologous temperature . Examples of superplastic materials are some fine-grained metals and ceramics. Other non-crystalline materials (amorphous) such as silica glass ("molten glass") and polymers also deform similarly, but are not called superplastic, because they are not crystalline; rather, their deformation is often described as Newtonian fluid . Superplastically deformed material gets thinner in a very uniform manner, rather than forming a "neck" (a local narrowing) that leads to fracture. [ 2 ] Also, the formation of microvoids, which is another cause of early fracture, is inhibited. [ citation needed ] Superplasticity must not be confused with superelasticity .
Some evidence of superplastic-like flow in metals has been found in artifacts such as Wootz steels of ancient India, but superplasticity first received scientific recognition in the twentieth century, with Bengough's 1912 report of 163% elongation in brass. [ 3 ] Jenkins reported higher elongations, up to 300%, in Cd–Zn and Pb–Sn alloys in 1928. [ 4 ] Those works, however, did not establish superplasticity as a new phenomenon in the mechanical behaviour of materials. That changed when Pearson's work was published in 1934, reporting an elongation of 1950% in a Pb–Sn eutectic alloy, [ 5 ] by far the largest elongation that had been reported at the time. There was no further interest in superplasticity in the Western world for more than 25 years after Pearson's effort. In the Soviet Union, Bochvar and Sviderskaya continued superplasticity research with many publications on Zn–Al alloys. A research institute focused on superplasticity, the Institute of Metals Superplasticity Problems, was established in 1985 in Ufa, Russia; it remains the only institute in the world devoted exclusively to superplasticity research. Interest in superplasticity rose in 1982, when the first major international conference on the subject, 'Superplasticity in Structural Materials' (proceedings edited by Paton and Hamilton), was held in San Diego. [ 6 ] Since then, numerous investigations with considerable results have been published. Superplasticity now underpins superplastic forming, an essential manufacturing technique in aerospace. [ 7 ]
In metals and ceramics, the requirements for superplasticity include a fine grain size (less than approximately 10 micrometers) and an operating temperature often above half the absolute melting point. Several studies have found superplasticity in coarse-grained materials, [ 8 ] but the scientific community generally takes a grain size below about 10 micrometers as the precondition for activating superplasticity. Because grains grow at high temperature, maintaining the fine grain structure at high homologous temperature is the main challenge in superplasticity research. The typical microstructural strategy uses a fine dispersion of thermally stable particles that pin the grain boundaries, preserving at high temperature both the fine grain structure and the multiple phases required for superplastic deformation. The most typical alloy microstructures for superplasticity are eutectic or eutectoid structures, as found in Sn–Pb or Zn–Al alloys.
Materials that meet these parameters must still have a strain rate sensitivity (a measure of how the flow stress of a material responds to changes in strain rate) greater than 0.3 to be considered superplastic. The ideal strain rate sensitivity is 0.5, typically found in microduplex alloys.
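The strain rate sensitivity m is the exponent in the flow law σ = K ε̇^m, so it can be estimated from flow stresses measured at two strain rates. A minimal sketch (the stresses and rates are invented for illustration):

```python
import math

def strain_rate_sensitivity(stress1, rate1, stress2, rate2):
    """m = d(ln sigma) / d(ln strain_rate), estimated from two test points."""
    return math.log(stress2 / stress1) / math.log(rate2 / rate1)

# Hypothetical flow stresses (MPa) measured at two strain rates (1/s):
m = strain_rate_sensitivity(20.0, 1e-4, 63.0, 1e-3)
print(f"m = {m:.2f}")   # ~0.5 here, in the superplastic range (m > 0.3)
```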
The mechanism of superplasticity in metals is grain boundary sliding (GBS). However, GBS can lead to stress concentrations at triple junctions or at the grain boundaries of hard phases. GBS in polycrystalline materials must therefore be accompanied by other accommodation processes such as diffusion or dislocation motion. The diffusion model proposed by Ashby and Verrall explains a gradual change in grain shapes that maintains compatibility between the grains during deformation. [ 9 ] The changes in grain shape are accommodated by diffusion: the grain boundaries migrate so that the grains keep an equiaxed shape, with a new orientation compared to the original grains. In the dislocation model, the stress concentrations produced by GBS are relaxed by dislocation motion in the blocking grains: dislocations pile up, and climb allows further dislocations to be emitted. The details of the dislocation model are still under debate, with variants proposed by Crossman and Ashby, Langdon, and Gifkins. [ 10 ]
In general, superplasticity occurs at slow strain rates, on the order of 10 −4 s −1 , and can be energy-consuming; prolonged exposure to the high operating temperature also degrades the mechanical properties of the material. There is therefore a strong demand to raise the strain rate of superplastic deformation to the order of 10 −2 s −1 , called high-strain-rate superplasticity (HSRS). An increased strain rate in superplastic deformation is generally achieved by refining the grain size into the ultrafine range, from roughly 100 nm to less than 500 nm. Further refinement to a nanocrystalline structure with grain size below 100 nm is ineffective in raising the deformation rate or improving ductility. [ 11 ] The most common grain refinement route in HSRS research is severe plastic deformation (SPD). [ 12 ] SPD can achieve exceptional grain refinement, to the sub-micrometer or even the nanometer range. Among the many SPD techniques, the two most widely used are equal-channel angular pressing (ECAP) and high-pressure torsion (HPT). Besides producing an ultrafine grain size, these techniques also yield a high fraction of high-angle grain boundaries, a specific benefit for increasing deformation strain rates. Owing to the importance of grain refinement to superplasticity research, ECAP and HPT have become mainstream techniques in superplasticity studies of metals.
The process offers a range of important benefits, from both the design and production aspects. To begin with, there is the ability to form components with double curvature and smooth contours from a single sheet in one operation, with exceptional dimensional accuracy and surface finish, and none of the "spring back" associated with cold forming techniques. Because only single-surface tools are employed, lead times are short and prototyping is both rapid and easy, because a range of sheet alloy thicknesses can be tested on the same tool.
There are three forming techniques currently in use to exploit these advantages. The method chosen depends upon design and performance criteria such as size, shape, and alloy characteristics.
A graphite-coated blank is put into a heated hydraulic press. Air pressure is then used to force the sheet into close contact with the mould. At the beginning, the blank is brought into contact with the die cavity, and the friction at the blank/die interface hinders the forming process. The contact areas thus divide the single bulge into a number of bulges, which undergo a free bulging process. The procedure allows the production of parts with relatively exact outer contours and is suitable for the manufacture of parts with smooth, convex surfaces.
A graphite-coated blank is clamped over a 'tray' containing a heated male mould. Air pressure forces the metal into close contact with the mould. The difference between this and the female forming process is that the mould is, as stated, male, and the metal is forced over the protruding form; in female forming, the mould is female and the metal is forced into the cavity. [ citation needed ] The tooling consists of two pressure chambers and a counter-punch, which is linearly displaceable. As in the cavity forming technology, at the beginning of the process the firmly clamped blank is bulged by gas pressure. [ citation needed ]
The second phase of the process involves the material being formed over the punch surface by applying pressure against the previous forming direction. Because the process conditions give better material utilization, blanks with a smaller initial thickness than in cavity forming can be used. Thus, the bubble forming technique is particularly suitable for parts with large forming depths. [ citation needed ]
A graphite-coated blank is placed into a heated press . Air pressure is used to force the metal into a bubble shape before the male mould is pushed into the underside of the bubble to make an initial impression. Air pressure is then applied from the other direction to complete the forming of the metal around the male mould. This process has long cycle times because superplastic strain rates are low. The product can also suffer from poor creep performance due to the small grain sizes, and there can be cavitation porosity in some alloys. Surface texture is generally good, however. With dedicated tooling, dies and machines are costly. The main advantage of the process is that it can be used to produce large, complex components in one operation. This can be useful for keeping the mass down and avoiding the need for assembly work, a particular advantage for aerospace products. For example, the diaphragm-forming method (DFM) can be used to reduce the tensile flow stress generated in a specific alloy matrix composite during deformation .
Superplastically formed (SPF) aluminium alloys have the ability to be stretched to several times their original size without failure when heated to between 470 and 520 °C. These dilute alloys containing zirconium , later known by the trade name SUPRAL, were heavily cold worked to sheet and dynamically recrystallized to a fine stable grain size, typically 4–5 μm, during the initial stages of hot deformation. Superplastic forming is also a net-shape processing technology that dramatically decreases fabrication and assembly costs by reducing the number of parts and the assembly requirements. Using SPF technology, it was anticipated that a 50% manufacturing cost reduction could be achieved for many aircraft assemblies, such as the nose cone and nose barrel assemblies. Other spin-offs include weight reduction, elimination of thousands of fasteners, elimination of complex features and a significant reduction in the number of parts. The breakthrough for superplastic Al-Cu alloys was made by Stowell, Watts and Grimes in 1969, when the first of several dilute aluminium alloys (Al-6% Cu-0.5% Zr) was rendered superplastic through the introduction of relatively high levels of zirconium in solution, using specialized casting techniques and subsequent thermal treatment to create extremely fine ZrAl 3 precipitates.
Some commercial alloys have been thermo-mechanically processed to develop superplasticity. The main effort has been on the Al 7000 series alloys, Al-Li alloys, Al-based metal-matrix composites, and mechanically alloyed materials.
Aluminium alloys and their composites have wide applications in the automotive industry. At room temperature, composites usually have higher strength than their parent alloys. At high temperature, aluminium alloys reinforced by particles or whiskers such as SiO 2 , Si 3 N 4 , and SiC can reach tensile elongations of more than 700%. The composites are often fabricated by powder metallurgy to ensure fine grain sizes and good dispersion of the reinforcements. [ 13 ] The grain size that allows optimal superplastic deformation is usually 0.5–1 μm, below the requirement for conventional superplasticity. Like other superplastic materials, these composites have a strain rate sensitivity m larger than 0.3, indicating good resistance to local necking. A few aluminium alloy composites, such as the 6061 and 2024 series, have shown high-strain-rate superplasticity, which occurs at much higher strain rates than in other superplastic materials. [ 14 ] This property makes aluminium alloy composites potentially suitable for superplastic forming, because the whole process can be completed in a short time, saving time and energy.
The most common deformation mechanism in aluminium alloy composites is grain boundary sliding (GBS) , which is often accompanied by atom/dislocation diffusion to accommodate deformation. [ 15 ] The GBS mechanism model predicts a strain rate sensitivity of 0.3, which agrees with most of the superplastic aluminium alloy composites. Grain boundary sliding requires the rotation or migration of very fine grains at relatively high temperature. Therefore, the refinement of grain size and the prevention of grain growth at high temperature is of importance.
At very high temperatures (close to the melting point), another mechanism, interfacial sliding, is also said to operate, because partial melting appears in the matrix. The viscosity of the liquid plays the main role in accommodating the sliding of adjacent grain boundaries. Cavitation and stress concentration caused by the added second-phase reinforcements are inhibited by the flow of the liquid phase. However, too much liquid leads to voids, deteriorating the stability of the material, so a temperature close to, but not far above, the initial melting point is often optimal. Partial melting can lead to the formation of filaments on the fracture surface, which can be observed under a scanning electron microscope . [ 16 ] The morphology and chemistry of the reinforcements also influence the superplasticity of some composites, but no single criterion has yet been proposed to predict their influence. [ 17 ]
A few ways have been suggested to optimize the superplastic deformation of aluminium alloy composites; these approaches are also indicative for other materials.
In the aerospace industry, titanium alloys such as Ti–6Al–4V find extensive use not only because of their high specific strength at elevated temperatures, but also because a large number of these alloys exhibit superplastic behaviour and are amenable to superplastic forming (SPF). Superplastic sheet thermoforming has accordingly been identified as a standard processing route for the production of complex shapes. However, the vanadium additions make these alloys considerably expensive, so there is a need to develop superplastic titanium alloys with cheaper alloying additions. The Ti-Al-Mn alloy could be such a candidate material. This alloy shows significant post-uniform deformation at ambient and near-ambient temperatures.
Ti-Al-Mn (OT4-1) alloy is currently used for aero-engine components as well as other aerospace applications, formed through a conventional route that is typically cost-, labour- and equipment-intensive. The Ti-Al-Mn alloy is a candidate material for aerospace applications, yet little information is available on its superplastic forming behaviour. In the study described here, the high-temperature superplastic bulge forming of the alloy was studied and its superplastic forming capabilities are demonstrated.
The gas-pressure bulging of metal sheets has become an important forming method. As the bulging process progresses, significant thinning of the sheet material becomes evident. Many studies have been made to obtain the dome height as a function of forming time, information useful to the process designer for selecting the initial blank thickness and for characterizing the non-uniform thinning in the dome after forming.
The Ti-Al-Mn (OT4-1) alloy was available in the form of a 1 mm thick cold-rolled sheet. A 35-ton hydraulic press was used for the superplastic bulge forming of a hemisphere. A die set-up was fabricated and assembled with a piping system enabling not only inert gas flushing of the die assembly prior to forming, but also forming of components under reverse pressure , if needed.
A circular sheet (blank) of 118 mm diameter was cut from the alloy sheet and the cut surfaces polished to remove burrs. The blank was placed on the die and the top chamber brought into contact. The furnace was switched on to the set temperature. Once the set temperature was reached, the top chamber was brought down further to apply the required blank-holder pressure. About 10 minutes were allowed for thermal equilibration. The argon gas cylinder was opened gradually to the set pressure. Simultaneously, the linear variable differential transformer (LVDT), fitted at the bottom of the die, was set to record the sheet bulge. Once the LVDT reading reached 45 mm (the radius of the bottom die), the gas pressure was stopped and the furnace switched off. The formed components were taken out when the temperature of the die set had dropped to 600 °C; easy removal of the component was possible at this stage. Superplastic bulge forming of hemispheres was carried out at temperatures of 1098, 1123, 1148, 1173, 1198 and 1223 K (825, 850, 875, 900, 925 and 950 °C) at forming pressures of 0.2, 0.4, 0.6 and 0.8 MPa. As the bulge forming process progresses, significant thinning of the sheet material becomes evident, so an ultrasonic technique was used to measure the thickness distribution over the profile of the formed component. The components were analyzed in terms of thickness distribution, thickness strain and thinning factor. Post-deformation microstructural studies were conducted on the formed components in order to analyze the microstructure in terms of grain growth, grain elongation, cavitation, etc.
The microstructure of the as-received material, with a two-dimensional grain size of 14 μm, is shown in Fig. 8. The grain size was determined using the linear intercept method in both the longitudinal and transverse directions of the rolled sheet.
Successful superplastic forming of hemispheres was carried out at temperatures of 1098, 1123, 1148, 1173, 1198 and 1223 K and argon gas forming pressures of 0.2, 0.4, 0.6 and 0.8 MPa. A maximum time limit of 250 minutes was allowed for the complete forming of a hemisphere; this cut-off time was set for practical reasons. Fig. 9 shows a photograph of the blank (specimen) and a bulge-formed component (formed at a temperature of 1123 K and a forming gas pressure of 0.6 MPa).
The forming times of successfully formed components at the different forming temperatures and pressures are reported in Table 2. From the travel of the LVDT fitted at the bottom of the die (which measured the bulge height/depth), an estimate of the forming rate was obtained. The forming rate was rapid initially and decreased gradually over all the temperature and pressure ranges. At a particular temperature, the forming time reduced as the forming pressure was increased; similarly, at a given forming pressure, the forming time decreased with increasing temperature.
The thickness of the bulge profile was measured at 7 points, including the periphery (base) and the pole. These points were selected by taking the line between the centre of the hemisphere and the base point as reference and offsetting by 15° until the pole was reached. Hence points 1, 2, 3, 4 and 5 subtend angles of 15°, 30°, 45°, 60° and 75°, respectively, with the base of the hemisphere, as shown in Fig. 10. The thickness was measured at each of these points on the bulge profile using an ultrasonic technique, giving thickness values for each of the successfully formed hemispherical components.
Fig. 11 shows the pole thickness of fully formed hemispheres as a function of forming pressure at different temperatures. At a particular temperature, the pole thickness reduced as the forming pressure was increased. For all the cases studied, the pole thickness lay in the range of about 0.3 to 0.4 mm, from an original blank thickness of 1 mm.
The thickness strain ln ( S / S 0 ) {\displaystyle {\text{ln}}(S/S_{0})} , where S {\displaystyle S} is the local thickness and S 0 {\displaystyle S_{0}} is the initial thickness, was calculated at different locations for all the successfully formed components. For a particular pressure, the thickness strain reduced as the forming temperature was increased. Fig. 12 shows the thickness strain ln ( S / S 0 ) {\displaystyle {\text{ln}}(S/S_{0})} as a function of position along the dome cross-section for a component formed at 1123 K and a forming pressure of 0.6 MPa.
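As a quick worked example of this measure, using the pole thicknesses of about 0.3–0.4 mm reported above for the 1 mm blank (the computation is just the stated formula):

```python
import math

S0 = 1.0                      # initial blank thickness, mm
for S in (0.3, 0.35, 0.4):    # measured pole thicknesses, mm
    print(f"S = {S} mm -> thickness strain ln(S/S0) = {math.log(S / S0):.2f}")
# ln(0.3) ~ -1.20 and ln(0.4) ~ -0.92: the pole is thinned to a true strain
# of roughly -1, while points nearer the clamped base thin much less.
```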
The post-formed microstructure revealed that there was no significant change in grain size. Fig. 13 shows the microstructure of the bulge formed component at the base and the pole for a component formed at a temperature of 1148 K and forming pressure of 0.6 MPa. These microstructures show no significant change in grain size.
The high-temperature deformation behaviour and superplastic forming capability of a Ti-Al-Mn alloy were studied. Successful forming of 90 mm diameter hemispheres by the superplastic route was carried out in the temperature range 1098 to 1223 K and the forming pressure range 0.2 to 0.8 MPa, from which conclusions on the alloy's formability could be drawn.
Superplasticity has also been studied mostly on non-qualified materials, such as austenitic Fe-Mn-Al steel, which has some specific material parameters closely related to microstructural mechanisms. These parameters are used as indicators of a material's superplastic potential. The material was submitted to hot tensile testing, within a temperature range from 600 °C to 1000 °C and at strain rates varying from 10 −6 to 1 s −1 . The strain rate sensitivity parameter ( m ) and the maximum elongation until rupture (ε r ) could be determined from the hot tensile tests.
The experiments indicated the possibility of superplastic behaviour in the Fe-Mn-Al alloy within a temperature range from 700 °C to 900 °C, with a grain size around 3 μm (ASTM grain size 12), an average strain rate sensitivity of m ~ 0.54, and a maximum elongation at rupture around 600%.
The superplastic behaviour of Fe-28Al, Fe-28Al-2Ti and Fe-28Al-4Ti alloys has been investigated by tensile testing, optical microscopy and transmission electron microscopy. Tensile tests were performed at 700–900 °C over a strain rate range of about 10 −5 to 10 −2 /s. The maximum strain rate sensitivity index m was found to be 0.5, and the largest elongation reached 620%. Fe 3 Al and FeAl alloys with grain sizes of 100 to 600 μm exhibit all the deformation characteristics of conventional fine-grained superplastic alloys.
However, superplastic behaviour was found in large-grained iron aluminides without the usual requisites for superplasticity of a fine grain size and grain boundary sliding. Metallographic examinations have shown that the average grain size of large-grained iron aluminides decreased during superplastic deformation.
The properties of ceramic materials, like all materials, are dictated by the types of atoms present, the types of bonding between the atoms, and the way the atoms are packed together. This is known as the atomic scale structure. Most ceramics are made up of two or more elements. This is called a compound. For example, alumina ( Al 2 O 3 ), is a compound made up of aluminium atoms and oxygen atoms.
The atoms in ceramic materials are held together by chemical bonds. The two most common chemical bonds in ceramic materials are covalent and ionic; for metals, the chemical bond is called the metallic bond. Atoms are bound much more strongly in covalent and ionic bonding than in metallic bonding, which is why, generally speaking, metals are ductile and ceramics are brittle. Owing to their wide range of properties, ceramic materials are used for a multitude of applications. In general, most ceramics are hard, wear-resistant, brittle, refractory, thermally and electrically insulating, nonmagnetic, oxidation-resistant, prone to thermal shock, and chemically stable.
High-strain-rate superplasticity has been observed in aluminium -based and magnesium-based alloys. In ceramic materials, however, superplastic deformation has been restricted to low strain rates for most oxides and nitrides, with the presence of cavities leading to premature failure. Here we show that a composite ceramic material consisting of tetragonal zirconium oxide, magnesium aluminate spinel and an alpha-alumina phase exhibits superplasticity at strain rates up to 1.0 s −1 . The composite also exhibits a large tensile elongation, exceeding 1050% at a strain rate of 0.4 s −1 .
Superplastic metals and ceramics have the ability to deform over 100% without fracturing, permitting net-shape forming at high temperatures. These intriguing materials deform primarily by grain boundary sliding, a process accelerated by a fine grain size. However, most ceramics that start with a fine grain size experience rapid grain growth during high-temperature deformation, rendering them unsuitable for extended superplastic forming. Grain growth can be limited with a minor second phase (Zener pinning) or by making a ceramic with three phases, in which grain-to-grain contact of the same phase is minimized. Research on fine-grained three-phase alumina-mullite( 3Al 2 O 3 ·2SiO 2 )-zirconia, with approximately equal volume fractions of the three phases, demonstrates that superplastic strain rates as high as 10 −2 /s at 1500 °C can be reached. These high strain rates put ceramic superplastic forming into the realm of commercial feasibility.
Superplastic forming will only work if cavitation does not occur during grain boundary sliding, leaving either diffusion accommodation or dislocation generation as the mechanisms for accommodating grain boundary sliding. The applied stresses during ceramic superplastic forming are moderate, usually 20–50 MPa, usually not high enough to generate dislocations in single crystals, which should rule out dislocation accommodation. Some unusual and unique features of these three-phase superplastic ceramics will be revealed, however, indicating that superplastic ceramics may have much more in common with metals than previously thought.
Yttrium oxide is used as the stabilizer. This material is predominantly tetragonal in structure. Y-TZP has the highest flexural strength of all the zirconia-based materials. The fine grain size of Y-TZP lends itself to use in cutting tools, where a very sharp edge can be achieved and maintained thanks to its high wear resistance. 3-mol% Y-TZP (3Y-TZP) is considered the first true polycrystalline ceramic shown to be superplastic, and it is now regarded as the model ceramic system.
The fine grain size leads to a very dense, non-porous ceramic with excellent mechanical strength, corrosion resistance, impact toughness , thermal shock resistance and very low thermal conductivity. Owing to these characteristics, Y-TZP is used in wear parts, cutting tools and thermal barrier coatings .
The superplastic properties of 3Y-TZP are greatly affected by grain size, as displayed in Fig. 3: the elongation to failure decreases and the flow strength increases as the grain size increases. A study of the dependence of flow stress on grain size found, in summary, that the flow stress is approximately proportional to the square of the grain size: σ ∝ d 2 {\displaystyle \sigma \propto d^{2}} , where σ {\displaystyle \sigma } is the flow stress and d {\displaystyle d} is the grain size.
Alumina is probably one of the most widely used structural ceramics, but superplasticity is difficult to obtain in alumina, as a result of rapid anisotropic grain growth during high-temperature deformation.
Nevertheless, several studies have been performed on superplasticity in doped, fine-grained Al 2 O 3 . These studies demonstrated that the grain size of Al 2 O 3 containing 500 ppm MgO can be further refined by adding various dopants, such as Cr 2 O 3 , Y 2 O 3 , and TiO 2 . A grain size of about 0.66 μm was obtained in a 500-ppm Y 2 O 3 -doped Al 2 O 3 . As a result of this fine grain size, the Al 2 O 3 exhibits a rupture elongation of 65% at 1450 °C under an applied stress of 20 MPa. [ 19 ] | https://en.wikipedia.org/wiki/Superplasticity |
Superplasticizers ( SPs ), also known as high range water reducers , are additives used for making high strength concrete or to place self-compacting concrete . Plasticizers are chemical compounds enabling the production of concrete with approximately 15% less water content . Superplasticizers allow reduction in water content by 30% or more. These additives are employed at the level of a few weight percent. Plasticizers and superplasticizers also retard the setting and hardening of concrete. [ 1 ]
According to their dispersing functionality and mode of action, two classes of superplasticizers are distinguished: the sulfonated polycondensates (polynaphthalene and polymelamine sulfonates), which disperse cement particles mainly by electrostatic repulsion, and the polycarboxylate ethers, which act mainly by steric hindrance.
Superplasticizers are used when well-dispersed cement particle suspensions are required to improve the flow characteristics ( rheology ) of concrete. Their addition allows the water-to-cement ratio of the concrete or mortar to be decreased without negatively affecting the workability of the mixture, and enables the production of self-consolidating concrete and high-performance concrete. The water–cement ratio is the main factor determining the concrete strength and its durability. Superplasticizers greatly improve the fluidity and the rheology of fresh concrete. The concrete strength increases when the water-to-cement ratio decreases, because avoiding the addition of excess water merely to maintain workability results in a lower porosity of the hardened concrete, and so in a better resistance to compression. [ 3 ]
The addition of SP to the truck during transit is a fairly modern development within the industry. Admixtures added in transit through automated slump management systems [ 4 ] make it possible to maintain the slump of fresh concrete until discharge without reducing concrete quality.
Traditional plasticizers are lignosulfonates , used as their sodium salts . [ 5 ] Superplasticizers are synthetic polymers . Compounds used as superplasticizers include (1) sulfonated naphthalene formaldehyde condensate, sulfonated melamine formaldehyde condensate and acetone formaldehyde condensate, and (2) polycarboxylate ethers . Cross-linked melamine - or naphthalene -sulfonates, referred to as PMS (polymelamine sulfonate) and PNS (polynaphthalene sulfonate) respectively, are illustrative. They are prepared by cross-linking the sulfonated monomers using formaldehyde or by sulfonating the corresponding cross-linked polymer. [ 1 ] [ 6 ]
The polymers used as plasticizers exhibit surfactant properties. They are often ionomers bearing negatively charged groups ( sulfonates , carboxylates , or phosphonates ). They function as dispersants to minimize particle segregation in fresh concrete (separation of the cement slurry and water from the coarse and fine aggregates, i.e. gravel and sand). The negatively charged polymer backbone adsorbs onto the positively charged colloidal particles of unreacted cement, especially onto the tricalcium aluminate ( C 3 A ) mineral phase of cement.
Melaminesulfonate (PMS) and naphthalenesulfonate (PNS) mainly act by electrostatic interactions with cement particles favoring their electrostatic repulsion while polycarboxylate-ether (PCE) superplasticizers sorb and coat large agglomerates of cement particles, and thanks to their lateral chains, sterically favor the dispersion of large cement agglomerates into smaller ones. [ 7 ]
However, as their working mechanisms are not fully understood, cement-superplasticizer incompatibilities can be observed in certain cases. [ 8 ] | https://en.wikipedia.org/wiki/Superplasticizer |
The superposition calculus is a calculus for reasoning in equational logic . It was developed in the early 1990s and combines concepts from first-order resolution with ordering-based equality handling as developed in the context of (unfailing) Knuth–Bendix completion . It can be seen as a generalization of either resolution (to equational logic) or unfailing completion (to full clausal logic ). Like most first-order calculi, superposition tries to show the unsatisfiability of a set of first-order clauses , i.e. it performs proofs by refutation . Superposition is refutation complete —given unlimited resources and a fair derivation strategy, from any unsatisfiable clause set a contradiction will eventually be derived.
Many (state-of-the-art) theorem provers for first-order logic are based on superposition (e.g. the E equational theorem prover ), although only a few implement the pure calculus.
| https://en.wikipedia.org/wiki/Superposition_calculus |
In mathematics , the superquadrics or super-quadrics (also superquadratics ) are a family of geometric shapes defined by formulas that resemble those of ellipsoids and other quadrics , except that the squaring operations are replaced by arbitrary powers. They can be seen as the three-dimensional relatives of the superellipses . The term may refer to the solid object or to its surface , depending on the context. The equations below specify the surface; the solid is specified by replacing the equality signs by less-than-or-equal signs.
The superquadrics include many shapes that resemble cubes , octahedra , cylinders , lozenges and spindles , with rounded or sharp corners. [ 1 ] Because of their flexibility and relative simplicity, they are popular geometric modeling tools, especially in computer graphics . They have become important geometric primitives widely used in computer vision , [ 2 ] [ 3 ] robotics, [ 4 ] and physical simulation. [ 5 ]
Some authors, such as Alan Barr , define "superquadrics" as including both the superellipsoids and the supertoroids . [ 1 ] [ 6 ] In the modern computer vision literature, superquadrics and superellipsoids are used interchangeably, since superellipsoids are the most representative and widely utilized shape among all the superquadrics. [ 2 ] [ 3 ] Geometrical properties of superquadrics and methods for their recovery from range images and point clouds are covered comprehensively in several computer vision publications. [ 1 ] [ 3 ] [ 7 ] [ 8 ]
The surface of the basic superquadric is given by | x | r + | y | s + | z | t = 1 {\displaystyle \left|x\right|^{r}+\left|y\right|^{s}+\left|z\right|^{t}=1}
where r , s , and t are positive real numbers that determine the main features of the superquadric: roughly speaking, exponents less than 2 produce pinched, spindle- or octahedron-like shapes, exponents equal to 2 give an ellipsoid, and larger exponents give increasingly box-like shapes with flatter faces and sharper edges.
Each exponent can be varied independently to obtain combined shapes. For example, if r = s = 2 and t = 4, one obtains a solid of revolution which resembles an ellipsoid with round cross-section but flattened ends. This formula is a special case of the superellipsoid's formula if (and only if) r = s .
If any exponent is allowed to be negative, the shape extends to infinity. Such shapes are sometimes called super-hyperboloids .
The basic shape above spans from −1 to +1 along each coordinate axis. The general superquadric is the result of scaling this basic shape by different amounts A , B , C along each axis. Its general equation is | x A | r + | y B | s + | z C | t = 1 {\displaystyle \left|{\frac {x}{A}}\right|^{r}+\left|{\frac {y}{B}}\right|^{s}+\left|{\frac {z}{C}}\right|^{t}=1}
Parametric equations in terms of surface parameters u and v (equivalent to longitude and latitude if m equals 2) are x ( u , v ) = A c ( v , 2 r ) c ( u , 2 r ) , y ( u , v ) = B c ( v , 2 s ) s ( u , 2 s ) , z ( u , v ) = C s ( v , 2 t ) {\displaystyle {\begin{aligned}x(u,v)&=A\,c\!\left(v,{\frac {2}{r}}\right)c\!\left(u,{\frac {2}{r}}\right)\\y(u,v)&=B\,c\!\left(v,{\frac {2}{s}}\right)s\!\left(u,{\frac {2}{s}}\right)\\z(u,v)&=C\,s\!\left(v,{\frac {2}{t}}\right)\end{aligned}}\qquad -{\frac {\pi }{2}}\leq v\leq {\frac {\pi }{2}},\;-\pi \leq u<\pi }
where the auxiliary functions are c ( ω , m ) = sgn ⁡ ( cos ⁡ ω ) | cos ⁡ ω | m , s ( ω , m ) = sgn ⁡ ( sin ⁡ ω ) | sin ⁡ ω | m {\displaystyle c(\omega ,m)=\operatorname {sgn}(\cos \omega )|\cos \omega |^{m},\quad s(\omega ,m)=\operatorname {sgn}(\sin \omega )|\sin \omega |^{m}}
and the sign function sgn( x ) is sgn ⁡ ( x ) = { − 1 , x < 0 0 , x = 0 + 1 , x > 0 {\displaystyle \operatorname {sgn}(x)={\begin{cases}-1,&x<0\\0,&x=0\\+1,&x>0\end{cases}}}
Barr introduces the spherical product , which, given two plane curves, produces a 3D surface. If f ( μ ) = ( f 1 ( μ ) f 2 ( μ ) ) , g ( ν ) = ( g 1 ( ν ) g 2 ( ν ) ) {\displaystyle f(\mu )={\begin{pmatrix}f_{1}(\mu )\\f_{2}(\mu )\end{pmatrix}},\quad g(\nu )={\begin{pmatrix}g_{1}(\nu )\\g_{2}(\nu )\end{pmatrix}}} are two plane curves, then the spherical product is h ( μ , ν ) = f ( μ ) ⊗ g ( ν ) = ( g 1 ( ν ) f 1 ( μ ) g 1 ( ν ) f 2 ( μ ) g 2 ( ν ) ) {\displaystyle h(\mu ,\nu )=f(\mu )\otimes g(\nu )={\begin{pmatrix}g_{1}(\nu )\ f_{1}(\mu )\\g_{1}(\nu )\ f_{2}(\mu )\\g_{2}(\nu )\end{pmatrix}}} This is similar to the typical parametric equation of a sphere : x = x 0 + r sin θ cos φ y = y 0 + r sin θ sin φ ( 0 ≤ θ ≤ π , 0 ≤ φ < 2 π ) z = z 0 + r cos θ {\displaystyle {\begin{aligned}x&=x_{0}+r\sin \theta \;\cos \varphi \\y&=y_{0}+r\sin \theta \;\sin \varphi \qquad (0\leq \theta \leq \pi ,\;0\leq \varphi <2\pi )\\z&=z_{0}+r\cos \theta \end{aligned}}} which gives rise to the name spherical product.
Barr uses the spherical product to define quadric surfaces, like ellipsoids , and hyperboloids as well as the torus , superellipsoid , superquadric hyperboloids of one and two sheets, and supertoroids. [ 1 ]
The following GNU Octave code generates a mesh approximation of a superquadric:
In quantum optics , a superradiant phase transition is a phase transition that occurs in a collection of fluorescent emitters (such as atoms), between a state containing few electromagnetic excitations (as in the electromagnetic vacuum ) and a superradiant state with many electromagnetic excitations trapped inside the emitters. The superradiant state is made thermodynamically favorable by having strong, coherent interactions between the emitters.
The superradiant phase transition was originally predicted by the Dicke model of superradiance , which assumes that atoms have only two energetic levels and that these interact with only one mode of the electromagnetic field. [ 1 ] [ 2 ] The phase transition occurs when the strength of the interaction between the atoms and the field is greater than the energy of the non-interacting part of the system. (This is similar to the cases of superconductivity and ferromagnetism, where dynamic interaction between the constituents leads to spontaneous ordering of excitations below the critical temperature.) The collective Lamb shift , relating to the system of atoms interacting with the vacuum fluctuations , becomes comparable to the energies of the atoms alone, and the vacuum fluctuations cause the spontaneous self-excitation of matter.
The transition can be readily understood by the use of the Holstein–Primakoff transformation [ 3 ] applied to a two-level atom . As a result of this transformation, the atoms become Lorentz harmonic oscillators with frequencies equal to the difference between the energy levels. The whole system then simplifies to a system of interacting harmonic oscillators of atoms and the field, known as the Hopfield dielectric , which in the normal state further predicts polarons for photons, or polaritons .
If the interaction with the field is so strong that the system collapses in the harmonic approximation and complex polariton frequencies ( soft modes ) appear, then the physical system with nonlinear terms of higher order becomes a system with a Mexican hat -like potential and will undergo a ferroelectric-like phase transition. [ 4 ] In this model, the system is mathematically equivalent, for one mode of excitation, to a Trojan wave packet when the circularly polarized field intensity corresponds to the electromagnetic coupling constant. Above the critical value, it changes to the unstable motion of ionization .
The superradiant phase transition was the subject of a wide discussion as to whether or not it is only a result of the simplified model of the matter–field interaction, and whether it can occur for the real physical parameters of physical systems (a no-go theorem ). [ 5 ] [ 6 ] However, both the original derivation and the later corrections leading to nonexistence of the transition – due to the Thomas–Reiche–Kuhn sum rule , which for the harmonic oscillator cancels the needed inequality to an impossible negativity of the interaction – were based on the assumption that the quantum field operators are commuting numbers and that the atoms do not interact through static Coulomb forces. This is generally not true, as in the case of the Bohr–van Leeuwen theorem and the classical non-existence of Landau diamagnetism . The negating results were also a consequence of using simple quantum optics models of the electromagnetic field–matter interaction, rather than more realistic condensed-matter models, such as the BCS model of superconductivity with the phonons replaced by photons so as to first obtain collective polaritons . The return of the transition basically occurs because the inter-atom dipole–dipole, or generally the electron–electron Coulomb, interactions are never negligible at condensed-matter densities, and even less so in the superradiant matter density regime; moreover, the Power–Zienau unitary transformation, eliminating the quantum vector potential in the minimum-coupling Hamiltonian, transforms the Hamiltonian exactly to the form used when the transition was first discovered, without the square of the vector potential which was later claimed to prevent it. Alternatively, within the full quantum mechanics including the electromagnetic field, the generalized Bohr–van Leeuwen theorem does not work and the electromagnetic interactions cannot be eliminated; they only change the p ⋅ A {\displaystyle \mathbf {p} \cdot \mathbf {A} } vector-potential coupling to the electric-field coupling x ⋅ E {\displaystyle \mathbf {x} \cdot \mathbf {E} } and alter the effective electrostatic interactions. The transition can be observed in model systems like Bose–Einstein condensates [ 7 ] and artificial atoms. [ 8 ] [ 9 ]
A superradiant phase transition is formally predicted by the critical behavior of the resonant Jaynes-Cummings model , describing the interaction of only one atom with one mode of the electromagnetic field.
Starting from the exact Hamiltonian of the Jaynes-Cummings model at resonance
Applying the Holstein-Primakoff transformation for two spin levels,
replacing the spin raising and lowering operators by those for the harmonic oscillators
one gets the Hamiltonian of two coupled harmonic-oscillators:
which can readily be diagonalized.
Postulating its normal form
where
one gets the eigenvalue equation
with the solutions
The system collapses when one of the frequencies becomes imaginary, i.e. when
or when the atom-field coupling is stronger than the frequency of the mode and atom oscillators.
While there are physically higher-order terms in the true system, the system in this regime will therefore undergo a phase transition.
The simplified Hamiltonian of the Jaynes-Cummings model, neglecting the counter-rotating terms, is
and the energies for the case of zero detuning are
where Ω {\displaystyle \Omega } is the Rabi frequency .
One can approximately calculate the canonical partition function , with the discrete sum over excitation numbers replaced by an integral.
The standard approach is to evaluate the latter integral by a Gaussian approximation around the maximum of the exponent. This leads to a critical equation, which has a solution only if the field–atom coupling is significantly stronger than the energy difference between the atomic levels; only then do the normal and the superradiant phases both exist.
When this condition is fulfilled, the equation gives the solution for the order parameter n {\displaystyle n} as a function of the inverse temperature 1 / β {\displaystyle 1/\beta } , which means a non-vanishing, ordered field mode.
Similar considerations can be made in the true thermodynamic limit of an infinite number of atoms.
Better insight into the nature of the superradiant phase transition, as well as into the physical value of the critical parameter which must be exceeded in order for the transition to occur, may be obtained by studying the classical stability of a system of charged classical harmonic oscillators in 3D space interacting only through electrostatic repulsive forces, for example between electrons in a locally harmonic oscillator potential. In contrast to the original model of superradiance, the quantum electromagnetic field is totally neglected here. The oscillators may be assumed to be placed, for example, on a cubic lattice with lattice constant a {\displaystyle a} , in analogy to a crystal system of condensed matter.
The worst-case scenario is a defect in which the two out-of-plane, motion-stabilizing electrons are absent from the six nearest neighbors of a chosen electron, while the four nearest in-plane electrons are assumed to be rigid in space, producing an anti-harmonic potential in the direction perpendicular to the plane of all five electrons. The condition for instability of motion of the chosen electron is that the net potential, the superposition of the harmonic oscillator potential and the quadratically expanded Coulomb potential from the four electrons, be negative, i.e.
or
Making this artificially quantum-mechanical by multiplying the numerator and the denominator of the fraction by ℏ {\displaystyle \hbar } , one obtains the condition
where d 2 = e 2 ℏ / 2 m ω {\displaystyle d^{2}=e^{2}\hbar /2m\omega } is the square of the dipole transition strength between the ground state and the first excited state of the quantum harmonic oscillator , ℏ ω {\displaystyle \hbar \omega } is the energy gap between consecutive levels, and it is also noticed that n = 1 / a 3 {\displaystyle n=1/a^{3}} is the spatial density of the oscillators.
The condition is almost identical to that obtained in the original discovery of the superradiant phase transition when the harmonic oscillators are replaced by two-level atoms with the same distance between the energy levels, the same dipole transition strength, and the same density, which means that the transition occurs in the regime where the Coulomb interactions between electrons dominate over the locally harmonic oscillatory influence of the atoms. In that sense, the free electron gas with ω = 0 {\displaystyle \omega =0} is also purely superradiant.
The critical inequality, rewritten yet differently, expresses the fact that the superradiant phase transition occurs when the frequency of the binding atomic oscillators is lower than the so-called electron gas plasma frequency , as sketched below.
| https://en.wikipedia.org/wiki/Superradiant_phase_transition |
In economics and game theory , a participant is considered to have superrationality (or renormalized rationality ) if they have perfect rationality (and thus maximize their utility ) but assume that all other players are superrational too and that a superrational individual will always come up with the same strategy as any other superrational thinker when facing the same problem. Applying this definition, a superrational player who assumes they are playing against a superrational opponent in a prisoner's dilemma will cooperate while a rationally self-interested player would defect.
This decision rule is not a mainstream model in game theory and was suggested by Douglas Hofstadter in his article, series, and book Metamagical Themas [ 1 ] as an alternative type of rational decision making different from the widely accepted game-theoretic one. Hofstadter provided this definition: "Superrational thinkers, by recursive definition, include in their calculations the fact that they are in a group of superrational thinkers." [ 1 ]
Unlike the supposed " reciprocating human ", the superrational thinker will not always play the equilibrium that maximizes the total social utility and is thus not a philanthropist .
The idea of superrationality is that two logical thinkers analyzing the same problem will think of the same correct answer. For example, if two people are both good at math and both have been given the same complicated problem to do, both will get the same right answer. In math, knowing that the two answers are going to be the same doesn't change the value of the problem, but in game theory, knowing that the answer will be the same might change the answer itself.
The prisoner's dilemma is usually framed in terms of jail sentences for criminals, but it can be stated equally well with cash prizes instead. Two players are each given the choice to cooperate (C) or to defect (D). The players choose without knowing what the other is going to do. If both cooperate, each will get $100. If they both defect, they each get $1. If one cooperates and the other defects, then the defecting player gets $150, while the cooperating player gets nothing.
The four outcomes and the payoff to each player are listed below (payoffs given as: first player, second player): both cooperate – $100, $100; first cooperates, second defects – $0, $150; first defects, second cooperates – $150, $0; both defect – $1, $1.
One valid way for the players to reason is as follows: if the other player defects, it is better to defect (winning $1 rather than nothing); if the other player cooperates, it is also better to defect (winning $150 rather than $100); so defecting is better no matter what the other player does.
The conclusion is that the rational thing to do is to defect. This type of reasoning defines game-theoretic rationality; two game-theoretically rational players playing this game will both defect and receive a dollar each.
Superrationality is an alternative method of reasoning. First, it is assumed that the answer to a symmetric problem will be the same for all the superrational players. Thus the sameness is taken into account before knowing what the strategy will be. The strategy is found by maximizing the payoff to each player, assuming that they all use the same strategy. Since the superrational player knows that the other superrational player will do the same thing, whatever that might be, there are only two choices for two superrational players. Both will cooperate or both will defect depending on the value of the superrational answer. Thus the two superrational players will both cooperate since this answer maximizes their payoff. Two superrational players playing this game will each walk away with $100.
A superrational player playing against a game-theoretic rational player will defect, since the strategy only assumes that the superrational players will agree.
Although standard game theory assumes common knowledge of rationality, it does so in a different way. The game-theoretic analysis maximizes payoffs by allowing each player to change strategies independently of the others, even though in the end, it assumes that the answer in a symmetric game will be the same for all. This is the definition of a game-theoretic Nash equilibrium , which defines a stable strategy as one where no player can improve the payoffs by unilaterally changing course. The superrational equilibrium in a symmetric game is one where all the players' strategies are forced to be the same before the maximization step. (Although there is no agreed-upon extension of the concept of superrationality to asymmetric games, see § Asymmetric games for more.)
Some argue that superrationality implies a kind of magical thinking in which each player supposes that their decision to cooperate will cause the other player to cooperate, even though there is no communication. Hofstadter points out that the concept of "choice" doesn't apply when the player's goal is to figure something out, and that the decision does not cause the other player to cooperate; rather, the same logic leads to the same answer, independent of communication or cause and effect. This debate is over whether it is reasonable for human beings to act in a superrational manner, not over what superrationality means, and is similar to arguments about whether it is reasonable for humans to act in a 'rational' manner, as described by game theory (wherein they can figure out what other players will do or have done by asking themselves, what would I do if I were them, and applying backward induction and iterated elimination of dominated strategies ).
For simplicity, the foregoing account of superrationality ignored mixed strategies : the possibility that the best choice could be to flip a coin, or more generally to choose different outcomes with some probability . In the prisoner's dilemma , it is superrational to cooperate with probability 1 even when mixed strategies are admitted, because the average payoff when one player cooperates and the other defects is less than the payoff when both cooperate, so defecting with any probability only increases the risk of both defecting and decreases the expected payout. But in some cases, the superrational strategy is mixed.
For example, if the payoffs are as follows: $100 each for mutual cooperation, $1 each for mutual defection, and $1,000,000 for a lone defector against $0 for the lone cooperator, so that defecting has a huge reward, then the superrational strategy is to defect with a probability of 499,900/999,899, or a little over 49.995%. As the reward increases to infinity, the probability approaches, but never reaches, 1/2, and the losses from adopting the simpler strategy of exactly 1/2 (which are already minimal) approach 0. In a less extreme example, if the payoff for one cooperator and one defector were $400 and $0, respectively, the superrational mixed strategy would be defecting with probability 100/299, or about 1/3.
In similar situations with more players, using a randomising device can be essential. One example discussed by Hofstadter is the platonia dilemma : an eccentric trillionaire contacts 20 people and tells them that if one and only one of them sends him or her a telegram (assumed to cost nothing) by noon the next day, that person will receive a billion dollars. If he receives more than one telegram, or none at all, no one will get any money, and communication between players is forbidden. In this situation, the superrational thing to do (if it is known that all 20 are superrational) is to send a telegram with probability p = 1/20 – that is, each recipient essentially rolls a 20-sided die and only sends a telegram if it comes up "1". This maximizes the probability that exactly one telegram is received.
Notice, though, that this is not the solution under a conventional game-theoretical analysis. Twenty game-theoretically rational players would each send a telegram and therefore receive nothing. This is because sending a telegram is the dominant strategy : if an individual player sends a telegram they have a chance of receiving money, but if they send none they cannot get anything. (Since every player reasons this way, everyone sends a telegram and no one can expect to get any money.)
Academic work extending the concept of superrationality to asymmetric games is still incipient.
One such work, developed by Ghislain Fourny, [ 2 ] proposes a decision algorithm which, when executed by a set of agents, will lead to what he called a Perfectly Transparent Equilibrium:
The generalized equilibrium is called the Perfectly Transparent Equilibrium (PTE). [...] while it does not always exist, when it does exist, it is always unique, is always Pareto-optimal , and coincides with Hofstadter’s
equilibrium on symmetric games.
This algorithm can informally be understood as the following sequence of steps:
The outcome that survives this elimination process, if any, will be the PTE.
The question of whether to cooperate in a one-shot Prisoner's Dilemma in some circumstances has also come up in the decision theory literature sparked by Newcomb's problem . Causal decision theory suggests that superrationality is irrational, while evidential decision theory endorses lines of reasoning similar to superrationality and recommends cooperation in a Prisoner's Dilemma against a similar opponent. [ 3 ] [ 4 ]
Program equilibrium has been proposed as a mechanistic model of superrationality. [ 5 ] [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Superrationality |
In physical chemistry , supersaturation occurs with a solution when the concentration of a solute exceeds the concentration specified by the value of solubility at equilibrium . Most commonly the term is applied to a solution of a solid in a liquid , but it can also be applied to liquids and gases dissolved in a liquid. A supersaturated solution is in a metastable state; it may return to equilibrium by separation of the excess of solute from the solution, by dilution of the solution by adding solvent, or by increasing the solubility of the solute in the solvent.
Early studies of the phenomenon were conducted with sodium sulfate , also known as Glauber's salt , because, unusually, the solubility of this salt in water may decrease with increasing temperature. Early studies have been summarised by Tomlinson. [ 1 ] It was shown that the crystallization of a supersaturated solution does not simply come from its agitation (the previous belief), but from solid matter entering and acting as a "starting" site for crystals to form, now called "seeds" (for more information, see nucleation ). Expanding upon this, Gay-Lussac brought attention to the kinematics of salt ions and the characteristics of the container having an impact on the supersaturation state. He was also able to expand upon the number of salts with which a supersaturated solution can be obtained. Later, Henri Löwel came to the conclusion that both nuclei of the solution and the walls of the container have a catalyzing effect on the solution that causes crystallization. Explaining and providing a model for this phenomenon has been a task taken on by more recent research. Désiré Gernez contributed to this research by discovering that nuclei must be of the same salt that is being crystallized in order to promote crystallization.
Furthermore, in 1950, Victor K. LaMer proposed another theory for nucleation , [ 2 ] in which he described the nucleation and growth of sulfur nuclei in a solution where a chemical reaction provided a constant inflow of molecularly dissolved sulfur. This theory, however, is not confined to this specific case and can be generalised, as shown in LaMer's diagram. [ 3 ] [ 4 ] [ 5 ]
In section (I), the concentration of solute grows linearly as solute is formed in (or added to) the solution. Upon reaching c L e q {\displaystyle c_{L}^{eq}\,\!} , the solution becomes saturated, but it does not start depositing solute right away. Instead, it keeps absorbing solute, becoming supersaturated.
In section (II), concentration reaches critical saturation levels, c m i n {\displaystyle c_{min}\,\!} , when solute crystals begin nucleating. The appearance of nuclei partially relieves the supersaturation, at least rapidly enough that the rate of nucleation falls almost immediately to zero. The system rapidly reaches a balance between the solute supply and the consumption rate for the nucleation and its growth, slowing down the increase in its concentration. After reaching the peak, the curve declines owing to the increasing consumption of the solute for the growth of nuclei and reaches again the critical level of nucleation, c m i n {\displaystyle c_{min}\,\!} , ending the nucleation stage. Given optimal conditions, having the solute be introduced to the solution very steadily while keeping the system free from perturbations and nucleation seeds, the maximum concentration that can be achieved in this way is defined as c m a x {\displaystyle c_{max}\,\!} .
In section (III), the supersaturation becomes too low for any more crystals to nucleate, so no new crystals are formed. However, as the solution is still supersaturated, the existing crystals grow by solute diffusion . As time passes by, the growth rate of the crystal equals the rate of solute supply, so the concentration converges to the saturation value c L e q {\displaystyle c_{L}^{eq}\,\!} .
A solution of a chemical compound in a liquid will become supersaturated when the temperature of the saturated solution is changed. In most cases solubility decreases with decreasing temperature; in such cases the excess of solute will rapidly separate from the solution as crystals or an amorphous powder. [ 6 ] [ 7 ] [ 8 ] In a few cases the opposite effect occurs. The example of sodium sulfate in water is well-known and this was why it was used in early studies of solubility.
Recrystallization [ 9 ] [ 10 ] is a process used to purify chemical compounds. A mixture of the impure compound and solvent is heated until the compound has dissolved. If there is some solid impurity remaining it is removed by filtration . When the temperature of the solution is subsequently lowered it briefly becomes supersaturated and then the compound crystallizes out until chemical equilibrium at the lower temperature is achieved. Impurities remain in the supernatant liquid. In some cases crystals do not form quickly and the solution remains supersaturated after cooling. This is because there is a thermodynamic barrier to the formation of a crystal in a liquid medium. Commonly this is overcome by adding a tiny crystal of the solute compound to the supersaturated solution, a process known as "seeding". Another process in common use is to rub a rod on the side of a glass vessel containing the solution to release microscopic glass particles which can act as nucleation centres. In industry, centrifugation is used to separate the crystals from the supernatant liquid.
Some compounds and mixtures of compounds can form long-lived supersaturated solutions. Carbohydrates are a class of such compounds; the thermodynamic barrier to formation of crystals is rather high because of extensive and irregular hydrogen bonding with the solvent, water. For example, although sucrose can be recrystallised easily, its hydrolysis product, known as " invert sugar " or "golden syrup", is a mixture of glucose and fructose that exists as a viscous, supersaturated liquid. Clear honey contains carbohydrates which may crystallize over a period of weeks.
Supersaturation may be encountered when attempting to crystallize a protein. [ 11 ]
The solubility of a gas in a liquid increases with increasing gas pressure. When the external pressure is reduced, the excess gas comes out of solution.
Fizzy drinks are made by subjecting the liquid to carbon dioxide , under pressure. In champagne the CO 2 is produced naturally in the final stage of fermentation . When the bottle or can is opened some gas is released in the form of bubbles.
Release of gas from supersaturated tissues can cause an underwater diver to suffer from decompression sickness (a.k.a. the bends) when returning to the surface. This can be fatal if the released gas obstructs critical blood supplies causing ischaemia in vital tissues. [ 12 ]
Dissolved gases can be released during oil exploration when a strike is made. This occurs because the oil in oil-bearing rock is under considerable pressure from the over-lying rock, allowing the oil to be supersaturated with respect to dissolved gases.
A cloudburst is an extreme form of production of liquid water from a supersaturated mixture of air and water vapour in the atmosphere . Supersaturation in the vapour phase is related to the surface tension of liquids through the Kelvin equation , the Gibbs–Thomson effect and the Poynting effect . [ 13 ]
The International Association for the Properties of Water and Steam ( IAPWS ) provides a special equation for the Gibbs free energy in the metastable-vapor region of water in its Revised Release on the IAPWS Industrial Formulation 1997 for the Thermodynamic Properties of Water and Steam . All thermodynamic properties for the metastable-vapor region of water can be derived from this equation by means of the appropriate relations of thermodynamic properties to the Gibbs free energy. [ 14 ]
When measuring the concentration of a solute in a supersaturated gaseous or liquid mixture, the pressure inside the cuvette may be greater than the ambient pressure. When this is so, a specialized cuvette must be used. The choice of analytical technique will depend on the characteristics of the analyte. [ 15 ]
The characteristics of supersaturation have practical applications in terms of pharmaceuticals . By creating a supersaturated solution of a certain drug, it can be ingested in liquid form. The drug can be driven into a supersaturated state through any normal mechanism and then prevented from precipitating out by adding precipitation inhibitors. [ 16 ] Drugs in this state are referred to as "supersaturating drug delivery systems", or "SDDS". [ 17 ] Oral consumption of a drug in this form is simple and allows for the measurement of very precise dosages. Primarily, it provides a means for drugs with very low solubility to be made into aqueous solutions . [ 18 ] [ 19 ] In addition, some drugs can undergo supersaturation inside the body despite being ingested in a crystalline form. [ 20 ] This phenomenon is known as in vivo supersaturation .
The identification of supersaturated solutions can be used as a tool for marine ecologists to study the activity of organisms and populations. Photosynthetic organisms release O 2 gas into the water. Thus, an area of the ocean supersaturated with O 2 gas can likely be determined to be rich with photosynthetic activity. Though some O 2 will naturally be found in the ocean due to simple physical chemical properties, upwards of 70% of all oxygen gas found in supersaturated regions can be attributed to photosynthetic activity. [ 21 ]
Supersaturation in the vapor phase is usually present in the expansion process through steam nozzles that operate with superheated steam at the inlet, which transitions to the saturated state at the outlet. Supersaturation thus becomes an important factor to be taken into account in the design of steam turbines , as it results in an actual mass flow of steam through the nozzle about 1 to 3% greater than the theoretically calculated value that would be expected if the expanding steam passed through equilibrium states in a reversible adiabatic process. In these cases supersaturation occurs because the expansion develops so rapidly, and in such a short time, that the expanding vapor cannot reach its equilibrium state in the process, behaving as if it were superheated . Hence the determination of the expansion ratio, relevant to the calculation of the mass flow through the nozzle, must be done using an adiabatic index of approximately 1.3, like that of superheated steam, instead of 1.135, the value that would have to be used for a quasi-static adiabatic expansion in the saturated region. [ 22 ]
The study of supersaturation is also relevant to atmospheric studies. Since the 1940s, the presence of supersaturation in the atmosphere has been known. When water is supersaturated in the troposphere , the formation of ice lattices is frequently observed. In a state of mere saturation, water particles will not form ice under tropospheric conditions. It is not enough for molecules of water to form an ice lattice at saturation pressures; they require a surface to condense onto, or conglomerations of liquid water molecules, to freeze. For these reasons, relative humidities over ice in the atmosphere can be found above 100%, meaning supersaturation has occurred. Supersaturation of water is actually very common in the upper troposphere, occurring between 20% and 40% of the time. [ 23 ] This can be determined using satellite data from the Atmospheric Infrared Sounder . [ 24 ] | https://en.wikipedia.org/wiki/Supersaturation |
In mathematics, the supersilver ratio is a geometrical proportion equal to the unique real solution of the equation x 3 = 2 x 2 + 1 . The decimal expansion of the root begins as 2.205 569 430 400 590 ... (sequence A356035 in the OEIS ).
The name supersilver ratio results from analogy with the silver ratio , the positive solution of the equation x 2 = 2 x + 1 , and the supergolden ratio .
Two quantities a > b > 0 are in the supersilver ratio-squared if ( 2 a + b a ) 2 = a b . {\displaystyle \left({\frac {2a+b}{a}}\right)^{2}={\frac {a}{b}}.} The ratio 2 a + b a {\displaystyle {\frac {2a+b}{a}}} is here denoted ς . {\displaystyle \varsigma .}
Based on this definition, one has 1 = ( 2 a + b a ) 2 b a = ( 2 a + b a ) 2 ( 2 a + b a − 2 ) ⟹ ς 2 ( ς − 2 ) = 1 {\displaystyle {\begin{aligned}1&=\left({\frac {2a+b}{a}}\right)^{2}{\frac {b}{a}}\\&=\left({\frac {2a+b}{a}}\right)^{2}\left({\frac {2a+b}{a}}-2\right)\\&\implies \varsigma ^{2}\left(\varsigma -2\right)=1\end{aligned}}}
It follows that the supersilver ratio is found as the unique real solution of the cubic equation ς 3 − 2 ς 2 − 1 = 0. {\displaystyle \varsigma ^{3}-2\varsigma ^{2}-1=0.}
The minimal polynomial for the reciprocal root is the depressed cubic x 3 + 2 x − 1 , {\displaystyle x^{3}+2x-1,} thus the simplest solution with Cardano's formula , w 1 , 2 = ( 1 ± 1 3 59 3 ) / 2 1 / ς = w 1 3 + w 2 3 {\displaystyle {\begin{aligned}w_{1,2}&=\left(1\pm {\frac {1}{3}}{\sqrt {\frac {59}{3}}}\right)/2\\1/\varsigma &={\sqrt[{3}]{w_{1}}}+{\sqrt[{3}]{w_{2}}}\end{aligned}}} or, using the hyperbolic sine ,
1 / ς {\displaystyle 1/\varsigma } is the superstable fixed point of the iteration x ← ( 2 x 3 + 1 ) / ( 3 x 2 + 2 ) . {\displaystyle x\gets (2x^{3}+1)/(3x^{2}+2).}
Rewrite the minimal polynomial as ( x 2 + 1 ) 2 = 1 + x {\displaystyle (x^{2}+1)^{2}=1+x} , then the iteration x ← − 1 + 1 + x {\displaystyle x\gets {\sqrt {-1+{\sqrt {1+x}}}}} results in the continued radical
Dividing the defining trinomial x 3 − 2 x 2 − 1 {\displaystyle x^{3}-2x^{2}-1} by x − ς {\displaystyle x-\varsigma } one obtains x 2 + x / ς 2 + 1 / ς {\displaystyle x^{2}+x/\varsigma ^{2}+1/\varsigma } , and the conjugate elements of ς {\displaystyle \varsigma } are x 1 , 2 = ( − 1 ± i 8 ς 2 + 3 ) / 2 ς 2 , {\displaystyle x_{1,2}=\left(-1\pm i{\sqrt {8\varsigma ^{2}+3}}\right)/2\varsigma ^{2},} with x 1 + x 2 = 2 − ς {\displaystyle x_{1}+x_{2}=2-\varsigma \;} and x 1 x 2 = 1 / ς . {\displaystyle \;x_{1}x_{2}=1/\varsigma .}
Good approximations for the supersilver ratio come from its continued fraction expansion , [2; 4, 1, 6, 2, 1, 1, 1, 1, 1, 1, 2, 2, 1, 2, 1, 2, 1, 27, ...] . [ 2 ] The first few are:
See also § Third-order Pell sequences , below.
The growth rate of the average value of the n-th term of a random Fibonacci sequence is ς − 1 {\displaystyle \varsigma -1} . [ 3 ]
The defining equation can be written 1 = 1 ς − 1 + 1 ς 2 + 1 = 1 ς + ς − 1 ς + 1 + ς − 2 ς − 1 . {\displaystyle {\begin{aligned}1&={\frac {1}{\varsigma -1}}+{\frac {1}{\varsigma ^{2}+1}}\\&={\frac {1}{\varsigma }}+{\frac {\varsigma -1}{\varsigma +1}}+{\frac {\varsigma -2}{\varsigma -1}}.\end{aligned}}}
The supersilver ratio can be expressed in terms of itself as fractions ς = ς ς − 1 + ς − 1 ς + 1 ς 2 = 1 ς − 2 . {\displaystyle {\begin{aligned}\varsigma &={\frac {\varsigma }{\varsigma -1}}+{\frac {\varsigma -1}{\varsigma +1}}\\\varsigma ^{2}&={\frac {1}{\varsigma -2}}.\end{aligned}}}
Similarly as the infinite geometric series ς = 2 ∑ n = 0 ∞ ς − 3 n ς 2 = − 1 + ∑ n = 0 ∞ ( ς − 1 ) − n , {\displaystyle {\begin{aligned}\varsigma &=2\sum _{n=0}^{\infty }\varsigma ^{-3n}\\\varsigma ^{2}&=-1+\sum _{n=0}^{\infty }(\varsigma -1)^{-n},\end{aligned}}}
in comparison to the silver ratio identities σ = 2 ∑ n = 0 ∞ σ − 2 n σ 2 = − 1 + 2 ∑ n = 0 ∞ ( σ − 1 ) − n . {\displaystyle {\begin{aligned}\sigma &=2\sum _{n=0}^{\infty }\sigma ^{-2n}\\\sigma ^{2}&=-1+2\sum _{n=0}^{\infty }(\sigma -1)^{-n}.\end{aligned}}}
For every integer n {\displaystyle n} one has ς n = 2 ς n − 1 + ς n − 3 = 4 ς n − 2 + ς n − 3 + 2 ς n − 4 = ς n − 1 + 2 ς n − 2 + ς n − 3 + ς n − 4 {\displaystyle {\begin{aligned}\varsigma ^{n}&=2\varsigma ^{n-1}+\varsigma ^{n-3}\\&=4\varsigma ^{n-2}+\varsigma ^{n-3}+2\varsigma ^{n-4}\\&=\varsigma ^{n-1}+2\varsigma ^{n-2}+\varsigma ^{n-3}+\varsigma ^{n-4}\end{aligned}}} From this an infinite number of further relations can be found.
Continued fraction pattern of a few low powers ς − 2 = [ 0 ; 4 , 1 , 6 , 2 , 1 , 1 , 1 , 1 , 1 , 1 , . . . ] ≈ 0.2056 ( 5 / 24 ) ς − 1 = [ 0 ; 2 , 4 , 1 , 6 , 2 , 1 , 1 , 1 , 1 , 1 , . . . ] ≈ 0.4534 ( 5 / 11 ) ς 0 = [ 1 ] ς 1 = [ 2 ; 4 , 1 , 6 , 2 , 1 , 1 , 1 , 1 , 1 , 1 , . . . ] ≈ 2.2056 ( 53 / 24 ) ς 2 = [ 4 ; 1 , 6 , 2 , 1 , 1 , 1 , 1 , 1 , 1 , 2 , . . . ] ≈ 4.8645 ( 73 / 15 ) ς 3 = [ 10 ; 1 , 2 , 1 , 2 , 4 , 4 , 2 , 2 , 6 , 2 , . . . ] ≈ 10.729 ( 118 / 11 ) {\displaystyle {\begin{aligned}\varsigma ^{-2}&=[0;4,1,6,2,1,1,1,1,1,1,...]\approx 0.2056\;(5/24)\\\varsigma ^{-1}&=[0;2,4,1,6,2,1,1,1,1,1,...]\approx 0.4534\;(5/11)\\\varsigma ^{0}&=[1]\\\varsigma ^{1}&=[2;4,1,6,2,1,1,1,1,1,1,...]\approx 2.2056\;(53/24)\\\varsigma ^{2}&=[4;1,6,2,1,1,1,1,1,1,2,...]\approx 4.8645\;(73/15)\\\varsigma ^{3}&=[10;1,2,1,2,4,4,2,2,6,2,...]\approx 10.729\;(118/11)\end{aligned}}}
The supersilver ratio is a Pisot number . [ 4 ] Because the absolute value 1 / ς {\displaystyle 1/{\sqrt {\varsigma }}} of the algebraic conjugates is smaller than 1, powers of ς {\displaystyle \varsigma } generate almost integers . For example: ς 10 = 2724.00146856... ≈ 2724 + 1 / 681. {\displaystyle \varsigma ^{10}=2724.00146856...\approx 2724+1/681.} After ten rotation steps the phases of the inward spiraling conjugate pair – initially close to ± 45 π / 82 {\displaystyle \pm 45\pi /82} – nearly align with the imaginary axis.
The minimal polynomial of the supersilver ratio m ( x ) = x 3 − 2 x 2 − 1 {\displaystyle m(x)=x^{3}-2x^{2}-1} has discriminant Δ = − 59 {\displaystyle \Delta =-59} and factors into ( x − 21 ) 2 ( x − 19 ) ( mod 59 ) ; {\displaystyle (x-21)^{2}(x-19){\pmod {59}};\;} the imaginary quadratic field K = Q ( Δ ) {\displaystyle K=\mathbb {Q} ({\sqrt {\Delta }})} has class number h = 3. {\displaystyle h=3.} Thus, the Hilbert class field of K {\displaystyle K} can be formed by adjoining ς . {\displaystyle \varsigma .} [ 5 ] With argument τ = ( 1 + Δ ) / 2 {\displaystyle \tau =(1+{\sqrt {\Delta }})/2\,} a generator for the ring of integers of K {\displaystyle K} , the real root j ( τ ) of the Hilbert class polynomial is given by ( ς − 6 − 27 ς 6 − 6 ) 3 . {\displaystyle (\varsigma ^{-6}-27\varsigma ^{6}-6)^{3}.} [ 6 ] [ 7 ]
The Weber-Ramanujan class invariant is approximated with error < 3.5 ∙ 10 −20 by
while its true value is the single real root of the polynomial
The elliptic integral singular value [ 8 ] k r = λ ∗ ( r ) for r = 59 {\displaystyle k_{r}=\lambda ^{*}(r){\text{ for }}r=59} has closed form expression
(which is less than 1/294 the eccentricity of the orbit of Venus).
The third-order Pell numbers are related to the supersilver ratio as the Pell numbers and Pell–Lucas numbers are to the silver ratio .
The fundamental sequence is defined by the third-order recurrence relation S n = 2 S n − 1 + S n − 3 for n > 2 , {\displaystyle S_{n}=2S_{n-1}+S_{n-3}{\text{ for }}n>2,} with initial values S 0 = 1 , S 1 = 2 , S 2 = 4. {\displaystyle S_{0}=1,S_{1}=2,S_{2}=4.}
The first few terms are 1, 2, 4, 9, 20, 44, 97, 214, 472, 1041, 2296, 5064,... (sequence A008998 in the OEIS ).
The limit ratio between consecutive terms is the supersilver ratio: lim n → ∞ S n + 1 / S n = ς . {\displaystyle \lim _{n\rightarrow \infty }S_{n+1}/S_{n}=\varsigma \,.}
The first 8 indices n for which S n {\displaystyle S_{n}} is prime are n = 1, 6, 21, 114, 117, 849, 2418, 6144. The last number has 2111 decimal digits.
The sequence can be extended to negative indices using S n = S n + 3 − 2 S n + 2 . {\displaystyle S_{n}=S_{n+3}-2S_{n+2}.}
The generating function of the sequence is given by G ( x ) = 1 1 − 2 x − x 3 . {\displaystyle G(x)={\frac {1}{1-2x-x^{3}}}.}
The third-order Pell numbers are related to sums of binomial coefficients by
The characteristic equation of the recurrence is x 3 − 2 x 2 − 1 = 0. {\displaystyle x^{3}-2x^{2}-1=0.} If the three solutions are real root α {\displaystyle \alpha } and conjugate pair β {\displaystyle \beta } and γ {\displaystyle \gamma } , the supersilver numbers can be computed with the Binet formula S n = a α n + 2 + b β n + 2 + c γ n + 2 , {\displaystyle S_{n}=a\alpha ^{n+2}+b\beta ^{n+2}+c\gamma ^{n+2},} with a = α / ( 2 α 2 + 3 ) , b = β / ( 2 β 2 + 3 ) , c = γ / ( 2 γ 2 + 3 ) . {\displaystyle a=\alpha /(2\alpha ^{2}+3),\;b=\beta /(2\beta ^{2}+3),\;c=\gamma /(2\gamma ^{2}+3).}
Since | b β n + c γ n | < 1 / α n / 2 {\displaystyle \left\vert b\beta ^{n}+c\gamma ^{n}\right\vert <1/\alpha ^{n/2}} and α = ς , {\displaystyle \alpha =\varsigma ,} the number S n {\displaystyle S_{n}} is the nearest integer to a ς n + 2 , {\displaystyle a\,\varsigma ^{n+2},} with n ≥ 0 and a = ς / ( 2 ς 2 + 3 ) = {\displaystyle a=\varsigma /(2\varsigma ^{2}+3)=} 0.17327 02315 50408 18074 84794...
Coefficients a = b = c = 1 {\displaystyle a=b=c=1} result in the Binet formula for the related sequence A n = S n + 2 S n − 3 . {\displaystyle A_{n}=S_{n}+2S_{n-3}.}
The first few terms are 3, 2, 4, 11, 24, 52, 115, 254, 560, 1235, 2724, 6008,... (sequence A332647 in the OEIS ).
This third-order Pell-Lucas sequence has the Fermat property : if p is prime, A p ≡ A 1 mod p . {\displaystyle A_{p}\equiv A_{1}{\bmod {p}}.} The converse does not hold, but the small number of odd pseudoprimes n ∣ ( A n − 2 ) {\displaystyle \,n\mid (A_{n}-2)} makes the sequence special. The 14 odd composite numbers below 10 8 to pass the test are n = 3 2 , 5 2 , 5 3 , 315, 99297, 222443, 418625, 9122185, 3257 2 , 11889745, 20909625, 24299681, 64036831, 76917325. [ 11 ]
The third-order Pell numbers are obtained as integral powers n > 3 of a matrix with real eigenvalue ς {\displaystyle \varsigma } Q = ( 2 0 1 1 0 0 0 1 0 ) , {\displaystyle Q={\begin{pmatrix}2&0&1\\1&0&0\\0&1&0\end{pmatrix}},}
Q n = ( S n S n − 2 S n − 1 S n − 1 S n − 3 S n − 2 S n − 2 S n − 4 S n − 3 ) {\displaystyle Q^{n}={\begin{pmatrix}S_{n}&S_{n-2}&S_{n-1}\\S_{n-1}&S_{n-3}&S_{n-2}\\S_{n-2}&S_{n-4}&S_{n-3}\end{pmatrix}}}
The trace of Q n {\displaystyle Q^{n}} gives the above A n . {\displaystyle A_{n}.}
Alternatively, Q {\displaystyle Q} can be interpreted as incidence matrix for a D0L Lindenmayer system on the alphabet { a , b , c } {\displaystyle \{a,b,c\}} with corresponding substitution rule { a ↦ a a b b ↦ c c ↦ a {\displaystyle {\begin{cases}a\;\mapsto \;aab\\b\;\mapsto \;c\\c\;\mapsto \;a\end{cases}}} and initiator w 0 = b {\displaystyle w_{0}=b} . The series of words w n {\displaystyle w_{n}} produced by iterating the substitution have the property that the number of c's, b's and a's are equal to successive third-order Pell numbers. The lengths of these words are given by l ( w n ) = S n − 2 + S n − 3 + S n − 4 . {\displaystyle l(w_{n})=S_{n-2}+S_{n-3}+S_{n-4}.} [ 12 ]
Associated to this string rewriting process is a compact set composed of self-similar tiles called the Rauzy fractal , that visualizes the combinatorial information contained in a multiple-generation three-letter sequence. [ 13 ]
Consider a rectangle of height 1, length ς {\displaystyle \varsigma } and diagonal length ς ς − 1 {\displaystyle \varsigma {\sqrt {\varsigma -1}}} (according to 1 + ς 2 = ς 2 ( ς − 1 ) {\displaystyle 1+\varsigma ^{2}=\varsigma ^{2}(\varsigma -1)} ). The triangles on the diagonal have altitudes 1 / ς − 1 ; {\displaystyle 1/{\sqrt {\varsigma -1}}\,;} each perpendicular foot divides the diagonal in ratio ς 2 {\displaystyle \varsigma ^{2}} .
On the right-hand side, cut off a square of side length 1 and mark the intersection with the falling diagonal. The remaining rectangle now has aspect ratio 1 + 1 / ς 2 : 1 {\displaystyle 1+1/\varsigma ^{2}:1} (according to ς = 2 + 1 / ς 2 {\displaystyle \varsigma =2+1/\varsigma ^{2}} ). Divide the original rectangle into four parts by a second, horizontal cut passing through the intersection point. [ 14 ]
The parent supersilver rectangle and the two scaled copies along the diagonal have linear sizes in the ratios ς : ς − 1 : 1. {\displaystyle \varsigma :\varsigma -1:1.} The areas of the rectangles opposite the diagonal are both equal to ( ς − 1 ) / ς , {\displaystyle (\varsigma -1)/\varsigma ,} with aspect ratios ς ( ς − 1 ) {\displaystyle \varsigma (\varsigma -1)} (below) and ς / ( ς − 1 ) {\displaystyle \varsigma /(\varsigma -1)} (above).
If the diagram is further subdivided by perpendicular lines through the feet of the altitudes, the lengths of the diagonal and its seven distinct subsections are in ratios ς 2 + 1 : ς 2 : ς 2 − 1 : ς + 1 : {\displaystyle \varsigma ^{2}+1:\varsigma ^{2}:\varsigma ^{2}-1:\varsigma +1:} ς ( ς − 1 ) : ς : 2 / ( ς − 1 ) : 1. {\displaystyle \,\varsigma (\varsigma -1):\varsigma :2/(\varsigma -1):1.}
A supersilver spiral is a logarithmic spiral that gets wider by a factor of ς {\displaystyle \varsigma } for every quarter turn. It is described by the polar equation r ( θ ) = a exp ( k θ ) , {\displaystyle r(\theta )=a\exp(k\theta ),} with initial radius a {\displaystyle a} and parameter k = 2 ln ( ς ) π . {\displaystyle k={\frac {2\ln(\varsigma )}{\pi }}.} If drawn on a supersilver rectangle, the spiral has its pole at the foot of altitude of a triangle on the diagonal and passes through vertices of rectangles with aspect ratio ς ( ς − 1 ) {\displaystyle \varsigma (\varsigma -1)} which are perpendicularly aligned and successively scaled by a factor 1 / ς . {\displaystyle 1/\varsigma .} | https://en.wikipedia.org/wiki/Supersilver_ratio |
Superslow processes are processes in which values change so little that their detection is very difficult, because the changes are small in comparison with the measurement error. [ 1 ]
Most of the time, superslow processes lie beyond the scope of investigation precisely because of their superslowness. The resulting gaps in knowledge can easily be detected in biology , astronomy , physics , mechanics , economics , linguistics , ecology , gerontology , etc. [ 1 ] | https://en.wikipedia.org/wiki/Superslow_process |
In condensed matter physics , a supersolid is a spatially ordered (i.e. solid ) material with superfluid properties. In the case of helium-4 , it has been conjectured since the 1960s that it might be possible to create a supersolid. [ 1 ] Starting from 2017, a definitive proof for the existence of this state was provided by several experiments using atomic Bose–Einstein condensates . [ 2 ] The general conditions required for supersolidity to emerge in a certain substance are a topic of ongoing research.
A supersolid is a special quantum state of matter where particles form a rigid, spatially ordered structure, but also flow with zero viscosity . This is in contradiction to the intuition that flow, and in particular superfluid flow with zero viscosity, is a property exclusive to the fluid state, e.g., superconducting electron and neutron fluids, gases with Bose–Einstein condensates , or unconventional liquids such as helium-4 or helium-3 at sufficiently low temperature. For more than 50 years it was thus unclear whether the supersolid state can exist. [ 3 ]
While several experiments yielded negative results, in the 1980s, John Goodkind discovered the first anomaly in a solid by using ultrasound . [ 4 ] Inspired by his observation, in 2004 Eun-Seong Kim and Moses Chan at Pennsylvania State University saw phenomena which were interpreted as supersolid behavior. [ 5 ] Specifically, they observed a non-classical rotational moment of inertia [ 6 ] of a torsional oscillator. This observation could not be explained by classical models but was consistent with superfluid-like behavior of a small percentage of the helium atoms contained within the oscillator.
This observation triggered a large number of follow-up studies to reveal the role played by crystal defects or helium-3 impurities. Further experimentation has cast some doubt on the existence of a true supersolid in helium. Most importantly, it was shown that the observed phenomena could be largely explained due to changes in the elastic properties of the helium. [ 7 ] In 2012, Chan repeated his original experiments with a new apparatus that was designed to eliminate any such contributions. In this experiment, Chan and his coauthors found no evidence of supersolidity. [ 8 ]
In 2017, two research groups from ETH Zurich and from MIT reported on the creation of an ultracold quantum gas with supersolid properties. The Zurich group placed a Bose–Einstein condensate inside two optical resonators, which enhanced the atomic interactions until they started to spontaneously crystallize and form a solid that maintains the inherent superfluidity of Bose–Einstein condensates. [ 9 ] [ 10 ] This setting realises a special form of a supersolid, the so-called lattice supersolid, where atoms are pinned to the sites of an externally imposed lattice structure. The MIT group exposed a Bose–Einstein condensate in a double-well potential to light beams that created an effective spin–orbit coupling . The interference between the atoms on the two spin–orbit coupled lattice sites gave rise to a characteristic density modulation. [ 11 ] [ 12 ]
In 2019, three groups from Stuttgart, Florence, and Innsbruck observed supersolid properties in dipolar Bose–Einstein condensates [ 13 ] formed from lanthanide atoms. In these systems, supersolidity emerges directly from the atomic interactions, without the need for an external optical lattice. This facilitated also the direct observation of superfluid flow and hence the definitive proof for the existence of the supersolid state of matter. [ 14 ] [ 15 ]
In 2021, confocal cavity quantum electrodynamics with a Bose–Einstein condensate was used to create a supersolid that possesses a key property of solids, vibration. That is, a supersolid was created that possesses lattice phonons with a Goldstone mode dispersion exhibiting a 16 cm/s speed of sound. [ 16 ]
In 2021, dysprosium was used to create a 2-dimensional supersolid quantum gas, [ 17 ] in 2022, the same team created a supersolid disk in a round trap [ 18 ] and in 2024 they reported the observation of quantum vortices in the supersolid phase. [ 19 ] [ 20 ]
In most theories of this state, it is supposed that vacancies – empty sites normally occupied by particles in an ideal crystal – lead to supersolidity. These vacancies are caused by zero-point energy , which also causes them to move from site to site as waves . Because vacancies are bosons , if such clouds of vacancies can exist at very low temperatures, then a Bose–Einstein condensation of vacancies could occur at temperatures less than a few tenths of a Kelvin. A coherent flow of vacancies is equivalent to a "superflow" (frictionless flow) of particles in the opposite direction. Despite the presence of the gas of vacancies, the ordered structure of a crystal is maintained, although with less than one particle on each lattice site on average. Alternatively, a supersolid can also emerge from a superfluid. In this situation, which is realised in the experiments with atomic Bose–Einstein condensates, the spatially ordered structure is a modulation on top of the superfluid density distribution. | https://en.wikipedia.org/wiki/Supersolid |
A supersonic airfoil is a cross-section geometry designed to generate lift efficiently at supersonic speeds. The need for such a design arises when an aircraft is required to operate consistently in the supersonic flight regime.
Supersonic airfoils generally have a thin section formed of either angled planes or opposed arcs (called "double wedge airfoils" and "biconvex airfoils" respectively), with very sharp leading and trailing edges. The sharp edges prevent the formation of a detached bow shock in front of the airfoil as it moves through the air. [ 1 ] This shape is in contrast to subsonic airfoils, which often have rounded leading edges to reduce flow separation over a wide range of angle of attack . [ 2 ] A rounded edge would behave as a blunt body in supersonic flight and thus would form a bow shock, which greatly increases wave drag. The airfoils' thickness, camber, and angle of attack are varied to achieve a design that will cause a slight deviation in the direction of the surrounding airflow. [ 3 ]
At supersonic conditions, aircraft drag originates from: skin friction, wave drag due to thickness (volume), and wave drag due to lift.
Therefore, the drag coefficient of a supersonic airfoil is described by the following expression:
C D = C D,friction + C D,thickness + C D,lift
Experimental data allow us to reduce this expression to:
C D = C D,0 + K C L 2 , where C D,0 is the sum of C D,friction and C D,thickness , and K for supersonic flow is a function of the Mach number. [ 3 ] The skin-friction component is derived from the presence of a viscous boundary layer which is infinitely close to the surface of the aircraft body. At the boundary wall, the normal component of velocity is zero; therefore an infinitesimal region exists where there is no slip . The zero-lift wave drag component can be obtained from the supersonic area rule , which tells us that the wave drag of an aircraft in a steady supersonic flow is identical to the average of a series of equivalent bodies of revolution. The bodies of revolution are defined by the cuts through the aircraft made by the tangent to the fore Mach cone from a distant point of the aircraft at an azimuthal angle. This average is over all azimuthal angles. [ 4 ] The drag-due-to-lift component is calculated using lift-analysis programs. The wing design and the lift-analysis programs are separate lifting-surface methods that solve the direct or inverse problem of design and lift analysis.
Years of research and experience with the unusual conditions of supersonic flow have led to some interesting conclusions about airfoil design. Considering a rectangular wing, the pressure at a point P with coordinates ( x , y ) on the wing is defined only by the pressure disturbances originated at points within the upstream Mach cone emanating from point P. [ 3 ] As a result, the wing tips modify the flow only within their own rearward Mach cones; the remaining area of the wing is not modified by the tips and can be analyzed with two-dimensional theory. For an arbitrary planform, the supersonic leading and trailing edges are those portions of the wing edge where the component of the freestream velocity normal to the edge is supersonic. Similarly, the subsonic leading and trailing edges are those portions of the wing edge where the component of the freestream velocity normal to the edge is subsonic.
Delta wings have supersonic leading and trailing edges; in contrast, arrow wings have a subsonic leading edge and a supersonic trailing edge.
When designing a supersonic airfoil two factors that must be considered are shock and expansion waves. [ 5 ] Whether a shock or expansion wave is generated at different locations along an airfoil depends on the local flow speed and direction along with the geometry of the airfoil.
Aerodynamic efficiency for supersonic aircraft increases with thin airfoil sections having sharp leading and trailing edges. Swept wings with a subsonic leading edge have the advantage of a reduced wave-drag component at supersonic speeds; experiments show, however, that the theoretical benefits are not always attained, owing to separation of the flow over the wing surface, although this can be corrected through design. Double-wedge and biconvex airfoils are the most common airfoils used for supersonic aircraft. | https://en.wikipedia.org/wiki/Supersonic_airfoils |
Supersonic flow over a flat plate is a classical fluid dynamics problem. There is no exact solution to it.
When a fluid flows at supersonic speed over a thin, sharp flat plate at a low angle of incidence and a low Reynolds number , a laminar boundary layer develops from the leading edge of the plate. Because this viscous boundary layer effectively thickens the plate, a curved induced shock wave is generated at the leading edge.
The shock layer is the region between the shock wave and the plate surface. This shock layer can be further subdivided into layers of viscous and inviscid flow, according to the values of the Mach number , Reynolds number and surface temperature. If the entire layer is viscous, however, it is called a merged shock layer.
This fluid dynamics problem can be solved by various numerical methods, although several assumptions must be made in order to do so; the solution then yields the shock-layer properties and the shock location. Results vary as the fluid viscosity, the Mach number, or the angle of incidence changes. Generally, for large angles of incidence the variation of Reynolds number has significant effects on the flow variables, whereas viscous effects are dominant on the upper surface of the plate as well as behind its trailing edge.
Different investigators obtain different results, depending on the assumptions made in solving the problem.
The primary method generally used for this problem is the following:
This method involves the time-dependent Navier-Stokes equations , which are advantageous because of their inherent ability to evolve toward the correct steady-state solution.
The continuity, momentum and energy equations, together with other situational equations, are needed to solve the problem. MacCormack's time-marching technique is applied, and the flow-field variables are advanced at each grid point using Taylor series expansions. Initial and boundary conditions are then applied, and the solution converges toward an approximate steady state. The sketch below illustrates the predictor-corrector structure of the scheme.
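A minimal sketch of MacCormack's predictor-corrector time marching, applied here to the 1-D linear advection equation u_t + a u_x = 0 as a stand-in for the full governing equations; the grid size, wave speed, and CFL number are illustrative assumptions.

```python
import numpy as np

nx, a, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / a                       # stable time step for this scheme
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)     # initial Gaussian pulse, periodic domain

for _ in range(100):
    # Predictor step: forward difference in space.
    u_star = u - a * dt / dx * (np.roll(u, -1) - u)
    # Corrector step: backward difference on the predicted values.
    u = 0.5 * (u + u_star - a * dt / dx * (u_star - np.roll(u_star, 1)))

print("pulse peak after 100 steps:", u.max())
```

The same predictor-corrector structure carries over to the coupled continuity, momentum, and energy equations, with flux differences replacing the term a·u.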
These equations can be solved with different algorithms to obtain more accurate and efficient results with minimal error. | https://en.wikipedia.org/wiki/Supersonic_flow_over_a_flat_plate |
Supersonic fractures are fractures where the fracture propagation velocity is higher than the speed of sound in the material. This phenomenon was first discovered by scientists from the Max Planck Institute for Metals Research in Stuttgart ( Markus J. Buehler and Huajian Gao ) and IBM Almaden Research Center in San Jose, California ( Farid F. Abraham ). [ 1 ]
The issues of intersonic and supersonic fracture have become a frontier of dynamic fracture mechanics . The work of Burridge initiated the exploration of intersonic crack growth (when the crack tip velocity V lies between the shear wave speed and the longitudinal wave speed). [ 2 ]
Supersonic fracture was a phenomenon totally unexplained by the classical theories of fracture. Molecular dynamics simulations by the group around Abraham and Gao have shown the existence of intersonic mode I and supersonic mode II cracks. This motivated a continuum mechanics analysis of supersonic mode III cracks by Yang. Recent progress in the theoretical understanding of hyperelasticity in dynamic fracture has shown that supersonic crack propagation can only be understood by introducing a new length scale, called χ, which governs the process of energy transport near a crack tip. The crack dynamics is completely dominated by material properties inside a zone surrounding the crack tip with characteristic size equal to χ. When the material inside this characteristic zone is stiffened due to hyperelastic properties, cracks propagate faster than the longitudinal wave speed. The research group of Gao has used this concept to simulate the Broberg problem of crack propagation inside a stiff strip embedded in a soft elastic matrix. These simulations confirmed the existence of an energy characteristic length. This study also had implications for dynamic crack propagation in composite materials: if the characteristic size of the composite microstructure is larger than the energy characteristic length χ, models that homogenize the materials into an effective continuum would be in significant error. The challenge arises of designing experiments and interpretative simulations to verify the energy characteristic length. Confirmation of the concept must be sought in the comparison of experiments on supersonic cracks and the predictions of the simulations and analysis. While much excitement rightly centres on the relatively new activity related to intersonic cracking, an old but interesting possibility remains to be incorporated in the modern work: for an interface between elastically dissimilar materials, crack propagation that is subsonic but exceeds the Rayleigh wave speed has been predicted for at least some combinations of the elastic properties of the two materials.
| https://en.wikipedia.org/wiki/Supersonic_fracture |
Supersonic gas separation is a technology to remove one or several gaseous components from a mixed gas (typically raw natural gas ). The process condenses the target components by cooling the gas through expansion in a Laval nozzle and then separates the condensate from the dried gas in an integrated cyclonic gas/liquid separator . The separator uses only part of the field pressure as energy and has technical and commercial advantages compared with commonly used conventional technologies.
Raw natural gas out of a well is usually not a salable product but a mix of various hydrocarbon gases together with other gases, liquids and solid contaminants. This raw gas needs gas conditioning to get it ready for pipeline transport and processing in a gas processing plant to separate it into its components. Some of the common processing steps are CO 2 removal, dehydration , LPG extraction, and dew-pointing. Technologies used to achieve these steps are adsorption , absorption , membranes and low-temperature systems achieved by refrigeration or expansion through a Joule Thomson valve or a turboexpander . If such expansion is done through a supersonic gas separator instead, mechanical, economic and operational advantages can frequently be gained, as detailed below.
A supersonic gas separator consists of several consecutive sections in tubular form, usually designed as flanged pieces of pipe.
The feed gas (consisting of at least two components) first enters a section with an arrangement of static blades or wings, which induce a fast swirl in the gas. Thereafter the gas stream flows through a Laval nozzle , where it accelerates to supersonic speed and undergoes a deep pressure drop to about 30% of the feed pressure. This is a near-isentropic process, and the corresponding temperature reduction leads to condensation of the target components of the mixed feed gas, which form a fine mist. The droplets agglomerate into larger drops, and the swirl of the gas causes cyclonic separation . [ 1 ] The dry gas continues forward, while the liquid phase together with some slip gas (about 30% of the total stream) is separated by a concentric divider and exits the device as a separate stream. The final section consists of diffusers for both streams, where the gas is slowed down and about 80% of the feed pressure (depending on the application) is recovered. This section might also include another set of static devices to undo the swirling motion. [ 2 ] The magnitude of the temperature drop in the nozzle can be estimated as sketched below.
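A minimal sketch of the temperature drop for a near-isentropic expansion to 30% of feed pressure, using the ideal-gas relation T2 = T1·(p2/p1)^((γ−1)/γ); the feed temperature and heat-capacity ratio are illustrative assumptions for a lean natural gas.

```python
# Near-isentropic expansion in the Laval nozzle:
#   T2 = T1 * (p2/p1) ** ((gamma - 1) / gamma)

def isentropic_temperature(t_feed_k, pressure_ratio, gamma):
    return t_feed_k * pressure_ratio ** ((gamma - 1.0) / gamma)

t2 = isentropic_temperature(t_feed_k=288.0,       # assumed feed, ~15 C
                            pressure_ratio=0.30,  # drop to 30% of feed
                            gamma=1.30)           # rough value for natural gas
print(f"static temperature after expansion: {t2:.0f} K")   # ~218 K
```

A drop of roughly 70 K is what drives the condensation of water and heavier hydrocarbons into the mist that the cyclonic section removes.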
The supersonic separator requires a certain process scheme, which includes further auxiliary equipment and often forms a skid or processing block. The typical basic scheme for supersonic separation is an arrangement where the feed gas is pre-cooled in a heat exchanger by the dry stream of the separator unit.
The liquid phase from the supersonic separator goes into a 2-phase or 3-phase separator , where the slip gas is separated from water and/or from liquid hydrocarbons. The gaseous phase of this secondary separator joins the dry gas of the supersonic separator, the liquids go for transport, storage or further processing and the water for treatment and disposal.
Depending on the task at hand, other schemes are possible and in certain cases have advantages. Those variations are an integral part of the supersonic gas separation process for achieving thermodynamic efficiency, and several of them are protected by patents. [ 3 ]
The supersonic gas separator recovers part of the pressure drop needed for cooling and as such has a higher efficiency than a JT valve in all conditions of operation.
The supersonic gas separator can in many cases have a 10–20% higher efficiency than a turboexpander.
The supersonic separator has a smaller footprint and a lower weight than a turboexpander or contactor columns. This is of particular advantage for platforms, FPSOs and crowded installations. It needs a lower capital investment and lower operating expenditure as it is completely static. Very little maintenance is required and no (or greatly reduced) amounts of chemicals.
The fact that no operational or maintenance personnel are required might enable decrewing of usually crewed platforms, with the associated large savings in capital and operational expenditure.
The fields of application developed commercially to date on an industrial scale are:
Applications in the development stage for near term commercialization are:
There are several patents on supersonic gas separation, relating to features of the device as well as methods.
The technology has been researched and proven in laboratory installations since about 1998; special HYSYS modules have been developed, as well as 3D computer models of the gas flow. The supersonic gas separation technology has meanwhile moved successfully into industrial applications (e.g. in Nigeria, Malaysia and Russia) for dehydration as well as for LPG extraction.
Consultancy, engineering and equipment for supersonic gas separation are being offered by ENGO Engineering Ltd. under the brand "3S". [ 4 ] They are also provided by Twister BV, a Dutch firm affiliated with Royal Dutch Shell, under the brand "Twister Supersonic Separator". [ 5 ] | https://en.wikipedia.org/wiki/Supersonic_gas_separation |
Supersonic speed is the speed of an object that exceeds the speed of sound ( Mach 1). For objects traveling in dry air of a temperature of 20 °C (68 °F) at sea level , this speed is approximately 343.2 m/s (1,126 ft/s; 768 mph; 667.1 kn; 1,236 km/h). Speeds greater than five times the speed of sound (Mach 5) are often referred to as hypersonic . Flights during which only some parts of the air surrounding an object, such as the ends of rotor blades, reach supersonic speeds are called transonic . This occurs typically somewhere between Mach 0.8 and Mach 1.2.
Sounds are traveling vibrations in the form of pressure waves in an elastic medium. Objects move at supersonic speed when they move faster than the speed at which sound propagates through the medium. In gases, sound travels longitudinally at different speeds, depending mostly on the molecular mass and temperature of the gas; pressure has little effect. Since air temperature and composition vary significantly with altitude, the speed of sound, and thus the Mach number of a steadily moving object, may change. In water at room temperature, supersonic speed means any speed greater than 1,440 m/s (4,724 ft/s). In solids, sound waves can be polarized longitudinally or transversely and have higher velocities. The temperature dependence is quantified in the sketch below.
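The dependence on temperature can be made concrete with the ideal-gas relation c = √(γRT). A minimal sketch for dry air; the altitude temperature used is an illustrative assumption.

```python
import math

GAMMA, R_AIR = 1.4, 287.05   # heat-capacity ratio; gas constant, J/(kg K)

def speed_of_sound(temp_k):
    """Ideal-gas speed of sound in dry air, m/s."""
    return math.sqrt(GAMMA * R_AIR * temp_k)

print(speed_of_sound(293.15))   # ~343.2 m/s at 20 C, sea level
print(speed_of_sound(216.65))   # ~295 m/s near the tropopause
# Mach number of an object moving at 400 m/s at that altitude:
print(400.0 / speed_of_sound(216.65))   # ~1.36
```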
Supersonic fracture is crack formation faster than the speed of sound in a brittle material.
The word supersonic comes from two Latin-derived words: super (above) and sonus (sound), which together mean "above sound", or faster than sound.
At the beginning of the 20th century, the term "supersonic" was used as an adjective to describe sound whose frequency is above the range of normal human hearing. The modern term for this meaning is " ultrasonic ", but the older meaning sometimes still lives on, as in the word superheterodyne .
The tip of a bullwhip is generally seen as the first object designed to reach the speed of sound. This action results in its telltale "crack", which is actually just a sonic boom . The first human-made supersonic boom was likely caused by a piece of common cloth, leading to the whip's eventual development. [ 3 ] It is the wave motion travelling through the bullwhip that makes it capable of achieving supersonic speeds. [ 4 ] [ 5 ]
Most modern firearm bullets are supersonic, with rifle projectiles often travelling at speeds approaching and in some cases [ 6 ] well exceeding Mach 3 .
Most spacecraft are supersonic at least during portions of their reentry, though the effects on the spacecraft are reduced by low air densities. During ascent, launch vehicles generally avoid going supersonic below 30 km (~98,400 feet) to reduce air drag.
Note that the speed of sound decreases somewhat with altitude, due to lower temperatures found there (typically up to 25 km). At even higher altitudes the temperature starts increasing, with the corresponding increase in the speed of sound.
When an inflated balloon is burst, the torn pieces of latex contract at supersonic speed, which contributes to the sharp and loud popping noise.
To date, only one land vehicle has officially travelled at supersonic speed, the ThrustSSC . The vehicle, driven by Andy Green , holds the world land speed record, having achieved an average speed on its bi-directional run of 1,228 km/h (763 mph) in the Black Rock Desert on 15 October 1997.
The Bloodhound LSR project planned an attempt on the record in 2020 at Hakskeenpan in South Africa with a combination jet and hybrid rocket propelled car. The aim was to break the existing record, then make further attempts during which (the members of) the team hoped to reach speeds of up to 1,600 km/h (1,000 mph). The effort was originally run by Richard Noble who was the leader of the ThrustSSC project, however following funding issues in 2018, the team was bought by Ian Warhurst and renamed Bloodhound LSR. Later the project was indefinitely delayed due to the COVID-19 pandemic and the vehicle was put up for sale.
Most modern fighter aircraft are supersonic aircraft. No modern-day passenger aircraft are capable of supersonic speed, but there have been supersonic passenger aircraft , namely Concorde and the Tupolev Tu-144 . Both of these passenger aircraft and some modern fighters are also capable of supercruise , a condition of sustained supersonic flight without the use of an afterburner . Due to its ability to supercruise for several hours and the relatively high frequency of flight over several decades, Concorde spent more time flying supersonically than all other aircraft combined by a considerable margin. Since Concorde's final retirement flight on November 26, 2003, there are no supersonic passenger aircraft left in service. Some large bombers , such as the Tupolev Tu-160 and Rockwell B-1 Lancer are also supersonic-capable.
The aerodynamics of supersonic aircraft is simpler than subsonic aerodynamics because the airflow at different points along the plane often cannot affect one another, as disturbances cannot propagate upstream at supersonic speeds. Supersonic jets and rocket vehicles require several times greater thrust to push through the extra aerodynamic drag experienced within the transonic region (around Mach 0.85–1.2). At these speeds aerospace engineers can gently guide air around the fuselage of the aircraft without producing new shock waves , but any change in cross-sectional area farther down the vehicle leads to shock waves along the body. Designers use the supersonic area rule and the Whitcomb area rule to minimize sudden changes in size.
However, in practical applications, a supersonic aircraft must operate stably in both subsonic and supersonic profiles, hence aerodynamic design is more complex.
The main key to having low supersonic drag is to properly shape the overall aircraft to be long and thin, and close to a "perfect" shape, the von Karman ogive or Sears-Haack body . This has led to almost every supersonic cruising aircraft looking very similar to every other, with a very long and slender fuselage and large delta wings, cf. SR-71 , Concorde , etc. Although not ideal for passenger aircraft, this shaping is quite adaptable for bomber use. | https://en.wikipedia.org/wiki/Supersonic_speed |
A supersonic wind tunnel is a wind tunnel that produces supersonic speeds (1.2 < M < 5).
The Mach number and flow are determined by the nozzle geometry. The Reynolds number is varied by changing the density level (pressure in the settling chamber). Therefore, a high pressure ratio is required (for a supersonic regime at M=4, this ratio is of the order of 10). Apart from that, condensation of moisture or even gas liquefaction can occur if the static temperature becomes cold enough. This means that a supersonic wind tunnel usually needs a drying or a pre-heating facility.
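One way to see where the order-of-10 figure comes from is to note that the tunnel must at least make up the stagnation-pressure loss across the normal shock that sits in the diffuser. A minimal sketch using the standard normal-shock relation for a perfect gas; γ = 1.4 is the usual assumption for air.

```python
GAMMA = 1.4

def total_pressure_ratio_normal_shock(m):
    """Stagnation-pressure ratio p02/p01 across a normal shock at Mach m."""
    g = GAMMA
    term1 = ((g + 1) * m**2 / ((g - 1) * m**2 + 2)) ** (g / (g - 1))
    term2 = ((g + 1) / (2 * g * m**2 - (g - 1))) ** (1 / (g - 1))
    return term1 * term2

m = 4.0
recovery = total_pressure_ratio_normal_shock(m)
print(f"required pressure ratio at M = {m}: about {1 / recovery:.1f}")  # ~7.2
```

Real tunnels need somewhat more than this ideal minimum, which is consistent with the order-of-10 figure quoted above.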
A supersonic wind tunnel has a large power demand, so most are designed for intermittent instead of continuous operation.
The first supersonic wind tunnel (with a cross-section of 2 cm) was built at the National Physical Laboratory in England and started operating in 1922.
The power required to run a supersonic wind tunnel is enormous, of the order of 50 MW per square meter of test section cross-sectional area. For this reason most wind tunnels operate intermittently using energy stored in high-pressure tanks. These wind tunnels are also called intermittent supersonic blowdown wind tunnels (of which a schematic preview is given below). Another way of achieving the huge power output is with the use of a vacuum storage tank. These tunnels are called indraft supersonic wind tunnels, and are seldom used because they are restricted to low Reynolds numbers. Some large countries have built major supersonic tunnels that run continuously; one is shown in the photo.
Other problems operating a supersonic wind tunnel include:
Tunnels such as a Ludwieg tube have short test times (usually less than one second), relatively high Reynolds number, and low power requirements. | https://en.wikipedia.org/wiki/Supersonic_wind_tunnel |
Superspace is the coordinate space of a theory exhibiting supersymmetry . In such a formulation, along with ordinary space dimensions x , y , z , ..., there are also "anticommuting" dimensions whose coordinates are Grassmann numbers rather than real numbers. The ordinary space dimensions correspond to bosonic degrees of freedom, the anticommuting dimensions to fermionic degrees of freedom.
The word "superspace" was first used by John Wheeler in an unrelated sense to describe the configuration space of general relativity ; for example, this usage may be seen in his 1973 textbook Gravitation .
There are several similar, but not equivalent, definitions of superspace that have been used, and continue to be used in the mathematical and physics literature. One such usage is as a synonym for super Minkowski space . [ 1 ] In this case, one takes ordinary Minkowski space , and extends it with anti-commuting fermionic degrees of freedom, taken to be anti-commuting Weyl spinors from the Clifford algebra associated to the Lorentz group . Equivalently, the super Minkowski space can be understood as the quotient of the super Poincaré algebra modulo the algebra of the Lorentz group. A typical notation for the coordinates on such a space is ( x , θ , θ ¯ ) {\displaystyle (x,\theta ,{\bar {\theta }})} with the overline being the give-away that super Minkowski space is the intended space.
Superspace is also commonly used as a synonym for the super vector space . This is taken to be an ordinary vector space , together with additional coordinates taken from the Grassmann algebra , i.e. coordinate directions that are Grassmann numbers. There are several conventions for constructing a super vector space in use; two of these are described by Rogers [ 2 ] and DeWitt. [ 3 ]
A third usage of the term "superspace" is as a synonym for a supermanifold : a supersymmetric generalization of a manifold . Note that both super Minkowski spaces and super vector spaces can be taken as special cases of supermanifolds.
A fourth, and completely unrelated meaning saw a brief usage in general relativity; this is discussed in greater detail at the bottom.
Several examples are given below. The first few assume a definition of superspace as a super vector space . This is denoted as R m | n , the Z 2 - graded vector space with R m as the even subspace and R n as the odd subspace. The same definition applies to C m|n .
The four-dimensional examples take superspace to be super Minkowski space. Although similar to a vector space, this has many important differences: First of all, it is an affine space , having no special point denoting the origin. Next, the fermionic coordinates are taken to be anti-commuting Weyl spinors from the Clifford algebra , rather than being Grassmann numbers . The difference here is that the Clifford algebra has a considerably richer and more subtle structure than the Grassmann numbers. So, the Grassmann numbers are elements of the exterior algebra , and the Clifford algebra has an isomorphism to the exterior algebra, but its relation to the orthogonal group and the spin group , used to construct the spin representations , give it a deep geometric significance. (For example, the spin groups form a normal part of the study of Riemannian geometry , [ 4 ] quite outside the ordinary bounds and concerns of physics.)
The smallest superspace is a point, which contains neither bosonic nor fermionic directions. Other trivial examples include the n -dimensional real space R n , which is a vector space extending in n real, bosonic directions and no fermionic directions, and the vector space R 0|n , which is the n -dimensional real Grassmann algebra . The space R 1|1 of one even and one odd direction is known as the space of dual numbers , introduced by William Clifford in 1873. A small computational illustration of the dual numbers is given below.
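A minimal sketch of the dual numbers R 1|1 : one even coordinate and one nilpotent odd generator ε with ε² = 0. A well-known consequence is f(a + ε) = f(a) + f′(a)ε for polynomial f, i.e. forward-mode automatic differentiation; the class below is an illustrative toy, not a library API.

```python
class Dual:
    """Number a + b*eps with eps**2 = 0 (the superspace R^{1|1})."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def __add__(self, other):
        return Dual(self.re + other.re, self.eps + other.eps)
    def __mul__(self, other):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.re * other.re,
                    self.re * other.eps + self.eps * other.re)

eps = Dual(0.0, 1.0)
sq = eps * eps
print(sq.re, sq.eps)        # 0.0 0.0 -- the odd direction is nilpotent

def f(x):                   # f(x) = x^3, so f'(2) = 12
    return x * x * x

y = f(Dual(2.0, 1.0))
print(y.re, y.eps)          # 8.0 12.0 -- value and derivative at once
```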
Supersymmetric quantum mechanics with N supercharges is often formulated in the superspace R 1|2 N , which contains one real direction t identified with time and N complex Grassmann directions which are spanned by Θ i and Θ * i , where i runs from 1 to N .
Consider the special case N = 1. The superspace R 1|2 is a 3-dimensional vector space. A given coordinate therefore may be written as a triple ( t , Θ, Θ * ). The coordinates form a Lie superalgebra , in which the gradation degree of t is even and that of Θ and Θ * is odd. This means that a bracket may be defined between any two elements of this vector space, and that this bracket reduces to the commutator on two even coordinates and on one even and one odd coordinate, while it is an anticommutator on two odd coordinates. This superspace is an abelian Lie superalgebra, which means that all of the aforementioned brackets vanish: {\displaystyle [t,t]=[t,\Theta ]=[t,\Theta ^{*}]=\{\Theta ,\Theta \}=\{\Theta ,\Theta ^{*}\}=\{\Theta ^{*},\Theta ^{*}\}=0,}
where [ a , b ] {\displaystyle [a,b]} is the commutator of a and b and { a , b } {\displaystyle \{a,b\}} is the anticommutator of a and b .
One may define functions from this vector space to itself, which are called superfields . The above algebraic relations imply that, if we expand our superfield as a power series in Θ and Θ * , then we will only find terms at the zeroeth and first orders, because Θ 2 = Θ *2 = 0. Therefore, superfields may be written as arbitrary functions of t multiplied by the zeroeth and first order terms in the two Grassmann coordinates
Superfields, which are representations of the supersymmetry of superspace, generalize the notion of tensors , which are representations of the rotation group of a bosonic space.
One may then define derivatives in the Grassmann directions, which take the first order term in the expansion of a superfield to the zeroeth order term and annihilate the zeroeth order term. One can choose sign conventions such that the derivatives satisfy the anticommutation relations
These derivatives may be assembled into supercharges
whose anticommutators identify them as the fermionic generators of a supersymmetry algebra
where i times the time derivative is the Hamiltonian operator in quantum mechanics . Both Q and its adjoint anticommute with themselves. The supersymmetry variation with supersymmetry parameter ε of a superfield Φ is defined to be
We can evaluate this variation using the action of Q on the superfields
Similarly one may define covariant derivatives on superspace
which anticommute with the supercharges and satisfy a wrong sign supersymmetry algebra
The fact that the covariant derivatives anticommute with the supercharges means the supersymmetry transformation of a covariant derivative of a superfield is equal to the covariant derivative of the same supersymmetry transformation of the same superfield. Thus, generalizing the covariant derivative in bosonic geometry which constructs tensors from tensors, the superspace covariant derivative constructs superfields from superfields.
Perhaps the most studied concrete superspace in physics is d = 4 , N = 1 {\displaystyle d=4,{\mathcal {N}}=1} super Minkowski space R 4 | 4 {\displaystyle \mathbb {R} ^{4|4}} or sometimes written R 1 , 3 | 4 {\displaystyle \mathbb {R} ^{1,3|4}} , which is the direct sum of four real bosonic dimensions and four real Grassmann dimensions (also known as fermionic dimensions or spin dimensions ). [ 5 ]
In supersymmetric quantum field theories one is interested in superspaces which furnish representations of a Lie superalgebra called a supersymmetry algebra . The bosonic part of the supersymmetry algebra is the Poincaré algebra , while the fermionic part is constructed using spinors with Grassmann number valued components.
For this reason, in physical applications one considers an action of the supersymmetry algebra on the four fermionic directions of R 4 | 4 {\displaystyle \mathbb {R} ^{4|4}} such that they transform as a spinor under the Poincaré subalgebra. In four dimensions there are three distinct irreducible 4-component spinors. There is the Majorana spinor , the left-handed Weyl spinor and the right-handed Weyl spinor. The CPT theorem implies that in a unitary , Poincaré invariant theory, which is a theory in which the S-matrix is a unitary matrix and the same Poincaré generators act on the asymptotic in-states as on the asymptotic out-states, the supersymmetry algebra must contain an equal number of left-handed and right-handed Weyl spinors. However, since each Weyl spinor has four components, this means that if one includes any Weyl spinors one must have 8 fermionic directions. Such a theory is said to have extended supersymmetry , and such models have received a lot of attention. For example, supersymmetric gauge theories with eight supercharges and fundamental matter have been solved by Nathan Seiberg and Edward Witten , see Seiberg–Witten gauge theory . However, in this subsection we are considering the superspace with four fermionic components and so no Weyl spinors are consistent with the CPT theorem.
Note : There are many sign conventions in use and this is only one of them.
Therefore, the four fermionic directions transform as a Majorana spinor θ α {\displaystyle \theta _{\alpha }} . We can also form a conjugate spinor
where C {\displaystyle C} is the charge conjugation matrix, which is defined by the property that when it conjugates a gamma matrix , the gamma matrix is negated and transposed. The first equality is the definition of θ ¯ {\displaystyle {\bar {\theta }}} while the second is a consequence of the Majorana spinor condition θ ∗ = i γ 0 C θ {\displaystyle \theta ^{*}=i\gamma _{0}C\theta } . The conjugate spinor plays a role similar to that of θ ∗ {\displaystyle \theta ^{*}} in the superspace R 1 | 2 {\displaystyle \mathbb {R} ^{1|2}} , except that the Majorana condition, as manifested in the above equation, imposes that θ {\displaystyle \theta } and θ ∗ {\displaystyle \theta ^{*}} are not independent.
In particular we may construct the supercharges
which satisfy the supersymmetry algebra
where P = i ∂ μ {\displaystyle P=i\partial _{\mu }} is the 4- momentum operator. Again the covariant derivative is defined like the supercharge but with the second term negated and it anticommutes with the supercharges. Thus the covariant derivative of a supermultiplet is another supermultiplet.
It is possible to have N {\displaystyle {\mathcal {N}}} sets of supercharges Q I {\displaystyle Q^{I}} with I = 1 , ⋯ , N {\displaystyle I=1,\cdots ,{\mathcal {N}}} , although this is not possible for all values of N {\displaystyle {\mathcal {N}}} .
These supercharges generate translations in a total of 4 N {\displaystyle 4{\mathcal {N}}} spin dimensions, hence forming the superspace R 4 | 4 N {\displaystyle \mathbb {R} ^{4|4{\mathcal {N}}}} .
The word "superspace" is also used in a completely different and unrelated sense, in the book Gravitation by Misner, Thorne and Wheeler. There, it refers to the configuration space of general relativity , and, in particular, the view of gravitation as geometrodynamics , an interpretation of general relativity as a form of dynamical geometry. In modern terms, this particular idea of "superspace" is captured in one of several different formalisms used in solving the Einstein equations in a variety of settings, both theoretical and practical, such as in numerical simulations. This includes primarily the ADM formalism , as well as ideas surrounding the Hamilton–Jacobi–Einstein equation and the Wheeler–DeWitt equation . | https://en.wikipedia.org/wiki/Superspace |
In biology, a species complex is a group of closely related organisms that are so similar in appearance and other features that the boundaries between them are often unclear. The taxa in the complex may be able to hybridize readily with each other, further blurring any distinctions. Terms that are sometimes used synonymously but have more precise meanings are cryptic species for two or more species hidden under one species name, sibling species for two (or more) species that are each other's closest relative, and species flock for a group of closely related species that live in the same habitat. As informal taxonomic ranks , species group , species aggregate , macrospecies , and superspecies are also in use.
Two or more taxa that were once considered conspecific (of the same species) may later be subdivided into infraspecific taxa (taxa within a species, such as plant varieties ); such a complex of ranks within one species is not a species complex. In most cases, a species complex is a monophyletic group of species with a common ancestor, but there are exceptions. It may represent an early stage after speciation in which the species were separated for a long time period without evolving morphological differences. Hybrid speciation can be a component in the evolution of a species complex.
Species complexes are ubiquitous and are identified by the rigorous study of differences between individual species that uses minute morphological details, tests of reproductive isolation , or DNA -based methods, such as molecular phylogenetics and DNA barcoding . The existence of extremely similar species may cause local and global species diversity to be underestimated. The recognition of similar-but-distinct species is important for disease and pest control and in conservation biology although the drawing of dividing lines between species can be inherently difficult .
A species complex is typically considered as a group of close, but distinct species. [ 5 ] Obviously, the concept is closely tied to the definition of a species. Modern biology understands a species as "separately evolving metapopulation lineage " but acknowledges that the criteria to delimit species may depend on the group studied. [ 6 ] Thus, many traditionally defined species, based only on morphological similarity, have been found to be several distinct species when other criteria, such as genetic differentiation or reproductive isolation , are applied. [ 7 ]
A more restricted use applies the term to a group of species among which hybridisation has occurred or is occurring, which leads to intermediate forms and blurred species boundaries. [ 8 ] The informal classification, superspecies, can be exemplified by the grizzled skipper butterfly, which is a superspecies that is further divided into three subspecies. [ 9 ]
Some authors apply the term to a species with intraspecific variability , which might be a sign of ongoing or incipient speciation . Examples are ring species [ 10 ] [ 11 ] or species with subspecies , in which it is often unclear if they should be considered separate species. [ 12 ]
Several terms are used synonymously for a species complex, but some of them may also have slightly different or narrower meanings. In the nomenclature codes of zoology and bacteriology, no taxonomic ranks are defined at the level between subgenus and species, [ 13 ] [ 14 ] but the botanical code defines four ranks below subgenus (section, subsection, series, and subseries). [ 15 ] Different informal taxonomic solutions have been used to indicate a species complex.
Distinguishing close species within a complex requires the study of often very small differences. Morphological differences may be minute and visible only by the use of adapted methods, such as microscopy . However, distinct species sometimes have no morphological differences. [ 21 ] In those cases, other characters, such as in the species' life history , behavior , physiology , and karyology , may be explored. For example, territorial songs are indicative of species in the treecreepers , a bird genus with few morphological differences. [ 22 ] Mating tests are common in some groups such as fungi to confirm the reproductive isolation of two species. [ 23 ]
Analysis of DNA sequences is becoming increasingly standard for species recognition and may, in many cases, be the only useful method. [ 21 ] Different methods are used to analyse such genetic data, such as molecular phylogenetics or DNA barcoding . Such methods have greatly contributed to the discovery of cryptic species, [ 21 ] [ 24 ] including such emblematic species as the fly agaric , [ 2 ] the water fleas , [ 25 ] or the African elephants . [ 3 ]
Species forming a complex have typically diverged very recently from each other, which sometimes allows the retracing of the process of speciation . Species with differentiated populations, such as ring species , are sometimes seen as an example of early, ongoing speciation: a species complex in formation. Nevertheless, similar but distinct species have sometimes been isolated for a long time without evolving differences, a phenomenon known as "morphological stasis". [ 21 ] For example, the Amazonian frog Pristimantis ockendeni is actually at least three different species that diverged over 5 million years ago. [ 27 ]
Stabilizing selection has been invoked as a force maintaining similarity in species complexes, especially when they adapted to special environments (such as a host in the case of symbionts or extreme environments). [ 21 ] This may constrain possible directions of evolution; in such cases, strongly divergent selection is not to be expected. [ 21 ] Also, asexual reproduction, such as through apomixis in plants, may separate lineages without producing a great degree of morphological differentiation.
A species complex is usually a group that has one common ancestor (a monophyletic group), but closer examination can sometimes disprove that. For example, yellow-spotted "fire salamanders" in the genus Salamandra , formerly all classified as one species S. salamandra , are not monophyletic: the Corsican fire salamander 's closest relative has been shown to be the entirely black Alpine salamander . [ 26 ] In such cases, similarity has arisen from convergent evolution .
Hybrid speciation can lead to unclear species boundaries through a process of reticulate evolution , in which species have two parent species as their most recent common ancestors . In such cases, the hybrid species may have intermediate characters, such as in Heliconius butterflies. [ 28 ] Hybrid speciation has been observed in various species complexes, such as insects, fungi, and plants. In plants, hybridization often takes place through polyploidization , and hybrid plant species are called nothospecies .
Sources differ on whether or not members of a species group share a range . A source from Iowa State University Department of Agronomy states that members of a species group usually have partially overlapping ranges but do not interbreed with one another. [ 29 ] A Dictionary of Zoology ( Oxford University Press 1999) describes a species group as complex of related species that exist allopatrically and explains that the "grouping can often be supported by experimental crosses in which only certain pairs of species will produce hybrids ." [ 30 ] The examples given below may support both uses of the term "species group."
Often, such complexes do not become evident until a new species is introduced into the system, which breaks down existing species barriers. An example is the introduction of the Spanish slug in Northern Europe , where interbreeding with the local black slug and red slug , which were traditionally considered clearly separate species that did not interbreed, shows that they may be actually just subspecies of the same species. [ 31 ] [ 32 ]
Where closely related species co-exist in sympatry , it is often a particular challenge to understand how the similar species persist without outcompeting each other. Niche partitioning is one mechanism invoked to explain that. Indeed, studies in some species complexes suggest that species divergence has gone hand in hand with ecological differentiation, with species now preferring different microhabitats. [ 33 ]
A species flock may arise when a species penetrates a new geographical area and diversifies to occupy a variety of ecological niches , a process known as adaptive radiation . The first species flock to be recognized as such was the 13 species of Darwin's finches on the Galápagos Islands described by Charles Darwin .
It has been suggested that cryptic species complexes are very common in the marine environment. [ 34 ] That suggestion came before the detailed analysis of many systems using DNA sequence data but has been proven to be correct. [ 35 ] The increased use of DNA sequence in the investigation of organismal diversity (also called phylogeography and DNA barcoding ) has led to the discovery of a great many cryptic species complexes in all habitats. In the marine bryozoan Celleporella hyalina , [ 36 ] detailed morphological analyses and mating compatibility tests between the isolates identified by DNA sequence analysis were used to confirm that these groups consisted of more than 10 ecologically distinct species, which had been diverging for many millions of years.
Pests, species that cause diseases and their vectors, have direct importance for humans. When they are found to be cryptic species complexes, the ecology and the virulence of each of these species need to be re-evaluated to devise appropriate control strategies as their diversity increases the capacity for more dangerous strains to develop. Examples are cryptic species in the malaria vector genus of mosquito, Anopheles , the fungi causing cryptococcosis , and sister species of Bactrocera tryoni , or the Queensland fruit fly. That pest is indistinguishable from two sister species except that B. tryoni inflicts widespread, devastating damage to Australian fruit crops, but the sister species do not. [ 38 ]
When a species is found to be several phylogenetically distinct species, each typically has smaller distribution ranges and population sizes than had been reckoned. The different species can also differ in their ecology, such as by having different breeding strategies or habitat requirements, which must be taken into account for appropriate management. For example, giraffe populations and subspecies differ genetically to such an extent that they may be considered species. Although the giraffe, as a whole, is not considered to be threatened, if each cryptic species is considered separately, there is a much higher level of threat. [ 39 ] | https://en.wikipedia.org/wiki/Superspecies |
Superstatistics [ 1 ] [ 2 ] is a branch of statistical mechanics or statistical physics devoted to the study of non-linear and non-equilibrium systems . It is characterized by using the superposition of multiple differing statistical models to achieve the desired non-linearity. In terms of ordinary statistical ideas, this is equivalent to compounding the distributions of random variables, and it may be considered a simple case of a doubly stochastic model .
Consider [ 3 ] an extended thermodynamical system which is locally in equilibrium and has a Boltzmann distribution , that is, the probability of finding the system in a state with energy E {\displaystyle E} is proportional to exp ( − β E ) {\displaystyle \exp(-\beta E)} . Here β {\displaystyle \beta } is the local inverse temperature. A non-equilibrium thermodynamical system is modeled by considering macroscopic fluctuations of the local inverse temperature. These fluctuations happen on time scales which are much larger than the microscopic relaxation times to the Boltzmann distribution. If the fluctuations of β {\displaystyle \beta } are characterized by a distribution f ( β ) {\displaystyle f(\beta )} , the superstatistical Boltzmann factor of the system is given by {\displaystyle B(E)=\int _{0}^{\infty }f(\beta )\,e^{-\beta E}\,d\beta .}
This defines the superstatistical partition function {\displaystyle Z=\sum _{i=1}^{W}B(E_{i})}
for a system that can assume discrete energy states { E i } i = 1 W {\displaystyle \{E_{i}\}_{i=1}^{W}} . The probability of finding the system in state E i {\displaystyle E_{i}} is then given by {\displaystyle p_{i}={\frac {B(E_{i})}{Z}}.}
Modeling the fluctuations of β {\displaystyle \beta } leads to a description in terms of statistics of Boltzmann statistics, or "superstatistics". For example, if β {\displaystyle \beta } follows a Gamma distribution, the resulting superstatistics corresponds to Tsallis statistics. [ 4 ] Superstatistics can also lead to other statistics such as power-law distributions or stretched exponentials. [ 5 ] [ 6 ] Note that the word super here is short for superposition of statistics. The Gamma case can be checked numerically, as sketched below.
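A minimal numerical sketch, assuming a Gamma distribution f(β) with shape c and scale b: averaging the Boltzmann factor over f(β) reproduces the closed form (1 + bE)^(−c), a Tsallis q-exponential with q = 1 + 1/c. The parameter values are illustrative.

```python
import numpy as np
from scipy import integrate
from scipy.stats import gamma

c, b = 4.0, 0.25   # shape and scale of f(beta); mean inverse temperature c*b = 1

def superstat_boltzmann(e):
    """B(E) = integral of f(beta) * exp(-beta*E) over beta."""
    integrand = lambda beta: gamma.pdf(beta, a=c, scale=b) * np.exp(-beta * e)
    value, _ = integrate.quad(integrand, 0.0, np.inf)
    return value

for e in (0.5, 2.0, 10.0):
    # Numerical average and closed form agree.
    print(e, superstat_boltzmann(e), (1.0 + b * e) ** (-c))
```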
This branch is closely related to the exponential family and to mixture distributions . These concepts are used in many approximation approaches, such as particle filtering (where the distribution is approximated by delta functions), for example.
| https://en.wikipedia.org/wiki/Superstatistics |
A superstructure is an upward extension of an existing structure above a baseline. This term is applied to various kinds of physical structures such as buildings , bridges , or ships . [ 1 ]
On water craft, the superstructure consists of the parts of the ship or a boat , including sailboats , fishing boats , passenger ships , and submarines , that project above her main deck. This does not usually include its masts or any armament turrets . Note that, in modern times, turrets do not always carry naval artillery . They can also carry missile launchers and/or antisubmarine warfare weapons.
The size of a watercraft's superstructure can have many implications in the performance of ships and boats, since these structures can alter their structural rigidity, their displacements, and/or stability. These can be detrimental to any vessel's performance if they are taken into consideration incorrectly.
The height and the weight of superstructure on board a ship or a boat also affects the amount of freeboard that such a vessel requires along its sides, down to her waterline . In broad terms, the more and heavier superstructure that a ship possesses (as a fraction of her length), the less the freeboard that is needed.
The span of a bridge, the portion that directly receives the live load, is referred to as the superstructure. In contrast, the abutment , piers , and other support structures are called the ' substructure '. [ 2 ]
In order to improve the response during earthquakes of buildings and bridges, the superstructure may be separated from its foundation by various civil engineering mechanisms or machinery . All together, these implement the system of earthquake protection called base isolation . | https://en.wikipedia.org/wiki/Superstructure |
In solid state physics , a superstructure is some additional structure that is superimposed on a higher symmetry crystalline structure. [ 1 ] A typical and important example is ferromagnetic ordering.
In a wider sense, the term "superstructure" is applied to polymers and proteins to describe ordering on a length scale larger than that of monomeric segments.
In a crystal, a superstructure manifests itself through additional reflections in diffraction patterns, e.g., in low energy electron diffraction ( LEED ) or X-ray diffraction experiments . Often a set of weak diffraction spots appears between the stronger spots belonging to what is referred to as the substructure. In some cases a phase transition occurs, e.g., at higher temperatures, where the superstructure disappears and the material reverts to the simpler substructure. Not all compounds exhibit a superstructure.
The superspots in diffraction patterns represent a modulation of the substructure that causes the inherent translation symmetry of the (substructure) lattice to be violated slightly or the size of the repeat motif of the structure to be increased. One could speak of symmetry breaking of the translation symmetry of the lattice, although rotational symmetry may be lost simultaneously.
If the superspots are located at simple fractions of the vectors of the reciprocal lattice of the substructure, e.g., at q = (½,0,0), the repeat unit of the resulting superstructure is a multiple of the substructure unit cell along that axis. Such a modulation is called a commensurate superstructure.
In some materials, superspots will occur at positions that do not represent a simple fraction, say q = (0.5234,0,0). In this case the structure, strictly speaking, has lost all translational symmetry in a particular direction. This is called an incommensurate structure. [ 2 ] The appearance of superspots can be illustrated with a simple one-dimensional model, as sketched below.
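A minimal sketch of superspots from a displacively modulated 1-D chain, x_j = j + δ·cos(2πqj). With q = 1/2 (commensurate cell doubling), weak satellites appear at half-integer positions of the reciprocal lattice alongside the strong main reflections; an irrational q would place them at incommensurate positions. The chain length and modulation amplitude are illustrative assumptions.

```python
import numpy as np

n_atoms, delta, q = 1024, 0.05, 0.5
j = np.arange(n_atoms)
x = j + delta * np.cos(2.0 * np.pi * q * j)     # modulated atom positions

# Normalized diffraction intensity |sum_j exp(i k x_j)|^2 / N^2 at
# selected k values, in units of the substructure reciprocal lattice.
for pos in (0.5, 1.0, 1.5, 2.0):
    s = np.abs(np.exp(2j * np.pi * pos * x).sum()) ** 2 / n_atoms**2
    kind = "superspot" if pos % 1 else "main reflection"
    print(f"k = {pos}: intensity {s:.4f}  ({kind})")
```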
There are basically three types of superstructures in crystals:
When a crystalline material that contains atoms with uncompensated electron spins is cooled down, ordering of these spins generally occurs once the thermal energy is small enough not to overrule the interactions between neighboring spins. If the ordering does not exhibit the same symmetry as the original unit cell of the crystallographic lattice, a superstructure will result. In this case, the superspots are typically only visible in neutron diffraction patterns, because the neutron is scattered both by the nucleus and by the magnetic moments of the electron spins.
Many alloys of elements that resemble each other chemically will form a structure at higher temperatures where the two elements occupy similar positions in the lattice at random. At lower temperatures, ordering may occur in which crystallographic positions are no longer equivalent, because one element preferentially occupies one type of site and the other element the other. This partial ordering process may lower the translation symmetry and result in a different, larger unit cell.
In some transitions a number of atoms occupying crystallographic positions that were originally equivalent will move away slightly from their ideal positions according to a certain pattern. This pattern or repeat motif may span multiple unit cells. The cause of this phenomenon is the small changes in chemical bonding that favor formations of semi-regular and larger clusters of atoms. Although having the undistorted substructure, these materials are typically 'unsaturated' in the sense that one of the bands in the band structure is only partially filled. The distortion changes the band structure, in part splitting the bands up into smaller bands that can be more completely filled or emptied to lower the energy of the system. This process may not go to completion, however, because the substructure only allows for a certain amount of distortion. Superstructures of this type are often incommensurate. A good example is found in the structural transitions of 1T- TaS 2 , a compound with a partially filled, narrow d band (Ta(IV) has a d 1 configuration). | https://en.wikipedia.org/wiki/Superstructure_(condensed_matter) |
In physics, the supersymmetric WKB (SWKB) approximation [ 1 ] is an extension of the WKB approximation that uses principles from supersymmetric quantum mechanics to provide estimations on energy eigenvalues in quantum-mechanical systems. Using the supersymmetric method, there are potentials V ( x ) {\displaystyle V(x)} that can be expressed in terms of a superpotential, W ( x ) {\displaystyle W(x)} , such that {\displaystyle V(x)=W^{2}(x)-{\frac {\hbar }{\sqrt {2m}}}W'(x).}
The SWKB approximation then writes the Born–Sommerfeld quantization condition from the WKB approximation in terms of W ( x ) {\displaystyle W(x)} .
The SWKB approximation for unbroken supersymmetry, to first order in ℏ {\displaystyle \hbar } , is given by {\displaystyle \int _{a}^{b}{\sqrt {2m\left(E_{n}-W^{2}(x)\right)}}\,dx=n\pi \hbar ,\qquad n=0,1,2,\ldots ,}
where E n {\displaystyle E_{n}} is the estimate of the energy of the n {\displaystyle n} -th excited state, and a {\displaystyle a} and b {\displaystyle b} are the classical turning points, given by {\displaystyle W^{2}(a)=W^{2}(b)=E_{n}.}
The addition of the supersymmetric method provides several appealing qualities to this method. First, it is known that, by construction, the ground state energy will be exactly estimated. This is an improvement over the standard WKB approximation, which often has weaknesses at lower energies. Another property is that a class of potentials known as shape invariant potentials have their energy spectra estimated exactly by this first-order condition. A numerical check of the latter property for the harmonic-oscillator superpotential is sketched below.
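A minimal sketch, in units ℏ = m = ω = 1, assuming the harmonic-oscillator superpotential W(x) = √(m/2)·ωx (a shape-invariant case): solving the first-order SWKB condition numerically recovers the exact spectrum E_n = nℏω of the corresponding partner Hamiltonian.

```python
import numpy as np
from scipy import integrate, optimize

HBAR = M = OMEGA = 1.0

def w_squared(x):
    return 0.5 * M * OMEGA**2 * x**2     # W(x)^2 for W = sqrt(m/2)*omega*x

def swkb_integral(energy):
    """Left-hand side of the SWKB condition at a trial energy."""
    x_turn = np.sqrt(2.0 * energy / M) / OMEGA   # turning points: W^2 = E
    value, _ = integrate.quad(
        lambda x: np.sqrt(max(2.0 * M * (energy - w_squared(x)), 0.0)),
        -x_turn, x_turn)
    return value

print("n = 0 -> E = 0 (zero-energy ground state, exact by construction)")
for n in range(1, 5):
    e_n = optimize.brentq(
        lambda e: swkb_integral(e) - n * np.pi * HBAR, 1e-9, 30.0)
    print(f"n = {n} -> E = {e_n:.6f}")   # recovers E_n = n
```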
| https://en.wikipedia.org/wiki/Supersymmetric_WKB_approximation |
In theoretical physics , supersymmetric quantum mechanics is an area of research where supersymmetry is applied to the simpler setting of plain quantum mechanics , rather than quantum field theory . Supersymmetric quantum mechanics has found applications outside of high-energy physics , such as providing new methods to solve quantum mechanical problems, providing useful extensions to the WKB approximation , and applications in statistical mechanics .
Understanding the consequences of supersymmetry (SUSY) has proven mathematically daunting, and it has likewise been difficult to develop theories that could account for symmetry breaking, i.e. , the lack of observed partner particles of equal mass. To make progress on these problems, physicists developed supersymmetric quantum mechanics , an application of the supersymmetry superalgebra to quantum mechanics as opposed to quantum field theory. It was hoped that studying SUSY's consequences in this simpler setting would lead to new understanding; remarkably, the effort created new areas of research in quantum mechanics itself.
For example, students are typically taught to "solve" the hydrogen atom by a process that begins by inserting the Coulomb potential into the Schrödinger equation . Following use of multiple differential equations, the analysis produces a recursion relation for the Laguerre polynomials . The outcome is the spectrum of hydrogen-atom energy states (labeled by quantum numbers n and l ). Using ideas drawn from SUSY, the final result can be derived with greater ease, in much the same way that operator methods are used to solve the harmonic oscillator . [ 1 ] A similar supersymmetric approach can also be used to more accurately find the hydrogen spectrum using the Dirac equation. [ 2 ] Oddly enough, this approach is analogous to the way Erwin Schrödinger first solved the hydrogen atom. [ 3 ] [ 4 ] He did not call his solution supersymmetric, as SUSY was thirty years in the future.
The SUSY solution of the hydrogen atom is only one example of the very general class of solutions which SUSY provides to shape-invariant potentials , a category which includes most potentials taught in introductory quantum mechanics courses.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians . (The potential energy terms which occur in the Hamiltonians are then called partner potentials .) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy (except possibly for zero energy eigenstates). This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy—but, in the relativistic world, energy and mass are interchangeable, so we can just as easily say that the partner particles have equal mass.
SUSY concepts have provided useful extensions to the WKB approximation in the form of a modified version of the Bohr-Sommerfeld quantization condition. In addition, SUSY has been applied to non-quantum statistical mechanics through the Fokker–Planck equation , showing that even if the original inspiration in high-energy particle physics turns out to be a blind alley, its investigation has brought about many useful benefits.
The Schrödinger equation for the harmonic oscillator takes the form {\displaystyle H^{\text{HO}}\psi _{n}(x)=\left(-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}+{\frac {m\omega ^{2}}{2}}x^{2}\right)\psi _{n}(x)=E_{n}^{\text{HO}}\psi _{n}(x),}
where ψ n ( x ) {\displaystyle \psi _{n}(x)} is the n {\displaystyle n} th energy eigenstate of H HO {\displaystyle H^{\text{HO}}} with energy E n HO {\displaystyle E_{n}^{\text{HO}}} . We want to find an expression for E n H O {\displaystyle E_{n}^{\rm {HO}}} in terms of n {\displaystyle n} . We define the operators {\displaystyle A={\frac {\hbar }{\sqrt {2m}}}{\frac {d}{dx}}+W(x)}
and {\displaystyle A^{\dagger }=-{\frac {\hbar }{\sqrt {2m}}}{\frac {d}{dx}}+W(x),}
where W ( x ) {\displaystyle W(x)} , which we need to choose, is called the superpotential of H H O {\displaystyle H^{\rm {HO}}} . We also define the aforementioned partner Hamiltonians H ( 1 ) {\displaystyle H^{(1)}} and H ( 2 ) {\displaystyle H^{(2)}} as {\displaystyle H^{(1)}=A^{\dagger }A=-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}+W^{2}(x)-{\frac {\hbar }{\sqrt {2m}}}W'(x),\qquad H^{(2)}=AA^{\dagger }=-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}+W^{2}(x)+{\frac {\hbar }{\sqrt {2m}}}W'(x).}
A zero energy ground state ψ 0 ( 1 ) ( x ) {\displaystyle \psi _{0}^{(1)}(x)} of H ( 1 ) {\displaystyle H^{(1)}} would satisfy the equation {\displaystyle A\,\psi _{0}^{(1)}(x)=0.}
Assuming that we know the ground state of the harmonic oscillator ψ 0 ( x ) {\displaystyle \psi _{0}(x)} , with ψ 0 ( x ) ∝ e − m ω x 2 / 2 ℏ {\displaystyle \psi _{0}(x)\propto e^{-m\omega x^{2}/2\hbar }} , we can solve for W ( x ) {\displaystyle W(x)} as {\displaystyle W(x)=-{\frac {\hbar }{\sqrt {2m}}}\,{\frac {\psi _{0}'(x)}{\psi _{0}(x)}}={\sqrt {\frac {m}{2}}}\,\omega x.}
We then find that {\displaystyle H^{(1)}=H^{\text{HO}}-{\frac {\hbar \omega }{2}},\qquad H^{(2)}=H^{\text{HO}}+{\frac {\hbar \omega }{2}}.}
We can now see that {\displaystyle H^{(2)}=H^{(1)}+\hbar \omega .}
This is a special case of shape invariance, discussed below. Taking without proof the introductory theorem mentioned above, it is apparent that the spectrum of H ( 1 ) {\displaystyle H^{(1)}} will start with E 0 = 0 {\displaystyle E_{0}=0} and continue upwards in steps of ℏ ω . {\displaystyle \hbar \omega .} The spectra of H ( 2 ) {\displaystyle H^{(2)}} and H H O {\displaystyle H^{\rm {HO}}} will have the same even spacing, but will be shifted up by amounts ℏ ω {\displaystyle \hbar \omega } and ℏ ω / 2 {\displaystyle \hbar \omega /2} , respectively. It follows that the spectrum of H H O {\displaystyle H^{\rm {HO}}} is therefore the familiar E n H O = ℏ ω ( n + 1 / 2 ) {\displaystyle E_{n}^{\rm {HO}}=\hbar \omega (n+1/2)} .
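A minimal numerical check of this spectrum and the partner degeneracy, in units ℏ = m = ω = 1 (so W(x) = x/√2): diagonalizing finite-difference versions of H (1) and H (2) shows the spectra 0, 1, 2, … and 1, 2, 3, …. The grid parameters are illustrative assumptions.

```python
import numpy as np

n, L = 1000, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

def hamiltonian(potential):
    """Finite-difference H = -0.5 d^2/dx^2 + V(x) on the grid."""
    kinetic = (2.0 * np.eye(n)
               - np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * dx**2)
    return kinetic + np.diag(potential)

w_sq = 0.5 * x**2            # W(x)^2 with W = x/sqrt(2)
shift = 0.5                  # (hbar/sqrt(2m)) * W'(x) = 1/2 here
h1 = hamiltonian(w_sq - shift)   # H1 = A†A
h2 = hamiltonian(w_sq + shift)   # H2 = AA†

print("H1 spectrum:", np.round(np.linalg.eigvalsh(h1)[:5], 3))  # ~0,1,2,3,4
print("H2 spectrum:", np.round(np.linalg.eigvalsh(h2)[:4], 3))  # ~1,2,3,4
```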
In fundamental quantum mechanics, we learn that an algebra of operators is defined by commutation relations among those operators. For example, the canonical operators of position and momentum have the commutator [ x , p ] = i {\displaystyle [x,p]=i} . (Here, we use " natural units " where the Planck constant is set equal to 1.) A more intricate case is the algebra of angular momentum operators; these quantities are closely connected to the rotational symmetries of three-dimensional space. To generalize this concept, we define an anticommutator , which relates operators the same way as an ordinary commutator , but with the opposite sign:
If operators are related by anticommutators as well as commutators, we say they are part of a Lie superalgebra . Let's say we have a quantum system described by a Hamiltonian H {\displaystyle {\mathcal {H}}} and a set of N {\displaystyle N} operators Q i {\displaystyle Q_{i}} . We shall call this system supersymmetric if the following anticommutation relation is valid for all i , j = 1 , … , N {\displaystyle i,j=1,\ldots ,N} : {\displaystyle \{Q_{i},Q_{j}\}={\mathcal {H}}\delta _{ij}.}
If this is the case, then we call Q i {\displaystyle Q_{i}} the system's supercharges .
Let's look at the example of a one-dimensional nonrelativistic particle with a 2D ( i.e., two states) internal degree of freedom called "spin" (it's not really spin because "real" spin is a property of 3D particles). Let b {\displaystyle b} be an operator which transforms a "spin up" particle into a "spin down" particle. Its adjoint b † {\displaystyle b^{\dagger }} then transforms a spin down particle into a spin up particle; the operators are normalized such that the anticommutator { b , b † } = 1 {\displaystyle \{b,b^{\dagger }\}=1} , and b 2 = 0 {\displaystyle b^{2}=0} . Let p {\displaystyle p} be the momentum of the particle and x {\displaystyle x} be its position with [ x , p ] = i {\displaystyle [x,p]=i} . Let W {\displaystyle W} (the " superpotential ") be an arbitrary complex analytic function of x {\displaystyle x} and define the supersymmetric operators {\displaystyle Q_{1}={\tfrac {1}{2}}\left[(p-iW)b+(p+i{\bar {W}})b^{\dagger }\right],\qquad Q_{2}={\tfrac {i}{2}}\left[(p-iW)b-(p+i{\bar {W}})b^{\dagger }\right].}
Note that Q 1 {\displaystyle Q_{1}} and Q 2 {\displaystyle Q_{2}} are self-adjoint. Let the Hamiltonian {\displaystyle H=\{Q_{1},Q_{1}\}=\{Q_{2},Q_{2}\}={\frac {(p+\Im \{W\})^{2}}{2}}+{\frac {(\Re \{W\})^{2}}{2}}+{\frac {\Re \{W\}'}{2}}\left(bb^{\dagger }-b^{\dagger }b\right),}
where W ′ is the derivative of W . Also note that { Q 1 , Q 2 } = 0. This is nothing other than N = 2 supersymmetry. Note that ℑ { W } {\displaystyle \Im \{W\}} acts like an electromagnetic vector potential .
Let's also call the spin down state "bosonic" and the spin up state "fermionic". This is only in analogy to quantum field theory and should not be taken literally. Then, Q 1 and Q 2 map "bosonic" states into "fermionic" states and vice versa.
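The two-state algebra just described can be verified directly with explicit matrices. The following sketch is an illustrative addition; the matrix representation and the basis ordering (spin up, spin down) are our assumptions:

```python
import numpy as np

# Illustrative check of the two-state "spin" algebra quoted above, using the
# conventional matrix representation (any unitarily equivalent choice works).
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

b = np.array([[0.0, 0.0],     # b turns a spin up state into a spin down state
              [1.0, 0.0]])
bdag = b.T                    # its adjoint turns spin down into spin up

print(np.allclose(b @ bdag + bdag @ b, np.eye(2)))  # {b, b†} = 1 -> True
print(np.allclose(b @ b, 0))                        # b² = 0      -> True
print(b @ up)                                       # -> [0., 1.] (spin down)
print(bdag @ down)                                  # -> [1., 0.] (spin up)
```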
Reformulating this a bit:
Define {\displaystyle Q=(p+i{\bar {W}})b^{\dagger }}
and, {\displaystyle Q^{\dagger }=(p-iW)b,}
and {\displaystyle H={\tfrac {1}{2}}\{Q,Q^{\dagger }\}.}
An operator is "bosonic" if it maps "bosonic" states to "bosonic" states and "fermionic" states to "fermionic" states. An operator is "fermionic" if it maps "bosonic" states to "fermionic" states and vice versa. Any operator can be expressed uniquely as the sum of a bosonic operator and a fermionic operator. Define the supercommutator [,} as follows: between two bosonic operators, or between a bosonic and a fermionic operator, it is none other than the commutator, but between two fermionic operators, it is an anticommutator .
Then, x and p are bosonic operators and b , b † {\displaystyle b^{\dagger }} , Q and Q † {\displaystyle Q^{\dagger }} are fermionic operators.
Let's work in the Heisenberg picture where x , b and b † {\displaystyle b^{\dagger }} are functions of time.
Then,
This is nonlinear in general: i.e. , x(t), b(t) and b † ( t ) {\displaystyle b^{\dagger }(t)} do not form a linear SUSY representation because ℜ { W } {\displaystyle \Re \{W\}} isn't necessarily linear in x . To avoid this problem, define the self-adjoint operator F = ℜ { W } {\displaystyle F=\Re \{W\}} . Then,
and we see that we have a linear SUSY representation.
Now let's introduce two "formal" quantities, θ {\displaystyle \theta } and θ ¯ {\displaystyle {\bar {\theta }}} , with the latter being the adjoint of the former, such that {\displaystyle \theta ^{2}={\bar {\theta }}^{2}=0,\qquad \{\theta ,{\bar {\theta }}\}=0,}
and both of them commute with bosonic operators but anticommute with fermionic ones.
Next, we define a construct called a superfield : {\displaystyle f(t;{\bar {\theta }},\theta )=x(t)+i\theta b(t)+i{\bar {\theta }}b^{\dagger }(t)+{\bar {\theta }}\theta F(t).}
f is self-adjoint. Then,
Incidentally, there's also a U(1) R symmetry, with p and x and W having zero R-charges and b † {\displaystyle b^{\dagger }} having an R-charge of 1 and b having an R-charge of −1.
Suppose W {\displaystyle W} is real for all real x {\displaystyle x} . Then we can simplify the expression for the Hamiltonian to {\displaystyle H={\frac {p^{2}}{2}}+{\frac {W^{2}}{2}}+{\frac {W'}{2}}\left(bb^{\dagger }-b^{\dagger }b\right).}
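For real W, the two spin sectors of this Hamiltonian are ordinary Schrödinger operators with the partner potentials (W^2 - W')/2 and (W^2 + W')/2. The factorization behind this split can be checked symbolically; the sketch below is an illustrative addition, and the operator names Aop, Adop and the test function f are ours:

```python
import sympy as sp

# Symbolic sketch of the factorization behind the partner potentials.
# With A = (d/dx + W)/sqrt(2) and its formal adjoint Ad = (-d/dx + W)/sqrt(2),
# acting on a test function f we verify
#   Ad(A f) = -f''/2 + (W^2/2 - W'/2) f,  A(Ad f) = -f''/2 + (W^2/2 + W'/2) f.
x = sp.symbols('x')
W = sp.Function('W')(x)
f = sp.Function('f')(x)

Aop = lambda g: (sp.diff(g, x) + W * g) / sp.sqrt(2)
Adop = lambda g: (-sp.diff(g, x) + W * g) / sp.sqrt(2)

target_minus = -sp.diff(f, x, 2) / 2 + (W**2 / 2 - sp.diff(W, x) / 2) * f
target_plus = -sp.diff(f, x, 2) / 2 + (W**2 / 2 + sp.diff(W, x) / 2) * f

print(sp.simplify(Adop(Aop(f)) - target_minus))   # 0
print(sp.simplify(Aop(Adop(f)) - target_plus))    # 0
```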
There are certain classes of superpotentials such that both the bosonic and fermionic Hamiltonians have similar forms. Specifically {\displaystyle V_{+}(x,a_{1})=V_{-}(x,a_{2})+R(a_{1}),}
where the a {\displaystyle a} 's are parameters. For example, the hydrogen atom potential with angular momentum l {\displaystyle l} can be written this way: {\displaystyle -{\frac {e^{2}}{x}}+{\frac {l(l+1)}{2x^{2}}}.}
This corresponds to V − {\displaystyle V_{-}} (up to an additive constant) for the superpotential {\displaystyle W={\frac {e^{2}}{l+1}}-{\frac {l+1}{x}}.}
This is the potential for l + 1 {\displaystyle l+1} angular momentum shifted by a constant. After solving the l = 0 {\displaystyle l=0} ground state, the supersymmetric operators can be used to construct the rest of the bound state spectrum.
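As an illustrative sketch of this construction (not part of the original text; the units ℏ = m = e² = 1, the choice l = 0, and all grid parameters are our assumptions), one can compare the spectrum predicted by iterating the shape-invariance relation with a direct numerical diagonalization of the Coulomb problem:

```python
import numpy as np

# Hedged sketch (assumed units hbar = m = e^2 = 1, angular momentum l = 0).
# The superpotential W(x) = 1/(l+1) - (l+1)/x gives V_-(x) = (W^2 - W')/2
# = 1/2 - 1/x, the s-wave Coulomb problem with its ground state at zero.
# Shape invariance with a_k = l + k predicts the levels as a telescoping sum.
levels = 4
R = lambda a: 0.5 * (1.0 / a**2 - 1.0 / (a + 1) ** 2)
E_pred = [0.0]
for n in range(1, levels):
    E_pred.append(E_pred[-1] + R(n))            # a_n = l + n with l = 0

# Direct diagonalization of H_- = -(1/2) d^2/dx^2 + 1/2 - 1/x on (0, L].
N, L = 2400, 120.0
x = np.linspace(L / N, L, N)
h = x[1] - x[0]
lap = (np.eye(N, k=1) - 2 * np.eye(N) + np.eye(N, k=-1)) / h**2
H = -0.5 * lap + np.diag(0.5 - 1.0 / x)
E_num = np.linalg.eigvalsh(H)[:levels]

print(np.round(E_pred, 4))   # [0.     0.375  0.4444 0.4688]
print(np.round(E_num, 4))    # approximately the same values
```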
In general, since V − {\displaystyle V_{-}} and V + {\displaystyle V_{+}} are partner potentials, they share the same energy spectrum except for the extra zero-energy ground state of V − {\displaystyle V_{-}} . We can continue this process of finding partner potentials with the shape invariance condition, giving the following formula for the energy levels in terms of the parameters of the potential: {\displaystyle E_{n}^{(-)}=\sum _{i=1}^{n}R(a_{i}),}
where a i {\displaystyle a_{i}} are the parameters for the multiple partnered potentials. | https://en.wikipedia.org/wiki/Supersymmetric_quantum_mechanics |
Supersymmetric theory of stochastic dynamics ( STS ) is a multidisciplinary approach to stochastic dynamics at the intersection of dynamical systems theory , statistical physics , stochastic differential equations (SDE), topological field theories ,
and the theory of pseudo-Hermitian operators. It can be seen as an algebraic dual to the traditional set-theoretic framework of the dynamical systems theory, with its added algebraic structure and an inherent topological supersymmetry (TS) enabling the generalization of certain concepts from deterministic to stochastic models. It identifies the spontaneous breakdown of TS as the stochastic generalization of chaos and associates the emergence of the corresponding long-range phenomena such as 1/f noise and self-organized criticality with the Goldstone theorem .
The traditional approach to stochastic dynamics focuses on the temporal evolution of probability distributions. At any moment, the distribution encodes the information or the memory of the system's past, much like wavefunctions in quantum theory. STS uses generalized probability distributions, or "wavefunctions", that depend not only on the original variables of the model but also on their "superpartners", [ 1 ] whose evolution determines Lyapunov exponents . [ 2 ] This structure enables an extended form of memory that also includes the memory of initial conditions/perturbations, known in the context of dynamical chaos as the butterfly effect .
From an algebraic topology perspective, the wavefunctions are differential forms [ 3 ] and dynamical systems theory defines their dynamics by the generalized transfer operator (GTO) [ 4 ] [ 5 ] -- the pullback averaged over noise. GTO commutes with the exterior derivative , which is the topological supersymmetry (TS) of STS.
The presence of TS arises from the fact that continuous-time dynamics preserves the topology of the phase / state space: trajectories originating from close initial conditions remain close over time for any noise configuration. If TS is spontaneously broken, this property no longer holds on average in the limit of infinitely long evolution, meaning the system exhibits a stochastic variant of the butterfly effect. The Goldstone theorem necessitates the long-range response, which may account for 1/f noise . The Edge of Chaos is interpreted as noise-induced chaos -- a distinct phase where TS is broken in a specific manner and dynamics is dominated by noise-induced instantons. In the deterministic limit, this phase collapses onto the critical boundary of conventional chaos.
The first relation between supersymmetry and stochastic dynamics was established in two papers in 1979 and 1982 by Giorgio Parisi and Nicolas Sourlas, [ 6 ] [ 1 ] where Langevin SDEs -- SDEs with linear phase spaces, gradient flow vector fields, and additive noises -- were given supersymmetric representation with the help of the BRST gauge fixing procedure. While the original goal of their work was dimensional reduction , [ 7 ] the so-emerged supersymmetry of Langevin SDEs has since been addressed from a few different angles [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] including the fluctuation-dissipation theorems , [ 11 ] Jarzynski equality , [ 13 ] Onsager principle of microscopic reversibility , [ 14 ] solutions of Fokker–Planck equations , [ 15 ] self-organization , [ 16 ] etc.
The Parisi-Sourlas method has been extended to several other classes of dynamical systems, including classical mechanics , [ 17 ] [ 18 ] its stochastic generalization, [ 19 ] and higher-order Langevin SDEs. [ 12 ] The theory of pseudo-Hermitian supersymmetric operators [ 20 ] and the relation between the Parisi-Sourlas method and Lyapunov exponents [ 2 ] further enabled the extension of the theory to SDEs of arbitrary form and the identification of the spontaneous BRST supersymmetry breaking as a stochastic generalization of chaos. [ 21 ]
In parallel, the concept of the generalized transfer operator has been introduced in the dynamical systems theory . [ 4 ] [ 5 ] This concept underlies the stochastic evolution operator of STS and provides it with a solid mathematical meaning. Similar constructions were studied in the theory of SDEs. [ 22 ] [ 23 ]
The Parisi-Sourlas method has been recognized [ 24 ] [ 17 ] as a member of Witten-type or cohomological topological field theory , [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 3 ] [ 29 ] [ 30 ] [ 31 ] a class of models to which STS also belongs.
From the physicist's point of view, a stochastic differential equation is essentially a continuous-time non-autonomous dynamical system that can be defined as: x ˙ ( t ) = F ( x ( t ) ) + ( 2 Θ ) 1 / 2 G a ( x ( t ) ) ξ a ( t ) ≡ F ( ξ ( t ) ) , {\displaystyle {\dot {x}}(t)=F(x(t))+(2\Theta )^{1/2}G_{a}(x(t))\xi ^{a}(t)\equiv {\mathcal {F}}(\xi (t)),} where x ∈ X {\textstyle x\in X} is a point in a closed smooth manifold , X {\textstyle X} , called in dynamical systems theory a state space , while in physics, where X {\displaystyle X} is often a symplectic manifold with half of the variables having the meaning of momenta, it is called the phase space . Further, F ( x ) ∈ T X x {\displaystyle F(x)\in TX_{x}} is a sufficiently smooth flow vector field from the tangent space of X {\displaystyle X} having the meaning of the deterministic law of evolution, and G a ∈ T X , a = 1 , … , D , D = dim X {\displaystyle G_{a}\in TX,a=1,\ldots ,D,D=\dim X} is a set of sufficiently smooth vector fields that specify how the system is coupled to the time-dependent noise, ξ ( t ) ∈ R D {\displaystyle \xi (t)\in \mathbb {R} ^{D}} , which is called additive / multiplicative depending on whether G a {\displaystyle G_{a}} 's are independent/dependent on the position on X {\displaystyle X} .
The randomness of the noise will be introduced later. For now, the noise is a deterministic function of time and the equation above is an ordinary differential equation (ODE) with a time-dependent flow vector field, F {\displaystyle {\mathcal {F}}} . The solutions/trajectories of this ODE are differentiable with respect to initial conditions even for non-differentiable ξ ( t ) {\displaystyle \xi (t)} 's. [ 32 ] In other words, there exists a two-parameter family of noise-configuration-dependent diffeomorphisms : M ( ξ ) t t ′ : X → X , M ( ξ ) t t ′ ∘ M ( ξ ) t ′ t ″ = M ( ξ ) t t ″ , M ( ξ ) t t ′ | t = t ′ = Id X , {\displaystyle M(\xi )_{tt'}:X\to X,M(\xi )_{tt'}\circ M(\xi )_{t't''}=M(\xi )_{tt''},\left.M(\xi )_{tt'}\right|_{t=t'}={\text{Id}}_{X},} such that the solution of the ODE with initial condition x ( t ′ ) = x ′ {\displaystyle x(t')=x'} can be expressed as x ( t ) = M ( ξ ) t t ′ ( x ′ ) {\displaystyle x(t)=M(\xi )_{tt'}(x')} .
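As an illustrative numerical aside (the model, the parameters, and the integration scheme below are our assumptions, not part of the original text), one can integrate a simple SDE twice with the same noise configuration and observe that two nearby initial conditions produce nearby solutions at finite times, in line with the existence of the noise-configuration-dependent diffeomorphisms above:

```python
import numpy as np

# Hedged illustration: Euler-Maruyama integration of a 1D SDE
#   dx = F(x) dt + (2*Theta)^{1/2} G(x) dW
# with the arbitrary choices F(x) = x - x^3 and G(x) = 1 (additive noise).
# The SAME noise path is applied to two nearby initial conditions, so the
# map x' -> M(xi)_{t t'}(x') acts continuously on them over finite time.
rng = np.random.default_rng(0)
Theta, dt, steps = 0.1, 1e-3, 20_000
dW = rng.normal(0.0, np.sqrt(dt), steps)     # one fixed noise realization

def integrate(x0):
    x = x0
    for w in dW:
        x += (x - x**3) * dt + np.sqrt(2 * Theta) * w
    return x

x1 = integrate(0.500)
x2 = integrate(0.501)
print(x1, x2, abs(x1 - x2))   # the separation remains small
```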
The dynamics can now be defined as follows: if at time t ′ {\displaystyle t'} , the system is described by the probability distribution P ( x ) {\displaystyle P(x)} , then the average value of some function f : X → R {\displaystyle f:X\to \mathbb {R} } at a later time t {\displaystyle t} is given by: f ¯ ( t ) = ∫ X f ( M ( ξ ) t t ′ ( x ) ) P ( x ) d x 1 ∧ . . . ∧ d x D = ∫ X f ( x ) M ^ ( ξ ) t ′ t ∗ ( P ( x ) d x 1 ∧ . . . ∧ d x D ) . {\displaystyle {\bar {f}}(t)=\int _{X}f\left(M(\xi )_{tt'}(x)\right)P(x)dx^{1}\wedge ...\wedge dx^{D}=\int _{X}f(x){\hat {M}}(\xi )_{t't}^{*}\left(P(x)dx^{1}\wedge ...\wedge dx^{D}\right).} Here M ^ ( ξ ) t ′ t ∗ {\displaystyle {\hat {M}}(\xi )_{t't}^{*}} is action or pullback induced by the inverse map, M ( ξ ) t t ′ − 1 = M ( ξ ) t ′ t {\displaystyle M(\xi )_{tt'}^{-1}=M(\xi )_{t't}} , on the probability distribution understood in a coordinate-free setting as a top-degree differential form .
Pullbacks are a wider concept, defined also for k-forms, i.e. , differential forms of other possible degrees k, 0 ≤ k ≤ D {\displaystyle 0\leq k\leq D} , ψ ( x ) = ψ i 1 . . . i k ( x ) d x i 1 ∧ . . . ∧ d x i k ∈ Ω ( k ) ( x ) {\displaystyle \psi (x)=\psi _{i_{1}...i_{k}}(x)dx^{i_{1}}\wedge ...\wedge dx^{i_{k}}\in \Omega ^{(k)}(x)} , where Ω ( k ) ( x ) {\displaystyle \Omega ^{(k)}(x)} is the space of all k-forms at point x.
According to the example above, the temporal evolution of k-forms is given by, | ψ ( t ) ⟩ = M ^ ( ξ ) t ′ t ∗ | ψ ( t ′ ) ⟩ , {\displaystyle |\psi (t)\rangle ={\hat {M}}(\xi )_{t't}^{*}|\psi (t')\rangle ,} where | ψ ⟩ ∈ Ω ( X ) = ⨁ k = 0 D Ω ( k ) ( X ) {\displaystyle |\psi \rangle \in \Omega (X)=\bigoplus \nolimits _{k=0}^{D}\Omega ^{(k)}(X)} is a time-dependent "wavefunction", adopting the terminology of quantum theory.
Unlike, say, trajectories in X {\displaystyle X} , pullbacks are linear objects even for nonlinear X {\displaystyle X} . As a linear object, the pullback can be averaged over the noise configurations leading to the generalized transfer operator (GTO) [ 4 ] [ 5 ] -- the dynamical systems theory counterpart of the stochastic evolution operator of the theory of SDEs and/or the Parisi-Sourlas approach.
With the help of the concept of Lie derivative , L ^ F ( τ ) = L ^ F + ( 2 Θ ) 1 / 2 ξ a ( τ ) L ^ G a {\displaystyle {\hat {L}}_{{\mathcal {F}}(\tau )}={\hat {L}}_{F}+(2\Theta )^{1/2}\xi ^{a}(\tau ){\hat {L}}_{G_{a}}} , which is essentially the infinitesimal pullback satisfying, in particular, the following equation, ∂ t M ^ ( ξ ) t ′ t ∗ = − L ^ F ( τ ) M ^ ( ξ ) t ′ t ∗ {\displaystyle \partial _{t}{\hat {M}}(\xi )_{t't}^{*}=-{\hat {L}}_{{\mathcal {F}}(\tau )}{\hat {M}}(\xi )_{t't}^{*}} , which integrates to M ^ ( ξ ) t ′ t ∗ = 1 ^ − ∫ t ′ t L ^ F ( τ ) d τ + ∫ t ′ t L ^ F ( τ 1 ) d τ 1 ∫ t ′ τ 1 L ^ F ( τ 2 ) d τ 2 + . . . , {\displaystyle {\hat {M}}(\xi )_{t't}^{*}={\hat {1}}-\int _{t'}^{t}{\hat {L}}_{{\mathcal {F}}(\tau )}d\tau +\int _{t'}^{t}{\hat {L}}_{{\mathcal {F}}(\tau _{1})}d\tau _{1}\int _{t'}^{\tau _{1}}{\hat {L}}_{{\mathcal {F}}(\tau _{2})}d\tau _{2}+...,} for the initial condition M ^ ( ξ ) t ′ t ∗ | t = t ′ = 1 ^ {\displaystyle {\hat {M}}(\xi )_{t't}^{*}|_{t=t'}={\hat {1}}} , and assuming Gaussian white noise , ⟨ ξ a ( t ) ⟩ noise = 0 , ⟨ ξ a ( t ) ξ b ( t ′ ) ⟩ noise = δ a b δ ( t − t ′ ) {\displaystyle \langle \xi ^{a}(t)\rangle _{\text{noise}}=0,\langle \xi ^{a}(t)\xi ^{b}(t')\rangle _{\text{noise}}=\delta ^{ab}\delta (t-t')} ..., the GTO can be derived as M ^ t t ′ = ⟨ M ( ξ ) t ′ t ∗ ⟩ noise = 1 ^ − ( t − t ′ ) H ^ + . . . = e − ( t − t ′ ) H ^ . {\displaystyle {\hat {\mathcal {M}}}_{tt'}=\langle M(\xi )_{t't}^{*}\rangle _{\text{noise}}={\hat {1}}-(t-t'){\hat {H}}+...=e^{-(t-t'){\hat {H}}}.} Here, the infinitesimal GTO, H ^ = L ^ F − Θ L ^ G a L ^ G a = [ d ^ , d ¯ ^ ] , {\displaystyle {\hat {H}}={\hat {L}}_{F}-\Theta {\hat {L}}_{G_{a}}{\hat {L}}_{G_{a}}=[{\hat {d}},{\hat {\bar {d}}}],} and d ¯ ^ = ı ^ F − Θ ı ^ G a L ^ G a {\displaystyle {\hat {\bar {d}}}={\hat {\imath }}_{\mathcal {F}}-\Theta {\hat {\imath }}_{G_{a}}{\hat {L}}_{G_{a}}} which follows from Cartan formula for Lie derivative, e.g., L ^ F = [ d ^ , ı ^ F ] {\displaystyle {\hat {L}}_{F}=[{\hat {d}},{\hat {\imath }}_{F}]} with the square brackets denoting bi-graded commutator and d ^ {\displaystyle {\hat {d}}} and ı ^ F {\displaystyle {\hat {\imath }}_{F}} being, respectively, the exterior derivative and interior multiplication , along with the nilpotency of the exterior differentiation suggesting, particularly, that [ d ^ , A ^ ] [ d ^ , B ^ ] = [ d ^ , A ^ [ d ^ , B ^ ] ] {\displaystyle [{\hat {d}},{\hat {A}}][{\hat {d}},{\hat {B}}]=[{\hat {d}},{\hat {A}}[{\hat {d}},{\hat {B}}]]} .
Any pullback by a diffeomorphism commutes with d ^ {\displaystyle {\hat {d}}} and the same holds for the GTO. In physical terms, this indicates the presence of a symmetry or, more precisely, a supersymmetry due to the nilpotency of the exterior derivative : d ^ 2 = 0 {\displaystyle {\hat {d}}^{2}=0} . This supersymmetry is referred to as topological supersymmetry (TS), as the exterior derivative plays a fundamental role in algebraic topology .
Symmetries suggest degeneracy of eigenstates of evolution operators. In case of TS, if | α ⟩ {\displaystyle |\alpha \rangle } is an eigenstate of H ^ {\displaystyle {\hat {H}}} , then | α ′ ⟩ = d ^ | α ⟩ {\displaystyle |\alpha '\rangle ={\hat {d}}|\alpha \rangle } is also an eigenstate with the same eigenvalue, provided that | α ′ ⟩ ≠ 0 {\displaystyle |\alpha '\rangle \neq 0} .
The GTO is a pseudo-Hermitian operator. [ 20 ] It has a complete bi-orthogonal eigensystem with the left and right eigenvectors, or the bras and the kets, related nontrivially. The eigensystems of GTO have a certain set of universal properties that limit the possible spectra of the physically meaningful models -- the ones with discrete spectra and with real parts of eigenvalues limited from below -- to three major types. [ 33 ] These properties include: the eigenvalues are either real or come in complex conjugate pairs, called in dynamical systems theory Ruelle–Pollicott resonances; each eigenstate has a well-defined degree; H ^ ( 0 , D ) {\displaystyle {\hat {H}}^{(0,D)}} do not break TS, min Re ( spec H ^ ( 0 , D ) ) = 0 {\displaystyle {\text{min Re}}(\operatorname {spec} {\hat {H}}^{(0,D)})=0} ; each de Rham cohomology class provides one zero-eigenvalue supersymmetric "singlet" such that d ^ | θ ⟩ = 0 , ⟨ θ | d ^ = 0 {\displaystyle {\hat {d}}|\theta \rangle =0,\langle \theta |{\hat {d}}=0} and the singlet from H ^ ( D ) {\displaystyle {\hat {H}}^{(D)}} is the stationary probability distribution known as the "ergodic zero"; all the other eigenstates are non-supersymmetric "doublets" related by TS: H ^ | α ⟩ = H α | α ⟩ , H ^ | α ′ ⟩ = H α | α ′ ⟩ {\displaystyle {\hat {H}}|\alpha \rangle =H_{\alpha }|\alpha \rangle ,\;{\hat {H}}|\alpha '\rangle =H_{\alpha }|\alpha '\rangle } and ⟨ α | H ^ = ⟨ α | H α , ⟨ α ′ | H ^ = ⟨ α ′ | H α {\displaystyle \langle \alpha |{\hat {H}}=\langle \alpha |H_{\alpha },\langle \alpha '|{\hat {H}}=\langle \alpha '|H_{\alpha }} , where H α {\displaystyle H_{\alpha }} is the corresponding eigenvalue, and | α ′ ⟩ = d ^ | α ⟩ , ⟨ α | = ⟨ α ′ | d ^ {\displaystyle |\alpha '\rangle ={\hat {d}}|\alpha \rangle ,\;\langle \alpha |=\langle \alpha '|{\hat {d}}} .
In dynamical systems theory, a system can be characterized as chaotic if the spectral radius of the finite-time GTO is larger than unity. Under this condition, the partition function, Z t t ′ = T r M ^ t t ′ = ∑ α e − ( t − t ′ ) H α , {\displaystyle Z_{tt'}=Tr{\hat {\mathcal {M}}}_{tt'}=\sum \nolimits _{\alpha }e^{-(t-t')H_{\alpha }},} grows exponentially in the limit of infinitely long evolution, signaling the exponential growth of the number of closed solutions -- the hallmark of chaotic dynamics. In terms of the infinitesimal GTO, this condition reads, Δ = − min α Re H α > 0 , {\displaystyle \Delta =-\min _{\alpha }{\text{Re }}H_{\alpha }>0,} where Δ {\displaystyle \Delta } is the rate of the exponential growth which is known as "pressure", a member of the family of dynamical entropies such as topological entropy .
One notable advantage of defining stochastic chaos in this way, compared to other possible approaches, is its equivalence to the spontaneous breakdown of topological supersymmetry (see below). Consequently, through the Goldstone theorem, it has the potential to explain the experimental signature of chaotic behavior, commonly known as 1/f noise .
It is easy to see that the properties of GTO spectra imply that stochastic chaos is only possible when dim X ≥ 3 {\displaystyle {\text{dim }}X\geq 3} . This can be viewed as a stochastic generalization of the Poincaré–Bendixson theorem .
Another object of interest is the sharp trace of the GTO, W = T r ( − 1 ) k ^ M ^ t t ′ = ∑ α ( − 1 ) k α e − ( t − t ′ ) H α , {\displaystyle W=Tr(-1)^{\hat {k}}{\hat {\mathcal {M}}}_{tt'}=\sum \nolimits _{\alpha }(-1)^{k_{\alpha }}e^{-(t-t')H_{\alpha }},} where k ^ | ψ α ⟩ = k α | ψ α ⟩ {\displaystyle {\hat {k}}|\psi _{\alpha }\rangle =k_{\alpha }|\psi _{\alpha }\rangle } with k ^ {\displaystyle {\hat {k}}} being the operator of the degree of the differential form. This is a fundamental object of topological nature known in physics as the Witten index . From the properties of the eigensystem of GTO, only supersymmetric singlets contribute to the Witten index, W = ∑ k = 0 D ( − 1 ) k B k = E u . C h ( X ) {\displaystyle W=\sum \nolimits _{k=0}^{D}(-1)^{k}B_{k}=Eu.Ch(X)} , where E u . C h . {\displaystyle Eu.Ch.} is the Euler characteristic and B 's are Betti numbers that equal the numbers of supersymmetric singlets of the corresponding degree.
The idea of the Parisi–Sourlas method is to rewrite the partition function of the noise in terms of the dynamical variables of the model using BRST gauge-fixing procedure. [ 24 ] [ 25 ] The resulting expression is the Witten index, whose physical meaning is (up to a topological factor) the partition function of the noise.
As the first step toward the pathintegral representation of the Witten index, the pathintegration over dynamical variables are formally introduced into the expression of the partition function of the noise: ⟨ 1 ⟩ noise = ∬ P ( ξ ) D ξ → ∬ p . b . c D x P ( ξ ) D ξ , {\displaystyle \langle 1\rangle _{\text{noise}}=\iint {\mathcal {P}}(\xi ){\mathcal {D}}\xi \to \iint _{p.b.c}{\mathcal {D}}x{\mathcal {P}}(\xi ){\mathcal {D}}\xi ,} where the noise is again assumed Gaussian white with the normalized probability functional, P ( ξ ) ∝ e − ∫ d τ ( ξ ( τ ) ) 2 / 2 , ⟨ 1 ⟩ noise = 1 , {\displaystyle {\mathcal {P}}(\xi )\propto e^{-\int d\tau (\xi (\tau ))^{2}/2},\langle 1\rangle _{\text{noise}}=1,} and the functional integration over dynamical variables goes over closed paths, i.e., paths with periodic boundary conditions (p.b.c). The expression in the r.h.s. can be viewed as a redundant theory of the noise. Its "action" is independent of the dynamical variables. This independence can be interpreted as a local symmetry of the model with respect to all possible continuous deformations of the paths. This local symmetry can be gauge-fixed using the SDE as a gauge condition, which leads to the following representation of the Witten index: W = ∬ p . b . c J ( ξ ) ( ∏ τ δ D ( x ˙ ( τ ) − F ( x ( τ ) , ξ ( τ ) ) ) ) D x P ( ξ ) D ξ = ∬ p . b . c . e ( Q , Ψ ( ξ , Φ ) ) D Φ P ( ξ ) D ξ = ∬ p . b . c . e ( Q , Ψ ( Φ ) ) D Φ , {\displaystyle W=\iint _{p.b.c}J(\xi )\left(\prod \nolimits _{\tau }\delta ^{D}({\dot {x}}(\tau )-{\mathcal {F}}(x(\tau ),\xi (\tau )))\right){\mathcal {D}}x{\mathcal {P}}(\xi ){\mathcal {D}}\xi =\iint _{p.b.c.}e^{(Q,\Psi (\xi ,\Phi ))}{\mathcal {D}}\Phi {\mathcal {P}}(\xi ){\mathcal {D}}\xi =\iint _{p.b.c.}e^{(Q,\Psi (\Phi ))}{\mathcal {D}}\Phi ,} where the δ {\displaystyle \delta } -functional limits the integration only to solutions of SDE, which can be understood in this context as Gribov copies , J ( ξ ) {\displaystyle \textstyle J(\xi )} is the Jacobian compensating (up to a sign) the Jacobian from the δ {\displaystyle \delta } -functional, Φ = x B χ χ ¯ {\displaystyle \Phi =xB\chi {\bar {\chi }}} is the collection of fields that includes, besides the original field x {\displaystyle x} , the Faddeev–Popov ghosts χ , χ ¯ {\displaystyle \chi ,{\bar {\chi }}} and the Lagrange multiplier, B {\displaystyle B} , and Ψ ( ξ , Φ ) = ∫ d τ ı x ˙ ( τ ) − F ( τ ) {\displaystyle \Psi (\xi ,\Phi )=\int d\tau \imath _{{\dot {x}}(\tau )-{\mathcal {F}}(\tau )}} , with ı x ˙ = i χ ¯ j x ˙ j {\displaystyle \textstyle \imath _{\dot {x}}=i{\bar {\chi }}_{j}{\dot {x}}^{j}} being the pathintegral version of the interior multiplication, the topological and/or BRST supersymmetry is, Q = ∫ d τ ( χ i ( τ ) δ / δ x i ( τ ) + B i ( τ ) δ / δ χ ¯ i ( τ ) ) , {\displaystyle Q=\textstyle \int d\tau (\chi ^{i}(\tau )\delta /\delta x^{i}(\tau )+B_{i}(\tau )\delta /\delta {\bar {\chi }}_{i}(\tau )),} and, in the last equality, the noise is integrated out and Ψ = ∫ t ′ t d τ ( ı x ˙ − d ¯ ) ( τ ) {\displaystyle \textstyle \Psi =\int _{t'}^{t}d\tau (\imath _{\dot {x}}-{\bar {d}})(\tau )} is the gauge fermion with d ¯ = ı F − Θ ı G a L G a , and L G a = ( Q , ı G a ) {\textstyle \textstyle {\bar {d}}=\textstyle \imath _{F}-\Theta \imath _{G_{a}}L_{G_{a}},{\text{ and }}L_{G_{a}}=(Q,\imath _{G_{a}})} being the pathintegral version of Lie derivative.
The Parisi-Sourlas method is peculiar in the sense that it looks like gauge fixing of an empty theory -- the gauge fixing term is the only part of the action. This is a definitive feature of Witten-type topological field theories . Therefore, the Parisi-Sourlas method is a TFT [ 25 ] [ 24 ] [ 26 ] [ 28 ] [ 3 ] [ 29 ] and as a TFT it has objects that are topological invariants.
The Parisi-Sourlas functional is one of them. It is essentially a pathintegral representation of the Witten index. The topological character of W {\displaystyle W} is seen by noting that the gauge-fixing character of the functional ensures that only solutions of the SDE contribute. Each solution provides either positive or negative unity: W = ⟨ ∬ p . b . c D x J ( ξ ) ( ∏ τ δ D ( x ˙ ( τ ) − F ( x ( τ ) , ξ ( τ ) ) ) ) ⟩ noise = ⟨ I N ( ξ ) ⟩ noise , with I N ( ξ ) = ∑ solutions sign J ( ξ ) , {\displaystyle W=\langle \iint _{p.b.c}{\mathcal {D}}xJ(\xi )\left(\prod \nolimits _{\tau }\delta ^{D}({\dot {x}}(\tau )-{\mathcal {F}}(x(\tau ),\xi (\tau )))\right)\rangle _{\text{noise}}=\textstyle \left\langle I_{N}(\xi )\right\rangle _{\text{noise}},{\text{ with }}I_{N}(\xi )=\sum _{\text{solutions}}\operatorname {sign} J(\xi ),} being the index of the so-called Nicolai map, the map from the space of closed paths to the noise configurations making these closed paths solutions of the SDE, ξ a ( x ) = G i a ( x ˙ i − F i ) / ( 2 Θ ) 1 / 2 {\textstyle \xi ^{a}(x)=G_{i}^{a}({\dot {x}}^{i}-F^{i})/(2\Theta )^{1/2}} . The index of the map can be viewed as a realization of the Poincaré–Hopf theorem on the infinite-dimensional space of closed paths, with the SDE playing the role of the vector field and with the solutions of the SDE playing the role of the critical points with index sign J ( ξ ) = sign Det δ ξ / δ x . {\displaystyle \operatorname {sign} J(\xi )=\operatorname {sign} {\text{Det }}\delta \xi /\delta x.} I N ( ξ ) {\textstyle I_{N}(\xi )} is a topological object independent of the noise configuration. It equals its own stochastic average which, in turn, equals the Witten index .
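The finite-dimensional prototype of this counting is easy to exhibit. As an illustrative aside (the vector field below is an arbitrary choice of ours), on a circle the indices of the zeros of a generic vector field, which in one dimension reduce to the signs of the derivative at the zeros, must sum to the Euler characteristic of the circle, which is zero:

```python
import numpy as np

# Illustrative check of Poincare-Hopf on the circle (chi = 0): for a generic
# vector field F(theta), the index of each zero is sign(F'), and the indices
# cancel pairwise. The field below is an arbitrary smooth choice.
theta = np.linspace(0.0, 2 * np.pi, 200_001)
F = np.sin(3 * theta) + 0.4 * np.cos(theta)

s = np.sign(F)
crossings = np.nonzero(np.diff(s) != 0)[0]           # cells containing zeros
indices = np.sign(F[crossings + 1] - F[crossings])   # sign of F' at each zero
print(indices.astype(int), "sum =", int(indices.sum()))  # sum = 0 = chi(S^1)
```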
There are other classes of topological objects in TFTs including matrix elements on instantons. In fact, cohomological TFTs are often called intersection theory on instantons. From the STS viewpoint, instantons refer to quanta of transient dynamics, such as neuronal avalanches or solar flares, and complex or composite instantons represent nonlinear dynamical processes that occur in response to quenches -- external changes in parameters -- such as paper crumpling, protein folding, etc. The application of the TFT aspect of STS to instantons remains largely unexplored.
Just like the partition function of the noise that it represents, the Witten index contains no information about the system's dynamics and cannot be used directly to investigate the dynamics in the system. The information on the dynamics is contained in the stochastic evolution operator (SEO) -- the Parisi-Sourlas path integral with open boundary conditions. Using the explicit form of the action ( Q , Ψ ( Φ ) ) = ∫ t ′ t d τ ( i B x ˙ + i χ ˙ χ ¯ − H ) {\displaystyle (Q,\Psi (\Phi ))=\int _{t'}^{t}d\tau (iB{\dot {x}}+i{\dot {\chi }}{\bar {\chi }}-H)} , where H = ( Q , d ¯ ) {\displaystyle H=(Q,{\bar {d}})} , the operator representation of the SEO can be derived as ∬ x χ ( t ′ ) = x i χ i x χ ( t ) = x f χ f e ∫ t ′ t d τ ( i B x ˙ + i χ ˙ χ ¯ − H ) D Φ = ⟨ x f χ f | e − ( t − t ′ ) H ^ | x i χ i ⟩ , {\displaystyle \iint _{{x\chi (t')=x_{i}\chi _{i}} \atop {x\chi (t)=x_{f}\chi _{f}}}e^{\int _{t'}^{t}d\tau (iB{\dot {x}}+i{\dot {\chi }}{\bar {\chi }}-H)}{\mathcal {D}}\Phi =\langle x_{f}\chi _{f}|e^{-(t-t'){\hat {H}}}|x_{i}\chi _{i}\rangle ,} where the infinitesimal SEO H ^ = H ( x B χ χ ¯ ) | B , χ ¯ → B ^ , χ ¯ ^ {\displaystyle {\hat {H}}=\left.H(xB\chi {\bar {\chi }})\right|_{B,{\bar {\chi }}\to {\hat {B}},{\hat {\bar {\chi }}}}} , with i B ^ i = ∂ / ∂ x i , i χ ¯ ^ i = ∂ / ∂ χ i {\displaystyle i{\hat {B}}_{i}=\partial /\partial x^{i},i{\hat {\bar {\chi }}}_{i}=\partial /\partial \chi ^{i}} . The explicit form of the SEO contains an ambiguity arising from the non-commutativity of momentum and position operators: B x {\displaystyle Bx} in the path integral representation admits an entire α {\displaystyle \alpha } -family of interpretations in the operator representation: α B ^ x ^ + ( 1 − α ) x ^ B ^ . {\displaystyle \alpha {\hat {B}}{\hat {x}}+(1-\alpha ){\hat {x}}{\hat {B}}.} The same ambiguity arises in the theory of SDEs, where different choices of α {\displaystyle \alpha } are referred to as different interpretations of SDEs with α = 1 , 1 / 2 , 0 {\displaystyle \alpha =1,1/2,0} being respectively the Ito , Stratonovich , and Kolmogorov interpretations.
This ambiguity can be removed by additional conditions. In quantum theory, the condition is Hermiticity of Hamiltonian, which is satisfied by the Weyl symmetrization rule corresponding to α = 1 / 2 {\displaystyle \alpha =1/2} . In STS, the condition is that the SEO equals the GTO, which is also achieved at α = 1 / 2 {\displaystyle \alpha =1/2} . In other words, only the Stratonovich interpretation of SDEs is consistent with the dynamical systems theory approach . Other interpretations differ by the shifted flow vector field in the corresponding SEO, F α = F − Θ ( 2 α − 1 ) ( G a ⋅ ∂ ) G a {\displaystyle F_{\alpha }=F-\Theta (2\alpha -1)(G_{a}\cdot \partial )G_{a}} .
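The interpretation dependence can be seen numerically. In the hedged sketch below (the test SDE dx = x dW and all parameters are our assumptions), an Euler–Maruyama scheme converges to the Ito solution (α = 1), while a Heun-type predictor-corrector converges to the Stratonovich solution (α = 1/2); for this SDE the exact values of E[ln x(T)] are −T/2 and 0, respectively, and the gap between them is precisely a shifted drift of the kind quoted above:

```python
import numpy as np

# Hedged sketch: the SAME formal SDE, dx = x dW, integrated with two schemes.
# Euler-Maruyama converges to the Ito solution (alpha = 1); the Heun
# predictor-corrector converges to the Stratonovich solution (alpha = 1/2).
# Exact results for E[ln x(T)]: Ito -> -T/2, Stratonovich -> 0.
rng = np.random.default_rng(1)
paths, steps, T = 10_000, 400, 1.0
dt = T / steps
dW = rng.normal(0.0, np.sqrt(dt), (paths, steps))

x_ito = np.ones(paths)
x_str = np.ones(paths)
for n in range(steps):
    w = dW[:, n]
    x_ito = x_ito + x_ito * w                 # Euler-Maruyama (Ito)
    pred = x_str + x_str * w                  # Heun predictor
    x_str = x_str + 0.5 * (x_str + pred) * w  # Heun corrector (Stratonovich)

print(np.log(x_ito).mean())   # ~ -0.5  (= -T/2)
print(np.log(x_str).mean())   # ~  0.0
```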
The wavefunctions of STS depend not only on the original dynamical variables but also on their supersymmetric partners χ {\displaystyle \chi } . These Grassmann numbers, or fermions, represent the differentials of the wavefunctions understood as differential forms. [ 3 ] The fermions are intrinsically linked to stochastic Lyapunov exponents [ 2 ] that define the butterfly effect . Therefore, it is believed that the effective field theory for these fermions -- referred to as goldstinos in the context of the spontaneous TS breaking -- is essentially a field theory of the butterfly effect.
The response of the model can be analyzed using the concept of generating functional : G ( η ) = − log lim T → ∞ ⟨ g | M ^ T / 2 , − T / 2 ( η ) | g ⟩ , {\displaystyle G(\eta )=-\log \lim _{T\to \infty }\langle g|{\hat {M}}_{T/2,-T/2}(\eta )|g\rangle ,} where η {\displaystyle \eta } denotes external probing fields, M ^ T / 2 , − T / 2 ( η ) {\displaystyle {\hat {M}}_{T/2,-T/2}(\eta )} is the perturbed SEO/GTO, and | g ⟩ {\displaystyle |g\rangle } is the ground state . The ground state must be selected from the eigenstates with the smallest real part of the eigenvalue to ensure the stability of the model's response, Re H g = min α Re H α . {\displaystyle {\text{Re }}H_{g}=\min \nolimits _{\alpha }{\text{Re }}H_{\alpha }.}
The functional dependence of the generating functional on the probing fields describes how the ground state reacts to external perturbations. Under conditions of spontaneously broken TS, there exists another eigenstate with the same eigenvalue, H g {\displaystyle H_{g}} . In line with the Goldstone theorem , this degeneracy of the ground state implies the presence of a gapless excitation that must mediate long-range response. This picture qualitatively explains the widespread occurrence of long-range behavior in chaotic dynamics known as 1/f noise . A more rigorous theoretical explanation of 1/f noise remains an open problem.
When H g {\displaystyle H_{g}} is complex, pseudo-time-reversal symmetry is also spontaneously broken. In the context of kinematic dynamo , this situation corresponds to rotation of the galactic magnetic field. [ 33 ] The implications of complex H g {\displaystyle H_{g}} in a more general setting remain unexplored.
Since the late 1980s, [ 34 ] [ 35 ] the concept of the Edge of chaos has emerged -- a finite-width phase at the boundary of conventional chaos, where dynamics is often dominated by power-law distributed instantonic processes such as solar flares, earthquakes, and neuronal avalanches. [ 36 ] This phase has also been recognized as potentially significant for information processing. [ 37 ] [ 38 ] Its phenomenological understanding is largely based on the concepts of self-adaptation and self-organization . [ 39 ] [ 40 ]
STS offers the following explanation for the Edge of chaos. [ 41 ] In the presence of noise, the TS can be spontaneously broken not only by the non-integrability of the flow vector field, as in deterministic chaos, but also by noise-induced instantons. [ 42 ] Under this condition, the dynamics must be dominated by instantons with power-law distributions, as dictated by the Goldstone theorem. In the deterministic limit, the noise-induced instantons vanish, causing the phase hosting this type of noise-induced dynamics to collapse onto the boundary of the deterministic chaos. | https://en.wikipedia.org/wiki/Supersymmetric_theory_of_stochastic_dynamics |
Supersymmetry is a theoretical framework in physics that suggests the existence of a symmetry between particles with integer spin ( bosons ) and particles with half-integer spin ( fermions ). It proposes that for every known particle, there exists a partner particle with different spin properties. [ 1 ] There have been multiple experiments on supersymmetry that have failed to provide evidence that it exists in nature . [ 2 ] If evidence is found, supersymmetry could help explain certain phenomena, such as the nature of dark matter and the hierarchy problem in particle physics.
A supersymmetric theory is a theory in which the equations for force and the equations for matter are identical. In theoretical and mathematical physics , any theory with this property has the principle of supersymmetry (SUSY). Dozens of supersymmetric theories exist. [ 3 ] In theory, supersymmetry is a type of spacetime symmetry between two basic classes of particles: bosons , which have an integer-valued spin and follow Bose–Einstein statistics , and fermions , which have a half-integer-valued spin and follow Fermi–Dirac statistics . [ 4 ] The names of bosonic partners of fermions are prefixed with s- , because they are scalar particles . For example, if the electron existed in a supersymmetric theory, then there would be a particle called a selectron (superpartner electron), a bosonic partner of the electron. [ 5 ]
In supersymmetry, each particle from the class of fermions would have an associated particle in the class of bosons, and vice versa, known as a superpartner . The spin of a particle's superpartner is different by a half-integer. In the simplest supersymmetry theories, with perfectly " unbroken " supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. More complex supersymmetry theories have a spontaneously broken symmetry , allowing superpartners to differ in mass. [ 6 ] [ 7 ] [ 8 ]
Supersymmetry has various applications to different areas of physics, such as quantum mechanics , statistical mechanics , quantum field theory , condensed matter physics , nuclear physics , optics , stochastic dynamics , astrophysics , quantum gravity , and cosmology . Supersymmetry has also been applied to high-energy physics , where a supersymmetric extension of the Standard Model is a possible candidate for physics beyond the Standard Model . However, no supersymmetric extensions of the Standard Model have been experimentally verified, and some physicists are saying the theory is dead. [ 9 ] [ 2 ]
A supersymmetry relating mesons and baryons was first proposed, in the context of hadronic physics, by Hironari Miyazawa in 1966. This supersymmetry did not involve spacetime, that is, it concerned internal symmetry, and was broken badly. Miyazawa's work was largely ignored at the time. [ 10 ] [ 11 ] [ 12 ] [ 13 ]
J. L. Gervais and B. Sakita (in 1971), [ 14 ] Yu. A. Golfand and E. P. Likhtman (also in 1971), and D. V. Volkov and V. P. Akulov (1972), [ 15 ] [ 16 ] [ 17 ] independently rediscovered supersymmetry in the context of quantum field theory , a radically new type of symmetry of spacetime and fundamental fields, which establishes a relationship between elementary particles of different quantum nature, bosons and fermions, and unifies spacetime and internal symmetries of microscopic phenomena. Supersymmetry with a consistent Lie-algebraic graded structure on which the Gervais−Sakita rediscovery was based directly first arose in 1971 in the context of an early version of string theory by Pierre Ramond , John H. Schwarz and André Neveu . [ 18 ] [ 19 ]
In 1974, Julius Wess and Bruno Zumino [ 20 ] identified the characteristic renormalization features of four-dimensional supersymmetric field theories, which identified them as remarkable QFTs, and they and Abdus Salam and their fellow researchers introduced early particle physics applications. The mathematical structure of supersymmetry ( graded Lie superalgebras ) has subsequently been applied successfully to other topics of physics, ranging from nuclear physics , [ 21 ] [ 22 ] critical phenomena , [ 23 ] quantum mechanics to statistical physics , and supersymmetry remains a vital part of many proposed theories in many branches of physics.
In particle physics , the first realistic supersymmetric version of the Standard Model was proposed in 1977 by Pierre Fayet and is known as the Minimal Supersymmetric Standard Model or MSSM for short. It was proposed to solve, amongst other things, the hierarchy problem .
The term supersymmetry was coined by Abdus Salam and John Strathdee in 1974 as a simplification of the term super-gauge symmetry used by Wess and Zumino, although Zumino also used the same term at around the same time. [ 24 ] [ 25 ] The term supergauge was in turn coined by Neveu and Schwarz in 1971 when they devised supersymmetry in the context of string theory. [ 19 ] [ 26 ]
One reason that physicists explored supersymmetry is that it offers an extension to the more familiar symmetries of quantum field theory. These symmetries are grouped into the Poincaré group and internal symmetries, and the Coleman–Mandula theorem showed that under certain assumptions, the symmetries of the S-matrix must be a direct product of the Poincaré group with a compact internal symmetry group or, if there is no mass gap , the conformal group with a compact internal symmetry group. In 1971 Golfand and Likhtman were the first to show that the Poincaré algebra can be extended through introduction of four anticommuting spinor generators (in four dimensions), which later became known as supercharges. In 1975, the Haag–Łopuszański–Sohnius theorem analyzed all possible superalgebras in the general form, including those with an extended number of the supergenerators and central charges . This extended super-Poincaré algebra paved the way for obtaining a very large and important class of supersymmetric field theories.
Traditional symmetries of physics are generated by objects that transform by the tensor representations of the Poincaré group and internal symmetries. Supersymmetries, however, are generated by objects that transform by the spin representations . According to the spin-statistics theorem , bosonic fields commute while fermionic fields anticommute . Combining the two kinds of fields into a single algebra requires the introduction of a Z 2 -grading under which the bosons are the even elements and the fermions are the odd elements. Such an algebra is called a Lie superalgebra .
The simplest supersymmetric extension of the Poincaré algebra is the Super-Poincaré algebra . Expressed in terms of two Weyl spinors , it has the following anti-commutation relation: {\displaystyle \{Q_{\alpha },{\bar {Q}}_{\dot {\beta }}\}=2(\sigma ^{\mu })_{\alpha {\dot {\beta }}}P_{\mu }}
and all other anti-commutation relations between the Q s and commutation relations between the Q s and P s vanish. In the above expression P μ = − i ∂ μ are the generators of translation and σ μ are the Pauli matrices .
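A small numerical aside (assuming the common convention σ^μ = (1, σ¹, σ², σ³) for the two-component notation above, which the text does not spell out): since the left-hand side is an anticommutator of operators with their adjoints, the 2×2 matrix 2σ^μ p_μ must be positive semidefinite on physical states, and its eigenvalues 2(p⁰ ∓ |p⃗|) show that supersymmetry forces the energy to be bounded below by the momentum:

```python
import numpy as np

# Hedged check (convention sigma^mu = (identity, Pauli matrices) assumed):
# the anticommutator matrix 2 sigma^mu p_mu has eigenvalues 2(p0 -+ |p|),
# so positivity of {Q, Q^dagger} requires p0 >= |p|.
sig = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(2)
p_vec = rng.normal(size=3)
for p0 in (np.linalg.norm(p_vec), 2.0 * np.linalg.norm(p_vec)):
    M = 2 * (p0 * sig[0] - sum(p_vec[i] * sig[i + 1] for i in range(3)))
    print(np.round(np.linalg.eigvalsh(M), 6))   # all eigenvalues >= 0
```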
There are representations of a Lie superalgebra that are analogous to representations of a Lie algebra. Each Lie algebra has an associated Lie group and a Lie superalgebra can sometimes be extended into representations of a Lie supergroup .
Supersymmetric quantum mechanics adds the SUSY superalgebra to quantum mechanics as opposed to quantum field theory. Supersymmetric quantum mechanics often becomes relevant when studying the dynamics of supersymmetric solitons , and due to the simplified nature of having fields which are only functions of time (rather than space-time), a great deal of progress has been made in this subject and it is now studied in its own right.
SUSY quantum mechanics involves pairs of Hamiltonians that share a particular mathematical relationship, called partner Hamiltonians . (The potential energy terms which occur in the Hamiltonians are then known as partner potentials .) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy. This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy.
In 2021, supersymmetric quantum mechanics was applied to option pricing and the analysis of markets in finance , [ 27 ] and to financial networks . [ 28 ]
In quantum field theory, supersymmetry is motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become mathematically tractable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergravity . Another theoretically appealing property of supersymmetry is that it offers the only "loophole" to the Coleman–Mandula theorem , which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories with very general assumptions. The Haag–Łopuszański–Sohnius theorem demonstrates that supersymmetry is the only way spacetime and internal symmetries can be combined consistently. [ 29 ]
While supersymmetry has not been discovered at high energy (see the section Supersymmetry in particle physics ), supersymmetry was found to be effectively realized at the intermediate energy of hadronic physics where baryons and mesons are superpartners. An exception is the pion that appears as a zero mode in the mass spectrum and is thus protected by the supersymmetry: It has no baryonic partner. [ 30 ] [ 31 ] The realization of this effective supersymmetry is readily explained in quark–diquark models : Because two different color charges close together (e.g., blue and red) appear under coarse resolution as the corresponding anti-color (e.g. anti-green), a diquark cluster viewed with coarse resolution (i.e., at the energy-momentum scale used to study hadron structure) effectively appears as an antiquark. Therefore, a baryon containing 3 valence quarks, of which two tend to cluster together as a diquark, behaves like a meson.
SUSY concepts have provided useful extensions to the WKB approximation . Additionally, SUSY has been applied to disorder-averaged systems, both quantum and non-quantum (through statistical mechanics), the Fokker–Planck equation being an example of a non-quantum theory. The 'supersymmetry' in all these systems arises from the fact that one is modelling one particle and as such the 'statistics' do not matter. The use of the supersymmetry method provides a mathematically rigorous alternative to the replica trick , but only in non-interacting systems, which attempts to address the so-called 'problem of the denominator' under disorder averaging. For more on the applications of supersymmetry in condensed matter physics see Efetov (1997). [ 32 ]
In 2021, a group of researchers showed that, in theory, N = ( 0 , 1 ) {\displaystyle N=(0,1)} SUSY could be realised at the edge of a Moore–Read quantum Hall state. [ 33 ] However, to date, no experiments have yet been done to realise it at an edge of a Moore–Read state. In 2022, a different group of researchers created a computer simulation of atoms in one dimension that had supersymmetric topological quasiparticles . [ 34 ]
In 2013, integrated optics was found [ 35 ] to provide a fertile ground on which certain ramifications of SUSY can be explored in readily-accessible laboratory settings. Making use of the analogous mathematical structure of the quantum-mechanical Schrödinger equation and the wave equation governing the evolution of light in one-dimensional settings, one may interpret the refractive index distribution of a structure as a potential landscape in which optical wave packets propagate. In this manner, a new class of functional optical structures with possible applications in phase matching , mode conversion [ 36 ] and space-division multiplexing becomes possible. SUSY transformations have been also proposed as a way to address inverse scattering problems in optics and as a one-dimensional transformation optics . [ 37 ]
All stochastic (partial) differential equations, the models for all types of continuous time dynamical systems, possess topological supersymmetry. [ 38 ] [ 39 ] In the operator representation of stochastic evolution, the topological supersymmetry is the exterior derivative which is commutative with the stochastic evolution operator defined as the stochastically averaged pullback induced on differential forms by SDE-defined diffeomorphisms of the phase space . The topological sector of the so-emerging supersymmetric theory of stochastic dynamics can be recognized as the Witten-type topological field theory .
The meaning of the topological supersymmetry in dynamical systems is the preservation of the phase space continuity—infinitely close points will remain close during continuous time evolution even in the presence of noise. When the topological supersymmetry is broken spontaneously, this property is violated in the limit of the infinitely long temporal evolution and the model can be said to exhibit (the stochastic generalization of) the butterfly effect . From a more general perspective, spontaneous breakdown of the topological supersymmetry is the theoretical essence of the ubiquitous dynamical phenomenon variously known as chaos , turbulence , self-organized criticality etc. The Goldstone theorem explains the associated emergence of the long-range dynamical behavior that manifests itself as 1 / f noise , butterfly effect , and the scale-free statistics of sudden (instantonic) processes, such as earthquakes, neuroavalanches, and solar flares, known as the Zipf's law and the Richter scale .
SUSY is also sometimes studied mathematically for its intrinsic properties. This is because it describes complex fields satisfying a property known as holomorphy , which allows holomorphic quantities to be exactly computed. This makes supersymmetric models useful " toy models " of more realistic theories. A prime example of this has been the demonstration of S-duality in four-dimensional gauge theories [ 40 ] that interchanges particles and monopoles .
The proof of the Atiyah–Singer index theorem is much simplified by the use of supersymmetric quantum mechanics.
Supersymmetry is an integral part of string theory , a possible theory of everything . There are two types of string theory, supersymmetric string theory or superstring theory , and non-supersymmetric string theory. By definition of superstring theory, supersymmetry is required in superstring theory at some level. However, even in non-supersymmetric string theory, a type of supersymmetry called misaligned supersymmetry is still required in the theory in order to ensure no physical tachyons appear. [ 41 ] [ 42 ] Any string theories without some kind of supersymmetry, such as bosonic string theory and the E 7 × E 7 {\displaystyle E_{7}\times E_{7}} , S U ( 16 ) {\displaystyle SU(16)} , and E 8 {\displaystyle E_{8}} heterotic string theories , will have a tachyon and therefore the spacetime vacuum itself would be unstable and would decay into some tachyon-free string theory usually in a lower spacetime dimension. [ 43 ] There is no experimental evidence that either supersymmetry or misaligned supersymmetry holds in our universe, and many physicists have moved on from supersymmetry and string theory entirely due to the non-detection of supersymmetry at the LHC. [ 44 ] [ 45 ]
Despite the null results for supersymmetry at the LHC so far, some particle physicists have nevertheless moved to string theory in order to resolve the naturalness crisis for certain supersymmetric extensions of the Standard Model. [ 46 ] According to the particle physicists, there exists a concept of "stringy naturalness" in string theory , [ 47 ] where the string theory landscape could have a power law statistical pull on soft SUSY breaking terms to large values (depending on the number of hidden sector SUSY breaking fields contributing to the soft terms). [ 48 ] If this is coupled with an anthropic requirement that contributions to the weak scale not exceed a factor between 2 and 5 from its measured value (as argued by Agrawal et al.), [ 49 ] then the Higgs mass is pulled up to the vicinity of 125 GeV while most sparticles are pulled to values beyond the current reach of LHC. [ 50 ] (The Higgs was determined to have a mass of 125 GeV ±0.15 GeV in 2022.) An exception occurs for higgsinos which gain mass not from SUSY breaking but rather from whatever mechanism solves the SUSY mu problem. Light higgsino pair production in association with hard initial state jet radiation leads to a soft opposite-sign dilepton plus jet plus missing transverse energy signal. [ 51 ]
In particle physics, a supersymmetric extension of the Standard Model is a possible candidate for undiscovered particle physics , and is seen by some physicists as an elegant solution to many current problems in particle physics: if confirmed correct, it could resolve various areas where current theories are believed to be incomplete and where their limitations are well established. [ 52 ] [ 53 ] In particular, one supersymmetric extension of the Standard Model , the Minimal Supersymmetric Standard Model (MSSM), became popular in theoretical particle physics, as the Minimal Supersymmetric Standard Model is the simplest supersymmetric extension of the Standard Model that could resolve major hierarchy problems within the Standard Model, by guaranteeing that quadratic divergences of all orders will cancel out in perturbation theory . If a supersymmetric extension of the Standard Model is correct, superpartners of the existing elementary particles would be new and undiscovered particles and supersymmetry is expected to be spontaneously broken.
There is no experimental evidence that a supersymmetric extension to the Standard Model is correct, or whether or not other extensions to current models might be more accurate. It is only since around 2010 that particle accelerators specifically designed to study physics beyond the Standard Model have become operational (i.e. the Large Hadron Collider (LHC)), and it is not known where exactly to look, nor the energies required for a successful search. However, the negative results from the LHC since 2010 have already ruled out some supersymmetric extensions to the Standard Model, and many physicists believe that the Minimal Supersymmetric Standard Model , while not ruled out, is no longer able to fully resolve the hierarchy problem. [ 54 ]
Incorporating supersymmetry into the Standard Model requires doubling the number of particles since there is no way that any of the particles in the Standard Model can be superpartners of each other. With the addition of new particles, there are many possible new interactions. The simplest possible supersymmetric model consistent with the Standard Model is the Minimal Supersymmetric Standard Model (MSSM) which can include the necessary additional new particles that are able to be superpartners of those in the Standard Model.
One of the original motivations for the Minimal Supersymmetric Standard Model came from the hierarchy problem . Due to the quadratically divergent contributions to the Higgs mass squared in the Standard Model, the quantum mechanical interactions of the Higgs boson cause a large renormalization of the Higgs mass, and unless there is an accidental cancellation, the natural size of the Higgs mass is the greatest scale possible. Furthermore, the electroweak scale receives enormous Planck-scale quantum corrections. The observed hierarchy between the electroweak scale and the Planck scale must be achieved with extraordinary fine tuning . This problem is known as the hierarchy problem.
Supersymmetry close to the electroweak scale , such as in the Minimal Supersymmetric Standard Model, would solve the hierarchy problem that afflicts the Standard Model. [ 55 ] It would reduce the size of the quantum corrections by having automatic cancellations between fermionic and bosonic Higgs interactions, and Planck-scale quantum corrections cancel between partners and superpartners (owing to a minus sign associated with fermionic loops). The hierarchy between the electroweak scale and the Planck scale would be achieved in a natural manner, without extraordinary fine-tuning. If supersymmetry were restored at the weak scale, then the Higgs mass would be related to supersymmetry breaking which can be induced from small non-perturbative effects explaining the vastly different scales in the weak interactions and gravitational interactions.
Another motivation for the Minimal Supersymmetric Standard Model comes from grand unification , the idea that the gauge symmetry groups should unify at high-energy. In the Standard Model, however, the weak , strong and electromagnetic gauge couplings fail to unify at high energy. In particular, the renormalization group evolution of the three gauge coupling constants of the Standard Model is somewhat sensitive to the present particle content of the theory. These coupling constants do not quite meet together at a common energy scale if we run the renormalization group using the Standard Model. [ 56 ] [ 57 ] After incorporating minimal SUSY at the electroweak scale, the running of the gauge couplings are modified, and joint convergence of the gauge coupling constants is projected to occur at approximately 10 16 GeV . [ 56 ] The modified running also provides a natural mechanism for radiative electroweak symmetry breaking .
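This running can be sketched at one loop. In the illustration below (a hedged sketch: the inputs are rounded values of the measured couplings at the Z mass in the GUT normalization for U(1)_Y, and sparticle thresholds and higher-loop effects are ignored), each inverse coupling runs linearly in ln μ with slope −b_i/(2π), with b = (41/10, −19/6, −7) in the Standard Model and b = (33/5, 1, −3) in the MSSM:

```python
import numpy as np

# One-loop running sketch (rounded inputs at M_Z; GUT-normalized U(1)_Y;
# thresholds and higher loops ignored):
#   alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i/(2*pi) * ln(mu / M_Z)
MZ = 91.19                                    # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])    # U(1)_Y, SU(2)_L, SU(3)_c
b = {"SM": np.array([41 / 10, -19 / 6, -7.0]),
     "MSSM": np.array([33 / 5, 1.0, -3.0])}

mu = 2.0e16                                   # GeV, near the projected GUT scale
for name, bi in b.items():
    alpha_inv = alpha_inv_MZ - bi / (2 * np.pi) * np.log(mu / MZ)
    print(name, np.round(alpha_inv, 1))
# SM:   the three inverse couplings miss each other by several units
# MSSM: all three come out nearly equal (about 24), i.e. they roughly unify
```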
In many supersymmetric extensions of the Standard Model, such as the Minimal Supersymmetric Standard Model, there is a heavy stable particle (such as the neutralino ) which could serve as a weakly interacting massive particle (WIMP) dark matter candidate. The existence of a supersymmetric dark matter candidate is related closely to R-parity . Supersymmetry at the electroweak scale (augmented with a discrete symmetry) typically provides a candidate dark matter particle at a mass scale consistent with thermal relic abundance calculations. [ 58 ] [ 59 ]
The standard paradigm for incorporating supersymmetry into a realistic theory is to have the underlying dynamics of the theory be supersymmetric, but the ground state of the theory does not respect the symmetry and supersymmetry is broken spontaneously . The supersymmetry breaking can not be accomplished by the particles of the MSSM as they currently appear. This means that there is a new sector of the theory that is responsible for the breaking. The only constraint on this new sector is that it must break supersymmetry and must give superparticles TeV scale masses. There are many models that can do this and most of their details do not matter. In order to parameterize the relevant features of supersymmetry breaking, arbitrary soft SUSY breaking terms are added to the theory which break SUSY explicitly but could never arise from a complete theory of supersymmetry breaking.
[Table of MSSM particles and superpartners: Standard Model fermions (spin = half-integer) pair with bosonic sparticles (spin = integer) and vice versa: three generations of squarks (spin 0); three generations of sleptons (spin 0); several kinds of gauginos (spin 1/2), occurring in charged and neutral combinations; and the gravitino ( G ~ {\displaystyle {\tilde {G}}} , spin 3/2), superpartner of the (hypothetical) graviton.]
All of these supersymmetric partners (sparticles) are hypothetical and have not been observed experimentally. They are predicted by various supersymmetric extensions of the Standard Model.
SUSY extensions of the standard model are constrained by a variety of experiments, including measurements of low-energy observables, for example the anomalous magnetic moment of the muon at Fermilab ; the WMAP dark matter density measurement and direct detection experiments, for example XENON-100 and LUX ; and by particle collider experiments, including B-physics , Higgs phenomenology and direct searches for superpartners (sparticles) at the Large Electron–Positron Collider , Tevatron and the LHC . In fact, CERN publicly states that if a supersymmetric model of the Standard Model "is correct, supersymmetric particles should appear in collisions at the LHC." [ 60 ]
Historically, the tightest limits were from direct production at colliders. The first mass limits for squarks and gluinos were made at CERN by the UA1 experiment and the UA2 experiment at the Super Proton Synchrotron . LEP later set very strong limits, [ 61 ] which in 2006 were extended by the D0 experiment at the Tevatron. [ 62 ] [ 63 ] From 2003 to 2015, WMAP's and Planck 's dark matter density measurements have strongly constrained supersymmetric extensions of the Standard Model, which, if they explain dark matter, have to be tuned to invoke a particular mechanism to sufficiently reduce the neutralino density.
Prior to the beginning of the LHC, in 2009, fits of available data to CMSSM and NUHM1 indicated that squarks and gluinos were most likely to have masses in the 500 to 800 GeV range, though values as high as 2.5 TeV were allowed with low probabilities. Neutralinos and sleptons were expected to be quite light, with the lightest neutralino and the lightest stau most likely to be found between 100 and 150 GeV. [ 64 ]
The first runs of the LHC surpassed existing experimental limits from the Large Electron–Positron Collider and Tevatron and partially excluded the aforementioned expected ranges. [ 65 ] In 2011–12, the LHC discovered a Higgs boson with a mass of about 125 GeV, and with couplings to fermions and bosons which are consistent with the Standard Model. The MSSM predicts that the mass of the lightest Higgs boson should not be much higher than the mass of the Z boson , and, in the absence of fine tuning (with the supersymmetry breaking scale on the order of 1 TeV), should not exceed 135 GeV. [ 66 ] The LHC found no previously unknown particles other than the Higgs boson which was already suspected to exist as part of the Standard Model, and therefore no evidence for any supersymmetric extension of the Standard Model. [ 52 ] [ 53 ]
Indirect methods include the search for a permanent electric dipole moment (EDM) in the known Standard Model particles, which can arise when the Standard Model particle interacts with the supersymmetric particles. The current best constraint on the electron electric dipole moment puts it at smaller than 10^−28 e·cm, equivalent to a sensitivity to new physics at the TeV scale and matching that of the current best particle colliders. [ 67 ] A permanent EDM in any fundamental particle points towards time-reversal violating physics, and therefore also CP-symmetry violation via the CPT theorem . Such EDM experiments are also much more scalable than conventional particle accelerators and offer a practical alternative for detecting physics beyond the standard model as accelerator experiments become increasingly costly and complicated to maintain. The current best limit for the electron's EDM has already reached a sensitivity to rule out so-called 'naive' versions of supersymmetric extensions of the Standard Model. [ 68 ]
Research in the late 2010s and early 2020s from experimental data on the cosmological constant , LIGO noise , and pulsar timing suggests that it is very unlikely that there are any new particles with masses much higher than those found in the Standard Model or accessible at the LHC. [ 69 ] [ 70 ] [ 71 ] However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics at the TeV scale. [ 69 ]
The negative findings in the experiments disappointed many physicists, who believed that supersymmetric extensions of the Standard Model (and other theories relying upon it) were by far the most promising theories for "new" physics beyond the Standard Model, and had hoped for signs of unexpected results from the experiments. [ 9 ] [ 2 ] In particular, the LHC result seems problematic for the Minimal Supersymmetric Standard Model, as the value of 125 GeV is relatively large for the model and can only be achieved with large radiative loop corrections from top squarks , which many theorists consider to be "unnatural" (see naturalness and fine tuning). [ 72 ]
In response to the so-called "naturalness crisis" in the Minimal Supersymmetric Standard Model, some researchers have abandoned naturalness and the original motivation to solve the hierarchy problem naturally with supersymmetry, while other researchers have moved on to other supersymmetric models such as split supersymmetry . [ 54 ] [ 73 ] Still others have moved to string theory as a result of the naturalness crisis. [ 74 ] [ 47 ] [ 48 ] [ 50 ] Former enthusiastic supporter Mikhail Shifman went as far as urging the theoretical community to search for new ideas and accept that supersymmetry was a failed theory in particle physics. [ 75 ] However, some researchers suggested that this "naturalness" crisis was premature because various calculations were too optimistic about the limits of masses which would allow a supersymmetric extension of the Standard Model as a solution. [ 76 ] [ 77 ]
Supersymmetry appears in many related contexts of theoretical physics. It is possible to have multiple supersymmetries and also have supersymmetric extra dimensions.
It is possible to have more than one kind of supersymmetry transformation. Theories with more than one supersymmetry transformation are known as extended supersymmetric theories. The more supersymmetry a theory has, the more constrained are the field content and interactions. Typically the number of copies of a supersymmetry is a power of 2 (1, 2, 4, 8...). In four dimensions, a spinor has four degrees of freedom, so the minimal number of supersymmetry generators in four dimensions is four, and having eight copies of supersymmetry means that there are 32 supersymmetry generators.
The maximal number of supersymmetry generators possible is 32. Theories with more than 32 supersymmetry generators automatically have massless fields with spin greater than 2. It is not known how to make massless fields with spin greater than two interact (a restriction related to the Weinberg–Witten theorem ), so the maximal number of supersymmetry generators considered is 32. This corresponds to an N = 8 [ clarification needed ] supersymmetry theory. Theories with 32 supersymmetries automatically have a graviton .
For four dimensions there are the following theories, with the corresponding multiplets [ 78 ] (CPT adds a copy, whenever they are not invariant under such symmetry):
It is possible to have supersymmetry in dimensions other than four. Because the properties of spinors change drastically between different dimensions, each dimension has its own characteristics. In d dimensions, the size of spinors is approximately 2^(d/2) or 2^((d−1)/2). Since the maximum number of supersymmetries is 32, the greatest number of dimensions in which a supersymmetric theory can exist is eleven. [ citation needed ]
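A small sketch of the counting behind that claim; the minimal-spinor dimensions below are standard textbook values hardcoded for illustration, an assumption of this example rather than data from the text:

```python
# Real dimension of the minimal spinor in d-dimensional Minkowski
# spacetime (values assumed from standard tables, using Majorana/Weyl
# conditions where available).
min_spinor_real_dim = {4: 4, 5: 8, 6: 8, 7: 16, 8: 16,
                       9: 16, 10: 16, 11: 32, 12: 64}

# One copy of minimal supersymmetry contributes this many real
# supercharges; demanding no more than 32 bounds the dimension.
for d, q in sorted(min_spinor_real_dim.items()):
    verdict = "allowed" if q <= 32 else "excluded (would force spin > 2)"
    print(f"d = {d:2d}: minimal spinor has {q:2d} components -> {verdict}")
# d = 11 is the largest dimension whose minimal spinor fits within
# 32 supercharges, matching the claim above.
```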
Fractional supersymmetry is a generalization of the notion of supersymmetry in which the minimal positive amount of spin does not have to be 1/2 but can be an arbitrary 1/ N for an integer value of N . Such a generalization is possible in two or fewer spacetime dimensions. | https://en.wikipedia.org/wiki/Supersymmetry |
In particle physics , supersymmetry breaking or SUSY breaking is a process via which a seemingly non- supersymmetric physics emerges from a supersymmetric theory. Assuming a breaking of supersymmetry is a necessary step to reconcile supersymmetry with experimental observations. [ 1 ]
In unbroken supersymmetry, superpartner particles would have masses equal to those of the corresponding regular particles; with supersymmetry breaking they become much heavier. In supergravity , this results in a slightly modified counterpart of the Higgs mechanism where the gravitinos become massive. [ citation needed ]
Supersymmetry breaking is relevant in the domain of applicability of stochastic differential equations , which includes classical physics, and encompasses [ clarification needed ] such nonlinear dynamical phenomena as chaos , turbulence , and pink noise . [ citation needed ] Various mechanisms for this breaking have been discussed by physicists, including soft SUSY breaking and types of spontaneous symmetry breaking . [ 1 ] [ 2 ] [ 3 ]
The energy scale where supersymmetry breaking takes place is known as the supersymmetry breaking scale . In the scenario known as low energy supersymmetry , in which supersymmetry fully solves the hierarchy problem , this scale should not be far from 1000 GeV , and therefore should be accessible using the Large Hadron Collider and future accelerators.
However, supersymmetry may also be broken at high energy scales. Nature does not have to be [ clarification needed ] supersymmetric at any scale.
| https://en.wikipedia.org/wiki/Supersymmetry_breaking |
" Supertoys Last All Summer Long " is a science fiction short story by Brian Aldiss , first published in the UK edition of Harper's Bazaar , in its December 1969 issue. [ 1 ] The story deals with humanity in an age of intelligent machines and of the aching loneliness endemic in an overpopulated future where child creation is controlled.
The short story was later used as the basis for the first act of the feature film A.I. Artificial Intelligence directed by Steven Spielberg in 2001. In the same year, the short story was republished in the eponymous Aldiss short-story collection Supertoys Last All Summer Long and Other Stories of Future Time , along with the tie-in stories Supertoys When Winter Comes and Supertoys in Other Seasons . Parts of two other Supertoys stories are also reflected in the film. The collection also contained a number of stories not tied to the Supertoys theme. [ 2 ]
In a dystopian future where only a quarter of the world's oversized human population is fed and living comfortably, families must request permission to bear children. Monica Swinton lives with her husband, Henry, and her young son, David, with whom she struggles to bond. She seeks help from Teddy, a robot toy companion of sorts, to try to understand why she feels unable to communicate with David, let alone feel compassion for him. David also questions Teddy about whether his mother truly loves him and wonders whether he is truly real. He attempts to write letters of his own to explain how he feels about his mother and the inner conflict he faces but all of his letters remain unfinished.
Meanwhile, the story jumps to Henry, who is in a meeting with a company he is associated with known as Synthank. They are discussing artificial life forms and bio-electronic beings for future developments. Henry tells them he believes that the new AI under production will finally solve humanity's problems with experiencing personal isolation and loneliness.
Monica discovers David's unfinished letters, which express both love and a jealous contempt for Teddy, with whom Monica always seemed to connect more than with David himself. Monica is horrified by the letters but overjoyed when Henry arrives home and she is able to share with him that the family has been chosen by the Ministry of Population to give birth to a child. It is then revealed that David is an artificial human, used as a replacement for a real child (a fact of which he is himself unaware; he learns it in the second story, "Supertoys When Winter Comes"). Monica tells Henry that David is having verbal "malfunctioning" problems and must be sent back to the factory again. The story ends with David thinking of the love and warmth of his mother.
The three Supertoys short stories were used as the basis for the feature film A.I. Artificial Intelligence (2001). Stanley Kubrick originally obtained the rights in the 1970s to produce a film adaptation. However, the project was bogged down in " development hell " and was repeatedly postponed. A few years before Kubrick's death in 1999, he suggested to Steven Spielberg that it might be a project better suited for Spielberg to direct. After Kubrick's death, Spielberg ultimately did direct the film, which was released in 2001. Monica Swinton was portrayed by Frances O'Connor , Henry Swinton by Sam Robards , and David Swinton by Haley Joel Osment . The film portrays Teddy as a robotic teddy bear, voiced by Jack Angel . | https://en.wikipedia.org/wiki/Supertoys_Last_All_Summer_Long |
In ecology , a supertramp species is any type of animal which follows the "supertramp" strategy of high dispersion among many different habitats, towards none of which it is particularly specialized. Supertramp species are typically the first to arrive in newly available habitats, such as volcanic islands and freshly deforested land; they can have profoundly negative effects on more highly specialized flora and fauna, both directly through predation and indirectly through competition for resources.
The name was coined by Jared Diamond in 1974, as an allusion to both the itinerant lifestyle of the tramp , and the then-popular band Supertramp . Although Diamond originally applied the term only to birds, the term has since been applied to insects and reptiles as well, among others; any species which can migrate can be a supertramp.
In an evolutionary context, the supertramp may represent the first stage of the taxon cycle . [ 1 ]
| https://en.wikipedia.org/wiki/Supertramp_(ecology) |
A supertree is a single phylogenetic tree assembled from a combination of smaller phylogenetic trees, which may have been assembled using different datasets (e.g. morphological and molecular) or a different selection of taxa. [ 1 ] Supertree algorithms can highlight areas where additional data would most usefully resolve any ambiguities. [ 2 ] The input trees of a supertree should behave as samples from the larger tree. [ 3 ]
The construction of a supertree scales exponentially with the number of taxa included; therefore for a tree of any reasonable size, it is not possible to examine every possible supertree and weigh its success at combining the input information. Heuristic methods are thus essential, although these methods may be unreliable; the result extracted is often biased or affected by irrelevant characteristics of the input data. [ 1 ]
The most well known method for supertree construction is Matrix Representation with Parsimony (MRP), in which the input source trees are represented by matrices with 0s, 1s, and ?s (i.e., each edge in each source tree defines a bipartition of the leafset into two disjoint parts, and the leaves on one side get 0, the leaves on the other side get 1, and the missing leaves get ?), and the matrices are concatenated and then analyzed using heuristics for maximum parsimony. [ 4 ] Another approach for supertree construction include a maximum likelihood version of MRP called "MRL" (matrix representation with likelihood), which analyzes the same MRP matrix but uses heuristics for maximum likelihood instead of for maximum parsimony to construct the supertree.
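A minimal sketch of building the MRP matrix described above; the toy tree encoding (leafset plus a list of clades, one per internal edge) is an assumption of this illustration, not a standard library API:

```python
def mrp_matrix(source_trees, all_taxa):
    """Build the 0/1/? matrix: one column per internal edge of each tree."""
    columns = []
    for leaves, clades in source_trees:
        for clade in clades:
            col = {}
            for taxon in all_taxa:
                if taxon not in leaves:
                    col[taxon] = "?"   # taxon missing from this source tree
                elif taxon in clade:
                    col[taxon] = "1"   # one side of the bipartition
                else:
                    col[taxon] = "0"   # the other side
            columns.append(col)
    return {t: "".join(c[t] for c in columns) for t in all_taxa}

trees = [({"A", "B", "C", "D"}, [{"A", "B"}]),
         ({"B", "C", "E"}, [{"B", "C"}])]
for taxon, row in mrp_matrix(trees, ["A", "B", "C", "D", "E"]).items():
    print(taxon, row)   # A 1?   B 11   C 01   D 0?   E ?0
```

The concatenated matrix would then be handed to maximum-parsimony heuristics (or, for MRL, maximum-likelihood heuristics) to search for the supertree.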
The Robinson-Foulds distance is the most popular of many ways of measuring how similar a supertree is to the input trees. It is a metric based on the number of clades from the input trees that are retained in the supertree. Robinson-Foulds optimization methods search for a supertree that minimizes the total (summed) Robinson-Foulds differences between the (binary) supertree and each input tree. [ 1 ] In this case the supertree can hence be viewed as a median of the input trees according to the Robinson-Foulds distance. Alternative approaches have been developed to infer median supertrees based on different metrics, e.g. relying on triplet or quartet decompositions of the trees. [ 5 ]
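A hedged sketch of the distance itself, with trees encoded simply as sets of clades (frozensets of taxa, one per internal edge), again a toy encoding assumed for illustration:

```python
def robinson_foulds(clades_a, clades_b):
    """Symmetric-difference count: clades in one tree but not the other."""
    return len(clades_a ^ clades_b)

t1 = {frozenset({"A", "B"}), frozenset({"A", "B", "C"})}
t2 = {frozenset({"A", "C"}), frozenset({"A", "B", "C"})}
print(robinson_foulds(t1, t2))  # 2: {A,B} and {A,C} are each unmatched
```

A Robinson-Foulds supertree method would then search for the binary supertree minimizing the sum of such distances over all input trees (each restricted to its own leafset).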
A recent innovation has been the construction of Maximum Likelihood supertrees and the use of "input-tree-wise" likelihood scores to perform tests of two supertrees. [ 6 ]
Additional methods include the Min Cut Supertree approach, [ 7 ] Most Similar Supertree Analysis (MSSA), Distance Fit (DFIT) and Quartet Fit (QFIT), implemented in the software CLANN. [ 8 ] [ 9 ]
Supertrees have been applied to produce phylogenies of many groups, notably the angiosperms , [ 10 ] eukaryotes [ 11 ] and mammals. [ 12 ] They have also been applied to larger-scale problems such as the origins of diversity, vulnerability to extinction, [ 13 ] and evolutionary models of ecological structure. [ 14 ] | https://en.wikipedia.org/wiki/Supertree |
In philosophical logic , supervaluationism is a semantics for dealing with irreferential singular terms and vagueness . [ 1 ] It allows one to apply the tautologies of propositional logic in cases where truth values are undefined.
According to supervaluationism, a proposition can have a definite truth value even when its components do not. The proposition " Pegasus likes licorice ", for example, is often interpreted as having no truth-value given the assumption that the name "Pegasus" fails to refer . If indeed reference fails for "Pegasus", then it seems as though there is nothing that can justify an assignment of a truth-value to any apparent assertion in which the term "Pegasus" occurs. The statement "Pegasus likes licorice or Pegasus doesn't like licorice", however, is an instance of the valid schema p ∨ ¬ p {\displaystyle p\vee \neg p} (" p {\displaystyle p} or not- p {\displaystyle p} "), so, according to supervaluationism, it should be true regardless of whether or not its disjuncts have a truth value; that is, it should be true in all interpretations. If, in general, something is true in all precisifications , supervaluationism describes it as "supertrue", while something false in all precisifications is described as "superfalse". [ 2 ]
Supervaluations were first formalized by Bas van Fraassen . [ 3 ]
Let v be a classical valuation defined on every atomic sentence of the language L and let At( x ) be the number of distinct atomic sentences in a formula x . There are then at most 2^At( x ) classical valuations defined on every sentence x . A supervaluation V is a function from sentences to truth values such that x is supertrue (i.e. V ( x )=True) if and only if v ( x )=True for every v . Likewise for superfalse.
V ( x ) is undefined when there are two valuations v and v * such that v ( x ) =True and v * ( x ) =False. For example, let Lp be the formal translation of "Pegasus likes licorice". There are then exactly two classical valuations v and v * on Lp , namely v ( Lp ) =True and v * ( Lp ) =False. So Lp is neither supertrue nor superfalse.
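A minimal executable sketch of these definitions, with formulas encoded as nested tuples such as ('or', 'Lp', ('not', 'Lp')) (a toy encoding assumed here for illustration):

```python
from itertools import product

def atoms(x):
    """Collect the atomic sentences occurring in a formula."""
    return {x} if isinstance(x, str) else {a for sub in x[1:] for a in atoms(sub)}

def evaluate(x, v):
    """Classical evaluation under a valuation v of the atoms."""
    if isinstance(x, str):
        return v[x]
    op, *args = x
    if op == "not":
        return not evaluate(args[0], v)
    if op == "or":
        return evaluate(args[0], v) or evaluate(args[1], v)
    raise ValueError(op)

def supervaluation(x):
    ats = sorted(atoms(x))
    vals = [evaluate(x, dict(zip(ats, bits)))
            for bits in product([True, False], repeat=len(ats))]
    if all(vals):
        return "supertrue"
    if not any(vals):
        return "superfalse"
    return "undefined"

print(supervaluation(("or", "Lp", ("not", "Lp"))))  # supertrue
print(supervaluation("Lp"))                          # undefined
```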
| https://en.wikipedia.org/wiki/Supervaluationism |
The Supervising Scientist is a statutory office under Australian law, originally created to assist in the monitoring of what was then one of the world's largest uranium mines, the Ranger Uranium Mine . It now provides advice more generally on a 'wide range of scientific matters and mining-related environmental issues of national importance, including radiological matters and tropical wetlands conservation and management'. [ 1 ] The Supervising Scientist is administered as a division within the Department of the Environment, Water, Heritage and the Arts .
| https://en.wikipedia.org/wiki/Supervising_Scientist |
A supervisory program or supervisor is a computer program, usually part of an operating system , that controls the execution of other routines and regulates work scheduling , input/output operations, and error actions .
Historically, this term was essentially associated with IBM 's line of mainframe operating systems starting with OS/360 . In other operating systems, the supervisor is generally called the kernel .
| https://en.wikipedia.org/wiki/Supervisory_program |
The supine position ( / ˈ s uː p aɪ n / ) means lying horizontally, with the face and torso facing up, as opposed to the prone position , which is face down. When used in surgical procedures, it grants access to the peritoneal , thoracic , and pericardial regions; as well as the head, neck, and extremities. [ 1 ]
Using anatomical terms of location , the dorsal side is down, and the ventral side is up, when supine.
In scientific literature "semi-supine" commonly refers to positions where the upper body is tilted (at 45° or variations) and not completely horizontal. [ 2 ]
The decline in death due to sudden infant death syndrome (SIDS) is said to be attributable to having babies sleep in the supine position. [ 3 ] The realization that infants sleeping face down, or in a prone position, had an increased mortality rate re-emerged into medical awareness at the end of the 1980s when two researchers, Susan Beal in Australia and Gus De Jonge in the Netherlands, independently noted the association. [ 4 ]
It is believed that in the prone position babies are more at risk of re-breathing their own carbon dioxide . Because of the immature state of their central chemoreceptors , infants do not respond to the subsequent respiratory acidosis that develops. [ 5 ] [ 6 ] Older children and adults typically mount autonomic responses of increased rate and depth of respiration ( hyperventilation , yawning).
Obstructive sleep apnea (OSA) is a form of sleep apnea that occurs more frequently when throat muscles relax [ 7 ] and is most severe when individuals are sleeping in the supine position. Studies show that OSA occurring in the supine position is linked to airway positioning , reduced lung volume , and the inability of airway muscles to dilate enough to compensate as the airway collapses. [ 8 ] For individuals who have OSA, many health care providers encourage their patients to avoid the supine position while asleep and instead sleep laterally or with the head of their bed raised at a 30- or 45-degree angle. [ 9 ] [ 10 ] | https://en.wikipedia.org/wiki/Supine_position |
In relativistic physics , Supplee's paradox (also called the submarine paradox ) is a physical paradox that arises when considering the buoyant force exerted on a relativistic bullet (or a submarine) immersed in a fluid subject to an ambient gravitational field . If a bullet has neutral buoyancy when it is at rest in a perfect fluid and is then launched at a relativistic speed, observers at rest within the fluid would conclude that the bullet should sink, since its density increases due to the length contraction effect. On the other hand, in the bullet's proper frame it is the moving fluid that becomes denser, and hence the bullet should float. But the bullet cannot sink in one frame and float in another; hence the apparent paradox.
The paradox was first formulated by James M. Supplee (1989), [ 1 ] where a non-rigorous explanation was presented. George Matsas [ 2 ] has analysed this paradox in the scope of general relativity and also pointed out that these relativistic buoyancy effects could be important in some questions regarding the thermodynamics of black holes . A comprehensive explanation of Supplee's paradox through both the special and the general theory of relativity was presented by Ricardo Soares Vieira. [ 3 ]
To simplify the analysis, it is customary to neglect drag and viscosity , and even to assume that the fluid has constant density .
A small object immersed in a container of fluid subjected to a uniform gravitational field will be subject to a net downward gravitational force, compared with the net downward gravitational force on an equal volume of the fluid. If the object is less dense than the fluid, the difference between these two vectors is an upward-pointing vector, the buoyant force, and the object will rise. If the object is denser than the fluid, it will sink. If the object and the fluid have equal density, the object is said to have neutral buoyancy and it will neither rise nor sink.
The resolution comes down to observing that the usual Archimedes principle cannot be applied in the relativistic case. If the theory of relativity is correctly employed to analyse the forces involved, there will be no true paradox.
Supplee [ 1 ] himself concluded that the paradox can be resolved with a more careful analysis of the gravitational buoyancy forces acting on the bullet. Considering the reasonable (but not justified) assumption that the gravitational force depends on the kinetic energy content of the bodies, Supplee showed that the bullet sinks in the frame at rest with the fluid with the acceleration g ( γ 2 − 1 ) {\displaystyle g(\gamma ^{2}-1)} , where g {\displaystyle g} is the gravitational acceleration and γ {\displaystyle \gamma } is the Lorentz factor . In the proper reference frame of the bullet, the same result is obtained by noting that this frame is not inertial, which implies that the shape of the container will no longer be flat; on the contrary, the seafloor becomes curved upwards, which results in the bullet moving away from the sea surface, i.e., the bullet sinks relative to the fluid.
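A numeric sketch of Supplee's result (illustrative values only; the formula for the sinking acceleration is the one quoted above):

```python
import math

g = 9.81  # m/s^2, ambient gravitational acceleration

for beta in (0.1, 0.5, 0.9, 0.99):           # speed as a fraction of c
    gamma = 1.0 / math.sqrt(1.0 - beta**2)   # Lorentz factor
    a_sink = g * (gamma**2 - 1.0)            # Supplee's sinking acceleration
    print(f"v = {beta:4.2f}c  gamma = {gamma:6.3f}  a_sink = {a_sink:9.3f} m/s^2")
# Even at v = 0.5c the bullet sinks at about g/3; the effect grows
# like gamma^2 as v approaches c.
```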
The non-justified assumption considered by Supplee that the gravitational force on the bullet should depend on its energy content was eliminated by George Matsas, [ 2 ] who used the full mathematical methods of general relativity in order to explain the Supplee paradox and agreed with Supplee's results. In particular, he modelled the situation using a Rindler chart , where a submarine is accelerated from the rest to a given velocity v . Matsas concluded that the paradox can be resolved by noting that in the frame of the fluid, the shape of the bullet is altered, and derived the same result which had been obtained by Supplee. Matsas has applied a similar analysis to shed light on certain questions involving the thermodynamics of black holes .
Vieira [ 3 ] has recently analysed the submarine paradox through both special and general relativity. In the first case, he showed that gravitomagnetic effects should be taken into account in order to describe the forces acting on a moving submarine underwater. When these effects are considered, a relativistic Archimedes principle can be formulated, from which he showed that the submarine must sink in both frames. Vieira also considered the case of a curved space-time in the proximity of the Earth. In this case, he assumed that the space-time can be approximately regarded as consisting of a flat space but a curved time. He showed that in this case the gravitational force between the Earth at rest and a moving body increases with the speed of the body in the same way as considered by Supplee ( F = γ m g {\displaystyle F=\gamma mg} ), providing in this way a justification for his assumption. Analysing the paradox again with this speed-dependent gravitational force , the Supplee paradox is explained and the results agree with those obtained by Supplee and Matsas. | https://en.wikipedia.org/wiki/Supplee's_paradox |
Supply chain engineering is the engineering discipline that concerns the planning, design, and operation of supply chains . [ 1 ] [ 2 ] Some of its main areas include logistics , production , and pricing . [ 2 ] [ 3 ] It involves various areas in mathematical modelling such as operations research , machine learning , and optimization , which are usually implemented using software . [ 2 ] [ 1 ]
Supply chain engineering draws heavily from, and overlaps with other engineering disciplines such as industrial engineering , manufacturing engineering , systems engineering , information engineering , and software engineering . Although supply chain engineering and supply chain management have the same goals, the former is focused on a mathematical model -based approach, whereas the latter is focused on a more traditional management and business -based one. [ 1 ] Supply chain engineering can be seen as including supply chain optimization , although this can also be undertaken using more qualitative management-based approaches which are less of a focus in supply chain engineering.
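As one concrete illustration of the model-based approach, a classic transportation problem solved with linear programming; all names and numbers below are invented for the example:

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],   # shipping cost per unit: 2 plants x 3 markets
                 [5.0, 3.0, 7.0]])
supply = [80, 70]                   # plant capacities
demand = [50, 60, 40]               # market demands

# Decision variables x[i][j] flattened row-major; minimize total cost.
A_ub = [[1 if k // 3 == i else 0 for k in range(6)] for i in range(2)]  # capacity rows
A_eq = [[1 if k % 3 == j else 0 for k in range(6)] for j in range(3)]   # demand rows
res = linprog(cost.ravel(), A_ub=A_ub, b_ub=supply,
              A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(2, 3))  # optimal shipment plan
print(res.fun)              # minimal total shipping cost
```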
Supply chain engineering is applied to all parts of supply chains. [ 3 ] [ 1 ]
Supply chain engineering uses a wide variety of mathematical techniques, such as those drawn from operations research, machine learning, and optimization. [ 2 ] [ 1 ] | https://en.wikipedia.org/wiki/Supply_chain_engineering |
A supply chain responsiveness matrix is a tool that is used to analyze inventory and lead time within an organization. The matrix is one of a number of value stream mapping tools. [ 1 ] The matrix is drawn with lead time along the x-axis and inventory along the y-axis. The result shows where slow-moving stock resides. | https://en.wikipedia.org/wiki/Supply_chain_responsiveness_matrix |
David Bierens de Haan (3 May 1822, in Amsterdam – 12 August 1895, in Leiden ) was a Dutch mathematician and historian of science .
Bierens de Haan was a son of the rich merchant Abraham Pieterszoon de Haan (1795–1880) and Catharina Jacoba Bierens (1797–1835). In 1843 he completed a study in the exact sciences and received his PhD from the University of Leiden in 1847 under Gideon Janus Verdam (1802–1866) for the work De Lemniscata Bernouillana . After this he became a teacher of physics and mathematics at a gymnasium in Deventer . In 1852 he married Johanna Catharina Justina IJssel de Schepper (1827–1906) in Deventer.
In 1856 he became a member of the Royal Netherlands Academy of Arts and Sciences . [ 1 ] From 1866 he was professor of mathematics at Leiden University . From 1888 he was co-editor of the works of Christiaan Huygens , and in 1892 he edited the Algebra of Willem Smaasen (1820–1850).
He had a large library on mathematics, the history of science and pedagogy, which currently resides at the Leiden University Library .
His most important contribution to mathematics consists of his publication of a large table of integrals, (Nouvelles) tables d'intégrales définies , in 1858 (and 1867). His doctoral students include Pieter Hendrik Schoute . | https://en.wikipedia.org/wiki/Supplément_aux_tables_d'intégrales_définies |
In mathematics , the support of a real-valued function f {\displaystyle f} is the subset of the function domain of elements that are not mapped to zero. If the domain of f {\displaystyle f} is a topological space , then the support of f {\displaystyle f} is instead defined as the smallest closed set containing all points not mapped to zero. This concept is used widely in mathematical analysis .
Suppose that f : X → R {\displaystyle f:X\to \mathbb {R} } is a real-valued function whose domain is an arbitrary set X . {\displaystyle X.} The set-theoretic support of f , {\displaystyle f,} written supp ( f ) , {\displaystyle \operatorname {supp} (f),} is the set of points in X {\displaystyle X} where f {\displaystyle f} is non-zero: supp ( f ) = { x ∈ X : f ( x ) ≠ 0 } . {\displaystyle \operatorname {supp} (f)=\{x\in X\,:\,f(x)\neq 0\}.}
The support of f {\displaystyle f} is the smallest subset of X {\displaystyle X} with the property that f {\displaystyle f} is zero on the subset's complement. If f ( x ) = 0 {\displaystyle f(x)=0} for all but a finite number of points x ∈ X , {\displaystyle x\in X,} then f {\displaystyle f} is said to have finite support .
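A minimal sketch of the set-theoretic support for a function given as a Python dict over a finite domain:

```python
def support(f):
    """Set-theoretic support: the points where f is nonzero."""
    return {x for x, value in f.items() if value != 0}

f = {-2: 0.0, -1: 3.5, 0: 0.0, 1: -1.0, 2: 0.0}
print(support(f))  # {-1, 1}; f has finite support
```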
If the set X {\displaystyle X} has an additional structure (for example, a topology ), then the support of f {\displaystyle f} is defined in an analogous way as the smallest subset of X {\displaystyle X} of an appropriate type such that f {\displaystyle f} vanishes in an appropriate sense on its complement. The notion of support also extends in a natural way to functions taking values in more general sets than R {\displaystyle \mathbb {R} } and to other objects, such as measures or distributions .
The most common situation occurs when X {\displaystyle X} is a topological space (such as the real line or n {\displaystyle n} -dimensional Euclidean space ) and f : X → R {\displaystyle f:X\to \mathbb {R} } is a continuous real- (or complex -) valued function. In this case, the support of f {\displaystyle f} , supp ( f ) {\displaystyle \operatorname {supp} (f)} , or the closed support of f {\displaystyle f} , is defined topologically as the closure (taken in X {\displaystyle X} ) of the subset of X {\displaystyle X} where f {\displaystyle f} is non-zero [ 1 ] [ 2 ] [ 3 ] that is, supp ( f ) := cl X ( { x ∈ X : f ( x ) ≠ 0 } ) = f − 1 ( { 0 } c ) ¯ . {\displaystyle \operatorname {supp} (f):=\operatorname {cl} _{X}\left(\{x\in X\,:\,f(x)\neq 0\}\right)={\overline {f^{-1}\left(\{0\}^{\mathrm {c} }\right)}}.} Since the intersection of closed sets is closed, supp ( f ) {\displaystyle \operatorname {supp} (f)} is the intersection of all closed sets that contain the set-theoretic support of f . {\displaystyle f.} Note that if the function f : R n ⊇ X → R {\displaystyle f:\mathbb {R} ^{n}\supseteq X\to \mathbb {R} } is defined on an open subset X ⊆ R n {\displaystyle X\subseteq \mathbb {R} ^{n}} , then the closure is still taken with respect to X {\displaystyle X} and not with respect to the ambient R n {\displaystyle \mathbb {R} ^{n}} .
For example, if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is the function defined by f ( x ) = { 1 − x 2 if | x | < 1 0 if | x | ≥ 1 {\displaystyle f(x)={\begin{cases}1-x^{2}&{\text{if }}|x|<1\\0&{\text{if }}|x|\geq 1\end{cases}}} then supp ( f ) {\displaystyle \operatorname {supp} (f)} , the support of f {\displaystyle f} , or the closed support of f {\displaystyle f} , is the closed interval [ − 1 , 1 ] , {\displaystyle [-1,1],} since f {\displaystyle f} is non-zero on the open interval ( − 1 , 1 ) {\displaystyle (-1,1)} and the closure of this set is [ − 1 , 1 ] . {\displaystyle [-1,1].}
The notion of closed support is usually applied to continuous functions, but the definition makes sense for arbitrary real or complex-valued functions on a topological space, and some authors do not require that f : X → R {\displaystyle f:X\to \mathbb {R} } (or f : X → C {\displaystyle f:X\to \mathbb {C} } ) be continuous. [ 4 ]
Functions with compact support on a topological space X {\displaystyle X} are those whose closed support is a compact subset of X . {\displaystyle X.} If X {\displaystyle X} is the real line, or n {\displaystyle n} -dimensional Euclidean space, then a function has compact support if and only if it has bounded support , since a subset of R n {\displaystyle \mathbb {R} ^{n}} is compact if and only if it is closed and bounded.
For example, the function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } defined above is a continuous function with compact support [ − 1 , 1 ] . {\displaystyle [-1,1].} If f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is a smooth function then because f {\displaystyle f} is identically 0 {\displaystyle 0} on the open subset R n ∖ supp ( f ) , {\displaystyle \mathbb {R} ^{n}\setminus \operatorname {supp} (f),} all of f {\displaystyle f} 's partial derivatives of all orders are also identically 0 {\displaystyle 0} on R n ∖ supp ( f ) . {\displaystyle \mathbb {R} ^{n}\setminus \operatorname {supp} (f).}
The condition of compact support is stronger than the condition of vanishing at infinity . For example, the function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } defined by f ( x ) = 1 1 + x 2 {\displaystyle f(x)={\frac {1}{1+x^{2}}}} vanishes at infinity, since f ( x ) → 0 {\displaystyle f(x)\to 0} as | x | → ∞ , {\displaystyle |x|\to \infty ,} but its support R {\displaystyle \mathbb {R} } is not compact.
Real-valued compactly supported smooth functions on a Euclidean space are called bump functions . Mollifiers are an important special case of bump functions as they can be used in distribution theory to create sequences of smooth functions approximating nonsmooth (generalized) functions, via convolution .
In good cases , functions with compact support are dense in the space of functions that vanish at infinity, but this property requires some technical work to justify in a given example. As an intuition for more complex examples, and in the language of limits , for any ε > 0 , {\displaystyle \varepsilon >0,} any function f {\displaystyle f} on the real line R {\displaystyle \mathbb {R} } that vanishes at infinity can be approximated by choosing an appropriate compact subset C {\displaystyle C} of R {\displaystyle \mathbb {R} } such that | f ( x ) − I C ( x ) f ( x ) | < ε {\displaystyle \left|f(x)-I_{C}(x)f(x)\right|<\varepsilon } for all x ∈ X , {\displaystyle x\in X,} where I C {\displaystyle I_{C}} is the indicator function of C . {\displaystyle C.} Every continuous function on a compact topological space has compact support since every closed subset of a compact space is indeed compact.
If X {\displaystyle X} is a topological measure space with a Borel measure μ {\displaystyle \mu } (such as R n , {\displaystyle \mathbb {R} ^{n},} or a Lebesgue measurable subset of R n , {\displaystyle \mathbb {R} ^{n},} equipped with Lebesgue measure), then one typically identifies functions that are equal μ {\displaystyle \mu } -almost everywhere. In that case, the essential support of a measurable function f : X → R {\displaystyle f:X\to \mathbb {R} } written e s s s u p p ( f ) , {\displaystyle \operatorname {ess\,supp} (f),} is defined to be the smallest closed subset F {\displaystyle F} of X {\displaystyle X} such that f = 0 {\displaystyle f=0} μ {\displaystyle \mu } -almost everywhere outside F . {\displaystyle F.} Equivalently, e s s s u p p ( f ) {\displaystyle \operatorname {ess\,supp} (f)} is the complement of the largest open set on which f = 0 {\displaystyle f=0} μ {\displaystyle \mu } -almost everywhere [ 5 ] e s s s u p p ( f ) := X ∖ ⋃ { Ω ⊆ X : Ω is open and f = 0 μ -almost everywhere in Ω } . {\displaystyle \operatorname {ess\,supp} (f):=X\setminus \bigcup \left\{\Omega \subseteq X:\Omega {\text{ is open and }}f=0\,\mu {\text{-almost everywhere in }}\Omega \right\}.}
The essential support of a function f {\displaystyle f} depends on the measure μ {\displaystyle \mu } as well as on f , {\displaystyle f,} and it may be strictly smaller than the closed support. For example, if f : [ 0 , 1 ] → R {\displaystyle f:[0,1]\to \mathbb {R} } is the Dirichlet function that is 0 {\displaystyle 0} on irrational numbers and 1 {\displaystyle 1} on rational numbers, and [ 0 , 1 ] {\displaystyle [0,1]} is equipped with Lebesgue measure, then the support of f {\displaystyle f} is the entire interval [ 0 , 1 ] , {\displaystyle [0,1],} but the essential support of f {\displaystyle f} is empty, since f {\displaystyle f} is equal almost everywhere to the zero function.
In analysis one nearly always wants to use the essential support of a function, rather than its closed support, when the two sets are different, so e s s s u p p ( f ) {\displaystyle \operatorname {ess\,supp} (f)} is often written simply as supp ( f ) {\displaystyle \operatorname {supp} (f)} and referred to as the support. [ 5 ] [ 6 ]
If M {\displaystyle M} is an arbitrary set containing zero, the concept of support is immediately generalizable to functions f : X → M . {\displaystyle f:X\to M.} Support may also be defined for any algebraic structure with identity (such as a group , monoid , or composition algebra ), in which the identity element assumes the role of zero. For instance, the family Z N {\displaystyle \mathbb {Z} ^{\mathbb {N} }} of functions from the natural numbers to the integers is the uncountable set of integer sequences. The subfamily { f ∈ Z N : f has finite support } {\displaystyle \left\{f\in \mathbb {Z} ^{\mathbb {N} }:f{\text{ has finite support }}\right\}} is the countable set of all integer sequences that have only finitely many nonzero entries.
Functions of finite support are used in defining algebraic structures such as group rings and free abelian groups . [ 7 ]
In probability theory , the support of a probability distribution can be loosely thought of as the closure of the set of possible values of a random variable having that distribution. There are, however, some subtleties to consider when dealing with general distributions defined on a sigma algebra , rather than on a topological space.
More formally, if X : Ω → R {\displaystyle X:\Omega \to \mathbb {R} } is a random variable on ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},P)} then the support of X {\displaystyle X} is the smallest closed set R X ⊆ R {\displaystyle R_{X}\subseteq \mathbb {R} } such that P ( X ∈ R X ) = 1. {\displaystyle P\left(X\in R_{X}\right)=1.}
In practice however, the support of a discrete random variable X {\displaystyle X} is often defined as the set R X = { x ∈ R : P ( X = x ) > 0 } {\displaystyle R_{X}=\{x\in \mathbb {R} :P(X=x)>0\}} and the support of a continuous random variable X {\displaystyle X} is defined as the set R X = { x ∈ R : f X ( x ) > 0 } {\displaystyle R_{X}=\{x\in \mathbb {R} :f_{X}(x)>0\}} where f X ( x ) {\displaystyle f_{X}(x)} is a probability density function of X {\displaystyle X} (the set-theoretic support ). [ 8 ]
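A hedged sketch of these practical definitions using scipy.stats distributions (the library calls are standard, but restricting the continuous case to a grid is an assumption of the example):

```python
import numpy as np
from scipy import stats

# Discrete case: points with strictly positive probability mass.
X = stats.binom(n=4, p=0.5)
print([k for k in range(5) if X.pmf(k) > 0])   # [0, 1, 2, 3, 4]

# Continuous case: points (on a grid) where the density is positive.
U = stats.uniform(loc=-1, scale=2)             # density 1/2 on [-1, 1]
xs = np.linspace(-2, 2, 9)
print([float(x) for x in xs if U.pdf(x) > 0])  # grid points inside [-1, 1]
```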
Note that the word support can refer to the logarithm of the likelihood of a probability density function. [ 9 ]
It is possible also to talk about the support of a distribution , such as the Dirac delta function δ ( x ) {\displaystyle \delta (x)} on the real line. In that example, we can consider test functions F , {\displaystyle F,} which are smooth functions with support not including the point 0. {\displaystyle 0.} Since δ ( F ) {\displaystyle \delta (F)} (the distribution δ {\displaystyle \delta } applied as linear functional to F {\displaystyle F} ) is 0 {\displaystyle 0} for such functions, we can say that the support of δ {\displaystyle \delta } is { 0 } {\displaystyle \{0\}} only. Since measures (including probability measures ) on the real line are special cases of distributions, we can also speak of the support of a measure in the same way.
Suppose that f {\displaystyle f} is a distribution, and that U {\displaystyle U} is an open set in Euclidean space such that, for all test functions ϕ {\displaystyle \phi } such that the support of ϕ {\displaystyle \phi } is contained in U , {\displaystyle U,} f ( ϕ ) = 0. {\displaystyle f(\phi )=0.} Then f {\displaystyle f} is said to vanish on U . {\displaystyle U.} Now, if f {\displaystyle f} vanishes on an arbitrary family U α {\displaystyle U_{\alpha }} of open sets, then for any test function ϕ {\displaystyle \phi } supported in ⋃ U α , {\textstyle \bigcup U_{\alpha },} a simple argument based on the compactness of the support of ϕ {\displaystyle \phi } and a partition of unity shows that f ( ϕ ) = 0 {\displaystyle f(\phi )=0} as well. Hence we can define the support of f {\displaystyle f} as the complement of the largest open set on which f {\displaystyle f} vanishes. For example, the support of the Dirac delta is { 0 } . {\displaystyle \{0\}.}
In Fourier analysis in particular, it is interesting to study the singular support of a distribution. This has the intuitive interpretation as the set of points at which a distribution fails to be a smooth function .
For example, the Fourier transform of the Heaviside step function can, up to constant factors, be considered to be 1 / x {\displaystyle 1/x} (a function) except at x = 0. {\displaystyle x=0.} While x = 0 {\displaystyle x=0} is clearly a special point, it is more precise to say that the transform of the distribution has singular support { 0 } {\displaystyle \{0\}} : it cannot accurately be expressed as a function in relation to test functions with support including 0. {\displaystyle 0.} It can be expressed as an application of a Cauchy principal value improper integral.
For distributions in several variables, singular supports allow one to define wave front sets and understand Huygens' principle in terms of mathematical analysis . Singular supports may also be used to understand phenomena special to distribution theory, such as attempts to 'multiply' distributions (squaring the Dirac delta function fails – essentially because the singular supports of the distributions to be multiplied should be disjoint).
An abstract notion of family of supports on a topological space X , {\displaystyle X,} suitable for sheaf theory , was defined by Henri Cartan . In extending Poincaré duality to manifolds that are not compact, the 'compact support' idea enters naturally on one side of the duality; see for example Alexander–Spanier cohomology .
Bredon, Sheaf Theory (2nd edition, 1997) gives these definitions. A family Φ {\displaystyle \Phi } of closed subsets of X {\displaystyle X} is a family of supports if it is down-closed and closed under finite union . Its extent is the union over Φ . {\displaystyle \Phi .} A family of supports is paracompactifying if, in addition, every Y {\displaystyle Y} in Φ {\displaystyle \Phi } is, with the subspace topology , a paracompact space , and has some Z {\displaystyle Z} in Φ {\displaystyle \Phi } which is a neighbourhood . If X {\displaystyle X} is a locally compact space , assumed Hausdorff , the family of all compact subsets satisfies the further conditions, making it paracompactifying. | https://en.wikipedia.org/wiki/Support_(mathematics) |
For a rigid object in contact with a fixed environment and acted upon by gravity in the vertical direction, its support polygon is a horizontal region over which the center of mass must lie to achieve static stability. [ 1 ] For example, for an object resting on a horizontal surface (e.g. a table), the support polygon is the convex hull of its "footprint" on the table.
The support polygon succinctly represents the conditions necessary for an object to be at equilibrium under gravity. That is, if the object's center of mass lies over the support polygon, then there exist a set of forces over the region of contact that exactly counteracts the forces of gravity. Note that this is a necessary condition for stability, but not a sufficient one.
Let the object be in contact at a finite number of points C 1 , … , C N {\displaystyle C_{1},\ldots ,C_{N}} . At each point C k {\displaystyle C_{k}} , let F C k {\displaystyle FC_{k}} be the set of forces that can be applied on the object at that point. Here, F C k {\displaystyle FC_{k}} is known as the friction cone , and for the Coulomb model of friction , is actually a cone with apex at the origin, extending to infinity in the normal direction of the contact.
Let f 1 , … , f N {\displaystyle f_{1},\ldots ,f_{N}} be the (unspecified) forces at the contact points. To balance the object in static equilibrium, the following Newton-Euler equations must be met on f 1 , … , f N {\displaystyle f_{1},\ldots ,f_{N}} : ∑ k = 1 N f k + G = 0 {\displaystyle \sum _{k=1}^{N}f_{k}+G=0} and ∑ k = 1 N C k × f k + C M × G = 0 {\displaystyle \sum _{k=1}^{N}C_{k}\times f_{k}+CM\times G=0} and f k ∈ F C k {\displaystyle f_{k}\in FC_{k}} for all k {\displaystyle k}
where G {\displaystyle G} is the force of gravity on the object, and C M {\displaystyle CM} is its center of mass. The first two equations are the Newton-Euler equations , and the third requires all forces to be valid. If there is no set of forces f 1 , … , f N {\displaystyle f_{1},\ldots ,f_{N}} that meet all these conditions, the object will not be in equilibrium.
The second equation has no dependence on the vertical component of the center of mass, and thus if a solution exists for one C M {\displaystyle CM} , the same solution works for all C M + α G {\displaystyle CM+\alpha G} . Therefore, the set of all C M {\displaystyle CM} that have solutions to the above conditions is a set that extends infinitely in the up and down directions. The support polygon is simply the projection of this set on the horizontal plane.
These results can easily be extended to different friction models and an infinite number of contact points (i.e. a region of contact).
Even though the word "polygon" is used to describe this region, in general it can be any convex shape with curved edges. The support polygon is invariant under translations and rotations about the gravity vector (that is, if the contact points and friction cones were translated and rotated about the gravity vector, the support polygon is simply translated and rotated).
If the friction cones are convex cones (as they typically are), the support polygon is always a convex region. It is also invariant to the mass of the object (provided it is nonzero).
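A numeric sketch of the common planar special case described in the next paragraph (all contacts on a plane, every friction cone containing − G {\displaystyle -G} ), where the support polygon reduces to the convex hull of the projected contact points; the contact coordinates are invented:

```python
import numpy as np
from scipy.spatial import ConvexHull

contacts = np.array([[0.0, 0.0, 0.000],   # (x, y, z) contact points lying
                     [1.0, 0.0, 0.100],   # on the tilted plane
                     [1.0, 1.0, 0.150],   # z = 0.1*x + 0.05*y
                     [0.0, 1.0, 0.050],
                     [0.5, 0.5, 0.075]])
footprint = contacts[:, :2]               # project along the gravity axis (z)
hull = ConvexHull(footprint)
print(footprint[hull.vertices])           # support polygon vertices

# Necessary condition for static stability: the horizontal center-of-mass
# position must satisfy every half-plane inequality of the hull.
com_xy = np.array([0.4, 0.6])
inside = np.all(hull.equations @ np.append(com_xy, 1.0) <= 1e-12)
print(inside)                             # True: the center of mass is supported
```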
If all contacts lie on a (not necessarily horizontal) plane, and the friction cones at all contacts contain the negative gravity vector − G {\displaystyle -G} , then the support polygon is the convex hull of the contact points projected onto the horizontal plane. | https://en.wikipedia.org/wiki/Support_polygon |
In convex analysis and mathematical optimization , the supporting functional is a generalization of the supporting hyperplane of a set.
Let X be a locally convex topological space , and C ⊂ X {\displaystyle C\subset X} be a convex set , then the continuous linear functional ϕ : X → R {\displaystyle \phi :X\to \mathbb {R} } is a supporting functional of C at the point x 0 {\displaystyle x_{0}} if ϕ ≠ 0 {\displaystyle \phi \not =0} and ϕ ( x ) ≤ ϕ ( x 0 ) {\displaystyle \phi (x)\leq \phi (x_{0})} for every x ∈ C {\displaystyle x\in C} . [ 1 ]
If h C : X ∗ → R {\displaystyle h_{C}:X^{*}\to \mathbb {R} } (where X ∗ {\displaystyle X^{*}} is the dual space of X {\displaystyle X} ) is a support function of the set C , then if h C ( x ∗ ) = x ∗ ( x 0 ) {\displaystyle h_{C}\left(x^{*}\right)=x^{*}\left(x_{0}\right)} , it follows that h C {\displaystyle h_{C}} defines a supporting functional ϕ : X → R {\displaystyle \phi :X\to \mathbb {R} } of C at the point x 0 {\displaystyle x_{0}} such that ϕ ( x ) = x ∗ ( x ) {\displaystyle \phi (x)=x^{*}(x)} for any x ∈ X {\displaystyle x\in X} .
If ϕ {\displaystyle \phi } is a supporting functional of the convex set C at the point x 0 ∈ C {\displaystyle x_{0}\in C} such that ϕ ( x 0 ) = σ = sup x ∈ C ϕ ( x ) , {\displaystyle \phi \left(x_{0}\right)=\sigma =\sup _{x\in C}\phi (x),}
then H = ϕ − 1 ( σ ) {\displaystyle H=\phi ^{-1}(\sigma )} defines a supporting hyperplane to C at x 0 {\displaystyle x_{0}} . [ 2 ] | https://en.wikipedia.org/wiki/Supporting_functional |
In geometry , a supporting hyperplane of a set S {\displaystyle S} in Euclidean space R n {\displaystyle \mathbb {R} ^{n}} is a hyperplane that has both of the following two properties: [ 1 ] S {\displaystyle S} is entirely contained in one of the two closed half-spaces bounded by the hyperplane, and S {\displaystyle S} has at least one boundary point on the hyperplane.
Here, a closed half-space is the half-space that includes the points within the hyperplane.
This theorem states that if S {\displaystyle S} is a convex set in the topological vector space X = R n , {\displaystyle X=\mathbb {R} ^{n},} and x 0 {\displaystyle x_{0}} is a point on the boundary of S , {\displaystyle S,} then there exists a supporting hyperplane containing x 0 . {\displaystyle x_{0}.} If x ∗ ∈ X ∗ ∖ { 0 } {\displaystyle x^{*}\in X^{*}\backslash \{0\}} ( X ∗ {\displaystyle X^{*}} is the dual space of X {\displaystyle X} , x ∗ {\displaystyle x^{*}} is a nonzero linear functional) such that x ∗ ( x 0 ) ≥ x ∗ ( x ) {\displaystyle x^{*}\left(x_{0}\right)\geq x^{*}(x)} for all x ∈ S {\displaystyle x\in S} , then H = { x ∈ X : x ∗ ( x ) = x ∗ ( x 0 ) } {\displaystyle H=\{x\in X:x^{*}(x)=x^{*}\left(x_{0}\right)\}}
defines a supporting hyperplane. [ 2 ]
Conversely, if S {\displaystyle S} is a closed set with nonempty interior such that every point on the boundary has a supporting hyperplane, then S {\displaystyle S} is a convex set, and is the intersection of all its supporting closed half-spaces. [ 2 ]
The hyperplane in the theorem may not be unique, as noticed in the second picture on the right. If the closed set S {\displaystyle S} is not convex, the statement of the theorem is not true at all points on the boundary of S , {\displaystyle S,} as illustrated in the third picture on the right.
The supporting hyperplanes of convex sets are also called tac-planes or tac-hyperplanes . [ 3 ]
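A numeric sketch (invented data): for the convex hull of a planar point set, each facet computed by scipy is a supporting hyperplane, since every point of the set satisfies the facet's half-space inequality while the facet's own vertices lie on the hyperplane:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 2))
hull = ConvexHull(pts)

# scipy stores each facet as [a, b, c] with a*x + b*y + c <= 0 on the set.
A, b = hull.equations[:, :-1], hull.equations[:, -1]
values = A @ pts.T + b[:, None]
print(np.all(values <= 1e-9))                      # S lies in one closed half-space
print(np.all(np.isclose(values.max(axis=1), 0)))   # each facet touches the set
```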
The forward direction can be proved as a special case of the separating hyperplane theorem (see the page for the proof ). For the converse direction,
Define T {\displaystyle T} to be the intersection of all its supporting closed half-spaces. Clearly S ⊂ T {\displaystyle S\subset T} . Now let y ∉ S {\displaystyle y\not \in S} , show y ∉ T {\displaystyle y\not \in T} .
Let x ∈ i n t ( S ) {\displaystyle x\in \mathrm {int} (S)} , and consider the line segment [ x , y ] {\displaystyle [x,y]} . Let t {\displaystyle t} be the largest number such that [ x , t ( y − x ) + x ] {\displaystyle [x,t(y-x)+x]} is contained in S {\displaystyle S} . Then t ∈ ( 0 , 1 ) {\displaystyle t\in (0,1)} .
Let b = t ( y − x ) + x {\displaystyle b=t(y-x)+x} , then b ∈ ∂ S {\displaystyle b\in \partial S} . Draw a supporting hyperplane across b {\displaystyle b} . Let it be represented as a nonzero linear functional f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } such that ∀ a ∈ T , f ( a ) ≥ f ( b ) {\displaystyle \forall a\in T,f(a)\geq f(b)} . Then since x ∈ i n t ( S ) {\displaystyle x\in \mathrm {int} (S)} , we have f ( x ) > f ( b ) {\displaystyle f(x)>f(b)} . Thus by f ( y ) − f ( b ) 1 − t = f ( b ) − f ( x ) t − 0 < 0 {\displaystyle {\frac {f(y)-f(b)}{1-t}}={\frac {f(b)-f(x)}{t-0}}<0} , we have f ( y ) < f ( b ) {\displaystyle f(y)<f(b)} , so y ∉ T {\displaystyle y\not \in T} . | https://en.wikipedia.org/wiki/Supporting_hyperplane |
In geometry , a supporting line L of a curve C in the plane is a line that contains a point of C , but does not separate any two points of C . [ 1 ] In other words, C lies completely in one of the two closed half-planes defined by L and has at least one point on L .
There can be many supporting lines for a curve at a given point. When a tangent exists at a given point, then it is the unique supporting line at this point, if it does not separate the curve.
The notion of supporting line is also discussed for planar shapes. In this case a supporting line may be defined as a line which has common points with the boundary of the shape, but not with its interior. [ 2 ]
The notion of a supporting line to a planar curve or convex shape can be generalized to n dimension as a supporting hyperplane .
If two bounded connected planar shapes have disjoint convex hulls that are separated by a positive distance, then they necessarily have exactly four common lines of support, the bitangents of the two convex hulls. Two of these lines of support separate the two shapes, and are called critical support lines . [ 2 ] Without the assumption of convexity, there may be more or fewer than four lines of support, even if the shapes themselves are disjoint. For instance, if one shape is an annulus that contains the other, then there are no common lines of support, while if each of two shapes consists of a pair of small disks at opposite corners of a square then there may be as many as 16 common lines of support. | https://en.wikipedia.org/wiki/Supporting_line |
Subtractive hybridization is a technology that allows for PCR-based amplification of only those cDNA fragments that differ between a control (driver) and an experimental transcriptome . cDNA is produced from mRNA . Differences in the relative abundance of transcripts are highlighted, as are genetic differences between species. The technique relies on the removal of dsDNA formed by hybridization between a control and test sample, thus eliminating cDNAs or gDNAs of similar abundance and retaining transcripts or genomic sequences that are differentially expressed or variable in sequence.
Suppression subtractive hybridization has also been successfully used to identify strain- or species-specific DNA sequences in a variety of bacteria including Vibrio species ( Metagenomics ).
| https://en.wikipedia.org/wiki/Suppression_subtractive_hybridization |
A suppressor mutation is a second mutation that alleviates or reverts the phenotypic effects of an already existing mutation in a process defined synthetic rescue . Genetic suppression therefore restores the phenotype seen prior to the original background mutation. [ 1 ] Suppressor mutations are useful for identifying new genetic sites which affect a biological process of interest. They also provide evidence between functionally interacting molecules and intersecting biological pathways . [ 2 ]
Intragenic suppression results from suppressor mutations that occur in the same gene as the original mutation. In a classic study, Francis Crick and colleagues used intragenic suppression to study the fundamental nature of the genetic code . This study showed that genes are expressed as non-overlapping triplets ( codons ). [ 1 ]
Researchers showed that mutations caused by either a single base insertion (+) or a single base deletion (-) could be "suppressed" or restored by a second mutation of the opposite sign, as long as the two mutations occurred in the same vicinity of the gene. This led to the conclusion that genes needed to be read in a specific " reading frame " and a single base insertion or deletion would shift the reading frame ( frameshift mutation ) in such a way that the remaining DNA would code for a different polypeptide than the one intended. Therefore, researchers concluded that the second mutation of opposite sign suppresses the original mutation by restoring the reading frame, as long as the portion between the two mutations is not critical for protein function. [ 1 ]
In addition to the reading frame, Crick also used suppressor mutations to determine codon size. It was found that while one and two base insertions/deletions of the same sign resulted in a mutant phenotype, deleting or inserting three bases could give a wild type phenotype. From these results it was concluded that an inserted or deleted triplet does not disturb the reading frame and the genetic code is in fact a triplet. [ 1 ]
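Both observations, frame restoration by opposite-sign mutations and frame preservation by triplet insertions, are easy to reproduce with a toy script. The sequence below is invented purely for illustration; only the codon arithmetic matters:

```python
def codons(seq):
    """Split a sequence into the non-overlapping triplets of its reading frame."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

wild_type = "ATGGCCTTAGGCTGA"
print(codons(wild_type))   # ['ATG', 'GCC', 'TTA', 'GGC', 'TGA']

# A single-base insertion (+) shifts every downstream codon out of frame.
plus_one = wild_type[:3] + "C" + wild_type[3:]
print(codons(plus_one))    # ['ATG', 'CGC', 'CTT', 'AGG', 'CTG']

# A nearby deletion (-) of opposite sign restores the frame: only the
# codons between the two mutations differ from wild type.
suppressed = plus_one[:7] + plus_one[8:]
print(codons(suppressed))  # ['ATG', 'CGC', 'CTA', 'GGC', 'TGA']
```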
Intergenic (also known as extragenic ) suppression relieves the effects of a mutation in one gene by a mutation somewhere else within the genome . The second mutation is not on the same gene as the original mutation. [ 2 ] Intergenic suppression is useful for identifying and studying interactions between molecules, such as proteins . For example, a mutation which disrupts the complementary interaction between protein molecules may be compensated for by a second mutation elsewhere in the genome that restores or provides a suitable alternative interaction between those molecules. Several proteins of biochemical, signal transduction , and gene expression pathways have been identified using this approach. Examples of such pathways include receptor-ligand interactions as well as the interaction of components involved in DNA replication , transcription , and translation . [ 1 ]
Intergenic suppressor mutations are also likely to persist in a population. When compensatory mutations become established in organisms such as E. coli because a drug is present and selecting for resistance, and drug usage is then halted, the resistant strains do not easily evolve back into drug-sensitive strains. [ 3 ] Losing the compensatory mutations would greatly decrease fitness, so reversion must pass through low-fitness intermediate strains. These intermediate strains are subject to bottlenecking, which makes it difficult for the alleles to revert to their state prior to intergenic suppression. Consequently, when drug use is halted, these mutations are likely to persist in the population.
Suppressor mutations also occur in genes that code for virus structural proteins. To create a viable phage T4 virus, a balance of structural components is required. An amber mutant of phage T4 contains a mutation that changes a codon for an amino acid in a protein to the nonsense stop codon TAG (see stop codon and nonsense mutation ). If, upon infection, an amber mutant defective in a gene encoding a needed structural component of phage T4 is weakly suppressed (in an E. coli host containing a specific altered tRNA – see nonsense suppressor ), it will produce a reduced number of the needed structural component. As a consequence, few if any viable phage are formed. However, it was found that viable phage could sometimes be produced in the host with the weak nonsense suppressor if a second amber mutation in a gene that encodes another structural protein is also present in the phage genome. [ 4 ] It was found that the reason the second amber mutation could suppress the first one is that the two numerically reduced structural proteins would now be in balance. For instance, if the first amber mutation caused a reduction of tail fibers to one tenth the normal level, most phage particles produced would have insufficient tail fibers to be infective. However, if a second amber mutation is defective in a base plate component and causes one tenth the number of base plates to be made, this may restore the balance of tail fibers and base plates, and thus allow infective phage to be produced. [ 4 ]
In microbial genetics , a revertant is a mutant that has reverted to its former genotype or to the original phenotype by means of a suppressor mutation, or else by compensatory mutation somewhere in the gene (second site reversion). | https://en.wikipedia.org/wiki/Suppressor_mutation |
Suppressor of cytokine signaling 1 is a protein that in humans is encoded by the SOCS1 gene . [ 5 ] [ 6 ] SOCS1 orthologs [ 7 ] have been identified in several mammals for which complete genome data are available.
This gene encodes a member of the STAT -induced STAT inhibitor (SSI), also known as suppressor of cytokine signalling (SOCS), family. SSI family members are cytokine-inducible negative regulators of cytokine signaling. The expression of this gene can be induced by a subset of cytokines, including IL2 , IL3 , erythropoietin (EPO), GM-CSF , and interferon-gamma (IFN-γ). The protein encoded by this gene functions downstream of cytokine receptors, and takes part in a negative feedback loop to attenuate cytokine signaling. Knockout studies in mice suggested the role of this gene as a modulator of IFN-γ action, which is required for normal postnatal growth and survival. [ 8 ]
Several recent viral studies have shown that viral gene products, such as Tax, encoded by HTLV-1, can hijack SOCS1 to inhibit host antiviral pathways as a strategy to evade host immunity. [ 9 ]
The suppressor of cytokine signaling 1 protein has been shown to interact with several other proteins.
This article incorporates text from the United States National Library of Medicine , which is in the public domain . | https://en.wikipedia.org/wiki/Suppressor_of_cytokine_signaling_1 |
SOCS ( suppressor of cytokine signaling proteins ) refers to a family of genes involved in inhibiting the JAK-STAT signaling pathway .
All SOCS have certain structures in common. This includes a varying N-terminal domain involved in protein-protein interactions, a central SH2 domain, which can bind to molecules that have been phosphorylated by tyrosine kinases, and a SOCS box located at the C-terminal that enables recruitment of E3 ligases and ubiquitin signaling molecules. [ 1 ]
The first protein to be classified as a suppressor of cytokine signaling, CIS (cytokine-inducible SH2), was discovered in 1995, when it was found to have a unique ability to regulate cytokine signal transduction. [ 2 ]
SOCS are negative regulators of the JAK-STAT signaling pathway. SOCS have also been implicated in the regulation of cytokines, growth factors, and tumor suppression. [ 3 ]
It has been suggested that SOCS can help prevent cytokine-mediated apoptosis in diabetes through negative regulation of pro-inflammatory cytokines secreted by immune cells, such as IFNγ, TNFα and IL-15. Improper functioning of one specific SOCS, SOCS3, may lead to type 2 diabetes, as SOCS3 has been found to play an important role in proper leptin signaling. [ 4 ]
| https://en.wikipedia.org/wiki/Suppressor_of_cytokine_signalling |
Supra-arcade downflows ( SADs ) are sunward-traveling plasma voids that are sometimes observed in the Sun 's outer atmosphere , or corona , during solar flares . In solar physics , arcade refers to a bundle of coronal loops , and the prefix supra indicates that the downflows appear above flare arcades. They were first described in 1999 using the Soft X-ray Telescope (SXT) on board the Yohkoh satellite. [ 1 ] SADs are byproducts of the magnetic reconnection process that drives solar flares, but their precise cause remains unknown.
SADs are dark, finger-like plasma voids that are sometimes observed descending through the hot, dense plasma above bright coronal loop arcades during solar flares . They were first reported for a flare and associated coronal mass ejection that occurred on January 20, 1999, and was observed by the SXT onboard Yohkoh . [ 1 ] SADs are sometimes referred to as “ tadpoles ” for their shape and have since been identified in many other events (e.g. [ 2 ] [ 3 ] [ 4 ] [ 5 ] ). They tend to be most easily observed in the decay phases of long-duration flares , [ 2 ] when sufficient plasma has accumulated above the flare arcade to make SADs visible, but they do begin earlier during the rise phase. [ 6 ] In addition to the SAD voids, there are related structures known as supra-arcade downflowing loops (SADLs). SADLs are retracting (shrinking) coronal loops that form as the overlying magnetic field is reconfigured during the flare . SADs and SADLs are thought to be manifestations of the same process viewed from different angles, such that SADLs are observed if the viewer's perspective is along the axis of the arcade (i.e. through the arch), while SADs are observed if the perspective is perpendicular to the arcade axis. [ 7 ] [ 8 ]
SADs typically begin 100–200 Mm above the photosphere and descend 20–50 Mm before dissipating near the top of the flare arcade after a few minutes . [ 7 ] [ 9 ] Sunward speeds generally fall between 50 and 500 km s −1 [ 2 ] [ 7 ] but may occasionally approach 1000 km s −1 . [ 7 ] [ 10 ] As they fall, the downflows decelerate at rates of 0.1 to 2 km s −2 . [ 7 ] SADs appear dark because they are considerably less dense than the surrounding plasma , [ 3 ] while their temperatures (100,000 to 10,000,000 K ) do not differ significantly from their surroundings. [ 11 ] Their cross-sectional areas range from a few million to 70 million km 2 [ 7 ] (for comparison, the cross-sectional area of the Moon is 9.5 million km 2 ).
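These figures are mutually consistent under simple constant-deceleration kinematics. The values below are illustrative picks from the ranges quoted above, not measurements of any particular event; a short Python sketch recovers the quoted descent time of a few minutes:

```python
import math

v0 = 300.0     # initial sunward speed, km/s (quoted range: 50-500)
a = 1.0        # deceleration, km/s^2 (quoted range: 0.1-2)
s = 30_000.0   # descent distance, km, i.e. 30 Mm (quoted range: 20-50 Mm)

# Solve s = v0*t - 0.5*a*t^2 for the smaller root: the time to cover s.
t = (v0 - math.sqrt(v0**2 - 2 * a * s)) / a
print(f"descent time ~{t:.0f} s (~{t / 60:.1f} min), "
      f"final speed ~{v0 - a * t:.0f} km/s")
# descent time ~127 s (~2.1 min), final speed ~173 km/s
```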
SADs are typically observed using soft X-ray and Extreme Ultraviolet (EUV) telescopes that cover a wavelength range of roughly 10 to 1500 Angstroms (Å) and are sensitive to the high-temperature (100,000 to 10,000,000 K ) coronal plasma through which the downflows move. These emissions are blocked by Earth's atmosphere , so observations are made using space observatories . The first detection was made by the Soft X-ray Telescope (SXT) onboard Yohkoh (1991–2001). [ 1 ] Observations soon followed from the Transition Region and Coronal Explorer (TRACE, 1998–2010), an EUV imaging satellite, and the spectroscopic SUMER instrument on board the Solar and Heliospheric Observatory (SOHO, 1995–2016). [ 3 ] [ 4 ] More recently, studies on SADs have used data from the X-Ray Telescope (XRT) onboard Hinode (2006–present) and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO, 2010–present). [ 11 ] In addition to EUV and X-ray instruments, SADs may also be seen by white light coronagraphs such as the Large Angle and Spectrometric Coronagraph (LASCO) onboard SOHO , [ 12 ] though these observations are less common.
SADs are widely accepted to be byproducts of magnetic reconnection , the physical process that drives solar flares by releasing energy stored in the Sun's magnetic field . Reconnection reconfigures the local magnetic field surrounding the flare site from a higher-energy (non-potential, stressed ) state to a lower-energy ( potential ) state. This process is facilitated by the development of a current sheet , often preceded by or in tandem with a coronal mass ejection . As the field is being reconfigured, newly formed magnetic field lines are swept away from the reconnection site, producing outflows both toward and away from the solar surface , respectively referred to as downflows and upflows. SADs are believed to be related to reconnection downflows that perturb the hot, dense plasma that collects above flare arcades, [ 4 ] but precisely how SADs form is uncertain and is an area of active research.
SADs were first interpreted as cross sections of magnetic flux tubes , which comprise coronal loops , that retract down due to magnetic tension after being formed at the reconnection site. [ 1 ] [ 7 ] This interpretation was later revised to suggest that SADs are instead wakes behind much smaller retracting loops (SADLs), [ 8 ] rather than cross sections of the flux tubes themselves. Another possibility, also related to reconnection outflows, is that SADs arise from an instability, such as the Rayleigh-Taylor instability [ 13 ] or a combination of the tearing mode and Kelvin-Helmholtz instabilities. [ 14 ] | https://en.wikipedia.org/wiki/Supra-arcade_downflows |
Supramolecular catalysis refers to an application of supramolecular chemistry , especially molecular recognition and guest binding, toward catalysis. [ 1 ] [ 2 ] The field was originally inspired by enzymatic systems, which, unlike classical organic reactions, utilize non-covalent interactions such as hydrogen bonding, cation-pi interaction, and hydrophobic forces to dramatically accelerate the rate of reaction and/or allow highly selective reactions to occur. Because enzymes are structurally complex and difficult to modify, supramolecular catalysts offer a simpler model for studying the factors involved in the catalytic efficiency of enzymes. [ 1 ] : 1 Another goal that motivates this field is the development of efficient and practical catalysts that may or may not have an enzyme equivalent in nature.
A related field of study is asymmetric catalysis , which requires molecular recognition to differentiate between enantiomeric starting materials. It could be categorized as an area of supramolecular catalysis, but supramolecular catalysis does not necessarily involve asymmetric reactions.
The term supramolecular chemistry was defined by Jean-Marie Lehn as "the chemistry of intermolecular bond, covering structures and functions of the entities formed by association of two or more chemical species" in his Nobel lecture in 1987, [ 5 ] but the concept of supramolecular catalysis began much earlier, in 1946, when Linus Pauling founded the theory of enzymatic catalysis, in which rate acceleration is the result of non-covalent stabilization of the transition state by the enzyme. [ 6 ] Nevertheless, it was not until a few decades later that artificial enzymes were developed. The first simple enzyme mimics were based on crown ethers and cryptands. [ 7 ] In 1976, less than ten years after the discovery of crown ethers, Cram et al. developed a functionalized binaphthyl crown ether that catalyzes transacylation. [ 3 ] The catalyst makes use of the crown ether motif's ability to capture cations, binding the ammonium ion part of the substrate and subsequently employing the nearby thiol motif to cleave the ester.
From the early 1970s, cyclodextrins have been extensively studied for their encapsulation properties and used as binding sites in supramolecular catalysts. [ 2 ] Cyclodextrins have a rigid ring structure, a hydrophilic surface, and a hydrophobic cavity on the inside; therefore, they are capable of binding organic molecules in aqueous solution. In 1978, with the background knowledge that the hydrolysis of m-tert-butylphenyl acetate is accelerated in the presence of 2-benzimidazoleacetic acid and alpha-cyclodextrin, [ 8 ] Breslow et al. developed a catalyst based on a beta-cyclodextrin carrying two imidazole groups. This cyclodextrin catalytic system mimics ribonuclease A in its use of a neutral imidazole and an imidazolium cation to selectively cleave cyclic phosphate substrates. The reaction is accelerated 120-fold, and unlike hydrolysis by the simple base NaOH, which gives a 1:1 mixture of the products, this catalyst yields 99:1 selectivity for one compound. [ 4 ]
The year 1993 witnessed the first self-assembled capsule, [ 9 ] and in 1997 the so-called "tennis ball" structure was used to catalyze a Diels-Alder reaction. [ 10 ] Self-assembled molecules have an advantage over crown ethers and cyclodextrins in that they can capture significantly larger molecules, or even two molecules at the same time. In the following decades, many research groups, such as those of Makoto Fujita, Ken Raymond , and Jonathan Nitschke, developed cage-like catalysts from the same molecular self-assembly principle.
In 2002, Sanders and coworkers published the use of the dynamic combinatorial library technique to construct a receptor, [ 11 ] and in 2003 they employed the technique to develop a catalyst for the Diels-Alder reaction. [ 12 ]
Some common modes of supramolecular catalysis are described below.
A supramolecular host can bind a guest molecule in such a way that the guest's labile group is positioned close to the reactive group of another reactive species. The proximity of the two groups enhances the probability that the reaction will occur, and thus the reaction rate is increased. This concept is similar to the principle of preorganization, which states that complexation is improved if the binding motifs are preorganized in a well-defined position so that the host does not require any major conformational change for complexation. [ 14 ] In this case, the catalyst is preorganized such that no major conformational change is required for the reaction to occur. A notable example of catalysts that employ this mechanism is Jean-Marie Lehn's crown ether. [ 13 ] In addition, catalysts based on functionalized cyclodextrins often employ this mode of catalysis. [ 15 ] : 88
Bimolecular reactions are highly dependent on the concentration of the substrates. Therefore, when a supramolecular container encapsulates both reactants within its small cavity, the effective local concentration of the reactants is increased and, as a result of this entropic effect, the rate of the reaction is accelerated. [ 15 ] : 89 That is to say, an intramolecular reaction is faster than the corresponding intermolecular reaction.
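The size of the effect can be estimated by treating one encapsulated reactant as a one-molecule "solution" in the cavity volume, giving an effective concentration of 1/(N_A · V). The cavity volume and bulk concentration below are representative assumptions, not data for any specific host:

```python
N_A = 6.022e23                      # molecules per mole
cavity_nm3 = 0.5                    # assumed cavity volume, nm^3
cavity_litres = cavity_nm3 * 1e-24  # 1 nm^3 = 1e-24 L

c_eff = 1 / (N_A * cavity_litres)   # effective concentration, mol/L
c_bulk = 1e-3                       # typical bulk substrate concentration, M

print(f"effective concentration ~{c_eff:.1f} M")        # ~3.3 M
print(f"vs. 1 mM bulk: factor ~{c_eff / c_bulk:.0f}x")  # ~3300x
```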
Although a large increase in effective concentration is achieved, molecules that employ this mode of catalysis show tiny rate accelerations compared to those of enzymes. A proposed explanation is that in a container the substrates are not bound as tightly as in an enzyme. The reagents have room to wiggle in the cavity, so the entropic effect might not be as important. Even in the case of enzymes, computational studies have shown that the entropic effect might be overestimated. [ 16 ]
Examples of molecules that work via this mechanism are Rebek's tennis ball and Fujita's octahedral complex. [ 10 ] [ 17 ]
Supramolecular catalysts can accelerate reactions not only by placing the two reactants in close proximity but also by stabilizing the transition state of the reaction and reducing the activation energy . [ 15 ] : 89 While this fundamental principle of catalysis is common in small-molecule and heterogeneous catalysts, supramolecular catalysts have a difficult time utilizing the concept because of their often rigid structures. Unlike enzymes, which can change shape to accommodate their substrates, supramolecules do not have that kind of flexibility and so rarely achieve the sub-angstrom adjustment required for perfect transition state stabilization. [ 1 ] : 2
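Transition-state theory makes the connection between stabilization energy and rate quantitative: k_cat/k_uncat = exp(ΔΔG‡/RT). A quick estimate (the standard relation; the energy values are illustrative):

```python
import math

R, T = 8.314, 298.0   # gas constant J/(mol K), temperature K

def enhancement(ddG_kJ):
    """Rate enhancement from stabilizing the transition state by ddG (kJ/mol)."""
    return math.exp(ddG_kJ * 1000 / (R * T))

print(f"10 kJ/mol of stabilization -> ~{enhancement(10):.0f}x faster")  # ~57x
# Conversely, the ~10^6-fold acceleration of the Nazarov cyclization host
# discussed below corresponds to R*T*ln(1e6):
print(f"10^6-fold -> ~{R * T * math.log(1e6) / 1000:.0f} kJ/mol")       # ~34 kJ/mol
```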
An example of catalysts of this type is Sanders' porphyrin trimer. A Diels-Alder reaction between two pyridine-functionalized substrates normally yields a mixture of endo and exo products. In the presence of the two catalysts, however, complete endo selectivity or exo selectivity can be obtained. The underlying cause of the selectivity is the coordination interaction between the pyridines and the zinc ions on the porphyrins. Depending on the shape of the catalyst, one product is preferred over the other. [ 18 ]
The traditional approach to supramolecular catalysts focuses on the design of a macromolecular receptor with appropriately placed catalytic functional groups. These catalysts are often inspired by the structure of enzymes, with the catalytic groups mimicking reactive amino acid residues, but unlike real enzymes, the binding sites of these catalysts are rigid structures made from chemical building blocks. [ 19 ] All of the examples in this article were developed via the design approach.
Jeremy Sanders pointed out that the design approach has not been successful and has produced very few efficient catalysts because of the rigidity of supramolecules. He argued that rigid molecules with a slight mismatch to the transition state cannot be efficient catalysts. Rather than investing so much synthetic effort in one rigid molecule whose geometry cannot be fixed to the sub-angstrom precision required for good stabilization, Sanders suggested the use of many small flexible building blocks with competing weak interactions, so that the catalyst can adjust its structure to accommodate the substrate. [ 20 ] There is a direct trade-off between the enthalpic benefit of a flexible structure and the entropic benefit of a rigid structure. [ 1 ] : 3 A flexible structure could perhaps bind the transition state better, but it allows more room for the substrates to move and vibrate. In the past, most supramolecular chemists preferred to build rigid structures out of fear of the entropic cost. [ 20 ]
This problem could perhaps be mended by Baker and Houk 's "inside-out approach", which allows systematic de novo enzyme development. [ 21 ] This computational method starts simply with a predicted transition state structure and slowly builds outward by optimizing the arrangement of functional groups to stabilize the transition state. It then fills out the remainder of the active site and, finally, generates an entire protein scaffold that can contain the designed active site. This method could potentially be applied to supramolecular catalysis, although a plethora of chemical building blocks could easily overwhelm a computational model intended to work with 20 amino acids.
Assuming that catalytic activity largely depends on the catalyst's affinity for the transition state, one could synthesize a transition state analog (TSA), a structure that resembles the transition state of the reaction. One could then link the TSA to a solid support or an identifiable tag and use the TSA to select an optimal catalyst from a mixture of many different potential catalysts generated chemically or biologically by diversity-oriented synthesis . This method allows quick screening of a library of diverse compounds. It does not require as much synthetic effort, and it allows various catalytic factors to be studied simultaneously. Hence the method could potentially yield an efficient catalyst that could not have been designed from current knowledge. [ 19 ]
Many catalytic antibodies were developed and studied using this approach.
A problem with the transition state analog selection approach is that catalytic activity is not a screening criterion. TSAs do not necessarily represent real transition states, and so a catalyst obtained from screening may simply be the best receptor for the TSA rather than the best catalyst. To circumvent this problem, catalytic activity needs to be measured directly and quickly. To develop a high-throughput screen , substrates can be designed to change color or release a fluorescent product upon reaction. For example, Crabtree and coworkers utilized this method in screening for hydrosilylation catalysts for alkenes and imines. [ 22 ] Unfortunately, the prerequisite for such substrates narrows the range of reactions that can be studied. [ 19 ]
In contrast to traditional combinatorial synthesis, where a library of catalysts is first generated and later screened (as in the two approaches above), the dynamic combinatorial library approach utilizes a mixture of multicomponent building blocks that reversibly form a library of catalysts. Without a template, the library consists of a roughly equal mixture of different combinations of building blocks. In the presence of a template, which is either a starting material or a TSA, the combination that provides the best binding to the template is thermodynamically favored, and thus that combination becomes more prevalent than the other library members. The biased ratio of the desired catalyst to the other combinatorial products can then be frozen by terminating the reversibility of the equilibrium, by means such as a change in temperature, pH, or radiation, to yield the optimal catalyst. [ 19 ] For example, Lehn et al. used this method to create a dynamic combinatorial library of imine inhibitors from a set of amines and a set of aldehydes. After some time, the equilibrium was terminated by an addition of NaBH 3 CN to afford the desired inhibitor. [ 23 ]
In nature, pyruvate oxidase employs two cofactors, thiamine pyrophosphate (ThDP) and flavin adenine dinucleotide (FAD), to catalyze the conversion of pyruvate to acetyl phosphate. First, ThDP mediates the decarboxylation of pyruvate, generating an active aldehyde as the product. The aldehyde is then oxidized by FAD and is subsequently attacked by phosphate to yield acetyl phosphate.
This biological system inspired the design of a supramolecular catalyst based on a cyclophane . The catalyst holds a thiazolium ion (the reactive part of ThDP) and a flavin (the bare-bones core of FAD) in close proximity near the substrate binding site. The catalytic cycle is almost the same as that in nature, except that the substrate is an aromatic aldehyde rather than pyruvate. First, the catalyst binds the substrate within its cyclophane ring. Then, the thiazolium ion condenses with the substrate, generating an active aldehyde. This aldehyde is oxidized by the flavin and then attacked by methanol to yield a methyl ester. [ 24 ]
Processive enzymes are proteins that catalyze consecutive reactions without releasing their substrate. An example of a processive enzyme is RNA polymerase , which binds to a DNA strand and repeatedly catalyzes nucleotide transfers, effectively synthesizing the corresponding RNA strand.
An artificial processive enzyme has been designed in the form of a manganese porphyrin rotaxane that threads along a long alkene polymer and catalyzes multiple rounds of alkene epoxidation. The manganese(III) ion in the porphyrin is the molecule's catalytic center, capable of epoxidation in the presence of an oxygen donor and an activating ligand. With a small ligand such as pyridine, which binds manganese from inside the cavity of the rotaxane, epoxidation happens on the outside of the catalyst. With a large bulky ligand such as tert-butyl pyridine, which does not fit inside the cavity, epoxidation happens on the inside of the catalyst. [ 25 ]
A supramolecular host M 4 L 6 (four gallium ions and six ligands per complex) self-assembles via metal-ligand interactions in aqueous solution. This container molecule is polyanionic, and thus its tetrahedron-shaped cavity is capable of encapsulating and stabilizing cationic molecules. Consequently, an encapsulated molecule can be easily protonated, because the resulting carbocation is stabilized by the surrounding anions. This container assists in acid-catalyzed Nazarov cyclizations. The catalyst accelerates the reaction by over one million fold, making it the most efficient supramolecular catalyst to date. It was proposed that such high catalytic activity arises not just from the increased basicity of the encapsulated substrate but also from the constrictive binding that stabilizes the transition state of the cyclization. Unfortunately, this catalyst suffers from product inhibition . To bypass that problem, the product of the cyclization reaction can be reacted with a dienophile, transforming it into a Diels-Alder adduct that no longer fits inside the catalyst cavity. [ 26 ]
In this case, the supramolecular host was initially designed to simply capture cationic guests. Almost a decade later, it was exploited as a catalyst for Nazarov cyclization.
Fujita and coworkers discovered a self-assembled M 6 L 4 (six palladium ions and four ligands per complex) supramolecular container that can be elaborated into a chiral supramolecule by the addition of a peripheral chiral auxiliary. In this case, the auxiliary, diethyldiaminocyclohexane, does not directly activate the catalytic site but induces a slight deformation of the triazine plane to create a chiral cavity inside the container molecule. This container can then be used to asymmetrically catalyze a [2+2] photoaddition of maleimide and the inert aromatic compound fluoranthene, which previously had not been shown to undergo thermal or photochemical pericyclic reactions. The catalyst yields an enantiomeric excess of 40%. [ 27 ]
Enzymes also inspired a set of confined Brønsted acids with an extremely sterically demanding chiral pocket based on a C 2 -symmetric bis(binaphthyl) imidodiphosphoric acid. Within the chiral microenvironment, the catalyst has a geometrically fixed bifunctional active site that activates both the electrophilic part and the nucleophilic part of a substrate. This catalyst enables stereoselective spiroacetal formation with high enantiomeric excess for a variety of substrates. [ 28 ]
Supramolecular containers have applications not only in catalysis but also in its opposite, inhibition. A container molecule can encapsulate a guest molecule and thereby render the guest unreactive. The mechanism of inhibition may be either that the substrate is completely isolated from the reagent or that the container molecule destabilizes the transition state of the reaction.
Nitschke and coworkers invented a self-assembled M 4 L 6 supramolecular host with a tetrahedral hydrophobic cavity that can encapsulate white phosphorus . Pyrophoric phosphorus, which self-combusts upon contact with air, is rendered air-stable within the cavity. Even though the holes in the cavity are large enough for an oxygen molecule to enter, the transition state of the combustion reaction is too large to fit within the small cage cavity. [ 29 ]
Many decades after its inception, supramolecular chemistry's application in practical catalysis remains elusive. Supramolecular catalysis has not yet made a significant contribution to industrial chemistry or synthetic methodology. [ 20 ] A few problems associated with this field are outlined below.
In many supramolecular catalytic systems designed to work with bimolecular addition reactions such as the Diels-Alder reaction, the product of the reaction binds more strongly to the supramolecular host than the two substrates do, which leads to inhibition by the product. As a result, these catalysts have a turnover number of one and are not truly catalytic: a stoichiometric quantity of the catalyst is needed for full conversion. [ 30 ]
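A toy competitive-binding model shows how quickly such a system stalls. The binding constants below are invented for illustration, and free concentrations are approximated by total concentrations (dilute host):

```python
def substrate_occupancy(K_S, S, K_P, P):
    """Fraction of host cavities holding substrate under competitive binding."""
    return K_S * S / (1 + K_S * S + K_P * P)

K_S, K_P = 1e3, 1e5   # binding constants, M^-1; product binds 100x more strongly
S0 = 1e-3             # initial substrate concentration, M

for conversion in (0.0, 0.1, 0.5, 0.9):
    S, P = S0 * (1 - conversion), S0 * conversion
    print(f"conversion {conversion:.0%}: substrate-bound host fraction "
          f"= {substrate_occupancy(K_S, S, K_P, P):.3f}")
# falls from 0.500 at 0% conversion to ~0.001 at 90% conversion
```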
Most supramolecular catalysts are developed from rigid building blocks, because rigid blocks are less complicated than flexible parts for constructing a desired shape and placing functional groups where the designer wants them. Because of this rigidity, however, a slight mismatch with the transition state inevitably leads to poor stabilization and thus poor catalysis. In nature, enzymes are flexible and can change their structures to bind a transition state better than their native form does. [ 20 ]
Syntheses of large complex catalysts are time- and resource-consuming. An unexpected deviation from the design can be disastrous. Once a catalyst is discovered, modification for further adjustment can be so synthetically challenging that it is easier to study the poor catalyst than to improve it. [ 20 ] | https://en.wikipedia.org/wiki/Supramolecular_catalysis |
Supramolecular chemistry refers to the branch of chemistry concerning chemical systems composed of a discrete number of molecules . The strength of the forces responsible for spatial organization of the system ranges from weak intermolecular forces , electrostatic charge , or hydrogen bonding to strong covalent bonding , provided that the electronic coupling strength remains small relative to the energy parameters of the component. [ 1 ] [ 2 ] [ page needed ] While traditional chemistry concentrates on the covalent bond, supramolecular chemistry examines the weaker and reversible non-covalent interactions between molecules. [ 3 ] These forces include hydrogen bonding, metal coordination , hydrophobic forces , van der Waals forces , pi–pi interactions and electrostatic effects. [ 4 ] [ 5 ]
Important concepts advanced by supramolecular chemistry include molecular self-assembly , molecular folding , molecular recognition , host–guest chemistry , mechanically-interlocked molecular architectures , and dynamic covalent chemistry . [ 6 ] The study of non-covalent interactions is crucial to understanding many biological processes that rely on these forces for structure and function. Biological systems are often the inspiration for supramolecular research.
The existence of intermolecular forces was first postulated by Johannes Diderik van der Waals in 1873. However, Nobel laureate Hermann Emil Fischer developed supramolecular chemistry's philosophical roots. In 1894, [ 16 ] Fischer suggested that enzyme–substrate interactions take the form of a "lock and key", the fundamental principles of molecular recognition and host–guest chemistry. In the early twentieth century non-covalent bonds were understood in gradually more detail, with the hydrogen bond being described by Latimer and Rodebush in 1920.
With a deeper understanding of non-covalent interactions, exemplified by the clear elucidation of DNA structure, chemists started to emphasize their importance. [ 17 ] In 1967, Charles J. Pedersen discovered crown ethers, which are ring-like structures capable of chelating certain metal ions. Then, in 1969, Jean-Marie Lehn discovered a class of molecules similar to crown ethers, called cryptands. After that, Donald J. Cram synthesized many variations of crown ethers, as well as separate molecules capable of selective interaction with certain chemicals. The three scientists were awarded the Nobel Prize in Chemistry in 1987 for "development and use of molecules with structure-specific interactions of high selectivity". [ 18 ] In 2016, Bernard L. Feringa , Sir J. Fraser Stoddart , and Jean-Pierre Sauvage were awarded the Nobel Prize in Chemistry, "for the design and synthesis of molecular machines ". [ 19 ]
The term supermolecule (or supramolecule ) was introduced by Karl Lothar Wolf et al. ( Übermoleküle ) in 1937 to describe hydrogen-bonded acetic acid dimers . [ 20 ] [ 21 ] The term supermolecule is also used in biochemistry to describe complexes of biomolecules , such as peptides and oligonucleotides composed of multiple strands. [ 22 ]
Eventually, chemists applied these concepts to synthetic systems. One breakthrough came in the 1960s with the synthesis of the crown ethers by Charles J. Pedersen . Following this work, other researchers such as Donald J. Cram , Jean-Marie Lehn and Fritz Vögtle reported a variety of three-dimensional receptors, and throughout the 1980s research in the area gathered a rapid pace with concepts such as mechanically interlocked molecular architectures emerging.
The influence of supramolecular chemistry was established by the 1987 Nobel Prize for Chemistry which was awarded to Donald J. Cram, Jean-Marie Lehn, and Charles J. Pedersen in recognition of their work in this area. [ 23 ] The development of selective "host–guest" complexes in particular, in which a host molecule recognizes and selectively binds a certain guest, was cited as an important contribution.
Molecular self-assembly is the construction of systems without guidance or management from an outside source (other than to provide a suitable environment). The molecules are directed to assemble through non-covalent interactions. Self-assembly may be subdivided into intermolecular self-assembly (to form a supramolecular assembly ), and intramolecular self-assembly (or folding as demonstrated by foldamers and polypeptides). Molecular self-assembly also allows the construction of larger structures such as micelles , membranes , vesicles , liquid crystals , and is important to crystal engineering . [ 24 ]
Molecular recognition is the specific binding of a guest molecule to a complementary host molecule to form a host–guest complex. Often, the definition of which species is the "host" and which is the "guest" is arbitrary. The molecules are able to identify each other using non-covalent interactions. Key applications of this field are the construction of molecular sensors and catalysis . [ 25 ] [ 26 ] [ 27 ] [ 28 ]
Molecular recognition and self-assembly may be used with reactive species in order to pre-organize a system for a chemical reaction (to form one or more covalent bonds). It may be considered a special case of supramolecular catalysis . Non-covalent bonds between the reactants and a "template" hold the reactive sites of the reactants close together, facilitating the desired chemistry. This technique is particularly useful for situations where the desired reaction conformation is thermodynamically or kinetically unlikely, such as in the preparation of large macrocycles. This pre-organization also serves purposes such as minimizing side reactions, lowering the activation energy of the reaction, and producing desired stereochemistry . After the reaction has taken place, the template may remain in place, be forcibly removed, or may be "automatically" decomplexed on account of the different recognition properties of the reaction product. The template may be as simple as a single metal ion or may be extremely complex. [ citation needed ]
Mechanically interlocked molecular architectures consist of molecules that are linked only as a consequence of their topology. Some non-covalent interactions may exist between the different components (often those that were used in the construction of the system), but covalent bonds do not. Supramolecular chemistry, and template-directed synthesis in particular, is key to the efficient synthesis of these compounds. Examples of mechanically interlocked molecular architectures include catenanes , rotaxanes , molecular knots , molecular Borromean rings , [ 29 ] 2D [c2]daisy chain polymers [ 30 ] and ravels. [ 31 ]
In dynamic covalent chemistry covalent bonds are broken and formed in a reversible reaction under thermodynamic control. While covalent bonds are key to the process, the system is directed by non-covalent forces to form the lowest energy structures. [ 32 ]
Many synthetic supramolecular systems are designed to copy functions of biological systems. These biomimetic architectures can be used to learn about both the biological model and the synthetic implementation. Examples include photoelectrochemical systems, catalytic systems, protein design and self-replication . [ 33 ]
Molecular imprinting describes a process by which a host is constructed from small molecules using a suitable molecular species as a template. After construction, the template is removed leaving only the host. The template for host construction may be subtly different from the guest that the finished host binds to. In its simplest form, imprinting uses only steric interactions, but more complex systems also incorporate hydrogen bonding and other interactions to improve binding strength and specificity. [ 34 ]
Molecular machines are molecules or molecular assemblies that can perform functions such as linear or rotational movement, switching, and entrapment. These devices exist at the boundary between supramolecular chemistry and nanotechnology , and prototypes have been demonstrated using supramolecular concepts. [ 35 ] Jean-Pierre Sauvage , Sir J. Fraser Stoddart and Bernard L. Feringa shared the 2016 Nobel Prize in Chemistry for the 'design and synthesis of molecular machines'. [ 36 ]
Supramolecular systems are rarely designed from first principles. Rather, chemists have a range of well-studied structural and functional building blocks that they are able to use to build up larger functional architectures. Many of these exist as whole families of similar units, from which the analog with the exact desired properties can be chosen.
Macrocycles are very useful in supramolecular chemistry, as they provide whole cavities that can completely surround guest molecules and may be chemically modified to fine-tune their properties.
Many supramolecular systems require their components to have suitable spacing and conformations relative to each other, and therefore easily employed structural units are required. [ 39 ]
Supramolecular chemistry has found many applications, [ 41 ] in particular molecular self-assembly processes have been applied to the development of new materials. Large structures can be readily accessed using bottom-up synthesis as they are composed of small molecules requiring fewer steps to synthesize. Thus most of the bottom-up approaches to nanotechnology are based on supramolecular chemistry. [ 42 ] Many smart materials [ 43 ] are based on molecular recognition. [ 44 ]
A major application of supramolecular chemistry is the design and understanding of catalysts and catalysis. Non-covalent interactions influence the binding of reactants. [ 45 ]
Design based on supramolecular chemistry has led to numerous applications in the creation of functional biomaterials and therapeutics. [ 46 ] Supramolecular biomaterials afford a number of modular and generalizable platforms with tunable mechanical, chemical and biological properties. These include systems based on supramolecular assembly of peptides, host–guest macrocycles, high-affinity hydrogen bonding, and metal–ligand interactions.
A supramolecular approach has been used extensively to create artificial ion channels for the transport of sodium and potassium ions into and out of cells. [ 47 ]
Supramolecular chemistry is also important to the development of new pharmaceutical therapies by understanding the interactions at a drug binding site. The area of drug delivery has also made critical advances as a result of supramolecular chemistry providing encapsulation and targeted release mechanisms. [ 48 ] In addition, supramolecular systems have been designed to disrupt protein–protein interactions that are important to cellular function. [ 49 ]
Supramolecular chemistry has been used to demonstrate computational functions on a molecular scale. In many cases, photonic or chemical signals have been used in these components, but electrical interfacing of these units has also been shown by supramolecular signal transduction devices. Data storage has been accomplished by the use of molecular switches with photochromic and photoisomerizable units, by electrochromic and redox -switchable units, and even by molecular motion. Synthetic molecular logic gates have been demonstrated on a conceptual level. Even full-scale computations have been achieved by semi-synthetic DNA computers . | https://en.wikipedia.org/wiki/Supramolecular_chemistry |
In chemistry , the term supramolecular chirality is used to describe supramolecular assemblies that are non-superposable on their mirror images.
Chirality in supramolecular chemistry implies the non-symmetric arrangement of molecular components in a non-covalent assembly. Chirality may arise in a supramolecular system if one of its components is chiral or if achiral components arrange in a non-symmetrical way to produce a supermolecule that is chiral. [ 1 ]
| https://en.wikipedia.org/wiki/Supramolecular_chirality |
Supramolecular electronics is the experimental field of supramolecular chemistry that bridges the gap between molecular electronics and bulk plastics in the construction of electronic circuitry at the nanoscale . [ 1 ] In supramolecular electronics, assemblies of pi-conjugated systems on the 5 to 100 nanometer scale are prepared by molecular self-assembly with the aim of fitting these structures between electrodes . With single molecules, as researched in molecular electronics at the 5 nanometer scale, this would be impractical. [ why? ] Nanofibers can be prepared from polymers such as polyaniline and polyacetylene . [ 2 ] Chiral oligo(p-phenylenevinylene)s self-assemble in a controlled fashion into (helical) wires. [ 3 ] An example of actively researched compounds in this field are certain coronenes .
| https://en.wikipedia.org/wiki/Supramolecular_electronics |
Supramolecular polymers are a subset of polymers where the monomeric units are connected by reversible and highly directional secondary interactions–that is, non-covalent bonds . These non-covalent interactions include van der Waals interactions, hydrogen bonding , Coulomb or ionic interactions , π-π stacking , metal coordination , halogen bonding , chalcogen bonding , and host–guest interaction . [ 1 ] Their behavior can be described by the theories of polymer physics in dilute and concentrated solution , as well as in the bulk. [ 2 ]
Additionally, some supramolecular polymers have distinctive characteristics, [ 3 ] [ 4 ] [ 5 ] such as the ability to self-heal . Covalent polymers can be difficult to recycle, but supramolecular polymers may address this problem. [ 6 ] [ 7 ] [ 8 ]
The origins of the field of supramolecular polymers can be traced to dye aggregates and host–guest complexes. [ 9 ] In the early 20th century, it was noticed that dyes aggregate via "a special kind of polymerization". In 1988, Takuzo Aida , a Japanese polymer chemist, reported the concept of cofacial assembly wherein amphiphilic porphyrin monomers are connected via van der Waals interaction, forming one-dimensional architectures in solution, which can be considered a prototype of supramolecular polymers. [ 10 ] Soon thereafter, one-dimensional aggregates were described based on hydrogen bonding interaction in the crystalline state. [ 11 ] With a different strategy using hydrogen bonds, Jean M. J. Fréchet showed in 1989 that mesogenic molecules with carboxylic acid and pyridyl motifs, upon mixing in bulk, heterotropically dimerize to form a stable liquid crystalline structure. [ 12 ] In 1990, Jean-Marie Lehn showed that this strategy can be expanded to form a new category of polymers, which he called "liquid crystalline supramolecular polymer", using complementary triple hydrogen bonding motifs in bulk. [ 13 ] In 1993, M. Reza Ghadiri reported a nanotubular supramolecular polymer in which a β-sheet-forming macrocyclic peptide monomer assembled via multiple hydrogen bonding between adjacent macrocycles. [ 14 ] In 1994, Anselm C. Griffin showed an amorphous supramolecular material using a single hydrogen bond between homotropic molecules having carboxylic acid and pyridine termini. [ 15 ] The idea of making mechanically strong polymeric materials by 1D supramolecular association of small molecules requires a high association constant between the repeating building blocks. In 1997, E.W. "Bert" Meijer reported a telechelic monomer with ureidopyrimidinone termini as a "self-complementary" quadruple hydrogen bonding motif and demonstrated that the resulting supramolecular polymer in chloroform shows a temperature-dependent viscoelastic property in solution. [ 16 ] This was the first demonstration that supramolecular polymers, when sufficiently mechanically robust, are physically entangled in solution.
Monomers undergoing supramolecular polymerization are considered to be in equilibrium with the growing polymers, and thermodynamic factors therefore dominate the system. [ 17 ] However, when the constituent monomers are connected via strong and multivalent interactions, a " metastable " kinetic state can dominate the polymerization. Externally supplied energy, in the form of heat in most cases, can transform the "metastable" state into a thermodynamically stable polymer. A clear understanding of the multiple pathways that exist in supramolecular polymerization is still under debate; however, the concept of "pathway complexity", introduced by E.W. "Bert" Meijer , has shed light on the kinetic behavior of supramolecular polymerization. [ 18 ] Many researchers have since expanded the scope of "pathway complexity" because it can produce a variety of interesting assembled structures from the same monomeric units. Along this line of kinetically controlled processes, supramolecular polymers having "stimuli-responsive" [ 19 ] and "thermally bisignate" characteristics are also possible. [ 20 ]
In conventional covalent polymerization, two models based on step-growth and chain-growth mechanisms are operative. Nowadays, a similar subdivision applies to supramolecular polymerization: the isodesmic or equal-K model (step-growth mechanism) and the cooperative or nucleation–elongation model (chain-growth mechanism). A third category is seeded supramolecular polymerization, which can be considered a special case of the chain-growth mechanism.
The supramolecular equivalent of the step-growth mechanism is commonly known as the isodesmic or equal-K model (K represents the total binding interaction between two neighboring monomers). In isodesmic supramolecular polymerization, no critical temperature or concentration of monomers is required for the polymerization to occur, and the association constant between polymer and monomer is independent of the polymer chain length. Instead, the length of the supramolecular polymer chains rises as the concentration of monomers in the solution increases, or as the temperature decreases. In conventional polycondensation, the association constant is usually large, which leads to a high degree of polymerization; however, a byproduct is generated. In isodesmic supramolecular polymerization, because the bonding is non-covalent, the association between monomeric units is weak, and the degree of polymerization strongly depends on the strength of interaction, i.e., multivalent interaction between monomeric units. For instance, supramolecular polymers consisting of bifunctional monomers having a single hydrogen bonding donor/acceptor at their termini usually end up with a low degree of polymerization, whereas those with quadruple hydrogen bonding, as in the case of ureidopyrimidinone motifs, result in a high degree of polymerization. In ureidopyrimidinone-based supramolecular polymers, the experimentally observed molecular weight at semi-dilute concentrations is on the order of 10⁶ daltons, and the molecular weight of the polymer can be controlled by adding mono-functional chain-cappers.
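In the equal-K model this dependence can be made quantitative: the number-average degree of polymerization DP_N satisfies K·c_total = DP_N(DP_N − 1). The following minimal sketch uses illustrative association constants (not literature values) to show why a single hydrogen bond yields short chains while a quadruple hydrogen-bonding motif yields long ones:

```python
from math import sqrt

def isodesmic_dp(K, c_total):
    """Number-average degree of polymerization in the isodesmic
    (equal-K) model, where every monomer addition shares the same
    association constant K (in M^-1). From K*c_total = DP*(DP - 1):
    DP = (1 + sqrt(1 + 4*K*c_total)) / 2."""
    return 0.5 * (1.0 + sqrt(1.0 + 4.0 * K * c_total))

# Hypothetical constants at 1 mM total monomer: a weak single
# hydrogen bond versus a strong quadruple hydrogen-bonding motif.
for K in (1e2, 1e7):
    print(f"K = {K:.0e} M^-1 -> DP_N = {isodesmic_dp(K, 1e-3):.1f}")
# K = 1e+02 M^-1 -> DP_N = 1.1   (mostly monomers and dimers)
# K = 1e+07 M^-1 -> DP_N = 100.5 (long chains)
```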
Conventional chain-growth polymerization involves at least two phases, initiation and propagation, while in some cases termination and chain-transfer phases also occur. Chain-growth supramolecular polymerization in a broad sense involves two distinct phases: a less favored nucleation and a favored propagation. In this mechanism, after the formation of a nucleus of a certain size, the association constant increases, further monomer addition becomes more favored, and polymer growth is initiated. Long polymer chains will form only above a minimum concentration of monomer and below a certain temperature. However, to realize a covalent analogue of chain-growth supramolecular polymerization, a challenging prerequisite is the design of appropriate monomers that can polymerize only by the action of initiators. Recently, an example of chain-growth supramolecular polymerization with "living" characteristics has been demonstrated. [ 21 ] In this case, a bowl-shaped monomer with amide-appended side chains forms a kinetically favored intramolecular hydrogen bonding network and does not spontaneously undergo supramolecular polymerization at ambient temperatures. [ 22 ] However, an N-methylated version of the monomer serves as an initiator by opening the intramolecular hydrogen bonding network for the supramolecular polymerization, just like ring-opening covalent polymerization. The chain end in this case remains active for further extension of the supramolecular polymer, and hence the chain-growth mechanism allows for precise control of supramolecular polymer materials.
This is a special category of chain-growth supramolecular polymerization, in which the monomer nucleates only in an early stage of polymerization to generate "seeds" and becomes active for polymer chain elongation upon further addition of a new batch of monomer. Secondary nucleation is suppressed in most cases, making it possible to realize a narrow polydispersity of the resulting supramolecular polymer. In 2007, Ian Manners and Mitchell A. Winnik introduced this concept using a polyferrocenyldimethylsilane–polyisoprene diblock copolymer as the monomer, which assembles into cylindrical micelles. [ 23 ] When a fresh feed of the monomer is added to the micellar "seeds" obtained by sonication, the polymerization starts in a living polymerization manner. They named this method crystallization-driven self-assembly (CDSA), and it is applicable to the construction of micron-scale supramolecular anisotropic structures in 1D–3D. A conceptually different seeded supramolecular polymerization was shown by Kazunori Sugiyasu with a porphyrin-based monomer bearing amide-appended long alkyl chains. [ 24 ] At low temperature, this monomer preferentially forms spherical J-aggregates, while at higher temperature it forms fibrous H-aggregates. By adding a sonicated mixture of the J-aggregates ("seeds") into a concentrated solution of the J-aggregate particles, long fibers can be prepared via living seeded supramolecular polymerization. Frank Würthner achieved similar seeded supramolecular polymerization with an amide-functionalized perylene bisimide as the monomer. [ 25 ] Importantly, seeded supramolecular polymerization is also applicable to the preparation of supramolecular block copolymers .
Monomers capable of forming single, double, triple or quadruple hydrogen bonds have been utilized for making supramolecular polymers, and stronger association between monomers is possible when they carry the maximum number of hydrogen bonding donor/acceptor motifs. For instance, the ureidopyrimidinone-based monomer with self-complementary quadruple hydrogen bonding termini polymerized in solution in accordance with the theory of conventional polymers and displayed a distinct viscoelastic nature at ambient temperatures.
Monomers with aromatic motifs such as bis(merocyanine), oligo( para -phenylenevinylene) (OPV), perylene bisimide (PBI) dye, cyanine dye, corannulene and nano-graphene derivatives have been employed to prepare supramolecular polymers. In some cases, hydrogen bonding side chains appended onto the core aromatic motif help to hold the monomer strongly in the supramolecular polymer. A notable system in this category is a nanotubular supramolecular polymer formed by the supramolecular polymerization of amphiphilic hexa- peri -hexabenzocoronene (HBC) derivatives. [ 26 ] Generally, nanotubes are categorized morphologically as 1D objects; however, their walls adopt a 2D geometry and therefore require a different design strategy. [ 27 ] HBC amphiphiles in polar solvents solvophobically assemble into a 2D bilayer membrane, which rolls up into a helical tape or a nanotubular polymer. Conceptually similar amphiphilic designs based on cyanine dye and zinc chlorin dye also polymerize in water, resulting in nanotubular supramolecular polymers. [ 28 ] [ 29 ]
A variety of supramolecular polymers can be synthesized by using monomers with host–guest complementary binding motifs, such as crown ethers /ammonium ions, cucurbiturils / viologens , calixarene /viologens, cyclodextrins / adamantane derivatives, and pillar arene /imidazolium derivatives. [ 30 ] [ 31 ] [ 32 ] When the monomers are "heteroditopic", supramolecular copolymers result, provided the monomers do not homopolymerize. Akira Harada was one of the first to recognize the importance of combining polymers and cyclodextrins. [ 33 ] Feihe Huang showed an example of a supramolecular alternating copolymer from two heteroditopic monomers carrying both crown ether and ammonium ion termini. [ 34 ] Takeharu Haino demonstrated an extreme example of sequence control in a supramolecular copolymer, where three heteroditopic monomers are arranged in an ABC sequence along the copolymer chain. [ 35 ] The design strategy utilizes three distinct binding interactions: ball-and-socket (calix[5]arene/C60), donor–acceptor (bisporphyrin/trinitrofluorenone), and Hamilton's H-bonding interactions; their high orthogonality is the key to forming an ABC supramolecular terpolymer.
Stereochemical information of a chiral monomer can be expressed in a supramolecular polymer. [ 36 ] Helical supramolecular polymers with P- and M-conformations are widely seen, especially those composed of disc-shaped monomers. When the monomers are achiral, both P- and M-helices are formed in equal amounts. When the monomers are chiral, typically due to the presence of one or more stereocenters in the side chains, the diastereomeric relationship between P- and M-helices leads to the preference of one conformation over the other. A typical example is a C 3 -symmetric disk-shaped chiral monomer that forms helical supramolecular polymers via the "majority rule": a slight excess of one enantiomer of the chiral monomer results in a strong bias toward either the right-handed or the left-handed helical geometry at the supramolecular polymer level. [ 37 ] In this case, a characteristic nonlinear dependence of the anisotropic factor, g, on the enantiomeric excess of the chiral monomer can generally be observed. As in small-molecule chiral systems, the chirality of a supramolecular polymer is also affected by chiral solvents. Applications such as catalysis of asymmetric synthesis [ 38 ] and circularly polarized luminescence have also been observed in chiral supramolecular polymers.
A copolymer is formed from more than one monomeric species. Advanced polymerization techniques have been established for the preparation of covalent copolymers, but supramolecular copolymers are still in their infancy and progressing slowly. In recent years, all plausible categories of supramolecular copolymer, such as random, alternating, block, blocky, or periodic, have been demonstrated in a broad sense. [ 39 ]
Supramolecular polymers are the subject of research in academia and industry.
The stability of a supramolecular polymer can be described using the association constant, K ass . When K ass ≤ 10⁴ M⁻¹, the polymeric aggregates are typically small in size and do not show any interesting properties, and when K ass ≥ 10¹⁰ M⁻¹, the supramolecular polymer behaves just like a covalent polymer due to the lack of dynamics. So, an optimum K ass = 10⁴–10¹⁰ M⁻¹ needs to be attained for producing functional supramolecular polymers. The dynamics and stability of supramolecular polymers are often affected by additives (e.g. a co-solvent or chain-capper). When a good solvent, for instance chloroform, is added to a supramolecular polymer in a poor solvent, for instance heptane, the polymer disassembles. However, in some cases, cosolvents contribute to the stabilization or destabilization of the supramolecular polymer. For instance, supramolecular polymerization of a hydrogen bonding porphyrin-based monomer in a hydrocarbon solvent containing a minute amount of a hydrogen bond scavenging alcohol shows distinct pathways, i.e., polymerization favored both by cooling as well as heating, known as "thermally bisignate supramolecular polymerization". In another example, minute amounts of molecularly dissolved water molecules in apolar solvents, like methylcyclohexane, become part of the supramolecular polymer at lower temperatures, due to a specific hydrogen bonding interaction between the monomer and water. [ 40 ]
Supramolecular polymers may be relevant to self-healing materials . [ 41 ] A supramolecular rubber based on vitrimers can self-heal simply by pressing the two broken edges of the material together. [ 42 ] High mechanical strength of a material and self-healing ability are generally mutually exclusive, so a glassy material that can self-heal at room temperature remained a challenge until recently. A supramolecular polymer based on ether– thiourea is mechanically robust (E = 1.4 GPa) but can self-heal at room temperature upon compression of the fractured surfaces. [ 43 ] The invention of self-healable polymer glass overturned the preconception that only soft rubbery materials can heal.
Another strategy uses bivalent poly(isobutylene)s (PIBs) functionalized with barbituric acid at the head and tail. [ 44 ] Multiple hydrogen bonds between the carbonyl and amide groups of barbituric acid enable the formation of a supramolecular network. In this case, small cut PIB-based disks can recover from mechanical damage after several hours of contact at room temperature .
Interactions between catechol and ferric ions give rise to pH -controlled self-healing supramolecular polymers. [ 45 ] The formation of mono-, bis- and triscatechol–Fe 3+ complexes can be manipulated by pH, of which the bis- and triscatechol–Fe 3+ complexes show elastic moduli as well as self-healing capacity. For example, the triscatechol–Fe 3+ complex can restore its cohesiveness and shape after being torn. Chain-folding polyimides and pyrenyl-end-capped chains give rise to supramolecular networks. [ 46 ]
By incorporating electron donors and electron acceptors into the supramolecular polymers, features of artificial photosynthesis can be replicated. [ 47 ] [ 2 ]
DNA is a major example of a supramolecular polymer, [ 48 ] as are proteins. [ 49 ] Much effort has been devoted to related but synthetic materials. [ 50 ] At the same time, their reversible and dynamic nature makes supramolecular polymers bio-degradable , [ 51 ] [ 52 ] which circumvents the hard-to-degrade problem of covalent polymers and makes supramolecular polymers a promising platform for biomedical applications. Being able to degrade in a biological environment greatly lowers the potential toxicity of polymers and therefore enhances the biocompatibility of supramolecular polymers. [ 53 ] [ 54 ]
With their excellent biodegradation and biocompatibility , supramolecular polymers show great potential in the development of drug delivery , gene transfection and other biomedical applications. [ 50 ]
Drug delivery : Multiple cellular stimuli can induce responses in supramolecular polymers. [ 55 ] [ 56 ] [ 50 ] The dynamic molecular skeletons of supramolecular polymers can be depolymerized when exposed to external stimuli like pH in vivo . On the basis of this property, supramolecular polymers are capable of serving as drug carriers. For example, hydrogen bonding between nucleobases can be used to induce self-assembly into pH-sensitive spherical micelles .
Gene transfection : Effective and low-toxicity nonviral cationic vectors are highly desired in the field of gene therapy. [ 50 ] On account of their dynamic and stimuli-responsive properties, supramolecular polymers offer a cogent platform for constructing vectors for gene transfection. By combining a ferrocene dimer with a β- cyclodextrin dimer , a redox-controlled supramolecular polymer system has been proposed as a vector. In COS-7 cells, this supramolecular polymeric vector can release the enclosed DNA upon exposure to hydrogen peroxide and achieve gene transfection. [ 57 ]
Supramolecular polymers can simultaneously meet the requirements of aqueous compatibility, bio-degradability, biocompatibility, stimuli-responsiveness and other strict criteria. [ 50 ] Consequently, supramolecular polymers could be applicable to biomedical fields.
The reversible nature of supramolecular polymers can produce biomaterials that can sense and respond to physiological cues, or that mimic the structural and functional aspects of biological signaling. [ 61 ]
Protein delivery, [ 62 ] [ 63 ] bio-imaging and diagnosis, [ 64 ] [ 65 ] and tissue engineering [ 66 ] [ 67 ] are also well developed. | https://en.wikipedia.org/wiki/Supramolecular_polymer |
Supriatna (born 14 March 1967) is an Indonesian professor of geography and academic administrator at the University of Indonesia. He has been the Director of the University of Indonesia School of Environmental Science since 26 February 2025, and the acting director of the University of Indonesia School of Strategic and Global Studies since 3 March 2025.
Supriatna was born in Sukabumi on 14 March 1967 [ 1 ] as the son of Zaenal Emuch and Yohana Lobo. [ 2 ] He completed his primary education at the 1st Cikole State Primary School in Sukabumi in 1981, followed by secondary education at the 2nd Sukabumi Junior High School in 1983 and the 1st Sukabumi State High School in 1986. He then pursued his higher education at the University of Indonesia, graduating with a Bachelor of Science degree in Geography from the Faculty of Mathematics and Natural Sciences in 1992. He continued his studies at the Bandung Institute of Technology , earning a Master of Engineering degree in Geodesy from the Faculty of Earth Sciences and Technology (FTSP) in 1998. [ 1 ] In 2016, he obtained a doctorate in Environmental Science from the Postgraduate Program in Environmental Science (PSIL) at the University of Indonesia. [ 3 ] In 2024, he also obtained a Professional Geographer certification from the Indonesian Geographers Association (IGI). [ 4 ]
Supriatna began his career as a lecturer in the University of Indonesia Department of Geography, teaching digital cartography and geographic information systems . [ 1 ] He began his academic administrative career as the department's student affairs coordinator in 1998 before being promoted to department secretary a year later. [ 2 ] On 8 April 2004, Supriatna was appointed as the deputy dean for non-academic affairs of the Faculty of Maths and Natural Sciences under dean Adi Basukriadi, [ 5 ] serving in the position until 2014. [ 6 ] On 31 July 2012, Supriatna was appointed as the acting dean of the faculty by rector Gumilar Rusliwa Soemantri, who cited Adi's expired term as his reasoning. In response to the dismissal, Adi and several other deans who were dismissed filed a letter of complaint to education minister Mohammad Nuh, explaining their motion of no confidence against Gumilar. [ 7 ] Adi and the other deans were restored to their positions following a meeting between the university's board of trustees and the minister of education in August that year. [ 8 ]
After serving as deputy dean for ten years, Supriatna was appointed as the head of the Geosciences Study Center in the faculty, serving from 2014 to 2016. He then returned to a structural position at the geography department and became the coordinator of the geographic information system and remote sensing specialization (Ketua KBP SIG dan PJ, Ketua Kelompok Bidang Pembelajaran Sistem Informasi Geografis dan Penginderaan Jauh ) from 2016 to 2017. After briefly serving as the head of the geography master's program in 2017, Supriatna became the head of the applied geography center in the faculty in the same year, serving until 2019. The next year, he was appointed as the chair of the geography department. He was re-appointed as center head in 2019 and as department head in 2022. [ 9 ] From 2022 to 2024, Supriatna was also involved in the Nusantara transition team, serving as the coordinator for mapping and spatial planning. [ 2 ] Supriatna was promoted to the rank of full professor on 1 September 2024. His inaugural speech on 15 January 2025, titled Spatial Modelling for Sustainable Development , discussed the use of various geographic technologies to analyze land cover changes, urbanization dynamics, and ecosystem sustainability, as well as the integration of geography with various disciplines. [ 4 ]
Supriatna was named the Director of the University of Indonesia School of Environmental Science (SIL, Sekolah Ilmu Lingkungan ) on 22 January 2025 after passing a selection process. [ 10 ] He was installed in the position on 26 February. [ 11 ] On 3 March 2025, Supriatna was named the acting director of the University of Indonesia School of Strategic and Global Studies (SKSG, Sekolah Kajian Stratejik dan Global ), [ 12 ] following the removal of the previous director, Athor Subroto, in relation to his involvement in the doctorate promotion of Minister of Energy and Mineral Resources Bahlil Lahadalia . [ 13 ] Supriatna was tasked with restructuring and improving the internal structure of SKSG, enhancing aspects related to human resources, updating and refining the learning processes, and ensuring accountability and transparency in the school. [ 12 ]
Aside from teaching at the University of Indonesia, Supriatna also taught at the National Development University "Veteran" Jakarta. He is also a member of a number of academic organizations, such as the Indonesian Geographers Association, Indonesian Lecturers Association, Indonesian Geospatial Council, Indonesian Environmental Experts Association, Indonesian Disaster Experts Association, and MIPAnet. [ 2 ]
Supriatna is married to Vreshty Winda Aryanti and has a son and a daughter. The couple currently resides in Cimanggis, Depok. [ 2 ] | https://en.wikipedia.org/wiki/Supriatna |
In decision theory , the sure-thing principle states that a decision maker who decided they would take a certain action in the case that event E has occurred, as well as in the case that the negation of E has occurred, should also take that same action if they know nothing about E .
The principle was coined by L.J. Savage : [ 1 ]
A businessman contemplates buying a certain piece of property. He considers the outcome of the next presidential election relevant. So, to clarify the matter to himself, he asks whether he would buy if he knew that the Democratic candidate were going to win, and decides that he would. Similarly, he considers whether he would buy if he knew that the Republican candidate were going to win, and again finds that he would. Seeing that he would buy in either event, he decides that he should buy, even though he does not know which event obtains, or will obtain, as we would ordinarily say. It is all too seldom that a decision can be arrived at on the basis of this principle, but except possibly for the assumption of simple ordering, I know of no other extralogical principle governing decisions that finds such ready acceptance.
Savage formulated the principle as a dominance principle , but it can also be framed probabilistically. [ 2 ] Richard Jeffrey [ 2 ] and later Judea Pearl [ 3 ] showed that Savage's principle is only valid when the probability of the event considered (e.g., the winner of the election) is unaffected by the action (buying the property). Under such conditions, the sure-thing principle is a theorem in the do -calculus [ 3 ] (see Bayes networks ). Blyth constructed a counterexample to the sure-thing principle using sequential sampling in the context of Simpson's paradox , [ 4 ] but this example violates the required action-independence provision. [ 5 ]
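The probabilistic reading of the principle is easy to verify numerically. The sketch below uses hypothetical utilities for the businessman example; the key assumption, per Jeffrey and Pearl, is that the probability of E (the election outcome) is the same whichever action is taken.

```python
# Hypothetical utilities u[action][event] for the businessman example.
u = {"buy":  {"dem_win": 5.0, "rep_win": 3.0},
     "pass": {"dem_win": 2.0, "rep_win": 1.0}}

def expected_utility(action, p_dem):
    """Expected utility when P(E) = p_dem is independent of the action."""
    return p_dem * u[action]["dem_win"] + (1 - p_dem) * u[action]["rep_win"]

# "buy" is preferred conditional on either outcome of the election...
assert u["buy"]["dem_win"] > u["pass"]["dem_win"]
assert u["buy"]["rep_win"] > u["pass"]["rep_win"]

# ...so for every action-independent probability of E, buying wins:
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert expected_utility("buy", p) > expected_utility("pass", p)
print("Dominance implies preference for any action-independent P(E).")
```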
In the above-cited paragraph, Savage illustrated the principle in terms of knowledge. However, the formal definition of the principle, known as P2, does not involve knowledge because, in Savage's words, "it would introduce new undefined technical terms referring to knowledge and possibility that would render it mathematically useless without still more postulates governing these terms." Samet [ 6 ] provided a formal definition of the principle in terms of knowledge and showed that the impossibility of agreeing to disagree is a generalization of the sure-thing principle.
It is similarly targeted by the Ellsberg and Allais paradoxes , in which actual people's choices seem to violate this principle. [ 2 ] | https://en.wikipedia.org/wiki/Sure-thing_principle |
Suresh Venepally ( Telugu : సురేశ్ వేనెపల్లి ; born 1966) is an Indian mathematician known for his research work in algebra . He is a professor at Emory University .
Suresh was born in Vangoor, Telangana, India and studied at ZPHS in Vangoor up to 9th standard. He did his M.Sc. at the University of Hyderabad .
He joined the Tata Institute of Fundamental Research (TIFR) in 1989 and received his PhD in 1994 under the guidance of Raman Parimala . He later joined the faculty at the University of Hyderabad . | https://en.wikipedia.org/wiki/Suresh_Venapally |
The surface-area-to-volume ratio or surface-to-volume ratio (denoted as SA:V , SA/V , or sa/vol ) is the ratio between surface area and volume of an object or collection of objects.
SA:V is an important concept in science and engineering. It is used to explain the relation between structure and function in processes occurring through the surface and the volume. Good examples of such processes are those governed by the heat equation , [ 1 ] that is, diffusion and heat transfer by thermal conduction . [ 2 ] SA:V is used to explain the diffusion of small molecules, like oxygen and carbon dioxide between air, blood and cells, [ 3 ] water loss by animals, [ 4 ] bacterial morphogenesis, [ 5 ] organisms' thermoregulation , [ 6 ] design of artificial bone tissue, [ 7 ] artificial lungs [ 8 ] and many more biological and biotechnological structures. For more examples see Glazier. [ 9 ]
The relation between SA:V and the rate of diffusion or heat conduction is explained from a flux and surface perspective, focusing on the surface of a body as the place where diffusion or heat conduction takes place: the larger the SA:V, the more surface area per unit volume is available for material to diffuse through, so the diffusion or heat conduction will be faster. A similar explanation appears in the literature: "Small size implies a large ratio of surface area to volume, thereby helping to maximize the uptake of nutrients across the plasma membrane", [ 10 ] and elsewhere. [ 9 ] [ 11 ] [ 12 ]
For a given volume, the object with the smallest surface area (and therefore with the smallest SA:V) is a ball , a consequence of the isoperimetric inequality in 3 dimensions . By contrast, objects with acute-angled spikes will have very large surface area for a given volume.
A solid sphere or ball is a three-dimensional object, being the solid figure bounded by a sphere . (In geometry, the term sphere properly refers only to the surface, so a sphere thus lacks volume in this context.)
For an ordinary three-dimensional ball, the SA:V can be calculated using the standard equations for the surface and volume, which are, respectively, S A = 4 π r 2 {\displaystyle SA=4\pi {r^{2}}} and V = ( 4 / 3 ) π r 3 {\displaystyle V=(4/3)\pi {r^{3}}} . For the unit case in which r = 1 the SA:V is thus 3. For the general case, SA:V equals 3/ r , in an inverse relationship with the radius - if the radius is doubled, the SA:V halves (see figure).
Balls exist in any dimension and are generically called n -balls or hyperballs , where n is the number of dimensions.
The same reasoning can be generalized to n-balls using the general equations for volume and surface area, which are: V = π n / 2 r n Γ ( n / 2 + 1 ) {\displaystyle V={\frac {\pi ^{n/2}r^{n}}{\Gamma (n/2+1)}}} and S A = n π n / 2 r n − 1 Γ ( n / 2 + 1 ) {\displaystyle SA={\frac {n\pi ^{n/2}r^{n-1}}{\Gamma (n/2+1)}}}
So the ratio equals S A / V = n r − 1 {\displaystyle SA/V=nr^{-1}} . Thus, the same linear relationship between area and volume holds for any number of dimensions (see figure): doubling the radius always halves the ratio.
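A short numerical check of this relationship, using the gamma-function formulas above (a sketch; the specific radii are arbitrary):

```python
from math import pi, gamma

def ball_volume(n, r):
    """Volume of an n-ball: V = pi^(n/2) * r^n / Gamma(n/2 + 1)."""
    return pi ** (n / 2) * r ** n / gamma(n / 2 + 1)

def ball_surface(n, r):
    """Surface area of an n-ball: SA = dV/dr = n * V / r."""
    return n * ball_volume(n, r) / r

for n in (2, 3, 4):
    for r in (0.5, 1.0, 2.0):
        ratio = ball_surface(n, r) / ball_volume(n, r)
        assert abs(ratio - n / r) < 1e-12  # SA:V = n/r in every dimension
print("SA:V = n/r verified for n = 2, 3, 4")
```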
The surface-area-to-volume ratio has physical dimension inverse length (L −1 ) and is therefore expressed in units of inverse metre (m −1 ) or its prefixed unit multiples and submultiples. As an example, a cube with sides of length 1 cm will have a surface area of 6 cm 2 and a volume of 1 cm 3 . The surface to volume ratio for this cube is thus 6 cm 2 / 1 cm 3 = 6 cm −1 .
For a given shape, SA:V is inversely proportional to size. A cube 2 cm on a side has a ratio of 3 cm −1 , half that of a cube 1 cm on a side. Conversely, preserving SA:V as size increases requires changing to a less compact shape.
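The inverse scaling is easy to see for the cube (a minimal sketch):

```python
def cube_sa_to_v(side_cm):
    """SA:V of a cube: 6*s^2 / s^3 = 6/s, in cm^-1 if s is in cm."""
    return 6.0 * side_cm ** 2 / side_cm ** 3

print(cube_sa_to_v(1.0))  # 6.0 cm^-1 for a 1 cm cube
print(cube_sa_to_v(2.0))  # 3.0 cm^-1: doubling the side halves SA:V
```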
Materials with high surface area to volume ratio (e.g. very small diameter, very porous , or otherwise not compact ) react at much faster rates than monolithic materials, because more surface is available to react. An example is grain dust: while grain is not typically flammable, grain dust is explosive . Finely ground salt dissolves much more quickly than coarse salt.
A high surface area to volume ratio provides a strong "driving force" to speed up thermodynamic processes that minimize free energy . [ 13 ]
The ratio between the surface area and volume of cells and organisms has an enormous impact on their biology , including their physiology and behavior . For example, many aquatic microorganisms have increased surface area to increase their drag in the water. This reduces their sinking rate and allows them to remain near the surface with less energy expenditure. [ citation needed ]
An increased surface area to volume ratio also means increased exposure to the environment. The finely-branched appendages of filter feeders such as krill provide a large surface area to sift the water for food. [ 14 ]
Individual organs like the lung have numerous internal branchings that increase the surface area; in the case of the lung, the large surface supports gas exchange, bringing oxygen into the blood and releasing carbon dioxide from the blood. [ 15 ] [ 16 ] Similarly, the small intestine has a finely wrinkled internal surface, allowing the body to absorb nutrients efficiently. [ 17 ]
Cells can achieve a high surface area to volume ratio with an elaborately convoluted surface, like the microvilli lining the small intestine . [ 18 ]
Increased surface area can also lead to biological problems. More contact with the environment through the surface of a cell or an organ (relative to its volume) increases loss of water and dissolved substances. High surface area to volume ratios also present problems of temperature control in unfavorable environments. [ citation needed ]
The surface to volume ratios of organisms of different sizes also lead to some biological rules such as Allen's rule , Bergmann's rule [ 19 ] [ 20 ] [ 21 ] and gigantothermy . [ 22 ]
In the context of wildfires , the ratio of the surface area of a solid fuel to its volume is an important measurement. Fire spread behavior is frequently correlated to the surface-area-to-volume ratio of the fuel (e.g. leaves and branches). The higher its value, the faster a particle responds to changes in environmental conditions, such as temperature or moisture. Higher values are also correlated to shorter fuel ignition times, and hence faster fire spread rates.
A body of icy or rocky material in outer space may, if it can build and retain sufficient heat, develop a differentiated interior and alter its surface through volcanic or tectonic activity. The length of time through which a planetary body can maintain surface-altering activity depends on how well it retains heat, and this is governed by its surface-area-to-volume ratio. For Vesta (r = 263 km), the ratio is so high that astronomers were surprised to find that it did differentiate and have brief volcanic activity. The Moon , Mercury and Mars have radii in the low thousands of kilometers; all three retained heat well enough to be thoroughly differentiated, although after a billion years or so they became too cool to show anything more than very localized and infrequent volcanic activity. In April 2019, however, NASA announced the detection of a "marsquake" measured on April 6, 2019, by NASA's InSight lander. [ 23 ] Venus and Earth (r > 6,000 km) have sufficiently low surface-area-to-volume ratios (roughly half that of Mars and much lower than all other known rocky bodies) that their heat loss is minimal. [ 24 ] | https://en.wikipedia.org/wiki/Surface-area-to-volume_ratio |
Surface-enhanced Raman spectroscopy or surface-enhanced Raman scattering ( SERS ) is a surface-sensitive technique that enhances Raman scattering by molecules adsorbed on rough metal surfaces or by nanostructures such as plasmonic-magnetic silica nanotubes. [ 1 ] The enhancement factor can be as much as 10 10 to 10 11 , [ 2 ] [ 3 ] which means the technique may detect single molecules. [ 4 ] [ 5 ]
SERS from pyridine adsorbed on electrochemically roughened silver was first observed by Martin Fleischmann , Patrick J. Hendra and A. James McQuillan at the Department of Chemistry at the University of Southampton , UK in 1973. [ 6 ] This initial publication has been cited over 6000 times. The 40th Anniversary of the first observation of the SERS effect has been marked by the Royal Society of Chemistry by the award of a National Chemical Landmark plaque to the University of Southampton. In 1977, two groups independently noted that the concentration of scattering species could not account for the enhanced signal and each proposed a mechanism for the observed enhancement. Their theories are still accepted as explaining the SERS effect. Jeanmaire and Richard Van Duyne [ 7 ] proposed an electromagnetic effect, while Albrecht and Creighton [ 8 ] proposed a charge-transfer effect. Rufus Ritchie, of Oak Ridge National Laboratory 's Health Sciences Research Division, predicted the existence of the surface plasmon . [ 9 ]
The exact mechanism of the enhancement effect of SERS is still a matter of debate in the literature. [ 10 ] There are two primary theories and while their mechanisms differ substantially, distinguishing them experimentally has not been straightforward. The electromagnetic theory proposes the excitation of localized surface plasmons , while the chemical theory proposes the formation of charge-transfer complexes . The chemical theory is based on resonance Raman spectroscopy , [ 11 ] in which the frequency coincidence (or resonance) of the incident photon energy and electron transition greatly enhances Raman scattering intensity. Research in 2015 on a more powerful extension of the SERS technique called SLIPSERS (Slippery Liquid-Infused Porous SERS) [ 12 ] has further supported the EM theory. [ 13 ]
The increase in intensity of the Raman signal for adsorbates on particular surfaces occurs because of an enhancement in the electric field provided by the surface. When the incident light in the experiment strikes the surface, localized surface plasmons are excited. The field enhancement is greatest when the plasmon frequency, ω p , is in resonance with the radiation ( ω = ω p / 3 {\displaystyle \omega =\omega _{p}/{\sqrt {3}}} for spherical particles). In order for scattering to occur, the plasmon oscillations must be perpendicular to the surface; if they are in-plane with the surface, no scattering will occur. It is because of this requirement that roughened surfaces or arrangements of nanoparticles are typically employed in SERS experiments, as these surfaces provide an area on which these localized collective oscillations can occur. [ 14 ] SERS enhancement can occur even when an excited molecule is relatively far from the surface that hosts the metallic nanoparticles enabling surface plasmon phenomena. [ 15 ]
The light incident on the surface can excite a variety of phenomena in the surface, yet the complexity of this situation can be minimized by surfaces with features much smaller than the wavelength of the light, as only the dipolar contribution will be recognized by the system. The dipolar term contributes to the plasmon oscillations, which leads to the enhancement. The SERS effect is so pronounced because the field enhancement occurs twice. First, the field enhancement magnifies the intensity of incident light, which will excite the Raman modes of the molecule being studied, therefore increasing the signal of the Raman scattering. The Raman signal is then further magnified by the surface due to the same mechanism that excited the incident light, resulting in a greater increase in the total output. At each stage the electric field is enhanced as E 2 , for a total enhancement of E 4 . [ 16 ]
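As a rough numerical illustration of this E⁴ scaling (a sketch with made-up field-enhancement values, not measured ones):

```python
def em_sers_enhancement(g_in, g_out=None):
    """Electromagnetic SERS enhancement. The incident field is
    enhanced by a factor g_in = |E(w_in)/E0| and the re-radiated
    Raman field by g_out = |E(w_out)/E0|; the measured intensity
    scales as g_in**2 * g_out**2, i.e. ~|E/E0|**4 for small shifts."""
    if g_out is None:
        g_out = g_in  # small Raman shift: both fields near resonance
    return g_in ** 2 * g_out ** 2

print(f"{em_sers_enhancement(30):.1e}")      # 8.1e+05 at a hot spot
print(f"{em_sers_enhancement(30, 10):.1e}")  # 9.0e+04 for a detuned Raman line
```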
The enhancement is not equal for all frequencies. For those frequencies for which the Raman signal is only slightly shifted from the incident light, both the incident laser light and the Raman signal can be near resonance with the plasmon frequency, leading to the E 4 enhancement. When the frequency shift is large, the incident light and the Raman signal cannot both be on resonance with ω p , thus the enhancement at both stages cannot be maximal. [ 17 ]
The choice of surface metal is also dictated by the plasmon resonance frequency. Visible and near-infrared radiation (NIR) are used to excite Raman modes. Silver and gold are typical metals for SERS experiments because their plasmon resonance frequencies fall within these wavelength ranges, providing maximal enhancement for visible and NIR light. Copper's absorption spectrum also falls within the range acceptable for SERS experiments. [ 18 ] Platinum and palladium nanostructures also display plasmon resonance within visible and NIR frequencies. [ 19 ]
Resonance Raman spectroscopy explains the huge enhancement of Raman scattering intensity. Intermolecular and intramolecular charge transfers significantly enhance Raman spectrum peaks. In particular, the enhancement is huge for species adsorbed on the metal surface due to the high-intensity charge transfers from the metal surface, with its wide band, to the adsorbing species. [ 20 ] This resonance Raman enhancement is dominant in SERS for species on small nanoclusters with considerable band gaps , [ 20 ] because surface plasmons appear only in metal surfaces with near-zero band gaps. This chemical mechanism probably occurs in concert with the electromagnetic mechanism for metal surfaces. [ 21 ] [ 22 ]
While SERS can be performed in colloidal solutions, today the most common method for performing SERS measurements is by depositing a liquid sample onto a silicon or glass surface with a nanostructured noble metal surface. While the first experiments were performed on electrochemically roughened silver, [ 6 ] now surfaces are often prepared using a distribution of metal nanoparticles on the surface [ 23 ] as well as using lithography [ 24 ] or porous silicon as a support. [ 25 ] [ 26 ] Two dimensional silicon nanopillars decorated with silver have also been used to create SERS active substrates. [ 27 ] The most common metals used for plasmonic surfaces in visible light SERS are silver and gold; however, aluminium has recently been explored as an alternative plasmonic material, because its plasmon band is in the UV region, contrary to silver and gold. [ 28 ] Hence, there is great interest in using aluminium for UV SERS. It has, however, surprisingly also been shown to have a large enhancement in the infrared, which is not fully understood. [ 29 ] In the current decade, it has been recognized that the cost of SERS substrates must be reduced in order to become a commonly used analytical chemistry measurement technique. [ 30 ] To meet this need, plasmonic paper has experienced widespread attention in the field, with highly sensitive SERS substrates being formed through approaches such as soaking, [ 31 ] [ 32 ] [ 33 ] in-situ synthesis, [ 34 ] [ 35 ] screen printing [ 36 ] and inkjet printing. [ 37 ] [ 38 ] [ 39 ]
The shape and size of the metal nanoparticles strongly affect the strength of the enhancement because these factors influence the ratio of absorption and scattering events. [ 40 ] [ 41 ] There is an ideal size for these particles, and an ideal surface thickness for each experiment. [ 42 ] If concentration and particle size can be tuned better for each experiment this will go a long way in the cost reduction of substrates. Particles that are too large allow the excitation of multipoles , which are nonradiative. As only the dipole transition leads to Raman scattering, the higher-order transitions will cause a decrease in the overall efficiency of the enhancement. Particles that are too small lose their electrical conductance and cannot enhance the field. When the particle size approaches a few atoms, the definition of a plasmon does not hold, as there must be a large collection of electrons to oscillate together. [ 16 ] An ideal SERS substrate must possess high uniformity and high field enhancement. Such substrates can be fabricated on a wafer scale and label-free superresolution microscopy has also been demonstrated using the fluctuations of surface enhanced Raman scattering signal on such highly uniform, high-performance plasmonic metasurfaces. [ 43 ]
Due to their unique physical and chemical properties, two-dimensional (2D) materials have gained significant attention as alternative substrates for surface-enhanced Raman spectroscopy (SERS). The use of 2D materials as SERS substrates offers several advantages over traditional metal substrates, including high sensitivity, reproducibility, and chemical stability. [ 44 ]
Graphene is one of the most widely studied 2D materials for SERS applications. Graphene has a high surface area, high electron mobility, and excellent chemical stability, making it an attractive substrate for SERS. Graphene-based SERS sensors have also been shown to be highly reproducible and stable, making them attractive for real-world applications. [ 45 ] In addition to graphene, other 2D materials, especially MXenes, have also been investigated for SERS applications. [ 46 ] [ 47 ] MXenes have a high surface area, good electrical conductivity, and chemical stability, making them attractive for SERS applications. [ 46 ] As a result, MXene-based SERS sensors have been used to detect various analytes, including organic molecules, [ 48 ] drugs and their metabolites. [ 47 ]
As research and development continue, 2D materials-based SERS sensors will likely be more widely used in various industries, including environmental monitoring, healthcare, and food safety. [ 49 ]
SERS substrates are used to detect the presence of low-abundance biomolecules, and can therefore detect proteins in bodily fluids. [ 50 ] Early detection of pancreatic cancer biomarkers was accomplished using a SERS-based immunoassay approach. [ 50 ] A SERS-based multiplex protein biomarker detection platform in a microfluidic chip is used to detect several protein biomarkers to predict the type of disease and critical biomarkers and to increase the chance of differentiating diseases with similar biomarkers, like pancreatic cancer, ovarian cancer, and pancreatitis. [ 51 ] This technology has been utilized for label-free detection of urea and blood plasma in human serum and may become the next generation in cancer detection and screening. [ 52 ] [ 53 ]
The ability to analyze the composition of a mixture at the nanoscale makes SERS substrates beneficial for environmental analysis, pharmaceuticals, material sciences, art and archaeological research, forensic science, drug and explosives detection, food quality analysis, [ 54 ] and single algal cell detection. [ 55 ] [ 56 ] [ 57 ] SERS combined with plasmonic sensing can be used for high-sensitivity quantitative analysis of small molecules in human biofluids, [ 58 ] the quantitative detection of biomolecular interactions, [ 59 ] the detection of low-level cancer biomarkers via sandwich immunoassay platforms, [ 60 ] [ 61 ] the label-free characterization of exosomes, [ 62 ] and the study of redox processes at a single-molecule level. [ 63 ]
SERS is a powerful technique for determining structural information about molecular systems. It has found a wide range of applications in ultra-sensitive chemical sensing and environmental analyses. [ 64 ]
A review of the present and future applications of SERS was published in 2020. [ 65 ]
The term surface enhanced Raman spectroscopy implies that it provides the same information that traditional Raman spectroscopy does, simply with a greatly enhanced signal. While the spectra of most SERS experiments are similar to the non-surface enhanced spectra, there are often differences in the number of modes present. Additional modes not found in the traditional Raman spectrum can be present in the SERS spectrum, while other modes can disappear. The modes observed in any spectroscopic experiment are dictated by the symmetry of the molecules and are usually summarized by Selection rules . When molecules are adsorbed to a surface, the symmetry of the system can change, slightly modifying the symmetry of the molecule, which can lead to differences in mode selection. [ 66 ]
One common way in which selection rules are modified arises from the fact that many molecules that have a center of symmetry lose that feature when adsorbed to a surface. The loss of a center of symmetry eliminates the requirements of the mutual exclusion rule , which dictates that modes can only be either Raman or infrared active. Thus modes that would normally appear only in the infrared spectrum of the free molecule can appear in the SERS spectrum. [ 14 ]
A molecule's symmetry can be changed in different ways depending on the orientation in which the molecule is attached to the surface. In some experiments, it is possible to determine the orientation of adsorption to the surface from the SERS spectrum, as different modes will be present depending on how the symmetry is modified. [ 67 ]
Remote surface-enhanced Raman spectroscopy (SERS) consists of using metallic nanowaveguides supporting propagating surface plasmon polaritons (SPPs) to perform SERS at a distant location different to that of the incident laser.
Propagating SPPs supported by nanowires have been used to show remote excitation [ 68 ] [ 69 ] as well as remote detection of SERS. [ 70 ] A silver nanowire was also used to show remote excitation and detection using graphene as the Raman scatterer. [ 71 ]
Applications
Different plasmonic systems have already been used to show Raman detection of biomolecules in vivo in cells and remote excitation of surface catalytic reactions.
SERS-based immunoassays can be used for detection of low-abundance biomarkers. For example, antibodies and gold particles can be used to quantify proteins in serum with high sensitivity and specificity. [ 50 ] [ 51 ]
SERS can be used to target specific DNA and RNA sequences using a combination of gold and silver nanoparticles and Raman-active dyes, such as Cy3 . Specific single nucleotide polymorphisms (SNPs) can be identified using this technique. The gold nanoparticles facilitate the formation of a silver coating on the dye-labelled regions of DNA or RNA, allowing SERS to be performed. This has several potential applications: for example, Cao et al. report that gene sequences for HIV, Ebola, hepatitis, and Bacillus anthracis can be uniquely identified using this technique. Each spectrum was specific, which is advantageous over fluorescence detection; some fluorescent markers overlap and interfere with other gene markers. The advantage of this technique for identifying gene sequences is that several Raman dyes are commercially available, which could lead to the development of non-overlapping probes for gene detection. [ 72 ] | https://en.wikipedia.org/wiki/Surface-enhanced_Raman_spectroscopy |
Surface-enhanced laser desorption/ionization ( SELDI ) is a soft ionization method in mass spectrometry (MS) used for the analysis of protein mixtures . It is a variation of matrix-assisted laser desorption/ionization (MALDI). [ 1 ] [ 2 ] In MALDI, the sample is mixed with a matrix material and applied to a metal plate before irradiation by a laser, [ 3 ] whereas in SELDI, proteins of interest in a sample become bound to a surface before MS analysis. The sample surface is a key component in the purification , desorption , and ionization of the sample. SELDI is typically used with time-of-flight (TOF) mass spectrometers and is used to detect proteins in tissue samples, blood , urine , or other clinical samples, however, SELDI technology can potentially be used in any application by simply modifying the sample surface. [ 1 ] [ 2 ]
SELDI can be seen as a combination of solid-phase chromatography and TOF-MS. The sample is applied to a modified chip surface, which allows for the specific binding of proteins from the sample to the surface. Contaminants and unbound proteins are then washed away. After washing the sample, an energy absorbing matrix, such as sinapinic acid (SPA) or α-Cyano-4-hydroxycinnamic acid (CHCA), is applied to the surface and allowed to crystallize with the sample. [ 1 ] [ 2 ] Alternatively, the matrix can be attached to the sample surface by covalent modification or adsorption before the sample is applied. [ 4 ] The sample is then irradiated by a pulsed laser, causing ablation and desorption of the sample and matrix. [ 1 ] [ 2 ]
Samples spotted on a SELDI surface are typically analyzed using time-of-flight mass spectrometry. An irradiating laser ionizes peptides from crystals of the sample/matrix mixture. The matrix absorbs the energy of the laser pulse, preventing destruction of the molecule, and transfers charge to the sample molecules, forming ions. The ions are then briefly accelerated through an electric potential and travel down a field-free flight tube where they are separated by their velocity differences. The mass-to-charge ratio of each ion can be determined from the length of the tube, the kinetic energy given to ions by the electric field, and the velocity of the ions in the tube. The velocity of the ions is inversely proportional to the square root of the mass-to-charge ratio of the ion; ions with low mass-to-charge ratios are detected earlier than ions with high mass-to-charge ratios. [ 5 ]
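The flight-time relation follows directly from the kinetic energy gained during acceleration: z·e·U = ½mv², so t = L·√(m/(2zeU)). A minimal sketch (the tube length and voltage are illustrative, not tied to any specific instrument):

```python
from math import sqrt

E_CHARGE = 1.602176634e-19    # elementary charge, C
DALTON = 1.66053906660e-27    # unified atomic mass unit, kg

def tof_flight_time(mass_da, z, accel_volts, tube_length_m):
    """Ideal linear TOF: an ion accelerated through potential U
    leaves with v = sqrt(2*z*e*U/m), then drifts down a field-free
    tube of length L in t = L * sqrt(m / (2*z*e*U))."""
    m = mass_da * DALTON
    v = sqrt(2 * z * E_CHARGE * accel_volts / m)
    return tube_length_m / v

# Two singly charged peptides in a 1 m tube at 20 kV:
for m in (1000, 4000):  # masses in Da
    t_us = tof_flight_time(m, 1, 20e3, 1.0) * 1e6
    print(f"m/z = {m}: flight time = {t_us:.1f} microseconds")
# Quadrupling m/z doubles the flight time, since t scales as sqrt(m/z).
```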
The binding of proteins to the SELDI surface acts as a solid-phase chromatographic separation step, and as a result, the proteins attached to the surface are easier to analyze. The surface is composed primarily of materials with a variety of physico-chemical characteristics, metal ions, or anion or cation exchangers. Common surfaces include CM10 (weak cation exchange ), H50 (hydrophobic surface, similar to C 6 -C 12 reverse phase chromatography ), IMAC30 (metal-binding surface), and Q10 (strong anion exchange). SELDI surfaces can also be modified to study DNA-protein binding, antibody-antigen assays, and receptor-ligand interactions. [ 2 ]
The SELDI process is a combination of surface-enhanced neat desorption (SEND), surface-enhanced affinity-capture (SEAC), and surface-enhanced photolabile attachment and release (SEPAR) mass spectrometry. With SEND, analytes can be desorbed and ionized without adding a matrix; the matrix is incorporated into the sample surface. In SEAC, the sample surface is modified to bind the analyte of interest for analysis with laser desorption/ionization mass spectrometry (LDI-MS). [ 1 ] [ 4 ] [ 6 ] SEPAR is a combination of SEND and SEAC; the modified sample surface also acts as an energy absorbing matrix for ionization. [ 4 ]
SELDI technology was developed by T. William Hutchens and Tai-Tung Yip at Baylor College of Medicine in 1993. [ 7 ] Hutchens and Yip attached single-stranded DNA to agarose beads and used the beads to capture lactoferrin , an iron-binding glycoprotein , from preterm infant urine. The beads were incubated in the sample and then removed, washed, and analyzed with a MALDI-MS probe tip. This research led to the idea that MALDI surfaces could be derivatized with SEAC devices; the technique was later described by Hutchens and Yip in 1998. [ 1 ] [ 7 ]
SELDI technology was first commercialized by Ciphergen Biosystems in 1997 as the ProteinChip system, and is now produced and marketed by Bio-Rad Laboratories. [ 6 ]
SELDI technology can potentially be used in any application by modifying the SELDI surface. [ 1 ] SELDI-TOF-MS is optimal for analyzing low molecular weight proteins (<20 kDa) in a variety of biological materials, such as tissue samples, blood, urine, and serum. This technique is often used in combination with immunoblotting and immunohistochemistry as a diagnostic tool to aid in the detection of biomarkers for diseases, and has also been applied to the diagnosis of cancer and neurological disorders. [ 8 ] [ 9 ] SELDI-TOF-MS has been used in biomarker discovery for lung , breast , liver , colon , pancreatic , bladder , kidney , cervical , ovarian , and prostate cancers. [ 2 ] SELDI technology is most widely used in biomarker discovery to compare protein levels in serum samples from healthy and diseased patients. [ 9 ] [ 10 ] [ 11 ] [ 12 ] Serum studies allow for a minimally invasive approach to disease monitoring in patients and are useful in the early detection and diagnosis of diseases and neurological disorders, such as amyotrophic lateral sclerosis (ALS) and Alzheimer's . [ 9 ] [ 10 ]
SELDI-TOF-MS can also be used in biological applications to detect post-translationally modified proteins and to study phosphorylation states of proteins. [ 8 ]
A major advantage of the SELDI process is the chromatographic separation step. While liquid chromatography-mass spectrometry (LC-MS) is based on the elution of analytes in the separated sample, separation in SELDI is based on retention. Any sample components that interfere with analytical measurements, such as salts, detergents, and buffers, are washed away before analysis with mass spectrometry. Only the analytes that are bound to the surface are analyzed, reducing the overall complexity of the sample. As a result, there is an increased probability of detecting analytes that are present in lower concentrations. [ 10 ] Because of the initial separation step, protein profiles can be obtained from samples of as few as 25-50 cells. [ 8 ]
In biological applications, SELDI-TOF-MS has a major advantage in that the technique does not require the use of radioactive isotopes. Furthermore, an assay can be sampled at multiple time points during an experiment. [ 8 ] Additionally, in proteomics , the biomarker discovery, identification, and validation steps can all be done on the SELDI surface. [ 1 ]
SELDI is often criticized for its reproducibility due to differences in the mass spectra obtained when using different batches of chip surfaces. [ 2 ] While the method has been successful in analyzing low molecular weight proteins, consistent results have not been obtained when analyzing high molecular weight proteins. [ 8 ] There also exists a potential for sample bias, as nonspecific absorption matrices favor the binding of analytes with higher abundances in the sample at the expense of less abundant analytes. [ 2 ] While SELDI-TOF-MS has detection limits in the femtomolar range, [ 10 ] the baseline signal in the spectra varies, and noise due to the matrix is maximal below 2000 Da; Ciphergen Biosystems suggests ignoring spectral peaks below 2000 Da. [ 13 ] | https://en.wikipedia.org/wiki/Surface-enhanced_laser_desorption/ionization |
Surface-extended X-ray absorption fine structure ( SEXAFS ) is the surface-sensitive equivalent of the EXAFS technique. The technique involves illuminating the sample with high-intensity X-ray beams from a synchrotron and monitoring the photoabsorption by detecting the intensity of Auger electrons as a function of the incident photon energy . Surface sensitivity is achieved by interpreting the intensity of the Auger electrons (which have an escape depth of ~1–2 nm ), instead of looking at the relative absorption of the X-rays as in the parent method, EXAFS.
The photon energies are tuned through the characteristic energy for the onset of core-level excitation of surface atoms. The core holes thus created can then be filled by nonradiative decay of a higher-lying electron and transfer of energy to yet another electron, which can then escape from the surface ( Auger emission ). The photoabsorption can therefore be monitored by direct detection of these Auger electrons, or through their contribution to the total photoelectron yield. The absorption coefficient versus incident photon energy contains oscillations due to the interference of the backscattered photoelectron waves with the outward-propagating wave. The period of these oscillations depends on the type of the backscattering atom and its distance from the central atom. Thus, this technique enables the investigation of interatomic distances for adsorbates and of their coordination chemistry.
This technique benefits from not requiring long-range order, which is sometimes a limitation of conventional techniques such as LEED (which requires order over roughly 10 nm). It also largely eliminates the background from the signal, and it can probe different species in the sample simply by tuning the X-ray photon energy to the absorption edge of that species. Joachim Stöhr played a major role in the initial development of this technique.
Normally, SEXAFS work is done using synchrotron radiation, which provides highly collimated, plane-polarized and precisely pulsed X-rays with fluxes of 10¹² to 10¹⁴ photons/sec/mrad/mA, greatly improving the signal-to-noise ratio over that obtainable from conventional sources. In a transmission measurement, a bright X-ray source illuminates the sample and the absorption coefficient is obtained as {\displaystyle \mu x=\ln(I_{o}/I),} where I is the transmitted and I o the incident intensity of the X-rays and x is the sample thickness; the absorption coefficient is then plotted against the incident photon energy.
In SEXAFS, an electron detector and a high-vacuum chamber are required to measure the Auger yields instead of the intensity of the transmitted X-rays. The detector can be either an energy analyzer, as in the case of Auger measurements , or an electron multiplier, as in the case of total or partial secondary electron yield. The energy analyzer gives better resolution, while the electron multiplier has a larger solid-angle acceptance.
The attainable signal-to-noise ratio is governed by an equation relating these source and detection parameters.
The absorption of an X-ray photon by the atom excites a core-level electron, thus generating a core hole. This generates a spherical electron wave with the excited atom at its center. The wave propagates outwards, gets scattered off the neighbouring atoms, and is turned back towards the central ionized atom. The oscillatory component of the photoabsorption originates from the coupling of this reflected wave to the initial state via the dipole operator M fs (see the photoabsorption cross section below). The Fourier transform of the oscillations gives information about the spacing of the neighbouring atoms and their chemical environment. This phase information is carried over to the oscillations in the Auger signal because the transition time in Auger emission is of the same order of magnitude as the average travel time of a photoelectron in the energy range of interest. Thus, with a proper choice of the absorption edge and characteristic Auger transition, measuring the variation of the intensity of a particular Auger line as a function of incident photon energy gives a measure of the photoabsorption cross section.
This excitation also triggers various decay mechanisms. These can be of radiative (fluorescence) or nonradiative (Auger and Coster–Kronig ) nature. The intensity ratio between the Auger electron and X-ray emissions depends on the atomic number Z . The yield of the Auger electrons decreases with increasing Z .
The photoabsorption cross section is given by Fermi's golden rule in the dipole approximation.
Here the initial state i , with energy E i , consists of the atomic core, the Fermi sea, and the incident radiation field; the final state ƒ , with energy E ƒ (larger than the Fermi level), consists of a core hole and an excited electron. ε is the polarization vector of the electric field, e the electron charge, and ħω the X-ray photon energy. The photoabsorption signal exhibits a peak as the core-level excitation threshold is approached. It is followed by an oscillatory component originating from the part of the electron wave that, upon scattering by the medium, is turned back towards the central ionized atom, where it couples to the initial state via the dipole operator M i .
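A standard dipole-approximation form consistent with these definitions (a reconstruction in our notation, not necessarily that of the original article) is {\displaystyle \sigma (\hbar \omega )\propto \sum _{f}{\bigl |}\langle f|\,e\,{\boldsymbol {\varepsilon }}\cdot \mathbf {r} \,|i\rangle {\bigr |}^{2}\,\delta (E_{f}-E_{i}-\hbar \omega ),} where the sum runs over final states and the matrix element plays the role of the dipole operator referred to above.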
Assuming the single-scattering and small-atom approximations for kR j ≫ 1, where R j is the distance from the central excited atom to the j th shell of neighbours and k is the photoelectron wave vector, determined by the photon energy ħω, the absorption-edge energy ħω T , and the inner potential V o of the solid associated with exchange and correlation, the following expression for the oscillatory component of the photoabsorption cross section (for K-shell excitation) is obtained.
In this expression, shown below, the atomic scattering factor enters through a partial-wave expansion with partial-wave phase shifts δ l , in which P l ( x ) is the l th Legendre polynomial; γ is an attenuation coefficient; exp(−2 σ j 2 k 2 ) is a Debye–Waller factor; and the weight W j is determined by the number of atoms in the j th shell and their distance from the central atom.
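In standard single-scattering EXAFS notation (a reconstruction consistent with the quantities defined above, not necessarily the article's exact formulas), the photoelectron wave vector is {\displaystyle k={\sqrt {2m(\hbar \omega -\hbar \omega _{T}+V_{o})}}/\hbar ,} and the oscillatory component takes the form {\displaystyle \chi (k)=\sum _{j}W_{j}\,|f_{j}(\pi ,k)|\,\gamma \,e^{-2\sigma _{j}^{2}k^{2}}\sin \left(2kR_{j}+\phi _{j}(k)\right),\qquad W_{j}={\frac {N_{j}}{kR_{j}^{2}}},} where N j is the number of atoms in the j th shell and φ j ( k ) is a total scattering phase shift (our notation). The backscattering amplitude in the partial-wave expansion is {\displaystyle f(\theta ,k)={\frac {1}{k}}\sum _{l}(2l+1)\,e^{i\delta _{l}}\sin \delta _{l}\,P_{l}(\cos \theta ),} evaluated at θ = π for backscattering.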
The above equation for χ ( k ) forms the basis of a direct Fourier-transform method of analysis, which has been successfully applied to EXAFS data.
The number of electrons arriving at the detector with an energy of the characteristic W α XY Auger line (where W α is the absorption-edge core level of element α , to which the incident X-ray line has been tuned) can be written as {\displaystyle N(\hbar \omega )=N_{B}(\hbar \omega )+N_{W_{\alpha }XY}(\hbar \omega ),} where N B ( ħω ) is the background signal and N W α X Y ( ℏ ω ) {\displaystyle N_{W_{\alpha }XY}(\hbar \omega )} is the Auger signal of interest, given by
N W α X Y ( ℏ ω ) = ( 4 π ) − 1 ψ W α X Y [ 1 − κ ] ∫ Ω ∫ 0 ∞ ρ α ( z ) P W α ( ℏ ω ; z ) exp [ − z / ( λ ( W α X Y ) cos θ ) ] d z d Ω {\displaystyle N_{W_{\alpha }XY}(\hbar \omega )=(4\pi )^{-1}\psi _{W_{\alpha }XY}[1-\kappa ]\int _{\Omega }\int _{0}^{\infty }\rho _{\alpha }(z)\,P_{W_{\alpha }}(\hbar \omega ;z)\exp \left[{\frac {-z}{\lambda (W_{\alpha }XY)\cos \theta }}\right]dz\,d\Omega ,}
where ψ W α X Y {\displaystyle \psi _{W_{\alpha }XY}} is the probability that an excited atom decays via the W α XY Auger transition, ρ α ( z ) is the atomic concentration of the element α at depth z , λ ( W α XY ) is the mean free path of a W α XY Auger electron, θ is the angle that the escaping Auger electron makes with the surface normal, and κ is the photon-emission probability, which is dictated by the atomic number. Since the photoabsorption probability P W α ( ℏ ω ; z ) {\displaystyle P_{W_{\alpha }}(\hbar \omega ;z)} is the only term that depends on the photon energy, oscillations in it as a function of energy give rise to similar oscillations in N W α X Y ( ℏ ω ) {\displaystyle N_{W_{\alpha }XY}(\hbar \omega )} . | https://en.wikipedia.org/wiki/Surface-extended_X-ray_absorption_fine_structure |
Surface-water hydrology is the sub-field of hydrology concerned with above-earth water ( surface water ), in contrast to groundwater hydrology that deals with water below the surface of the Earth. Its applications include rainfall and runoff , the routes that surface water takes (for example through rivers or reservoirs ), and the occurrence of floods and droughts. [ 1 ] Surface-water hydrology is used to predict the effects of water constructions such as dams and canals. It considers the layout of the watershed , geology , soils , vegetation, nutrients, energy and wildlife. [ 2 ] Modelled aspects include precipitation , the interception of rain water by vegetation or artificial structures, evaporation , the runoff function and the soil-surface system itself. [ 3 ]
When surface water seeps into the ground above bedrock, it is categorized as groundwater , [ 4 ] and the rate at which this occurs determines the baseflow that sustains instream flow , as well as subsurface water levels in wells . While groundwater is not part of surface-water hydrology, it must be taken into account for a full understanding of the behaviour of surface water. [ 3 ]
Glacial hydrology is a part of surface-water hydrology; some of the runoff from glaciers and snow also involves groundwater hydrology concepts. [ 5 ]
| https://en.wikipedia.org/wiki/Surface-water_hydrology |
In mathematics , a surface is a mathematical model of the common concept of a surface . It is a generalization of a plane , but, unlike a plane, it may be curved ; this is analogous to a curve generalizing a straight line .
There are several more precise definitions, depending on the context and the mathematical tools used for the study. The simplest mathematical surfaces are planes and spheres in Euclidean 3-space . Typically, in algebraic geometry , a surface may cross itself (and may have other singularities ), while, in topology and differential geometry , it may not.
A surface is a topological space of dimension two; this means that a moving point on a surface may move in two directions (it has two degrees of freedom ). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian ).
Often, a surface is defined by equations that are satisfied by the coordinates of its points. This is the case of the graph of a continuous function of two variables. The set of the zeros of a function of three variables is a surface, which is called an implicit surface . [ 1 ] If the defining three-variate function is a polynomial , the surface is an algebraic surface . For example, the unit sphere is an algebraic surface, as it may be defined by the implicit equation {\displaystyle x^{2}+y^{2}+z^{2}-1=0.}
A surface may also be defined as the image , in some space of dimension at least 3, of a continuous function of two variables (some further conditions are required to ensure that the image is not a curve ). In this case, one says that one has a parametric surface , which is parametrized by these two variables, called parameters . For example, the unit sphere may be parametrized by the Euler angles , also called longitude u and latitude v , by {\displaystyle x=\cos u\,\cos v,\quad y=\sin u\,\cos v,\quad z=\sin v.}
Parametric equations of surfaces are often irregular at some points. For example, all but two points of the unit sphere are the image, by the above parametrization, of exactly one pair of Euler angles ( modulo 2 π ). For the remaining two points (the north and south poles ), one has cos v = 0 , and the longitude u may take any value. Also, there are surfaces for which there cannot exist a single parametrization that covers the whole surface. Therefore, one often considers surfaces which are parametrized by several parametric equations, whose images cover the surface. This is formalized by the concept of manifold : in the context of manifolds, typically in topology and differential geometry , a surface is a manifold of dimension two; this means that a surface is a topological space such that every point has a neighborhood which is homeomorphic to an open subset of the Euclidean plane (see Surface (topology) and Surface (differential geometry) ). This allows defining surfaces in spaces of dimension higher than three, and even abstract surfaces , which are not contained in any other space. On the other hand, this excludes surfaces that have singularities , such as the vertex of a conical surface or points where a surface crosses itself.
In classical geometry , a surface is generally defined as a locus of a point or a line. For example, a sphere is the locus of a point which is at a given distance from a fixed point, called the center; a conical surface is the locus of a line passing through a fixed point and crossing a curve ; a surface of revolution is the locus of a curve rotating around a line. A ruled surface is the locus of a moving line satisfying some constraints; in modern terminology, a ruled surface is a surface that is a union of lines.
There are several kinds of surfaces that are considered in mathematics. An unambiguous terminology is thus necessary to distinguish them when needed. A topological surface is a surface that is a manifold of dimension two (see § Topological surface ). A differentiable surface is a surface that is a differentiable manifold (see § Differentiable surface ). Every differentiable surface is a topological surface, but the converse is false.
A "surface" is often implicitly supposed to be contained in a Euclidean space of dimension 3, typically R 3 . A surface that is contained in a projective space is called a projective surface (see § Projective surface ). A surface that is not supposed to be included in another space is called an abstract surface .
A parametric surface is the image of an open subset of the Euclidean plane (typically R 2 {\displaystyle \mathbb {R} ^{2}} ) by a continuous function , in a topological space , generally a Euclidean space of dimension at least three. Usually the function is supposed to be continuously differentiable , and this will always be the case in this article.
Specifically, a parametric surface in R 3 {\displaystyle \mathbb {R} ^{3}} is given by three functions of two variables u and v , called parameters: {\displaystyle x=f_{1}(u,v),\quad y=f_{2}(u,v),\quad z=f_{3}(u,v).}
As the image of such a function may be a curve (for example, if the three functions are constant with respect to v ), a further condition is required, generally that, for almost all values of the parameters, the Jacobian matrix {\displaystyle {\begin{pmatrix}{\dfrac {\partial x}{\partial u}}&{\dfrac {\partial y}{\partial u}}&{\dfrac {\partial z}{\partial u}}\\{\dfrac {\partial x}{\partial v}}&{\dfrac {\partial y}{\partial v}}&{\dfrac {\partial z}{\partial v}}\end{pmatrix}}} has rank two. Here "almost all" means that the values of the parameters where the rank is two contain a dense open subset of the range of the parametrization. For surfaces in a space of higher dimension, the condition is the same, except for the number of columns of the Jacobian matrix.
A point p where the above Jacobian matrix has rank two is called regular , or, more properly, the parametrization is called regular at p .
The tangent plane at a regular point p is the unique plane passing through p and having a direction parallel to the two row vectors of the Jacobian matrix. The tangent plane is an affine concept , because its definition is independent of the choice of a metric . In other words, any affine transformation maps the tangent plane to the surface at a point to the tangent plane to the image of the surface at the image of the point.
The normal line at a point of a surface is the unique line passing through the point and perpendicular to the tangent plane; a normal vector is a vector which is parallel to the normal line.
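As a concrete illustration (a minimal sketch of our own, not part of the article), the following Python/SymPy fragment checks regularity and computes a normal vector for the sphere parametrization given earlier; the symbol names are our own choices.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
# the sphere parametrization from the example above
X = sp.Matrix([sp.cos(u)*sp.cos(v), sp.sin(u)*sp.cos(v), sp.sin(v)])

# Jacobian with rows d/du and d/dv; the parametrization is regular
# wherever this 2x3 matrix has rank two.
J = sp.Matrix.hstack(X.diff(u), X.diff(v)).T

p = {u: 0, v: 0}                 # a sample point away from the poles
print(J.subs(p).rank())          # 2 -> the point is regular

# A normal vector is the cross product of the two tangent vectors.
n = X.diff(u).cross(X.diff(v))
print(sp.simplify(n.subs(p)))    # parallel to the position vector at p
```

At the poles (where cos v = 0) the rank drops below two, matching the irregular points discussed below.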
For other differential invariants of surfaces, in the neighborhood of a point, see Differential geometry of surfaces .
A point of a parametric surface which is not regular is irregular . There are several kinds of irregular points.
It may occur that an irregular point becomes regular, if one changes the parametrization. This is the case of the poles in the parametrization of the unit sphere by Euler angles : it suffices to permute the role of the different coordinate axes for changing the poles.
On the other hand, consider the circular cone of parametric equation {\displaystyle x=t\cos u,\quad y=t\sin u,\quad z=t.}
The apex of the cone is the origin (0, 0, 0) , and is obtained for t = 0 . It is an irregular point that remains irregular, whichever parametrization is chosen (otherwise, there would exist a unique tangent plane). Such an irregular point, where the tangent plane is undefined, is said to be singular .
There is another kind of singular point: the self-crossing points , that is, the points where the surface crosses itself. In other words, these are the points which are obtained for (at least) two different values of the parameters.
Let z = f ( x , y ) be a function of two real variables, a bivariate function . This is a parametric surface, parametrized as {\displaystyle x=t,\quad y=u,\quad z=f(t,u).}
Every point of this surface is regular , as the first two columns of the Jacobian matrix form the identity matrix of rank two.
A rational surface is a surface that may be parametrized by rational functions of two variables. That is, if f i ( t , u ) are, for i = 0, 1, 2, 3 , polynomials in two indeterminates, then the parametric surface, defined by {\displaystyle x={\frac {f_{1}(t,u)}{f_{0}(t,u)}},\quad y={\frac {f_{2}(t,u)}{f_{0}(t,u)}},\quad z={\frac {f_{3}(t,u)}{f_{0}(t,u)}},}
is a rational surface.
A rational surface is an algebraic surface , but most algebraic surfaces are not rational.
An implicit surface in a Euclidean space (or, more generally, in an affine space ) of dimension 3 is the set of the zeros of a differentiable function of three variables, that is, the set of points satisfying {\displaystyle f(x,y,z)=0.}
Implicit means that the equation defines implicitly one of the variables as a function of the other variables. This is made more exact by the implicit function theorem : if f ( x 0 , y 0 , z 0 ) = 0 , and the partial derivative in z of f is not zero at ( x 0 , y 0 , z 0 ) , then there exists a differentiable function φ ( x , y ) such that {\displaystyle f(x,y,\varphi (x,y))=0}
in a neighbourhood of ( x 0 , y 0 , z 0 ) . In other words, the implicit surface is the graph of a function near a point of the surface where the partial derivative in z is nonzero. An implicit surface has thus, locally, a parametric representation, except at the points of the surface where the three partial derivatives are zero.
A point of the surface where at least one partial derivative of f is nonzero is called regular . At such a point ( x 0 , y 0 , z 0 ) {\displaystyle (x_{0},y_{0},z_{0})} , the tangent plane and the direction of the normal are well defined, and may be deduced, with the implicit function theorem, from the definition given above in § Tangent plane and normal vector . The direction of the normal is the gradient , that is the vector {\displaystyle \left({\frac {\partial f}{\partial x}},{\frac {\partial f}{\partial y}},{\frac {\partial f}{\partial z}}\right).}
The tangent plane at such a regular point is defined by the implicit equation {\displaystyle {\frac {\partial f}{\partial x}}(x_{0},y_{0},z_{0})\,(x-x_{0})+{\frac {\partial f}{\partial y}}(x_{0},y_{0},z_{0})\,(y-y_{0})+{\frac {\partial f}{\partial z}}(x_{0},y_{0},z_{0})\,(z-z_{0})=0.}
A singular point of an implicit surface (in R 3 {\displaystyle \mathbb {R} ^{3}} ) is a point of the surface where the implicit equation holds and the three partial derivatives of its defining function are all zero. Therefore, the singular points are the solutions of a system of four equations in three indeterminates. As most such systems have no solution, many surfaces do not have any singular point. A surface with no singular point is called regular or non-singular .
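As an illustration (our own sketch, not from the article), one can solve this system of four equations for a concrete surface; here a quadric cone, whose only singular point is its apex.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 - z**2            # implicit cone; a hypothetical example

# singular points: f = 0 together with all three partial derivatives
eqs = [f, f.diff(x), f.diff(y), f.diff(z)]
print(sp.solve(eqs, [x, y, z], dict=True))   # [{x: 0, y: 0, z: 0}]
```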
The study of surfaces near their singular points and the classification of the singular points is singularity theory . A singular point is isolated if there is no other singular point in a neighborhood of it. Otherwise, the singular points may form a curve. This is in particular the case for self-crossing surfaces.
Originally, an algebraic surface was a surface which could be defined by an implicit equation {\displaystyle f(x,y,z)=0,}
where f is a polynomial in three indeterminates , with real coefficients.
The concept has been extended in several directions, by defining surfaces over arbitrary fields , and by considering surfaces in spaces of arbitrary dimension or in projective spaces . Abstract algebraic surfaces, which are not explicitly embedded in another space, are also considered.
Polynomials with coefficients in any field are accepted for defining an algebraic surface.
However, the field of coefficients of a polynomial is not well defined, as, for example, a polynomial with rational coefficients may also be considered as a polynomial with real or complex coefficients. Therefore, the concept of point of the surface has been generalized in the following way. [ 2 ] [ page needed ]
Given a polynomial f ( x , y , z ) , let k be the smallest field containing the coefficients, and K be an algebraically closed extension of k , of infinite transcendence degree . [ 3 ] Then a point of the surface is an element of K 3 which is a solution of the equation {\displaystyle f(x,y,z)=0.}
If the polynomial has real coefficients, the field K is the complex field , and a point of the surface that belongs to R 3 {\displaystyle \mathbb {R} ^{3}} (a usual point) is called a real point . A point that belongs to k 3 is called rational over k , or simply a rational point , if k is the field of rational numbers .
A projective surface in a projective space of dimension three is the set of points whose homogeneous coordinates are zeros of a single homogeneous polynomial in four variables. More generally, a projective surface is a subset of a projective space, which is a projective variety of dimension two.
Projective surfaces are strongly related to affine surfaces (that is, ordinary algebraic surfaces). One passes from a projective surface to the corresponding affine surface by setting to one some coordinate or indeterminate of the defining polynomials (usually the last one). Conversely, one passes from an affine surface to its associated projective surface (called projective completion ) by homogenizing the defining polynomial (in case of surfaces in a space of dimension three), or by homogenizing all polynomials of the defining ideal (for surfaces in a space of higher dimension).
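For instance (a small sketch of our own, not from the article), homogenizing the unit sphere's defining polynomial yields its projective completion:

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
f = x**2 + y**2 + z**2 - 1                   # affine defining polynomial
d = sp.Poly(f, x, y, z).total_degree()

# homogenize: substitute x/w, y/w, z/w and clear denominators
F = sp.expand(w**d * f.subs({x: x/w, y: y/w, z: z/w}, simultaneous=True))
print(F)              # x**2 + y**2 + z**2 - w**2
print(F.subs(w, 1))   # setting w = 1 recovers the affine equation
```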
One cannot define the concept of an algebraic surface in a space of dimension higher than three without a general definition of an algebraic variety and of the dimension of an algebraic variety . In fact, an algebraic surface is an algebraic variety of dimension two .
More precisely, an algebraic surface in a space of dimension n is the set of the common zeros of at least n – 2 polynomials, but these polynomials must satisfy further conditions that may not be immediate to verify. Firstly, the polynomials must not define a variety or an algebraic set of higher dimension, which is typically the case if one of the polynomials is in the ideal generated by the others. Generally, n – 2 polynomials define an algebraic set of dimension two or higher. If the dimension is two, the algebraic set may have several irreducible components . If there is only one component, the n – 2 polynomials define a surface, which is a complete intersection . If there are several components, then one needs further polynomials for selecting a specific component.
Most authors consider as an algebraic surface only algebraic varieties of dimension two, but some also consider as surfaces all algebraic sets whose irreducible components have the dimension two.
In the case of surfaces in a space of dimension three, every surface is a complete intersection, and a surface is defined by a single polynomial, which is irreducible or not, depending on whether non-irreducible algebraic sets of dimension two are considered as surfaces or not.
In topology , a surface is generally defined as a manifold of dimension two. This means that a topological surface is a topological space such that every point has a neighborhood that is homeomorphic to an open subset of a Euclidean plane .
Every topological surface is homeomorphic to a polyhedral surface such that all facets are triangles . The combinatorial study of such arrangements of triangles (or, more generally, of higher-dimensional simplexes ) is the starting object of algebraic topology . This allows the characterization of the properties of surfaces in terms of purely algebraic invariants , such as the genus and homology groups .
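To make this concrete (a small sketch of our own, not part of the article), the Euler characteristic V − E + F of a triangulated closed orientable surface determines its genus g through χ = 2 − 2g:

```python
# Example: the octahedron, a triangulation of the sphere.
faces = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
         (5, 2, 1), (5, 3, 2), (5, 4, 3), (5, 1, 4)]

vertices = {v for f in faces for v in f}
edges = {frozenset(e) for f in faces
         for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0]))}

chi = len(vertices) - len(edges) + len(faces)
print(chi, (2 - chi) // 2)   # 2 0 -> sphere (genus 0)
```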
The homeomorphism classes of surfaces have been completely described (see Surface (topology) ).
In mathematics , the differential geometry of surfaces deals with the differential geometry of smooth surfaces [ a ] with various additional structures, most often, a Riemannian metric . [ b ]
Surfaces have been extensively studied from various perspectives: extrinsically , relating to their embedding in Euclidean space and intrinsically , reflecting their properties determined solely by the distance within the surface as measured along curves on the surface. One of the fundamental concepts investigated is the Gaussian curvature , first studied in depth by Carl Friedrich Gauss , [ 4 ] who showed that curvature was an intrinsic property of a surface, independent of its isometric embedding in Euclidean space.
A fractal landscape or fractal surface is generated using a stochastic algorithm designed to produce fractal behavior that mimics the appearance of natural terrain . In other words, the surface resulting from the procedure is not deterministic, but rather a random surface that exhibits fractal behavior. [ 5 ]
Many natural phenomena exhibit some form of statistical self-similarity that can be modeled by fractal surfaces . [ 6 ] Moreover, variations in surface texture provide important visual cues to the orientation and slopes of surfaces, and the use of almost self-similar fractal patterns can help create natural looking visual effects. [ 7 ] The modeling of the Earth's rough surfaces via fractional Brownian motion was first proposed by Benoit Mandelbrot . [ 8 ]
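As an illustration (a minimal sketch of our own; the article does not prescribe an algorithm), one-dimensional midpoint displacement approximates a fractional-Brownian-motion profile, and two-dimensional variants such as diamond-square generate terrain the same way:

```python
import numpy as np

def midpoint_displacement(levels=10, hurst=0.8, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    n = 2 ** levels
    h = np.zeros(n + 1)            # heights along the profile
    scale, step = 1.0, n
    while step > 1:
        half = step // 2
        mids = np.arange(half, n, step)
        # each midpoint = average of its neighbours + shrinking noise
        h[mids] = 0.5 * (h[mids - half] + h[mids + half]) \
                  + rng.normal(0.0, scale, mids.size)
        scale *= 0.5 ** hurst      # the Hurst exponent controls roughness
        step = half
    return h

profile = midpoint_displacement()
print(profile.min(), profile.max())
```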
Because the intended result of the process is a landscape, rather than a mathematical function, processes that may affect the stationarity and even the overall fractal behavior of the surface are frequently applied to such landscapes, in the interest of producing a more convincing result. | https://en.wikipedia.org/wiki/Surface_(mathematics) |
Surface Evolver is an interactive program for the study of surfaces shaped by surface tension and other energies, and subject to various constraints. A surface is implemented as a simplicial complex . The user defines an initial surface in a datafile. The Evolver evolves the surface toward minimal energy by a gradient descent method. The aim can be to find a minimal energy surface, or to model the process of evolution by mean curvature . The energy in the Evolver can be a combination of surface tension, gravitational energy , squared mean curvature, user-defined surface integrals , or knot energies . The Evolver can handle arbitrary topology , volume constraints, boundary constraints, boundary contact angles , prescribed mean curvature, crystalline integrands , gravity, and constraints expressed as surface integrals. The surface can be in an ambient space of arbitrary dimension , which can have a Riemannian metric , and the ambient space can be a quotient space under a group action . [ 1 ] [ 2 ]
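For intuition (a toy sketch of our own, not Evolver's actual algorithm, datafile format, or exact gradients), the following Python fragment performs the same kind of evolution on a tiny triangulated "tent" with a fixed boundary, descending the area gradient until the surface flattens:

```python
import numpy as np

def total_area(V, F):
    # sum of triangle areas for vertex array V (n x 3) and faces F (m x 3)
    a = V[F[:, 1]] - V[F[:, 0]]
    b = V[F[:, 2]] - V[F[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

def area_gradient(V, F, eps=1e-6):
    # crude finite-difference gradient; Evolver itself uses exact gradients
    g = np.zeros_like(V)
    base = total_area(V, F)
    for i in range(V.shape[0]):
        for d in range(3):
            Vp = V.copy()
            Vp[i, d] += eps
            g[i, d] = (total_area(Vp, F) - base) / eps
    return g

# a "tent": four fixed boundary corners and one free, raised apex
V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
              [0.5, 0.5, 0.7]], dtype=float)
F = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]])
free = [4]                         # only the apex is allowed to move

for _ in range(200):               # plain gradient descent on the area
    V[free] -= 0.1 * area_gradient(V, F)[free]

print(round(total_area(V, F), 4))  # -> 1.0, the area of the flat square
print(V[4])                        # apex has descended toward z = 0
```

Evolver performs the analogous descent with exact gradients on user-defined simplicial complexes, together with constraints such as fixed volumes and boundary conditions.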
Evolver was written at The Geometry Center , sponsored by the National Science Foundation , the Department of Energy , Enterprise Minnesota, and the University of Minnesota .
| https://en.wikipedia.org/wiki/Surface_Evolver |
Surface Science is a monthly peer-reviewed scientific journal published by Elsevier that covers the physics and chemistry of surfaces and interfaces . It was established in 1964. The journal encompasses Surface Science Letters , which was published separately until 1993.
The scope of the journal includes nanotechnology , catalysis , and soft matter and features both experimental and computational studies. Extended reviews are published in its companion journal, Surface Science Reports .
According to the Journal Citation Reports , the journal has a 2020 impact factor of 1.942. [ 1 ]
| https://en.wikipedia.org/wiki/Surface_Science_(journal) |