Sum frequency generation spectroscopy ( SFG ) is a nonlinear laser spectroscopy technique used to analyze surfaces and interfaces. In a typical SFG setup, two laser beams mix at an interface and generate an output beam with a frequency equal to the sum of the two input frequencies, traveling in a direction given by the sum of the incident beams' wavevectors . The technique was developed in 1987 by Yuen-Ron Shen and his students as an extension of second harmonic generation spectroscopy and rapidly applied to deduce the composition, orientation distributions, and structural information of molecules at gas–solid, gas–liquid and liquid–solid interfaces. [ 1 ] [ 2 ] Soon after its invention, Philippe Guyot-Sionnest extended the technique to obtain the first measurements of electronic and vibrational dynamics at surfaces. [ 3 ] [ 4 ] [ 5 ] SFG has the advantages of monolayer surface sensitivity, the ability to be performed in situ (for example at aqueous surfaces and in gases), and the capability to provide ultrafast time resolution. SFG gives information complementary to infrared and Raman spectroscopy . [ 6 ]
IR-visible sum frequency generation spectroscopy uses two laser beams (an infrared probe, and a visible pump) that spatially and temporally overlap at a surface of a material or the interface between two media. An output beam is generated at a frequency of the sum of the two input beams. The two input beams must be able to access the surface with sufficiently high intensities, and the output beam must be able to reflect off (or transmit through) the surface in order to be detected. [ 7 ] Broadly speaking, most sum frequency spectrometers can be considered as one of two types, scanning systems (those with narrow bandwidth probe beams) and broadband systems (those with broad bandwidth probe beams). For the former type of spectrometer, the pump beam is a visible wavelength laser held at a constant frequency, and the other (the probe beam) is a tunable infrared laser — by tuning the IR laser, the system can scan across molecular resonances and obtain a vibrational spectrum of the interfacial region in a piecewise fashion. [ 6 ] In a broadband spectrometer, the visible pump beam is once again held at a fixed frequency, while the probe beam is spectrally broad. These laser beams overlap at a surface, but may access a wider range of molecular resonances simultaneously than a scanning spectrometer, and hence spectra can be acquired significantly faster, allowing the ability to perform time-resolved measurements with interfacial sensitivity. [ 8 ]
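As a quick numerical illustration of the frequency-addition condition ω₃ = ω₁ + ω₂ (equivalently 1/λ₃ = 1/λ₁ + 1/λ₂), the short Python sketch below combines a hypothetical 800 nm visible pump with a 3400 nm IR probe; the specific wavelengths are illustrative assumptions, not values taken from the text.

```python
# Hypothetical input wavelengths (nm); chosen only for illustration.
lambda_vis = 800.0    # fixed visible pump
lambda_ir = 3400.0    # tunable IR probe, e.g. near a C-H stretch region

# omega_3 = omega_1 + omega_2  implies  1/lambda_3 = 1/lambda_1 + 1/lambda_2
lambda_sfg = 1.0 / (1.0 / lambda_vis + 1.0 / lambda_ir)
print(f"SFG output near {lambda_sfg:.1f} nm")  # ~647.6 nm, conveniently detected in the visible
```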
For a given nonlinear optical process, the polarization P → {\displaystyle {\overrightarrow {P}}} which generates the output is given by P → = χ ( 1 ) E → + χ ( 2 ) E → 2 + χ ( 3 ) E → 3 + ⋯ {\displaystyle {\overrightarrow {P}}=\chi ^{(1)}{\overrightarrow {E}}+\chi ^{(2)}{\overrightarrow {E}}^{2}+\chi ^{(3)}{\overrightarrow {E}}^{3}+\cdots }
where χ ( i ) {\displaystyle \chi ^{(i)}} is the i {\displaystyle i} th order nonlinear susceptibility, for i ∈ [ 1 , 2 , 3 , … , n ] {\displaystyle i\in [1,2,3,\dots ,n]} .
It is worth noting that all the even order susceptibilities become zero in centrosymmetric media. A proof of this is as follows.
Let I i n v {\displaystyle I_{inv}} be the inversion operator, defined by I i n v L → = − L → {\displaystyle I_{inv}{\overrightarrow {L}}=-{\overrightarrow {L}}} for some arbitrary vector L → {\displaystyle {\overrightarrow {L}}} . Then applying I i n v {\displaystyle I_{inv}} to the left and right hand side of the polarization equation above gives
Adding together this equation with the original polarization equation then gives
which implies χ ( 2 i ) = 0 {\displaystyle \chi ^{(2i)}=0} for i ∈ [ 1 , 2 , 3 , … , n / 2 ] {\displaystyle i\in [1,2,3,\dots ,n/2]} in centrosymmetric media. Q.E.D.
[Note 1: The final equality can be proven by mathematical induction , by considering two cases in the inductive step; where k {\displaystyle k} is odd and k {\displaystyle k} is even.]
[Note 2: This proof holds for the case where n {\displaystyle n} is even. Setting m = n − 1 {\displaystyle m=n-1} gives the odd case and the remainder of the proof is the same.]
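The displayed equations of the preceding proof did not survive extraction; the following LaTeX sketch restates the standard version of the argument implied by the surrounding text (power-series polarization, application of the inversion operator, and addition of the two equations), so the exact notation is an assumption.

```latex
% Sketch of the missing displays (standard notation assumed):
\begin{align*}
\vec{P} &= \chi^{(1)}\vec{E} + \chi^{(2)}\vec{E}^{2} + \chi^{(3)}\vec{E}^{3} + \cdots + \chi^{(n)}\vec{E}^{n}\\
I_{inv}:\quad -\vec{P} &= -\chi^{(1)}\vec{E} + \chi^{(2)}\vec{E}^{2} - \chi^{(3)}\vec{E}^{3} + \cdots\\
\text{sum:}\quad 0 &= 2\left(\chi^{(2)}\vec{E}^{2} + \chi^{(4)}\vec{E}^{4} + \cdots\right)
\;\Longrightarrow\; \chi^{(2i)} = 0 .
\end{align*}
```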
As a second-order nonlinear process, SFG is dependent on the 2nd order susceptibility χ ( 2 ) {\displaystyle \chi ^{(2)}} , which is a third rank tensor. This limits what samples are accessible for SFG. Centrosymmetric media include the bulk of gases, liquids, and most solids under the assumption of the electric-dipole approximation, which neglects the signal generated by multipoles and magnetic moments. [ 7 ] At an interface between two different materials or two centrosymmetric media, the inversion symmetry is broken and an SFG signal can be generated. This suggests that the resulting spectra represent a thin layer of molecules. A signal is found when there is a net polar orientation. [ 7 ] [ 9 ]
The output beam is collected by a detector and its intensity I {\displaystyle I} is calculated using [ 7 ] [ 10 ] I ( ω 3 ) ∝ | χ ( 2 ) | 2 I ( ω 1 ) I ( ω 2 ) {\displaystyle I(\omega _{3})\propto |\chi ^{(2)}|^{2}\,I(\omega _{1})\,I(\omega _{2})}
where ω 1 {\displaystyle \omega _{1}} is the visible frequency, ω 2 {\displaystyle \omega _{2}} is the IR frequency and ω 3 = ω 1 + ω 2 {\displaystyle \omega _{3}=\omega _{1}+\omega _{2}} is the SFG frequency. The constant of proportionality varies across the literature, with many versions including the product of the square of the output frequency, ω 3 2 {\displaystyle \omega _{3}^{2}} , and the squared secant of the reflection angle, sec 2 β {\displaystyle \sec ^{2}\beta } . Other factors include the indices of refraction for the three beams. [ 6 ]
The second order susceptibility has two contributions, χ ( 2 ) = χ n r + χ r {\displaystyle \chi ^{(2)}=\chi _{nr}+\chi _{r}} ,
where χ n r {\displaystyle \chi _{nr}} is the non-resonating contribution and χ r {\displaystyle \chi _{r}} is the resonating contribution. The non-resonating contribution is assumed to be from electronic responses. Although this contribution has often been considered to be constant over the spectrum, because it is generated simultaneously with the resonant response, the two responses must compete for intensity. This competition shapes the nonresonant contribution in the presence of resonant features by resonant attenuation. [ 11 ] Because it is not currently known how to adequately correct for nonresonant interferences, it is very important to experimentally isolate the resonant contributions from any nonresonant interference, often done using the technique of nonresonant suppression. [ 12 ]
The resonating contribution is from the vibrational modes and shows changes in resonance. It can be expressed as a sum of a series of Lorentz oscillators
where A {\displaystyle A} is the strength or amplitude, ω 0 {\displaystyle \omega _{0}} is the resonant frequency, Γ {\displaystyle \Gamma } is the damping or linewidth coefficient (FWHM), and each q > 1 {\displaystyle q>1} indexes the normal (resonant vibrational) mode. The amplitude is a product of μ {\displaystyle \mu } , the induced dipole moment, and α {\displaystyle \alpha } , the polarizability. [ 7 ] [ 9 ] Together, this indicates that the transition must be both IR and Raman active. [ 6 ]
The above equations can be combined to form
which is used to model the SFG output over a range of wavenumbers. When the SFG system scans over a vibrational mode of the surface molecule, the output intensity is resonantly enhanced. [ 6 ] [ 9 ] In a graphical analysis of the output intensity versus wavenumber, this is represented by Lorentzian peaks. Depending on the system, inhomogeneous broadening and interference between peaks may occur. The Lorentz profile can be convoluted with a Gaussian intensity distribution to better fit the intensity distribution. [ 13 ]
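A minimal numerical sketch of this lineshape model is given below; it evaluates |χ_nr + Σ_q A_q/(ω − ω_q + iΓ_q)|² on a wavenumber grid. The mode parameters, the nonresonant phase, and the sign convention in the denominator are all assumptions made for illustration, since these details vary between papers.

```python
import numpy as np

omega = np.linspace(2800, 3050, 500)               # IR wavenumber axis (cm^-1), illustrative
modes = [(5.0, 2880.0, 8.0), (8.0, 2940.0, 10.0)]  # assumed (A_q, omega_q, Gamma_q) per mode
chi_nr = 0.5 * np.exp(1j * 0.3)                    # assumed complex nonresonant background

# chi^(2) = chi_nr + sum of Lorentzian resonances; I_SFG is proportional to |chi^(2)|^2
chi2 = chi_nr + sum(A / (omega - w0 + 1j * G) for A, w0, G in modes)
intensity = np.abs(chi2) ** 2

print(f"strongest feature near {omega[np.argmax(intensity)]:.0f} cm^-1")
```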
From the second order susceptibility, it is possible to ascertain information about the orientation of molecules at the surface. χ ( 2 ) {\displaystyle \chi ^{(2)}} describes how the molecules at the interface respond to the input beam. A change in the net orientation of the polar molecules results in a change of sign of χ ( 2 ) {\displaystyle \chi ^{(2)}} . As a rank 3 tensor, the individual elements provide information about the orientation. For a surface that has azimuthal symmetry, i.e. assuming C ∞ {\displaystyle C_{\infty }} rod symmetry, only seven of the twenty-seven tensor elements are nonzero (with four being linearly independent), which are χ z z z {\displaystyle \chi _{zzz}} , χ x x z = χ y y z {\displaystyle \chi _{xxz}=\chi _{yyz}} , χ x z x = χ y z y {\displaystyle \chi _{xzx}=\chi _{yzy}} , and χ z x x = χ z y y {\displaystyle \chi _{zxx}=\chi _{zyy}} .
The tensor elements can be determined by using two different polarizers, one for the electric field vector perpendicular to the plane of incidence, labeled S, and one for the electric field vector parallel to the plane of incidence, labeled P. Four combinations are sufficient: PPP, SSP, SPS, PSS, with the letters listed in decreasing frequency, so the first is for the sum frequency, the second is for the visible beam, and the last is for the infrared beam. The four combinations give rise to four different intensities given by
where index i {\displaystyle i} is of the interfacial x y {\displaystyle xy} -plane, and f {\displaystyle f} and f ′ {\displaystyle f'} are the linear and nonlinear Fresnel factors.
By taking the tensor elements and applying the correct transformations, the orientation of the molecules on the surface can be found. [ 6 ] [ 9 ] [ 13 ]
Since SFG is a second-order nonlinear optical phenomenon, one of the main technical concerns in an experimental setup is being able to generate a signal strong enough to detect, with discernible peaks and narrow bandwidths. Picosecond and femtosecond pulse width lasers are often used due to the high peak field intensities. Common sources include Ti:Sapphire lasers , which can easily operate in the femtosecond regime, or Neodymium based lasers , for picosecond operation.
Whilst shorter pulses result in higher peak intensities, the spectral bandwidth of the laser pulse is also increased, which can place a limit on the spectral resolution of the output of an experimental setup. This can be compensated for by narrowing the bandwidth of the pump pulse, resulting in a tradeoff between the desired properties.
In modern experimental setups, the tuneable range of the probe pulse is augmented by optical parametric generation (OPG), optical parametric oscillation (OPO), and optical parametric amplification (OPA) systems. [ 13 ]
Signal strength can be improved by using special geometries, such as a total internal reflection setup which uses a prism to change the angles so they are close to the critical angles, allowing the SFG signal to be generated at its critical angle, enhancing the signal. [ 13 ]
Common detector setups utilize a monochromator and a photomultiplier for filtering and detecting. [ 7 ] | https://en.wikipedia.org/wiki/Sum_frequency_generation_spectroscopy |
In a Euclidean space , the sum of angles of a triangle equals a straight angle (180 degrees , π radians , two right angles , or a half- turn ). A triangle has three angles, one at each vertex , bounded by a pair of adjacent sides .
The sum can be computed directly using the definition of angle based on the dot product and trigonometric identities , or more quickly by reducing to the two-dimensional case and using Euler's identity .
It was unknown for a long time whether other geometries exist, for which this sum is different. The influence of this problem on mathematics was particularly strong during the 19th century. Ultimately, the answer was proven to be positive: in other spaces (geometries) this sum can be greater or lesser, but it then must depend on the triangle. Its difference from 180° is a case of angular defect and serves as an important distinction for geometric systems.
In Euclidean geometry , the triangle postulate states that the sum of the angles of a triangle is two right angles . This postulate is equivalent to the parallel postulate . [ 1 ] In the presence of the other axioms of Euclidean geometry, the following statements are equivalent: [ 2 ]
Any three non-collinear points determine a triangle. For example, triangle ABC has three vertices and therefore three angles: angle A, angle B, and angle C. In Euclidean geometry these three angles always add up to a straight angle, so ∠A + ∠B + ∠C = 180°.
Spherical geometry does not satisfy several of Euclid's axioms , including the parallel postulate . In addition, the sum of angles is not 180° anymore.
For a spherical triangle, the sum of the angles is greater than 180° and can be up to 540°. The amount by which the sum of the angles exceeds 180° is called the spherical excess , denoted as E {\textstyle E} or Δ {\textstyle \Delta } . [ 4 ] The spherical excess and the area A {\textstyle A} of the triangle determine each other via the relation (called Girard's theorem ): E = A r 2 {\displaystyle E={\frac {A}{r^{2}}}} where r {\displaystyle r} is the radius of the sphere, equal to r = 1 κ {\textstyle r={\frac {1}{\sqrt {\kappa }}}} where κ > 0 {\textstyle \kappa >0} is the constant curvature.
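A worked example of Girard's theorem: the octant triangle bounded by the equator and two meridians 90° apart has three right angles, so its spherical excess is 90° and its area is one eighth of the sphere. The sketch below checks this numerically; the radius is an arbitrary illustrative value.

```python
import math

r = 6371.0                                  # sphere radius (e.g. Earth's, in km) - illustrative
angle_sum_deg = 3 * 90.0                    # octant triangle: three right angles
E = math.radians(angle_sum_deg - 180.0)     # spherical excess in radians

area_from_excess = E * r ** 2               # Girard's theorem: E = A / r^2  =>  A = E * r^2
area_of_octant = 4 * math.pi * r ** 2 / 8   # one eighth of the sphere's surface

print(math.isclose(area_from_excess, area_of_octant))  # True
```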
The spherical excess can also be calculated from the three side lengths, the lengths of two sides and their angle, or the length of one side and the two adjacent angles (see spherical trigonometry ).
In the limit where the three side lengths tend to 0 {\displaystyle 0} , the spherical excess also tends to 0 {\displaystyle 0} : the spherical geometry locally resembles the euclidean one. More generally, the euclidean law is recovered as a limit when the area tends to 0 {\displaystyle 0} (which does not imply that the side lengths do so).
A spherical triangle is determined up to isometry by E {\textstyle E} , one side length and one adjacent angle. More precisely, according to Lexell's theorem , given a spherical segment [ A , B ] {\textstyle [A,B]} as a fixed side and a number 0 ∘ < E < 360 ∘ {\textstyle 0^{\circ }<E<360^{\circ }} , the set of points C {\textstyle C} such that the triangle A B C {\textstyle ABC} has spherical excess E {\displaystyle E} is a circle through the antipodes A ′ , B ′ {\textstyle A',B'} of A {\textstyle A} and B {\textstyle B} . Hence, the level sets of E {\textstyle E} form a foliation of the sphere with two singularities A ′ , B ′ {\displaystyle A',B'} , and the gradient vector of E {\textstyle E} is orthogonal to this foliation.
Hyperbolic geometry breaks Playfair's axiom, Proclus' axiom (the parallelism, defined as non-intersection, is intransitive in an hyperbolic plane), the equidistance postulate (the points on one side of, and equidistant from, a given line do not form a line), and Pythagoras' theorem. A circle [ 5 ] cannot have arbitrarily small curvature , [ 6 ] so the three points property also fails. The sum of angles is not 180° anymore, either.
In contrast to the spherical case, the sum of the angles of a hyperbolic triangle is less than 180°, and can be arbitrarily close to 0°. Thus one has an angular defect D = 180 ∘ − sum of angles . {\displaystyle D=180^{\circ }-{\text{sum of angles}}.} As in the spherical case, the angular defect D {\textstyle D} and the area A {\textstyle A} determine each other: one has D = A r 2 {\displaystyle D={\frac {A}{r^{2}}}} where r = 1 − κ {\textstyle r={\frac {1}{\sqrt {-\kappa }}}} and κ < 0 {\textstyle \kappa <0} is the constant curvature . This relation was first proven by Johann Heinrich Lambert . [ 7 ] One sees that all triangles have area bounded by 180 ∘ × r 2 {\textstyle 180^{\circ }\times r^{2}} .
As in the spherical case, D {\textstyle D} can be calculated using the three side lengths, the lengths of two sides and their angle, or the length of one side and the two adjacent angles (see hyperbolic trigonometry ).
Once again, the euclidean law is recovered as a limit when the side lengths (or, more generally, the area) tend to 0 {\displaystyle 0} . Letting the lengths all tend to infinity, however, causes D {\textstyle D} to tend to 180°, i.e. the three angles tend to 0°. One can regard this limit as the case of ideal triangles , joining three points at infinity by three bi-infinite geodesics. Their area is the limit value A = 180 ∘ × r 2 {\textstyle A=180^{\circ }\times {r^{2}}} .
Lexell's theorem also has a hyperbolic counterpart: instead of circles, the level sets become pairs of curves called hypercycles , and the foliation is non-singular. [ 8 ]
In Taxicab Geometry , a type of non-Euclidean geometry where distance is measured using the Manhattan metric (only horizontal and vertical moves are allowed, like a grid), the concept of angle sum in a triangle becomes ambiguous. In some interpretations, the sum of angles in a taxicab triangle can still be 180°, but the way angles are measured differs from Euclidean space. Right angles can stretch or contract depending on the definition used, making the sum of angles a more flexible concept than in standard Euclidean geometry.
This discrepancy arises because, in taxicab geometry, the shortest path between two points is not necessarily a straight line in the Euclidean sense but rather a series of horizontal and vertical segments. As a result, the definition of angles depends on the chosen metric, leading to alternative ways of measuring them. For example, in some interpretations, a "right angle" may still resemble the familiar 90° turn, while in others, it may stretch depending on the path taken. This flexibility in angle measurement makes taxicab geometry relevant to urban planning, computer science, and optimization problems, where grid-based movement is common.
Angles between adjacent sides of a triangle are referred to as interior angles in Euclidean and other geometries. Exterior angles can also be defined, and the Euclidean triangle postulate can be formulated as the exterior angle theorem . One can also consider the sum of all three exterior angles, which equals 360° [ 9 ] in the Euclidean case (as for any convex polygon ), is less than 360° in the spherical case, and is greater than 360° in the hyperbolic case.
In the differential geometry of surfaces , the question of a triangle's angular defect is understood as a special case of the Gauss-Bonnet theorem where the curvature of a closed curve is not a function, but a measure with the support in exactly three points – vertices of a triangle. | https://en.wikipedia.org/wiki/Sum_of_angles_of_a_triangle |
The sum of four cubes problem [ 1 ] asks whether every integer is the sum of four cubes of integers. It is conjectured the answer is affirmative, but this conjecture has been neither proven nor disproven. [ 2 ] Some of the cubes may be negative numbers , in contrast to Waring's problem on sums of cubes, where they are required to be positive.
The substitutions X = T {\displaystyle X=T} , Y = T {\displaystyle Y=T} , and Z = − T + 1 {\displaystyle Z=-T+1} in the identity ( X + Y + Z ) 3 − X 3 − Y 3 − Z 3 = 3 ( X + Y ) ( X + Z ) ( Y + Z ) {\displaystyle (X+Y+Z)^{3}-X^{3}-Y^{3}-Z^{3}=3(X+Y)(X+Z)(Y+Z)} lead to the identity ( T + 1 ) 3 + ( − T ) 3 + ( − T ) 3 + ( T − 1 ) 3 = 6 T , {\displaystyle (T+1)^{3}+(-T)^{3}+(-T)^{3}+(T-1)^{3}=6T,} which shows that every integer multiple of 6 is the sum of four cubes. (More generally, the same proof shows that every multiple of 6 in every ring is the sum of four cubes.)
Since every integer is congruent to its own cube modulo 6, it follows that every integer is the sum of five cubes of integers.
In 1966, V. A. Demjanenko [ de ] proved that any integer that is congruent neither to 4 nor to −4 modulo 9 is the sum of four cubes of integers. For this, he used the following identities: 6 x = ( x + 1 ) 3 + ( x − 1 ) 3 − x 3 − x 3 6 x + 3 = x 3 + ( − x + 4 ) 3 + ( 2 x − 5 ) 3 + ( − 2 x + 4 ) 3 18 x + 1 = ( 2 x + 14 ) 3 + ( − 2 x − 23 ) 3 + ( − 3 x − 26 ) 3 + ( 3 x + 30 ) 3 18 x + 7 = ( x + 2 ) 3 + ( 6 x − 1 ) 3 + ( 8 x − 2 ) 3 + ( − 9 x + 2 ) 3 18 x + 8 = ( x − 5 ) 3 + ( − x + 14 ) 3 + ( − 3 x + 29 ) 3 + ( 3 x − 30 ) 3 . {\displaystyle {\begin{aligned}6x&=(x+1)^{3}+(x-1)^{3}-x^{3}-x^{3}\\6x+3&=x^{3}+(-x+4)^{3}+(2x-5)^{3}+(-2x+4)^{3}\\18x+1&=(2x+14)^{3}+(-2x-23)^{3}+(-3x-26)^{3}+(3x+30)^{3}\\18x+7&=(x+2)^{3}+(6x-1)^{3}+(8x-2)^{3}+(-9x+2)^{3}\\18x+8&=(x-5)^{3}+(-x+14)^{3}+(-3x+29)^{3}+(3x-30)^{3}\ .\end{aligned}}} These identities (and those derived from them by passing to opposites ) immediately show that any integer which is congruent neither to 4 nor to −4 modulo 9 and is congruent neither to 2 nor to −2 modulo 18 is a sum of four cubes of integers. Using more subtle reasonings, Demjanenko proved that integers congruent to 2 or to −2 modulo 18 are also sums of four cubes of integers. [ 3 ]
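The identities above can be checked mechanically. The sketch below expands each right-hand side with SymPy and confirms it reduces to the stated linear polynomial.

```python
import sympy as sp

x = sp.symbols('x')
# Note (-x)**3 == -x**3, matching the two subtracted cubes in the first identity.
identities = [
    (6 * x,      [x + 1, x - 1, -x, -x]),
    (6 * x + 3,  [x, -x + 4, 2 * x - 5, -2 * x + 4]),
    (18 * x + 1, [2 * x + 14, -2 * x - 23, -3 * x - 26, 3 * x + 30]),
    (18 * x + 7, [x + 2, 6 * x - 1, 8 * x - 2, -9 * x + 2]),
    (18 * x + 8, [x - 5, -x + 14, -3 * x + 29, 3 * x - 30]),
]
for lhs, cubes in identities:
    assert sp.expand(sum(c ** 3 for c in cubes) - lhs) == 0
print("all five identities verified")
```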
The problem therefore only arises for integers congruent to 4 or to −4 modulo 9. One example is 13 = 10 3 + 7 3 + 1 3 + ( − 11 ) 3 , {\displaystyle 13=10^{3}+7^{3}+1^{3}+(-11)^{3},} but it is not known if every such integer can be written as a sum of four cubes.
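Representations like the one given for 13 can be found by a naive search over a small range of integers; a minimal sketch (with an arbitrarily chosen search bound) is shown below.

```python
from itertools import product

def four_cubes(n, bound=12):
    """Return one (a, b, c, d) with a^3 + b^3 + c^3 + d^3 == n, searching |values| <= bound."""
    for a, b, c, d in product(range(-bound, bound + 1), repeat=4):
        if a ** 3 + b ** 3 + c ** 3 + d ** 3 == n:
            return a, b, c, d
    return None

solution = four_cubes(13)
print(solution, sum(v ** 3 for v in solution))  # a valid quadruple; (10, 7, 1, -11) is one known example
```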
According to Henri Cohen 's translation [ 4 ] of Demjanenko's paper, these identities
54 x + 2 = ( 29484 x 2 + 2211 x + 43 ) 3 + ( − 29484 x 2 − 2157 x − 41 ) 3 + ( 9828 x 2 + 485 x + 4 ) 3 + ( − 9828 x 2 − 971 x − 22 ) 3 54 x + 20 = ( 3 x − 11 ) 3 + ( − 3 x + 10 ) 3 + ( x + 2 ) 3 + ( − x + 7 ) 3 216 x − 16 = ( 14742 x 2 − 2157 x + 82 ) 3 + ( − 14742 x 2 + 2211 x − 86 ) 3 + ( 4914 x 2 − 971 x + 44 ) 3 + ( − 4914 x 2 + 485 x − 8 ) 3 216 x + 92 = ( 3 x − 164 ) 3 + ( − 3 x + 160 ) 3 + ( x − 35 ) 3 + ( − x + 71 ) 3 {\displaystyle {\begin{aligned}54x+2&=(29484x^{2}+2211x+43)^{3}+(-29484x^{2}-2157x-41)^{3}+(9828x^{2}+485x+4)^{3}+(-9828x^{2}-971x-22)^{3}\\54x+20&=(3x-11)^{3}+(-3x+10)^{3}+(x+2)^{3}+(-x+7)^{3}\\216x-16&=(14742x^{2}-2157x+82)^{3}+(-14742x^{2}+2211x-86)^{3}+(4914x^{2}-971x+44)^{3}+(-4914x^{2}+485x-8)^{3}\\216x+92&=(3x-164)^{3}+(-3x+160)^{3}+(x-35)^{3}+(-x+71)^{3}\end{aligned}}} together with their complementary identities leave only the 108x±38 case unresolved. He also proves the 108x±38 case in his paper, completing the proof. | https://en.wikipedia.org/wiki/Sum_of_four_cubes_problem
In mathematics , the sum of two cubes is a cubed number added to another cubed number.
Every sum of cubes may be factored according to the identity a 3 + b 3 = ( a + b ) ( a 2 − a b + b 2 ) {\displaystyle a^{3}+b^{3}=(a+b)(a^{2}-ab+b^{2})} in elementary algebra . [ 1 ]
Binomial numbers generalize this factorization to higher odd powers.
Starting with the expression, a 2 − a b + b 2 {\displaystyle a^{2}-ab+b^{2}} and multiplying by a + b [ 1 ] ( a + b ) ( a 2 − a b + b 2 ) = a ( a 2 − a b + b 2 ) + b ( a 2 − a b + b 2 ) . {\displaystyle (a+b)(a^{2}-ab+b^{2})=a(a^{2}-ab+b^{2})+b(a^{2}-ab+b^{2}).} distributing a and b over a 2 − a b + b 2 {\displaystyle a^{2}-ab+b^{2}} , [ 1 ] a 3 − a 2 b + a b 2 + a 2 b − a b 2 + b 3 {\displaystyle a^{3}-a^{2}b+ab^{2}+a^{2}b-ab^{2}+b^{3}} and canceling the like terms, [ 1 ] a 3 + b 3 . {\displaystyle a^{3}+b^{3}.}
Similarly for the difference of cubes, ( a − b ) ( a 2 + a b + b 2 ) = a ( a 2 + a b + b 2 ) − b ( a 2 + a b + b 2 ) = a 3 + a 2 b + a b 2 − a 2 b − a b 2 − b 3 = a 3 − b 3 . {\displaystyle {\begin{aligned}(a-b)(a^{2}+ab+b^{2})&=a(a^{2}+ab+b^{2})-b(a^{2}+ab+b^{2})\\&=a^{3}+a^{2}b+ab^{2}\;-a^{2}b-ab^{2}-b^{3}\\&=a^{3}-b^{3}.\end{aligned}}}
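Both factorizations can be confirmed with a computer algebra system; the one-liner below uses SymPy.

```python
import sympy as sp

a, b = sp.symbols('a b')
print(sp.factor(a ** 3 + b ** 3))  # (a + b)*(a**2 - a*b + b**2)
print(sp.factor(a ** 3 - b ** 3))  # (a - b)*(a**2 + a*b + b**2)
```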
The mnemonic "SOAP", short for "Same, Opposite, Always Positive", helps recall of the signs : [ 2 ] [ 3 ] [ 4 ]
Fermat's last theorem in the case of exponent 3 states that the sum of two non-zero integer cubes does not result in a non-zero integer cube. The first recorded proof of the exponent 3 case was given by Euler . [ 5 ]
A Taxicab number is the smallest positive number that can be expressed as a sum of two positive integer cubes in n distinct ways. The smallest taxicab number after Ta(1) = 1, is Ta(2) = 1729 (the Ramanujan number ), [ 6 ] expressed as 1729 = 1 3 + 12 3 = 9 3 + 10 3 {\displaystyle 1729=1^{3}+12^{3}=9^{3}+10^{3}} .
Ta(3), the smallest taxicab number expressed in 3 different ways, is 87,539,319, expressed as 87539319 = 167 3 + 436 3 = 228 3 + 423 3 = 255 3 + 414 3 {\displaystyle 87539319=167^{3}+436^{3}=228^{3}+423^{3}=255^{3}+414^{3}} .
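Taxicab numbers can be rediscovered by a direct search: enumerate sums of two positive cubes and look for the smallest value hit at least twice. The sketch below finds Ta(2) = 1729; the search bounds are illustrative.

```python
from collections import defaultdict

reps = defaultdict(list)
LIMIT = 2000                       # search ceiling, chosen just large enough to reach 1729
for i in range(1, 13):
    for j in range(i, 13):
        s = i ** 3 + j ** 3
        if s <= LIMIT:
            reps[s].append((i, j))

ta2 = min(n for n, ways in reps.items() if len(ways) >= 2)
print(ta2, reps[ta2])              # 1729 [(1, 12), (9, 10)]
```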
A Cabtaxi number is the smallest positive number that can be expressed as a sum of two integer cubes in n ways, allowing the cubes to be negative or zero as well as positive. The smallest cabtaxi number after Cabtaxi(1) = 0, is Cabtaxi(2) = 91, [ 7 ] expressed as: 91 = 3 3 + 4 3 = 6 3 − 5 3 {\displaystyle 91=3^{3}+4^{3}=6^{3}-5^{3}} .
Cabtaxi(3), the smallest Cabtaxi number expressed in 3 different ways, is 4104, [ 8 ] expressed as 4104 = 2 3 + 16 3 = 9 3 + 15 3 = 18 3 − 12 3 {\displaystyle 4104=2^{3}+16^{3}=9^{3}+15^{3}=18^{3}-12^{3}} . | https://en.wikipedia.org/wiki/Sum_of_two_cubes
In number theory , the sum of two squares theorem relates the prime decomposition of any integer n > 1 to whether it can be written as a sum of two squares , such that n = a 2 + b 2 for some integers a , b . [ 1 ]
An integer greater than one can be written as a sum of two squares if and only if its prime decomposition contains no factor p k , where prime p ≡ 3 ( mod 4 ) {\displaystyle p\equiv 3{\pmod {4}}} and k is odd .
In writing a number as a sum of two squares, it is allowed for one of the squares to be zero, or for both of them to be equal to each other, so all squares and all doubles of squares are included in the numbers that can be represented in this way. This theorem supplements Fermat's theorem on sums of two squares which says when a prime number can be written as a sum of two squares, in that it also covers the case for composite numbers .
A number may have multiple representations as a sum of two squares, counted by the sum of squares function ; for instance, every Pythagorean triple a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} gives a second representation for c 2 {\displaystyle c^{2}} beyond the trivial representation c 2 + 0 2 {\displaystyle c^{2}+0^{2}} .
The prime decomposition of the number 2450 is given by 2450 = 2 · 5 2 · 7 2 . Of the primes occurring in this decomposition, 2, 5, and 7, only 7 is congruent to 3 modulo 4. Its exponent in the decomposition, 2, is even . Therefore, the theorem states that it is expressible as the sum of two squares. Indeed, 2450 = 7 2 + 49 2 .
The prime decomposition of the number 3430 is 2 · 5 · 7 3 . This time, the exponent of 7 in the decomposition is 3, an odd number. So 3430 cannot be written as the sum of two squares.
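Both examples can be reproduced programmatically: the sketch below applies the theorem's factorization criterion and cross-checks it with a brute-force search for a representation.

```python
import math
from sympy import factorint

def representable(n):
    # Theorem: n > 1 is a sum of two squares iff every prime p ≡ 3 (mod 4) has an even exponent.
    return all(e % 2 == 0 for p, e in factorint(n).items() if p % 4 == 3)

def brute_force(n):
    return any(math.isqrt(n - a * a) ** 2 == n - a * a for a in range(math.isqrt(n) + 1))

for n in (2450, 3430):
    print(n, representable(n), brute_force(n))
# 2450 True True    (e.g. 2450 = 7^2 + 49^2)
# 3430 False False
```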
The numbers that can be represented as the sums of two squares form the integer sequence [ 2 ]
They form the set of all norms of Gaussian integers ; [ 2 ] their square roots form the set of all lengths of line segments between pairs of points in the two-dimensional integer lattice .
The number of representable numbers in the range from 0 to any number n {\displaystyle n} is proportional to n log n {\displaystyle {\frac {n}{\sqrt {\log n}}}} , with a limiting constant of proportionality given by the Landau–Ramanujan constant , approximately 0.764. [ 3 ]
The product of any two representable numbers is another representable number. Its representation can be derived from representations of its two factors, using the Brahmagupta–Fibonacci identity .
Two-square theorem — Denote the number of divisors of n {\displaystyle n} as d ( n ) {\displaystyle d(n)} , and write d a ( n ) {\displaystyle d_{a}(n)} for the number of those divisors with d ≡ a mod 4 {\displaystyle d\equiv a{\bmod {4}}} . Let n = 2 f p 1 r 1 p 2 r 2 ⋯ q 1 s 1 q 2 s 2 ⋯ {\displaystyle n=2^{f}p_{1}^{r_{1}}p_{2}^{r_{2}}\cdots q_{1}^{s_{1}}q_{2}^{s_{2}}\cdots } where p i ≡ 1 mod 4 , q i ≡ 3 mod 4 {\displaystyle p_{i}\equiv 1{\bmod {4}},\ q_{i}\equiv 3{\bmod {4}}} .
Let r 2 ( n ) {\displaystyle r_{2}(n)} be the number of ways n {\displaystyle n} can be represented as the sum of two squares.
Then, r 2 ( n ) = 0 {\displaystyle r_{2}(n)=0} if any of the exponents s j {\displaystyle s_{j}} are odd. If all s j {\displaystyle s_{j}} are even, then r 2 ( n ) = 4 d ( p 1 r 1 p 2 r 2 ⋯ ) = 4 ( d 1 ( n ) − d 3 ( n ) ) {\displaystyle r_{2}(n)=4d(p_{1}^{r_{1}}p_{2}^{r_{2}}\cdots )=4(d_{1}(n)-d_{3}(n))}
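The divisor formula can be sanity-checked against direct counting of ordered, signed pairs (a, b) with a² + b² = n, as in the short sketch below.

```python
import math
from sympy import divisors

def r2_from_divisors(n):
    d1 = sum(1 for d in divisors(n) if d % 4 == 1)
    d3 = sum(1 for d in divisors(n) if d % 4 == 3)
    return 4 * (d1 - d3)

def r2_by_counting(n):
    b = math.isqrt(n)
    return sum(1 for a in range(-b, b + 1) for c in range(-b, b + 1) if a * a + c * c == n)

for n in (1, 2, 5, 25, 2450, 3430):
    print(n, r2_from_divisors(n), r2_by_counting(n))  # the two columns agree; 3430 gives 0
```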
Proved by Gauss using quadratic forms and Jacobi using elliptic functions . [ 4 ] An elementary proof is based on the unique factorization of the Gaussian integers . [ 4 ] Hirschhorn gives a short proof derived from the Jacobi triple product . [ 5 ] | https://en.wikipedia.org/wiki/Sum_of_two_squares_theorem |
In quantum mechanics , a sum rule is a formula for transitions between energy levels, in which the sum of the transition strengths is expressed in a simple form. Sum rules are used to describe the properties of many physical systems, including solids, atoms, atomic nuclei, and nuclear constituents such as protons and neutrons.
The sum rules are derived from general principles, and are useful in situations where the behavior of individual energy levels is too complex to be described by a precise quantum-mechanical theory. In general, sum rules are derived by using Heisenberg 's quantum-mechanical algebra to construct operator equalities, which are then applied to the particles or energy levels of a system.
Assume that the Hamiltonian H ^ {\displaystyle {\hat {H}}} has a complete set of eigenfunctions | n ⟩ {\displaystyle |n\rangle } with eigenvalues E n {\displaystyle E_{n}} :
For the Hermitian operator A ^ {\displaystyle {\hat {A}}} we define the repeated commutator C ^ ( k ) {\displaystyle {\hat {C}}^{(k)}} iteratively by:
The operator C ^ ( 0 ) {\displaystyle {\hat {C}}^{(0)}} is Hermitian since A ^ {\displaystyle {\hat {A}}} is defined to be Hermitian. The operator C ^ ( 1 ) {\displaystyle {\hat {C}}^{(1)}} is anti-Hermitian:
By induction one finds:
and also
For a Hermitian operator we have
Using this relation we derive:
The result can be written as
For k = 1 {\displaystyle k=1} this gives:
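The displayed operator equalities are missing from this extraction. As a hedged numerical illustration of the kind of relation the text builds toward, the sketch below checks the standard k = 1 result, Σ_n (E_n − E_m)|⟨m|Â|n⟩|² = ½⟨m|[Â,[Ĥ,Â]]|m⟩, for random Hermitian matrices; treating this as the article's exact statement is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

dim = 6
H, A = random_hermitian(dim), random_hermitian(dim)
E, V = np.linalg.eigh(H)
Ae = V.conj().T @ A @ V                      # A expressed in the energy eigenbasis
He = np.diag(E)

m = 0
lhs = sum((E[n] - E[m]) * abs(Ae[m, n]) ** 2 for n in range(dim))
C1 = He @ Ae - Ae @ He                       # [H, A]
rhs = 0.5 * (Ae @ C1 - C1 @ Ae)[m, m].real   # (1/2) <m| [A, [H, A]] |m>

print(np.isclose(lhs, rhs))                  # True
```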
| https://en.wikipedia.org/wiki/Sum_rule_in_quantum_mechanics
In quantum field theory , a sum rule is a relation between a static quantity and an integral over a dynamical quantity. Therefore, they have a form such as:
∫ A ( x ) d x = B {\displaystyle \int A(x)dx=B}
where A ( x ) {\displaystyle A(x)} is the dynamical quantity, for example a structure function characterizing a particle, and B {\displaystyle B} is the static quantity, for example the mass or the charge of that particle.
Quantum field theory sum rules should not be confused with sum rules in quantum chromodynamics or quantum mechanics .
Many sum rules exist. The validity of a particular sum rule can be sound if its derivation is based on solid assumptions, or on the contrary, some sum rules have been shown experimentally to be incorrect, due to unwarranted assumptions made in their derivation. The list of sum rules below illustrate this.
Sum rules are usually obtained by combining a dispersion relation with the optical theorem , [ 1 ] using the operator product expansion or current algebra . [ 2 ]
Quantum field theory sum rules are useful in a variety of ways. They make it possible to test the theory used to derive them, e.g. quantum chromodynamics , or an assumption made for the derivation, e.g. Lorentz invariance . They can be used to study a particle, e.g. how the spins of partons make up the spin of the proton . They can also be used as a measurement method. If the static quantity B {\displaystyle B} is difficult to measure directly, measuring A ( x ) {\displaystyle A(x)} and integrating it offers a practical way to obtain B {\displaystyle B} (provided that the particular sum rule linking A ( x ) {\displaystyle A(x)} to B {\displaystyle B} is reliable).
Although in principle, B {\displaystyle B} is a static quantity, the denomination of sum rule has been extended to the case where B {\displaystyle B} is a probability amplitude , e.g. the probability amplitude of Compton scattering , [ 1 ] see the list of sum rules below.
(The list is not exhaustive) | https://en.wikipedia.org/wiki/Sum_rules_(quantum_field_theory) |
Sumitomo Chemical Co., Ltd. ( 住友化学株式会社 , Sumitomo Kagaku Kabushiki-gaisha ) is a major Japanese chemical company . The company is listed on the first section of the Tokyo Stock Exchange and is a constituent of the Nikkei 225 [ 3 ] stock index . It is a member of the Sumitomo group and was founded in 1913 as a fertilizer manufacturing plant. | https://en.wikipedia.org/wiki/Sumitomo_Chemical
In mathematics, a summability kernel is a family or sequence of periodic integrable functions satisfying a certain set of properties, listed below. Certain kernels, such as the Fejér kernel , are particularly useful in Fourier analysis . Summability kernels are related to approximation of the identity ; definitions of an approximation of identity vary, [ 1 ] but sometimes the definition of an approximation of the identity is taken to be the same as for a summability kernel.
Let T := R / Z {\displaystyle \mathbb {T} :=\mathbb {R} /\mathbb {Z} } . A summability kernel is a sequence ( k n ) {\displaystyle (k_{n})} in L 1 ( T ) {\displaystyle L^{1}(\mathbb {T} )} that satisfies
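The three displayed conditions did not survive extraction. Based on the discussion that follows (normalization, a uniform L¹ bound, and concentration of mass near the origin), the standard requirements are restated below in LaTeX; the exact constants and normalizations used by the article are an assumption.

```latex
\begin{align}
&\text{(1)}\quad \int_{\mathbb{T}} k_n(t)\,dt = 1 \quad\text{for all } n, \\
&\text{(2)}\quad \sup_n \int_{\mathbb{T}} |k_n(t)|\,dt < \infty, \\
&\text{(3)}\quad \int_{\delta \le |t| \le 1/2} |k_n(t)|\,dt \longrightarrow 0
  \quad (n \to \infty) \quad\text{for every } \delta > 0 .
\end{align}
```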
Note that if k n ≥ 0 {\displaystyle k_{n}\geq 0} for all n {\displaystyle n} , i.e. ( k n ) {\displaystyle (k_{n})} is a positive summability kernel , then the second requirement follows automatically from the first.
With the more usual convention T = R / 2 π Z {\displaystyle \mathbb {T} =\mathbb {R} /2\pi \mathbb {Z} } , the first equation becomes 1 2 π ∫ T k n ( t ) d t = 1 {\displaystyle {\frac {1}{2\pi }}\int _{\mathbb {T} }k_{n}(t)\,dt=1} , and the upper limit of integration on the third equation should be extended to π {\displaystyle \pi } , so that the condition 3 above should be
∫ δ ≤ | t | ≤ π | k n ( t ) | d t → 0 {\displaystyle \int _{\delta \leq |t|\leq \pi }|k_{n}(t)|\,dt\to 0} as n → ∞ {\displaystyle n\to \infty } , for every δ > 0 {\displaystyle \delta >0} .
This expresses the fact that the mass concentrates around the origin as n {\displaystyle n} increases.
One can also consider R {\displaystyle \mathbb {R} } rather than T {\displaystyle \mathbb {T} } ; then (1) and (2) are integrated over R {\displaystyle \mathbb {R} } , and (3) over | t | > δ {\displaystyle |t|>\delta } .
Let ( k n ) {\displaystyle (k_{n})} be a summability kernel, and ∗ {\displaystyle *} denote the convolution operation. | https://en.wikipedia.org/wiki/Summability_kernel |
In mathematics , summation is the addition of a sequence of numbers , called addends or summands ; the result is their sum or total . Beside numbers, other types of values can be summed as well: functions , vectors , matrices , polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined.
Summations of infinite sequences are called series . They involve the concept of limit , and are not considered in this article.
The summation of an explicit sequence is denoted as a succession of additions. For example, summation of [1, 2, 4, 2] is denoted 1 + 2 + 4 + 2 , and results in 9, that is, 1 + 2 + 4 + 2 = 9 . Because addition is associative and commutative , there is no need for parentheses, and the result is the same irrespective of the order of the summands. Summation of a sequence of only one summand results in the summand itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0.
Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as 1 + 2 + 3 + 4 + ⋯ + 99 + 100 . Otherwise, summation is denoted by using Σ notation , where ∑ {\textstyle \sum } is an enlarged capital Greek letter sigma . For example, the sum of the first n natural numbers can be denoted as
For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example, [ a ]
Although such formulas do not always exist, many summation formulas have been discovered—with some of the most common and elementary ones being listed in the remainder of this article.
Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol , ∑ {\textstyle \sum } , an enlarged form of the upright capital Greek letter sigma . [ 1 ] This is defined as
where i is the index of summation ; a i is an indexed variable representing each term of the sum; m is the lower bound of summation , and n is the upper bound of summation . The " i = m " under the summation symbol means that the index i starts out equal to m . The index, i , is incremented by one for each successive term, stopping when i = n . [ b ]
This is read as "sum of a i , from i = m to n ". The term finite series is sometimes used when discussing summation presented here, to contrast with infinite series .
Here is an example showing the summation of squares:
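The displayed example is missing here; as a stand-in, the snippet below evaluates one concrete instance of the notation, Σ_{i=3}^{6} i², in Python (the particular bounds are an illustrative assumption).

```python
# Sigma notation in code: sum of i**2 for i running from 3 to 6 inclusive.
total = sum(i ** 2 for i in range(3, 7))
print(total)  # 3^2 + 4^2 + 5^2 + 6^2 = 9 + 16 + 25 + 36 = 86
```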
In general, while any variable can be used as the index of summation (provided that no ambiguity is incurred), some of the most common ones include letters such as i {\displaystyle i} , [ c ] j {\displaystyle j} , k {\displaystyle k} , and n {\displaystyle n} ; the latter is also often used for the upper bound of a summation.
Alternatively, index and bounds of summation are sometimes omitted from the definition of summation if the context is sufficiently clear. This applies particularly when the index runs from 1 to n . [ 2 ] For example, one might write that:
Generalizations of this notation are often used, in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example:
is an alternative notation for ∑ k = 0 99 f ( k ) , {\textstyle \sum _{k=0}^{99}f(k),} the sum of f ( k ) {\displaystyle f(k)} over all ( integers ) k {\displaystyle k} in the specified range. Similarly,
is the sum of f ( x ) {\displaystyle f(x)} over all elements x {\displaystyle x} in the set S {\displaystyle S} , and
is the sum of μ ( d ) {\displaystyle \mu (d)} over all positive integers d {\displaystyle d} dividing n {\displaystyle n} . [ d ]
There are also ways to generalize the use of many sigma signs. For example,
is the same as
A similar notation is used for the product of a sequence , where ∏ {\textstyle \prod } , an enlarged form of the Greek capital letter pi , is used instead of ∑ . {\textstyle \sum .}
It is possible to sum fewer than 2 numbers:
These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case.
For example, if n = m {\displaystyle n=m} in the definition above, then there is only one term in the sum; if n = m − 1 {\displaystyle n=m-1} , then there is none.
The phrase 'algebraic sum' refers to a sum of terms which may have positive or negative signs. Terms with positive signs are added, while terms with negative signs are subtracted; for example, the algebraic sum of +1 and −1 is 0.
Summation may be defined recursively as follows:
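The displayed recursive definition is missing from this extraction; a direct transcription of the usual recursion (an empty sum is 0, otherwise peel off the last term) is sketched below.

```python
def summation(g, m, n):
    """Sum of g(i) for i = m..n, defined recursively."""
    if n < m:
        return 0                          # empty sum convention
    return summation(g, m, n - 1) + g(n)  # peel off the last term

print(summation(lambda i: i, 1, 100))     # 5050
```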
In the notation of measure and integration theory, a sum can be expressed as a definite integral ,
where [ a , b ] {\displaystyle [a,b]} is the subset of the integers from a {\displaystyle a} to b {\displaystyle b} , and where μ {\displaystyle \mu } is the counting measure over the integers.
Given a function f that is defined over the integers in the interval [ m , n ] , the following equation holds:
This is known as a telescoping series and is the analogue of the fundamental theorem of calculus in calculus of finite differences , which states that:
where
is the derivative of f .
An example of application of the above equation is the following:
Using binomial theorem , this may be rewritten as:
The above formula is more commonly used for inverting the difference operator Δ {\displaystyle \Delta } , defined by:
where f is a function defined on the nonnegative integers.
Thus, given such a function f , the problem is to compute the antidifference of f , a function F = Δ − 1 f {\displaystyle F=\Delta ^{-1}f} such that Δ F = f {\displaystyle \Delta F=f} . That is, F ( n + 1 ) − F ( n ) = f ( n ) . {\displaystyle F(n+1)-F(n)=f(n).} This function is defined up to the addition of a constant, and may be chosen as [ 3 ]
There is not always a closed-form expression for such a summation, but Faulhaber's formula provides a closed form in the case where f ( n ) = n k {\displaystyle f(n)=n^{k}} and, by linearity , for every polynomial function of n .
Many such approximations can be obtained by the following connection between sums and integrals , which holds for any increasing function f :
and for any decreasing function f :
For more general approximations, see the Euler–Maclaurin formula .
For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance
since the right-hand side is by definition the limit for n → ∞ {\displaystyle n\to \infty } of the left-hand side. However, for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f : it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.
The formulae below involve finite sums; for infinite summations or finite summations of expressions involving trigonometric functions or other transcendental functions , see list of mathematical series .
More generally, one has Faulhaber's formula for p > 1 {\displaystyle p>1}
where B k {\displaystyle B_{k}} denotes a Bernoulli number , and ( p k ) {\displaystyle {\binom {p}{k}}} is a binomial coefficient .
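The first few special cases of Faulhaber's formula are the familiar closed forms for sums of first, second and third powers; the SymPy sketch below verifies them symbolically.

```python
import sympy as sp

k, n = sp.symbols('k n', integer=True, nonnegative=True)

closed_forms = {
    1: n * (n + 1) / 2,
    2: n * (n + 1) * (2 * n + 1) / 6,
    3: (n * (n + 1) / 2) ** 2,
}
for p, formula in closed_forms.items():
    assert sp.simplify(sp.summation(k ** p, (k, 1, n)) - formula) == 0
print("power-sum closed forms verified")
```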
In the following summations, a is assumed to be different from 1.
There exist very many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are the following.
In the following summations, n P k {\displaystyle {}_{n}P_{k}} is the number of k -permutations of n .
The following are useful approximations (using theta notation ): | https://en.wikipedia.org/wiki/Summation |
In telecommunications , the term summation check ( sum check ) has the following meanings:
| https://en.wikipedia.org/wiki/Summation_check
In mathematics , a summation equation or discrete integral equation is an equation in which an unknown function appears under a summation sign. The theories of summation equations and integral equations can be unified as integral equations on time scales [ 1 ] using time scale calculus . A summation equation compares to a difference equation as an integral equation compares to a differential equation .
The Volterra summation equation is: x ( t ) = f ( t ) + ∑ s = m n k ( t , s , x ( s ) ) {\displaystyle x(t)=f(t)+\sum _{s=m}^{n}k{\bigl (}t,s,x(s){\bigr )}} where x is the unknown function, s , t are integers , and f , k are known functions.
| https://en.wikipedia.org/wiki/Summation_equation
In metabolic control analysis , a variety of theorems have been discovered and discussed in the literature. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The most well known of these are flux and concentration control coefficient summation relationships. These theorems are the result of the stoichiometric structure and mass conservation properties of biochemical networks. [ 5 ] [ 6 ] Equivalent theorems have not been found, for example, in electrical or economic systems.
The summation of the flux and concentration control coefficients were discovered independently by the Kacser/Burns group [ 7 ] and the Heinrich/Rapoport group [ 8 ] in the early 1970s and late 1960s.
If we define the control coefficients using enzyme concentration, then the summation theorems are written as:
However these theorems depend on the assumption that reaction rates are proportional to enzyme concentration. An alternative way to write the theorems is to use control coefficients that are defined with respect to the local rates which is therefore independent of how rates respond to changes in enzyme concentration:
Although originally derived for simple linear chains of enzyme catalyzed reactions, it became apparent that the theorems applied to pathways of any structure including pathways with complex regulation involving feedback control. [ 9 ] [ 10 ]
There are different ways to derive the summation theorems. One is analytical and rigorous using a combination of linear algebra and calculus. [ 11 ] The other is less rigorous, but more operational and intuitive. The latter derivation is shown here.
Consider the two-step pathway:
X o ⟶ v 1 S ⟶ v 2 X 1 {\displaystyle {\text{X}}_{o}{\stackrel {v_{1}}{\longrightarrow }}{\text{S}}{\stackrel {v_{2}}{\longrightarrow }}{\text{X}}_{1}}
where X o {\displaystyle X_{o}} and X 1 {\displaystyle X_{1}} are fixed species so that the system can achieve a steady-state .
Let the pathway be at steady-state and imagine increasing the concentration of enzyme, e 1 {\displaystyle e_{1}} , catalyzing the first step, v 1 {\displaystyle v_{1}} , by an amount, δ e 1 {\displaystyle \delta e_{1}} . The effect of this is to increase the steady-state levels of S and flux, J. Let us now increase the level of e 2 {\displaystyle e_{2}} by δ e 2 {\displaystyle \delta e_{2}} such that the change in S is restored to the original value it had at steady-state.
The net effect of these two changes is by definition, δ s = 0 {\displaystyle \delta s=0} .
There are two ways to look at this thought experiment, from the perspective of the system and from the perspective of local changes. For the system we can compute the overall change in flux or species concentration by adding the two control coefficient terms, thus:
δ J J = C e 1 J δ e 1 e 1 + C e 2 J δ e 2 e 2 {\displaystyle {\frac {\delta J}{J}}=C_{e_{1}}^{J}{\frac {\delta e_{1}}{e_{1}}}+C_{e_{2}}^{J}{\frac {\delta e_{2}}{e_{2}}}}
δ s s = C e 1 s δ e 1 e 1 + C e 2 s δ e 2 e 2 = 0 {\displaystyle {\frac {\delta s}{s}}=C_{e_{1}}^{s}{\frac {\delta e_{1}}{e_{1}}}+C_{e_{2}}^{s}{\frac {\delta e_{2}}{e_{2}}}=0}
We can also look at what is happening locally at every reaction step for which there will be two: one for v 1 {\displaystyle v_{1}} , and another for v 2 {\displaystyle v_{2}} . Since the thought experiment guarantees that δ s = 0 {\displaystyle \delta s=0} , the local equations are quite simple:
δ v 1 v 1 = ε e 1 v 1 δ e 1 e 1 {\displaystyle {\frac {\delta v_{1}}{v_{1}}}=\varepsilon _{e_{1}}^{v_{1}}{\frac {\delta e_{1}}{e_{1}}}}
δ v 2 v 2 = ε e 2 v 2 δ e 2 e 2 {\displaystyle {\frac {\delta v_{2}}{v_{2}}}=\varepsilon _{e_{2}}^{v_{2}}{\frac {\delta e_{2}}{e_{2}}}}
where the ε {\displaystyle \varepsilon } terms are the elasticities. However, because the enzyme elasticity is equal to one , these reduce to:
δ v 1 v 1 = δ e 1 e 1 {\displaystyle {\frac {\delta v_{1}}{v_{1}}}={\frac {\delta e_{1}}{e_{1}}}}
δ v 2 v 2 = δ e 2 e 2 {\displaystyle {\frac {\delta v_{2}}{v_{2}}}={\frac {\delta e_{2}}{e_{2}}}}
Because the pathway is linear, at steady-state, v 1 = v 2 = J {\displaystyle v_{1}=v_{2}=J} . We can substitute these expressions into the system equations to give:
δ J J = C e 1 J δ v 1 v 1 + C e 2 J δ v 2 v 2 {\displaystyle {\frac {\delta J}{J}}=C_{e_{1}}^{J}{\frac {\delta v_{1}}{v_{1}}}+C_{e_{2}}^{J}{\frac {\delta v_{2}}{v_{2}}}}
δ s s = C e 1 s δ v 1 v 1 + C e 2 s δ v 2 v 2 = 0 {\displaystyle {\frac {\delta s}{s}}=C_{e_{1}}^{s}{\frac {\delta v_{1}}{v_{1}}}+C_{e_{2}}^{s}{\frac {\delta v_{2}}{v_{2}}}=0}
Note that at steady state the change in v 1 {\displaystyle v_{1}} and v 2 {\displaystyle v_{2}} must be the same, therefore δ v 1 / v 1 = δ v 2 / v 2 {\displaystyle \delta v_{1}/v_{1}=\delta v_{2}/v_{2}} .
Setting α = δ J / J = δ v 1 / v 1 = δ v 2 / v 2 {\displaystyle \alpha =\delta J/J=\delta v_{1}/v_{1}=\delta v_{2}/v_{2}} , we can rewrite the above equations as:
α = C e 1 J α + C e 2 J α = α ( C e 1 J + C e 2 J ) {\displaystyle \alpha =C_{e_{1}}^{J}\alpha +C_{e_{2}}^{J}\alpha =\alpha (C_{e_{1}}^{J}+C_{e_{2}}^{J})}
0 = C e 1 s α + C e 2 s α = α ( C e 1 s + C e 2 s ) {\displaystyle 0=C_{e_{1}}^{s}\alpha +C_{e_{2}}^{s}\alpha =\alpha (C_{e_{1}}^{s}+C_{e_{2}}^{s})}
We then conclude through cancelation of α {\displaystyle \alpha } since α ≠ 0 {\displaystyle \alpha \neq 0} , that:
1 = C e 1 J + C e 2 J {\displaystyle 1=C_{e_{1}}^{J}+C_{e_{2}}^{J}}
0 = C e 1 s + C e 2 s {\displaystyle 0=C_{e_{1}}^{s}+C_{e_{2}}^{s}}
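The two theorems can also be checked numerically for a concrete two-step pathway. The sketch below uses hypothetical reversible mass-action rate laws v₁ = e₁(k₁X₀ − k₂S) and v₂ = e₂(k₃S − k₄X₁), solves for the steady state, and estimates the scaled control coefficients by finite differences; all parameter values are illustrative assumptions.

```python
Xo, X1 = 10.0, 1.0                    # fixed boundary species
k1, k2, k3, k4 = 2.0, 1.0, 3.0, 0.5   # illustrative rate constants

def steady_state(e1, e2):
    # v1 = v2 at steady state  =>  S = (e1*k1*Xo + e2*k4*X1) / (e1*k2 + e2*k3)
    S = (e1 * k1 * Xo + e2 * k4 * X1) / (e1 * k2 + e2 * k3)
    J = e2 * (k3 * S - k4 * X1)       # steady-state flux
    return S, J

def control_coefficients(e1, e2, h=1e-6):
    S0, J0 = steady_state(e1, e2)
    out = []
    for i in range(2):
        e_up = [e1, e2]; e_dn = [e1, e2]
        e_up[i] *= 1 + h; e_dn[i] *= 1 - h
        S_up, J_up = steady_state(*e_up)
        S_dn, J_dn = steady_state(*e_dn)
        out.append(((J_up - J_dn) / (2 * h * J0),   # C^J_ei = d ln J / d ln e_i
                    (S_up - S_dn) / (2 * h * S0)))  # C^S_ei = d ln S / d ln e_i
    return out

(CJ1, CS1), (CJ2, CS2) = control_coefficients(1.0, 1.0)
print(f"flux control sum          = {CJ1 + CJ2:.6f}")  # ~1 (flux summation theorem)
print(f"concentration control sum = {CS1 + CS2:.6f}")  # ~0 (concentration summation theorem)
```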
The summation theorems can be interpreted in various ways. The first is that the influence enzymes have over steady-state fluxes and concentrations is not necessarily concentrated at one location. In the past, control of a pathway was considered to be located at one point only, called the master reaction or rate limiting step . The summation theorem suggests this does not necessarily have to be the case.
The flux summation theorem also suggests that there is a fixed total amount of flux control in a pathway, such that if one step gains control another step must lose control.
Although flux control is shared, this doesn't imply that control is evenly distributed. For a large network, the average flux control will, according to the flux summation theorem, be equal to 1 / n {\displaystyle 1/n} , that is a small number. In order for a biological cell to have any appreciable control over a pathway via changes in gene expression, some concentration of flux control at a small number of sites will be necessary. For example, in mammalian cancer cell lines, it has been shown [ 12 ] that flux control is concentrated at four sites: glucose import , hexokinase , phosphofructokinase , and lactate export .
Moreover, Kacser and Burns [ 13 ] suggested that since the flux–enzyme relationship is somewhat hyperbolic, and since for most enzymes the wild-type diploid level of enzyme activity lies on the part of the curve where further changes have little effect, a heterozygote of the wild-type with a null mutant will have half the enzyme activity but will not exhibit a noticeably reduced flux. Therefore, the wild type appears dominant and the mutant recessive because of the system characteristics of a metabolic pathway. Although originally suggested by Sewall Wright, [ 14 ] [ 15 ] the development of metabolic control analysis put the idea on a more sound theoretical footing. This explanation is also consistent with the flux summation theorem, particularly for large systems. Not all dominance properties can be explained in this way, but it does offer an explanation for dominance at least at the metabolic level. [ 16 ]
In contrast to the flux summation theorem, the concentration summation theorem sums to zero. The implications of this are that some enzymes will cause a given metabolite to increase while others, in order to satisfy the summation to zero, must cause the same metabolite to decrease. This is particularly noticeable in a linear chain of enzyme reactions where, given a metabolite located in the center of the pathway, an increase in expression of any enzyme upstream of the metabolite will cause the metabolite to increase in concentration. In contrast, an increase in expression of any enzyme downstream of the metabolite will cause the given metabolite to decrease in concentration. [ 17 ] | https://en.wikipedia.org/wiki/Summation_theorems_(biochemistry) |
A sump , or siphon , is a passage in a cave that is submerged under water. [ 1 ] A sump may be static, with no inward or outward flow, or active, with continuous through-flow. Static sumps may also be connected underwater to an active stream passage. When short in length, a sump may be called a duck ; however, this can also refer to a section or passage with some (minimal) airspace above the water.
Depending on hydrological factors specific to a cave – such as the sea tide , changes in river flow, or the relationship with the local water table – sumps and ducks may fluctuate in water level and depth (and sometimes in length, due to the shape of adjacent passage).
Short sumps may be passed simply by holding one's breath while ducking through the submerged section (for example, Sump 1 in Swildon's Hole ). This is known as "free diving" and can only be attempted if the sump is known to be short and not technically difficult (e.g. constricted or requiring navigation). Longer and more technically difficult sumps can only be passed by cave diving (as happened repeatedly in the exploration of Krubera Cave ).
When practical, a sump can also be drained using buckets, pumps or siphons . Pumping the water away requires the inward flow of water into the sump to be less than the rate at which the pump empties it, as well as a suitable place to collect the emptied water. Upstream sumps have been successfully emptied using hoses to siphon water out of them, such as at the Sinkhole Dersios during exploration in 2005. The water was sent deeper into the sinkhole , and the emptied sumps revealed virgin passage behind them. During a rescue from beyond a downstream sump at Sarkhos Cave in 2002, water was pumped upstream into a dam constructed a few metres above the flooded passage.
Some manuals also mention the use of explosives or other forms of force to empty sumps, but the ecological damage done to the fragile cave environment usually rules out the use of such methods. | https://en.wikipedia.org/wiki/Sump_(cave) |
A sump buster is a device installed within a bus route to limit that thoroughfare to buses. It discourages traffic from entering a lane by threatening to destroy the oil pan of any vehicle with insufficient ground clearance to pass over it, making sump busters similar in use (but not design) to rising bollards . [ 1 ] A sump buster is also known as a "sump breaker" or "sump trap". [ citation needed ] Sump busters were first used in the 1980s. [ 2 ]
The sump buster uses a non- mechanical solid mass of concrete , or sometimes other aggregates or metal , to immobilise a vehicle when access to a restricted area is attempted. When a vehicle attempts to traverse the sump buster, the device will demolish the vehicle's oil pan (literally "busting the sump "). The track and ground clearance on permitted vehicles, usually buses, is such that they may clear the device with ease. [ 3 ] In some cases, advisory or mandatory speed limits are given.
A major purpose of the sump buster is to avoid road systems to be used as rat runs and, to a certain extent, joyriding . For this reason, devices have been vandalised (either through annoyance at their existence or to attempt to gain passage), resulting in accidents (and injuries) to legitimate road users. [ 4 ]
In January 2005, Devon County Council dismissed an application by the Stagecoach Group for the installation of a sump buster on Tan Lane (a restricted access road) in Exeter . The Exeter Highways and Traffic Orders Committee stated that "...[using a sump buster] is not an option that the County Council could support [as] it would not differentiate between high clearance vehicles and for example cars and vans that are authorised to use the link under the current Traffic Regulation Order". [ 5 ]
Sump busters have led to serious injuries to scooter drivers and cyclists who fail to notice them. [ 6 ] [ 7 ] Municipalities in the Netherlands have been sued for tort after damage or injuries caused by insufficiently marked sump busters. [ 2 ] [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Sump_buster |
In additive combinatorics , the sumset (also called the Minkowski sum ) of two subsets A {\displaystyle A} and B {\displaystyle B} of an abelian group G {\displaystyle G} (written additively) is defined to be the set of all sums of an element from A {\displaystyle A} with an element from B {\displaystyle B} . That is,
The n {\displaystyle n} -fold iterated sumset of A {\displaystyle A} is
where there are n {\displaystyle n} summands.
Many of the questions and results of additive combinatorics and additive number theory can be phrased in terms of sumsets. For example, Lagrange's four-square theorem can be written succinctly in the form
where ◻ {\displaystyle \Box } is the set of square numbers . A subject that has received a fair amount of study is that of sets with small doubling , where the size of the set A + A {\displaystyle A+A} is small (compared to the size of A {\displaystyle A} ); see for example Freiman's theorem .
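Small sumsets are easy to compute directly; the sketch below builds A + B for two toy sets and also checks Lagrange's statement for 0–99 by forming the four-fold iterated sumset of the squares (the bounds are chosen purely for illustration).

```python
from itertools import product

def sumset(*sets):
    return {sum(t) for t in product(*sets)}

A, B = {0, 1, 2}, {0, 10}
print(sorted(sumset(A, B)))                # [0, 1, 2, 10, 11, 12]

squares = {n * n for n in range(10)}       # squares up to 81 suffice for targets below 100
four = sumset(squares, squares, squares, squares)
print(all(n in four for n in range(100)))  # True, as Lagrange's four-square theorem predicts
```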
| https://en.wikipedia.org/wiki/Sumset
The Sumudu transform is an integral transform introduced in 1990 by G K Watagala. [ 1 ] [ 2 ] [ 3 ] It is defined over the set of functions [ 4 ] [ 5 ] [ 6 ]
A = { f ( t ) :∋ M , p , q > 0 , | f ( t ) | = M exp ( 1 / u ) } {\displaystyle A=\{f(t):\ni M,p,q>0,|f(t)|=M\exp(1/u)\}}
where p ≤ u ≤ q {\displaystyle p\leq u\leq q} , the Sumudu transform is defined as
S [ f ( t ) ] = 1 u ∫ 0 ∞ f ( t ) exp ( − t u ) d t . {\displaystyle S[f(t)]={\frac {1}{u}}\int _{0}^{\infty }f(t)\exp \left(-{\frac {t}{u}}\right)\,dt.}
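A few transforms can be computed directly from this definition with SymPy; the sketch below recovers the standard values S[1] = 1, S[t] = u and S[sin t] = u/(1 + u²) (the choice of test functions is illustrative).

```python
import sympy as sp

t, u = sp.symbols('t u', positive=True)

def sumudu(f):
    # S[f](u) = (1/u) * integral_0^oo f(t) * exp(-t/u) dt
    return sp.simplify(sp.integrate(f * sp.exp(-t / u), (t, 0, sp.oo)) / u)

print(sumudu(sp.Integer(1)))  # 1
print(sumudu(t))              # u
print(sumudu(sp.sin(t)))      # u/(u**2 + 1)
```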
Sumudu transform is 1/ u Laplace transform S [ f ( t ) ] ( u ) = 1 u L [ f ( t ) ] ( 1 u ) {\displaystyle S[f(t)](u)={\frac {1}{u}}L[f(t)]({\frac {1}{u}})} And with u 2 Elzaki transform S [ f ( t ) ] ( u ) = u 2 E [ f ( t ) ] ( u ) {\displaystyle S[f(t)](u)=u^{2}E[f(t)](u)} | https://en.wikipedia.org/wiki/Sumudu_transform |
Sun-Earth Day is a joint educational program established in 2000 by NASA and ESA . The goal of the program is to popularize knowledge about the Sun , and the way it influences life on Earth , among students and the public. [ 1 ] The day itself is mainly celebrated in the United States near the time of the spring equinox . However, the Sun-Earth Day event actually runs throughout the year, with a different theme being chosen each year. [ 2 ]
The selection of each year's theme often corresponds to events for that year. [ 3 ] Every theme is supported by free educational plans for both informal and formal educators. [ 2 ] Here is a list of themes by year:
| https://en.wikipedia.org/wiki/Sun-Earth_Day
SunPass is an electronic toll collection system within the state of Florida, United States. It was created in 1999 by the Florida Department of Transportation 's (FDOT's) Office of Toll Operations, operating as a division of Florida's Turnpike Enterprise (FTE). The system utilizes windshield-mounted RFID transponders manufactured by TransCore and lane equipment designed by companies including TransCore , SAIC , and Raytheon .
SunPass was introduced on April 24, 1999, and by October 1 of the same year, more than 100,000 SunPass transponders had been sold. [ 1 ] [ 2 ]
In early 2009, all Easy Pay customers automatically became SunPass Plus customers, gaining the privilege of using their transponders to pay for airport parking at Tampa , Orlando , Palm Beach , Fort Lauderdale and Miami airports. Customers were able to opt out of the program. [ 3 ]
The Mini was introduced on July 1, 2008, and became available at retail locations. The Mini is a passive RFID transponder, about the size of a credit card, and uses no batteries. The transponder must be mounted on the glass windshield of the vehicle to work properly and, once applied, cannot be removed from a windshield without destroying the pass. The SunPass Mini sticker will not work on motorcycle windshields as they are not made of glass. [ 4 ] [ 5 ]
SunPass Portable (or SunPass Pro) transponders can be transferred between vehicles.
SunPass-only toll lanes on most toll roads in Florida allow a vehicle to proceed through the tollbooth at speeds of up to 25 mph (40 km/h) as a safety precaution. The Turnpike utilizes all-electronic tolling (AET) and toll by plate, which handle highway speeds. The mainline toll barriers have dedicated lanes capable of full-speed automatic toll collection at up to 65 mph (105 km/h).
Florida's Turnpike Enterprise converted the Homestead Extension of Florida's Turnpike , the Sawgrass Expressway , and the Veterans Expressway to open road tolling , utilizing the SunPass transponders, in September 2010, February 2011, April 2014, and June 2014 respectively, ceasing cash collection. This allows free-flowing movement on these toll roads, with traffic moving through toll gantries at the former toll plazas. Motorists without a SunPass are billed through toll by plate . [ 6 ] [ 7 ] [ 8 ] Toll-by-Plate uses cameras and sends a bill to the registered owner of the vehicle. The bill consists of the toll and an administrative fee. [ 9 ] A motorist who fails to pay the toll and accompanying fees can be fined $100 plus the tolls owed; in some cases, court costs, points against the driver's license, and the suspension of the license and registration can also be levied. [ 10 ]
SunPass is fully interoperable with E-Pass (from the Central Florida Expressway Authority ), O-Pass (from Osceola County , which has been folded into E-Pass), LeeWay (from Lee County toll bridges) and the Miami-Dade Expressway Authority (MDX) toll roads.
SunPass, like other electronic toll collection (ETC) systems in Florida, was not initially compatible with systems outside of Florida. The federal MAP-21 transportation bill passed in July 2012 required all toll facilities to have interoperable road tolling systems by October 1, 2016, but this deadline was not met. [ 11 ] In 2012, SunPass announced plans to eventually become interoperable with E-ZPass . [ 12 ] As a step towards this, the older battery-powered SunPass transponders were phased out by the end of 2015; new batteryless models can work with tolling equipment in other states. [ 13 ] [ 14 ]
On July 29, 2013, Florida's Turnpike Enterprise made an interoperability agreement with North Carolina Turnpike Authority and its NC Quick Pass, allowing SunPass holders to utilize North Carolina's toll roads and lanes. [ 15 ] [ 16 ]
On November 12, 2014, an interoperability agreement was made with Georgia's Peach Pass , allowing SunPass holders to utilize the I-85 Express lanes and any future toll roads or lanes in the state. [ 17 ] [ 18 ]
The C-Pass system operated by Miami-Dade County Public Works on the Rickenbacker and Venetian Causeways was replaced by SunPass and pay-by-plate on September 23, 2014. [ 19 ]
In July 2020, E-ZPass announced that SunPass would be compatible with E-ZPass by the end of 2020, along with Peach Pass in 2021. On May 28, 2021, the Florida Turnpike Enterprise announced that its SunPass facilities would begin accepting E-ZPass. In addition, E-ZPass facilities began accepting SunPass Pro transponders (but not earlier SunPass transponders such as the SunPass Portable and SunPass Mini). [ 20 ] [ 21 ]
On February 27, 2023, it was announced that SunPass was compatible with toll roads in Kansas and Oklahoma, as well as on certain toll roads in Texas. [ 22 ] [ 23 ] Both the SunPass Mini and SunPass Pro transponders are supported. Certain transponders from these three states can be used on all roads operated by the Florida Turnpike Enterprise. However, Kansas, Oklahoma, and Texas transponders cannot be used on any tolled roads maintained by the Central Florida Expressway Authority.
In March 2025, the Harris County Toll Road Authority reached an interoperability agreement with the Florida Turnpike Enterprise. SunPass is accepted statewide in Texas, and both EZ Tag & TxTag are accepted on all roads operated by the Florida Turnpike Enterprise. [ 24 ] | https://en.wikipedia.org/wiki/SunPass |
The Sun Radio Interferometer Space Experiment ( SunRISE ) is a set of CubeSats designed to study solar activity by acting as an aperture synthesis radio telescope . [ 1 ] It is intended to monitor giant solar particle storms . [ 2 ]
The satellites will occupy a supersynchronous geosynchronous Earth orbit . [ 3 ]
The participants in the experiment include JPL , the University of Colorado Boulder and the University of Michigan . [ 1 ] [ 3 ] [ 4 ] It is due to be launched in 2025.
As of November 2023 [update] , the six satellites have been built and are going into storage to await their Vulcan Centaur launch vehicle. [ 5 ]
| https://en.wikipedia.org/wiki/Sun_Radio_Interferometer_Space_Experiment
Sun SPOT (Sun Small Programmable Object Technology) was a sensor node for a wireless sensor network developed by Sun Microsystems announced in 2007. The device used the IEEE 802.15.4 standard for its networking, and unlike other available sensor nodes, used the Squawk Java virtual machine .
After the acquisition of Sun Microsystems by Oracle Corporation , the SunSPOT platform was supported but its forum was shut down in 2012. [ 1 ] A mirror of the old site is maintained for posterity. [ 2 ]
The completely assembled device fit in the palm of a hand.
Its first processor board included an ARM architecture 32 bit CPU with ARM920T core running at 180 MHz. It had 512 KB RAM and 4 MB flash memory . A 2.4 GHz IEEE 802.15.4 radio had an integrated antenna and a USB interface was included. [ 3 ]
A sensor board included a three-axis accelerometer (with 2G and 6G range settings), temperature sensor, light sensor, 8 tri-color LEDs, analog and digital inputs, two momentary switches, and 4 high current output pins. [ 3 ]
The unit used a 3.7V rechargeable 750 mAh lithium-ion battery , had a 30 uA deep sleep mode, and battery management provided by software. [ 3 ]
The device's use of Java device drivers was unusual, since Java is generally hardware-independent. The Sun SPOT used Squawk, a small Java ME virtual machine that ran directly on the processor without an operating system . Both the Squawk VM and the Sun SPOT code are open source. [ 4 ] Standard Java development environments such as NetBeans could be used to create Sun SPOT applications.
The management and deployment of applications were handled by Ant scripts, which could be called from a development environment, the command line, or the tool provided with the SPOT SDK, "Solarium". [ citation needed ]
The nodes communicate using the IEEE 802.15.4 standard including the base-station approach to sensor networking. Protocols such as Zigbee can be built on 802.15.4.
Sun Labs reported implementations of RSA and elliptic curve cryptography (ECC) optimized for small embedded devices.
Sun Microsystems Laboratories started research on sensor networks around 2004. After some initial experience using "Motes" from Crossbow Technology , a project began under Roger Meike to design an integrated hardware and software system. [ 5 ] Sun sponsored a project at the Art Center College of Design called Autonomous Light Air Vessels in 2005. [ 6 ] The first limited-production run of Sun SPOT development kits was released on April 2, 2007, after months of delays. This introductory kit included two Sun SPOT demo sensor boards, a Sun SPOT base station, the software development tools, and a USB cable. The software was compatible with Windows XP, Mac OS X 10.4, and common Linux distributions. Some demonstration code was provided. [ citation needed ]
A developer from Sun gave a demonstration in September 2007. [ 3 ] After investigating commercial use, Sun moved to focus on educational users.
The entire project, hardware, operating environment, Java virtual machine, drivers and applications, was available as open source in January 2008. [ 4 ] [ 7 ] [ 8 ]
Oracle Corporation acquired Sun Microsystems in 2010 and continued Sun SPOT development, through release 8 of the hardware (with Sun-Oracle logo) by March 2011. [ 9 ] The 2011 version included larger memories and a faster processor, but with fewer inputs. [ 10 ]
In 2012 the forum said it would be "down for maintenance" until "mid-June". [ 1 ] A new forum was started on the Oracle Technology Network on May 7, 2013. [ 11 ] David G. Simmons, one of the SunSPOT developers for Sun Microsystems, maintained a blog through the end of 2010. [ 12 ] He opened an alternative developers forum in July 2013 not connected to Oracle. [ 13 ]
When the project was shut down, the lead hardware engineer for the SunSPOT project, Bob Alkire, archived the hardware design on his personal website. [ 14 ] | https://en.wikipedia.org/wiki/Sun_SPOT |
A Sun sensor is a navigational instrument used by spacecraft to detect the position of the Sun . [ 1 ] [ 2 ] Sun sensors are used for attitude control , solar array pointing, gyro updating, and fail-safe recovery. [ 3 ] [ 4 ]
In addition to spacecraft, Sun sensors find use in ground-based weather stations and Sun-tracking systems, and aerial vehicles including balloons and UAVs . [ 2 ]
There are various types of Sun sensors, which differ in their technology and performance characteristics. Sun presence sensors provide a binary output, indicating when the Sun is within the sensor's field of view . Analog and digital Sun sensors, in contrast, indicate the angle of the Sun by continuous and discrete signal outputs, respectively. [ 2 ]
In typical Sun sensors, a thin slit at the top of a rectangular chamber allows a line of light to fall on an array of photodetector cells at the bottom of the chamber. A voltage is induced in these cells, which is registered electronically. By orienting two sensors perpendicular to each other, the direction of the Sun can be fully determined. [ 2 ]
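The slit-and-array geometry described above translates directly into a one-axis angle measurement. The sketch below is an added, illustrative simplification under our own assumptions (the cell pitch, chamber depth, and function name are invented for the example), not a description of any particular flight unit:
```python
import math

# One-axis Sun angle from a slit over a linear photodetector array.
# Assumed geometry: the slit sits slit_height_mm above the array, cells have
# pitch cell_pitch_mm, and cell index 0 lies directly beneath the slit.
def sun_angle_deg(illuminated_cell_index, cell_pitch_mm, slit_height_mm):
    offset_mm = illuminated_cell_index * cell_pitch_mm  # lateral shift of the light line
    return math.degrees(math.atan2(offset_mm, slit_height_mm))

# Light falling 4 cells off-centre on a 0.5 mm pitch array in a 10 mm deep chamber:
print(round(sun_angle_deg(4, 0.5, 10.0), 1))  # 11.3 (degrees from the sensor normal)

# A second, perpendicular sensor gives the orthogonal angle; together the two
# angles fix the Sun's direction relative to the sensor frame.
```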
Often, multiple sensors will share processing electronics. [ 3 ]
There are a number of design and performance criteria which dictate the selection of a Sun sensor model: | https://en.wikipedia.org/wiki/Sun_sensor |
The Sunbury Research Centre, also known as ICBT Sunbury, is a main research institute of BP in north-east Surrey .
It began in 1917 as the Sunbury Research Station. Research began with the employment of two chemists to look into the viscosity of fuel oil for the Navy in the First World War , and the production of toluene .
The two first organic chemists were Dr Albert Ernest Dunstan and Dr Ferdinand Bernard Thole, in the basement of a country house called 'Meadhurst', formerly the home of Sir George William Kekewich . Both of these chemists had together worked at East Ham Technical College. [ 2 ] Albert received a PhD in Viscosity from UCL in October 1910, working under the Irish physicist Frederick Thomas Trouton FRS, and Scottish chemist Sir William Ramsay . [ 3 ] Together they had written 'A Text Book of Practical Chemistry for Technical Institutes', published by Methuen in November 1911, [ 4 ] and 'The Viscosity of Liquids', published by Longman Green in 1914. [ 5 ] [ 6 ] Thole was awarded the OBE in the 1960 Birthday Honours . Albert died aged 85 in 1963, having been Chief Chemist of BP for thirty years. His son, Bernard Dunstan , grew up in the 'Meadhurst' house, and died aged 97 in 2017; Bernard's wife Diana Armfield is 104. [ 7 ] Albert's grandson is David Dunstan, Professor of Physics at Queen Mary University of London . [ 8 ] In the 1950s, Dr Thole worked for the Ministry of Fuel and Power . [ 9 ] Both left the technical college in September 1917. [ 10 ]
In the 1920s research took place into cracking , at the plant at Uphall in Scotland ( West Lothian ). The first new building opened in July 1931. 76 staff were there in 1929, 99 in 1934 and 197 in 1939.
The first laboratory was demolished in July 1936. The main refinery of the company was at Abadan in Iran. Processes for removing sulphur during refining were developed from the early 1930s, but these could additionally cause corrosion. Thermal cracking of refinery products would give way to catalytic cracking methods from 1937, after work by Eugene Houdry in the US.
Leaded petrol was introduced as 'BP Plus' on 15 April 1931, taking the octane number from 66 to 74, becoming 'BP Ethyl' in August 1933.
Aviation fuels were difficult to make, as a high octane number was required. By making di-isobutene , an 88-octane fuel could be made, with production at Abadan from 1937. By adding hydrogen under pressure, with a catalyst, it made the saturated iso-octane ( 2,2,4-Trimethylpentane ), made at Abadan from 1938 for high-octane aviation fuel.
In July 1936, at the research centre's annual conference on chemistry, ways to make iso-octane were looked at, with a chemist Dr Thomas Tait accidentally inventing the alkylation process, via the addition of sulphuric acid in a pentane solvent, which was much quicker than hydrogenation over a catalyst.
This process is still an important part of aviation and automobile fuel refining. It was discovered that isobutane would react with the butenes present. It was largely an accidental discovery, but an important one. Sunbury patented the process in January 1938, and had built an experimental production facility by December 1936, being first made at Abadan from January 1939. The process was given the name 'alkylation' in December 1939.
As well as the Abadan Refinery , oil came from the Haifa oil refinery in Mandatory Palestine , a British-run territory, which was of great importance until Italy (Sicily) was invaded in July 1943, so allowing more exports from Iran again. There was also the Alwand refinery in Iraq. The Llandarcy Oil Refinery was bombed by the Luftwaffe on 10 July and 1 September in 1940, and 18 February 1941.
By July 1941 oil supplies from Iran were rapidly dwindling, so the BP head office required around 100,000 tonnes of crude oil to be sourced from within England, where only around 25,000 tonnes was being found each year. By September 1942, Eakring in Nottinghamshire was producing the required 100,000 tons; a field at Caunton was discovered in March 1943, with production from May, and another field at Nocton , in Kesteven, was also discovered in 1943, with production from December. Onshore oil production in the UK reached 115,000 tons in 1943.
From work at Sunbury, more aviation fuel could be made at Abadan from 1943, with a new patented process. From work by Merrell Fenske of Pennsylvania State University , known for the Fenske equation , Sunbury chemists developed superfractionation , for aviation fuel manufacture at Abadan, from 1943. Manufacture of 100-octane aviation fuel at Abadan went from around 70,000 tons in 1941 to over a million tons by 1945.
But it was the Baton Rouge Refinery , in Louisiana , owned by Standard Oil (Esso), that provided most high-octane aviation fuel (British Air Ministry 100 octane) for the RAF during the Battle of Britain , from July 1940. The fuel was developed at the Standard Oil Development Company in Linden, New Jersey and at the Esso Research Centre in England, by chemical engineers Bill Sweeney and Alexander Ogston, who was British. Rod Banks had made the first calculations of effect of the better fuel in the Merlin engine. The Luftwaffe Me 109 pilots such as Adolf Galland , who had 87-octane fuel, could not comprehend where such a sudden increase in power of the RAF fighter aircraft came from. 100-octane fuel allowed the Merlin engine to reach maximum horse-power on take-offs and climbs, giving the engine 30% more power, than previously possible. Additionally, the Spitfire could make the type of tight manoeuvres that would cause the Me 109 airframe to disintegrate, such as when pulling out of dives.
Staff numbers were around 200 in 1939, but were much reduced until 1944. In 1943 an aero-engine test facility was built, with a Bristol Hercules engine. Until 1943, many head office staff had moved to the Sunbury site. Sunbury also developed the Fog Investigation and Dispersal Operation system for RAF airfields.
By the 1950s, BP Research was in a 39-acre site in Sunbury. [ 11 ]
Sunbury trained Iranian engineers in the early 1950s. The company became BP in 1954, with a new logo in 1958. The BP Corrosion Control System was developed at the site, to limit corrosion by sulphur, by introducing aqueous ammonia into flues, to react with sulphur trioxide , to form ammonium sulphate . [ 12 ]
Geophysical research had also taken place at Kirklington Hall Research Station in Nottinghamshire , until 1957. The geophysical laboratory opened at the end of 1957, after the relevant staff from Kirklington had moved out in November. [ 13 ] By early 1958, Kirklington Hall had been sold.
Products that the British Petroleum Company made in the 1950s were BP Motor Spirit and BP Energol ( visco-static motor oil ), developed at Sunbury, which reduced engine wear by 80%.
Around 1958, the site was expanded with a new Physics laboratory and five other buildings. A linear electron accelerator was installed. By 1960 the site was 19 acres, with 1300 staff. On Saturday 3 December 1960 there was an explosion in a laboratory, with around fifty firemen attending for more than an hour, with three scientists injured. [ 14 ] [ 15 ]
The first two laboratories, costing £500,000, for the BP Chemicals division opened in September 1961. [ 16 ] On Wednesday 22 September 1965 the £500,000 four-storey Dunstan Laboratory was opened by Albert's daughter, Mary Dunstan. The site now had 1700 staff, and research cost £4m per year. [ 17 ]
The huge BP Baglan Bay chemical plant opened in October 1963, with feedstock from Llandarcy oil refinery.
On Thursday 10 December 1970 three thieves, driving a Rover 3.5 ( Rover P6 ), rammed a car taking the payroll money to the site, smashing the windscreen, and taking £10,000. The thieves were showered in blue dye on Cadbury Road. [ 18 ] [ 19 ]
Britain would not produce much oil of its own until the mid-1970s when North Sea oil arrived at the Forties Oil Field .
Three new buildings were built from 1998 as part of Phase 1. Since 2001, four new buildings were built as part of Phase 2.
A new catalytic hydrofining process, called ferrofining, for lubricants was developed in 1961, in conjunction with the French division of BP (Société Française des Pétroles) at the Dunkirk refinery in northern France, with ferrofiner units being installed at the Llandarcy Oil Refinery , the Kent Refinery , [ 20 ] and the Kwinana Oil Refinery in Australia. [ 21 ]
A process was developed for the new BP Ruhr refinery at Dinslaken in West Germany, to make high purity paraffins for the chemical industry. [ 22 ]
Research into the chemical composition of North Sea gas began in November 1965. [ 23 ]
Air pollution research began from the late 1960s. [ 24 ]
More energy efficient oil refineries were developed from the early 1980s, in conjunction with GKN Birwelco, a metal fabrication company of Halesowen. [ 25 ] [ 26 ]
With North Sea oil becoming important the centre developed ways to seal cracks in oil pipelines, [ 27 ] and ways to extend the life of the BP Forties Oil Field by ten years were evaluated by the head of BP research, Sir John Cadogan in 1984, with tests at Bothamsall in Nottinghamshire. [ 28 ]
In 1984 BP Research International looked at reducing corrosion of paint. [ 29 ]
Solar panels were extensively researched from the 1980s.
In 1991 Panos Papagervos developed methods to quickly extinguish aviation fuel fires. [ 30 ]
In the early 1990s it opened a £12m clean fuel laboratory. [ 31 ]
New methods in computer science were developed in the Information Science department, which worked with the French computer scientist Jean-Raymond Abrial in the 1980s on the B-Method .
In the past two decades, hydrogen technologies have been investigated.
Air BP , for aviation fuel, is headquartered on the site. [ 32 ]
It is situated off the A244 (via the A308 ) in the north of Sunbury-on-Thames , Surrey, on the county boundary with London. Nearby to the east is Sunbury Common.
The retail division of BP UK is at Witan Gate House . BP employs around 15,000 people in the UK.
It has an enhanced oil recovery laboratory . [ 33 ] | https://en.wikipedia.org/wiki/Sunbury_Research_Centre |
Sundaland [ 1 ] (also called Sundaica or the Sundaic region ) is a biogeographical region of Southeast Asia corresponding to a larger landmass that was exposed throughout the last 2.6 million years during periods when sea levels were lower. It includes Bali , Borneo , Java , and Sumatra in Indonesia , and their surrounding small islands, as well as the Malay Peninsula on Mainland Southeast Asia .
The area of Sundaland encompasses the Sunda Shelf , a tectonically stable extension of Southeast Asia's continental shelf that was exposed during glacial periods of the last 2 million years. [ 2 ] [ 3 ]
The extent of the Sunda Shelf is approximately equal to the 120-meter isobath . [ 4 ] In addition to the Malay Peninsula and the islands of Borneo, Java, and Sumatra, it includes the Java Sea , the Gulf of Thailand , and portions of the South China Sea . [ 5 ] In total, the area of Sundaland is approximately 1,800,000 km 2 . [ 6 ] [ 4 ] The area of exposed land in Sundaland has fluctuated considerably during the past 2 million years; the modern land area is approximately half of its maximum extent. [ 3 ]
The western and southern borders of Sundaland are clearly marked by the deeper waters of the Sunda Trench – some of the deepest in the world – and the Indian Ocean . [ 4 ] The eastern boundary of Sundaland is the Wallace Line , identified by Alfred Russel Wallace as the eastern boundary of the range of Asia's land mammal fauna, and thus the boundary of the Indomalayan and Australasian realms . The islands east of the Wallace line are known as Wallacea , a separate biogeographical region that is considered part of Australasia. The Wallace Line corresponds to a deep-water channel that has never been crossed by any land bridges. [ 4 ] The northern border of Sundaland is more difficult to define in bathymetric terms; a phytogeographic transition at approximately 9ºN is considered to be the northern boundary. [ 4 ]
Greater portions of Sundaland were most recently exposed during the last glacial period from approximately 110,000 to 12,000 years ago. [ 7 ] [ 6 ] When the sea level was decreased by 30–40 meters or more, land bridges connected the islands of Borneo, Java, and Sumatra to the Malay Peninsula and mainland Asia. [ 2 ] Because the sea level was 30 meters or more lower throughout much of the last 800,000 years, the current status of Borneo, Java, and Sumatra as islands has been a relatively rare occurrence throughout the Pleistocene. [ 8 ] In contrast, the sea level was higher during the late Pliocene , and the exposed area of Sundaland was smaller than what is observed at present. [ 4 ] Sundaland was partially submerged starting around 18,000 years ago and continuing until about 5000 BC. [ 9 ] [ 10 ] During the Last Glacial Maximum the sea level fell by approximately 120 meters, and the entire Sunda Shelf was exposed. [ 2 ]
All of Sundaland is within the tropics ; the equator runs through central Sumatra and Borneo. Like elsewhere in the tropics, rainfall, rather than temperature, is the major determinant of regional variation. Most of Sundaland is classified as perhumid, or everwet, with over 2,000 millimeters of rain annually; [ 4 ] rainfall exceeds evapotranspiration throughout the year and there are no predictable dry seasons like elsewhere in Southeast Asia. [ 11 ]
The warm and shallow seas of the Sunda Shelf (averaging 28 °C or more) are part of the Indo-Pacific Warm Pool/ Western Pacific Warm Pool [ 12 ] and an important driver of the Hadley circulation and the El Niño-Southern Oscillation (ENSO), particularly in January when it is a major heat source to the atmosphere. [ 4 ] ENSO also has a major influence on the climate of Sundaland; strong positive ENSO events result in droughts throughout Sundaland and tropical Asia .
The high rainfall supports closed canopy evergreen forests throughout the islands of Sundaland, [ 11 ] transitioning to deciduous forest and savanna woodland with increasing latitude. [ 4 ] The remaining primary (unlogged) lowland forest is known for giant dipterocarp trees and orangutans ; after logging, forest structure and community composition change to be dominated by shade intolerant trees and shrubs. [ 13 ] Dipterocarps are notable for mast fruiting events , where tree fruiting is synchronized at unpredictable intervals resulting in predator satiation. [ 14 ] Higher elevation forests are shorter and dominated by trees in the oak family . [ 11 ] Botanists often include Sundaland, the adjacent Philippines , Wallacea and New Guinea in a single floristic province of Malesia , based on similarities in their flora, which is predominantly of Asian origin. [ 11 ]
During the last glacial period , sea levels were lower and all of Sundaland was an extension of the Asian continent. As a result, the modern islands of Sundaland are home to many Asian mammals including elephants , monkeys , apes , tigers , tapirs , and rhinoceros . The flooding of Sundaland separated species that had once shared the same environment. One example is the river threadfin ( Polydactylus macrophthalmus , Bleeker 1858), which once thrived in a river system now called "North Sunda River" or "Molengraaff river". [ 15 ] The fish is now found in the Kapuas River on the island of Borneo, and in the Musi and Batanghari rivers in Sumatra. [ 16 ] Selective pressure (in some cases resulting in extinction ) has operated differently on each of the islands of Sundaland, and as a consequence, a different assemblage of mammals is found on each island. [ 17 ] However, the current species assemblage on each island is not simply a subset of a universal Sundaland or Asian fauna, as the species that inhabited Sundaland before flooding did not all have ranges encompassing the entire Sunda Shelf. [ 17 ] Island area and number of terrestrial mammal species are related, with the largest islands of Sundaland (Borneo and Sumatra) having the highest diversity. [ 7 ]
The name "Sunda" goes back to antiquity, appearing in Ptolemy 's Geography , written around 150 AD. [ 18 ] In an 1852 publication, English navigator George Windsor Earl advanced the idea of a "Great Asiatic Bank", based in part on common features of mammals found in Java, Borneo and Sumatra. [ 19 ]
Explorers and scientists began measuring and mapping the seas of Southeast Asia in the 1870s, primarily using depth sounding . [ 20 ] In 1921 Gustaaf Molengraaff , a Dutch geologist, postulated that the nearly uniform sea depths of the shelf indicated an ancient peneplain that was the result of repeated flooding events as ice caps melted, with the peneplain becoming more perfect with each successive flooding event. [ 20 ] Molengraaff also identified ancient, now submerged, drainage systems that drained the area during periods of lower sea levels.
The name "Sundaland" for the peninsular shelf was first proposed by Reinout Willem van Bemmelen in his Geography of Indonesia in 1949, based on his research during World War II . The ancient drainage systems described by Molengraaff were verified and mapped by Tjia in 1980 [ 21 ] and described in greater detail by Emmel and Curray in 1982 complete with river deltas , floodplains and backswamps. [ 22 ] [ 23 ]
The climate and ecology of Sundaland throughout the Quaternary has been investigated by analyzing foraminifera l δ 18 O and pollen from cores drilled into the ocean bed, δ 18 O in speleothems from caves, and δ 13 C and δ 15 N in bat guano from caves, as well as species distribution models, phylogenetic analysis, and community structure and species richness analysis.
Perhumid climate has existed in Sundaland since the early Miocene ; though there is evidence for several periods of drier conditions, a perhumid core persisted in Borneo. [ 11 ] The presence of fossil coral reefs dating to the late Miocene and early Pliocene suggests that, as the Indian monsoon grew more intense, seasonality increased in some portions of Sundaland during these epochs. [ 11 ] Palynological evidence from Sumatra suggests that temperatures were cooler during the late Pleistocene; mean annual temperatures at high elevation sites may have been as much as 5 °C cooler than present. [ 24 ]
Most recent research agrees that Indo-Pacific sea surface temperatures were at most 2-3 °C lower during the Last Glacial Maximum . [ 4 ] Snow was found much lower than at present (approximately 1,000 meters lower) and there is evidence that glaciers existed on Borneo and Sumatra around 10,000 years before present. [ 25 ] However, debate continues on how precipitation regimes changed throughout the Quaternary. Some authors argue that rainfall decreased with the area of ocean available for evaporation as sea levels fell with ice sheet expansion. [ 26 ] [ 5 ] Others posit that changes in precipitation have been minimal [ 27 ] and an increase in land area in the Sunda Shelf alone (due to lowered sea level) is not enough to decrease precipitation in the region. [ 28 ]
One possible explanation for the lack of agreement on hydrologic change throughout the Quaternary is that there was significant heterogeneity in climate during the Last Glacial Maximum throughout Indonesia. [ 28 ] Alternatively, the physical and chemical processes that underlie the method of inferring precipitation from δ 18 O records may have operated differently in the past. [ 28 ] Some authors working primarily with pollen records have also noted the difficulties of using vegetation records to detect changes in precipitation regimes in such a humid environment, as water is not a limiting factor in community assemblage. [ 24 ]
Sundaland, and in particular Borneo, has been an evolutionary hotspot for biodiversity since the early Miocene due to repeated immigration and vicariance events. [ 3 ] The modern islands of Borneo, Java, and Sumatra have served as refugia for the flora and fauna of Sundaland during multiple glacial periods in the last million years, and are serving the same role at present. [ 3 ] [ 29 ]
Dipterocarp trees characteristic of modern Southeast Asian tropical rainforest have been present in Sundaland since before the Last Glacial Maximum . [ 30 ] There is also evidence for savanna vegetation, particularly in now submerged areas of Sundaland, throughout the last glacial period . [ 31 ] However, researchers disagree on the spatial extent of savanna that was present in Sundaland. There are two opposing theories about the vegetation of Sundaland, particularly during the last glacial period: (1) that there was a continuous savanna corridor connecting modern mainland Asia to the islands of Java and Borneo, and (2) that the vegetation of Sundaland was instead dominated by tropical rainforest, with only small, discontinuous patches of savanna vegetation. [ 4 ]
The presence of a savanna corridor—even if fragmented—would have allowed for savanna-dwelling fauna (as well as early humans) to disperse between Sundaland and the Indochinese biogeographic region; emergence of a savanna corridor during glacial periods and subsequent disappearance during interglacial periods would have facilitated speciation through both vicariance ( allopatric speciation ) and geodispersal . [ 32 ] Morley and Flenley (1987) and Heaney (1991) were the first to postulate the existence of a continuous corridor of savanna vegetation through the center of Sundaland (from the modern Malay Peninsula to Borneo) during the last glacial period , based on palynological evidence. [ 33 ] [ 14 ] [ 3 ] [ 34 ] [ 19 ] Using the modern distribution of primates, termites, rodents, and other species, other researchers infer that the extent of tropical forest contracted—replaced by savanna and open forest —during the last glacial period. [ 4 ] Vegetation models using data from climate simulations show varying degrees of forest contraction; Bird et al. (2005) noted that although no single model predicts a continuous savanna corridor through Sundaland, many do predict open vegetation between modern Java and southern Borneo. Combined with other evidence, they suggest that a 50–150 kilometer wide savanna corridor ran down the Malay Peninsula, through Sumatra and Java, and across to Borneo. [ 3 ] Additionally, Wurster et al. (2010) analyzed stable carbon isotope composition in bat guano deposits in Sundaland and found strong evidence for the expansion of savanna in Sundaland. [ 14 ] Similarly, stable isotope composition of fossil mammal teeth supports the existence of the savanna corridor. [ 35 ]
In contrast, other authors argue that Sundaland was primarily covered by tropical rainforest. [ 4 ] Using species distribution models, Raes et al. (2014) suggest that Dipterocarp rainforest persisted throughout the last glacial period. [ 30 ] Others have observed that the submerged rivers of the Sunda Shelf have obvious, incised meanders, which would have been maintained by trees on river banks. [ 11 ] Pollen records from sediment cores around Sundaland are contradictory; for example, cores from highland sites suggest that forest cover persisted throughout the last glacial period, but other cores from the region show pollen from savanna-woodland species increasing through glacial periods. [ 4 ] And in contrast to previous findings, Wurster et al. (2017) again used stable carbon isotope analysis of bat guano, but found that at some sites rainforest cover was maintained through much of the last glacial period. [ 36 ] Soil type, rather than long-term existence of a savanna corridor, has also been posited as an explanation for species distribution differences within Sundaland; Slik et al. (2011) suggest that the sandy soils of the now submerged seabed are a more likely dispersal barrier. [ 37 ]
Before Sundaland emerged during the late Pliocene and early Pleistocene (~2.4 million years ago), there were no mammals on Java. As sea level lowered, species such as the dwarf elephantoid Sinomastodon bumiajuensis colonized Sundaland from mainland Asia. [ 38 ] Later fauna included tigers, Sumatran rhinoceros, and Indian elephant, which were found throughout Sundaland; smaller animals were also able to disperse across the region. [ 7 ]
According to the most widely accepted theory, [ citation needed ] the ancestors of the modern-day Austronesian populations of Maritime Southeast Asia and adjacent regions are believed to have migrated southward, from the East Asia mainland to Taiwan , and then to the rest of Maritime Southeast Asia . An alternative theory points to the now-submerged Sundaland as the possible cradle of Austronesian languages: thus the "Out of Sundaland" theory . However, this is an extreme minority view among professional archaeologists, linguists, and geneticists. The Out of Taiwan model (though not necessarily the Express Train Out of Taiwan model) is accepted by the vast majority of professional researchers. [ citation needed ]
A study from Leeds University and published in Molecular Biology and Evolution , examining mitochondrial DNA lineages, suggested that shared ancestry between Taiwan and Southeast Asian resulted from earlier migrations. Population dispersals seem to have occurred at the same time as sea levels rose, which may have resulted in migrations from the Philippine Islands to as far north as Taiwan within the last 10,000 years. [ 39 ]
The population migrations were most likely to have been driven by climate change — the effects of the drowning of an ancient continent. Rising sea levels in three massive pulses may have caused flooding and the submerging of the Sunda continent, creating the Java and South China Seas and the thousands of islands that make up Indonesia and the Philippines today. The changing sea levels would have caused these humans to move away from their coastal homes and culture, and farther inland throughout southeast Asia. This forced migration would have caused these humans to adapt to the new forest and mountainous environments, developing farms and domestication, and becoming the predecessors to future human populations in these regions. [ 40 ]
Genetic similarities were found between populations throughout Asia and an increase in genetic diversity from northern to southern latitudes. Although the Chinese population is very large, it has less variation than the smaller number of individuals living in Southeast Asia, because the Chinese expansion occurred fairly recently, from the mid to late-Holocene.
Stephen Oppenheimer locates the origin of the Austronesians in Sundaland and its upper regions. [ 41 ] From the standpoint of historical linguistics , the home of the Austronesian languages is the main island of Taiwan , also known by its unofficial Portuguese name of Formosa; on this island the deepest divisions in Austronesian are found, among the families of the native Formosan languages . [ citation needed ]
| https://en.wikipedia.org/wiki/Sundaland
Sundew was a large electrically powered dragline excavator used in mining operations in Rutland and Northamptonshire in the United Kingdom from 1957. It was the first of a series of four W1400-series dragline excavators. [ 1 ]
Built by Ransomes & Rapier and named after the winning horse of the 1957 Grand National , it began work in a Rutland iron ore quarry belonging to the United Steel Companies (Ore Mining Branch) that year. At the time of its construction Sundew was the largest walking dragline in the world, weighing 1,675 long tons (1,702 t). With a reach of 86 metres (282 ft) and a bucket capacity of 27 long tons (27 t) the machine was able to move a substantial amount of material in a relatively short period. [ 2 ]
Propulsion was via two large movable feet which could be used to "walk" the dragline forwards and backwards, while directional control was provided by a large circular turntable under the body of the machine.
Sundew remained until operations at the quarry ceased in 1974 and plans were then devised to relocate the machine to a recently opened British Steel Corporation quarry near Corby . At a cost of £250,000 and taking two years to complete, it was decided that dismantling, moving and reconstructing the machine was not a viable option, and so over an eight-week period in 1974 Sundew walked 13 miles (21 km) from its home in Exton Park near the village of Exton in Rutland to a site north of Corby. During the walk the dragline crossed three water mains, four water courses, thirteen power lines, ten roads, a railway line, two gas mains, seven telephone lines, 74 hedges, and the River Welland before reaching its new home.
As part of a major restructuring of British Steel in the late 1970s Corby Steelworks was closed down, and there was no longer any need for a large dragline to assist in the recovery of iron ore. On 4 July 1980 Sundew walked to its final resting place and the huge boom was lowered onto a purpose-built earth mound. There it remained for seven years until being scrapped from January to June 1987. The cab and bucket are preserved at Rutland Railway Museum which is now known as Rocks By Rail – The Living Ironstone Museum. In 2014 the Heritage Lottery Fund awarded £8,100 for the restoration of the cab. [ 3 ] | https://en.wikipedia.org/wiki/Sundew_(dragline) |
In the mathematical fields of set theory and extremal combinatorics , a sunflower or Δ {\displaystyle \Delta } -system [ 1 ] is a collection of sets in which all possible distinct pairs of sets share the same intersection . This common intersection is called the kernel of the sunflower.
The naming arises from a visual similarity to the botanical sunflower, arising when a Venn diagram of a sunflower set is arranged in an intuitive way. Suppose the shared elements of a sunflower set are clumped together at the centre of the diagram, and the nonshared elements are distributed in a circular pattern around the shared elements. Then when the Venn diagram is completed, the lobe-shaped subsets, which encircle the common elements and one or more unique elements, take on the appearance of the petals of a flower.
The main research question arising in relation to sunflowers is: under what conditions does there exist a large sunflower (a sunflower with many sets) in a given collection of sets? The Δ {\displaystyle \Delta } -lemma , sunflower lemma , and the Erdős-Rado sunflower conjecture give successively weaker conditions which would imply the existence of a large sunflower in a given collection, with the latter being one of the most famous open problems of extremal combinatorics. [ 2 ]
Suppose W {\displaystyle W} is a set system over U {\displaystyle U} , that is, a collection of subsets of a set U {\displaystyle U} . The collection W {\displaystyle W} is a sunflower (or Δ {\displaystyle \Delta } -system ) if there is a subset S {\displaystyle S} of U {\displaystyle U} such that for each distinct A {\displaystyle A} and B {\displaystyle B} in W {\displaystyle W} , we have A ∩ B = S {\displaystyle A\cap B=S} . In other words, a set system or collection of sets W {\displaystyle W} is a sunflower if all sets in W {\displaystyle W} share the same common subset of elements. An element of U {\displaystyle U} either lies in the common subset S {\displaystyle S} or else appears in at most one of the sets of W {\displaystyle W} ; no element is shared by only some, but not all, of the sets in W {\displaystyle W} . Note that this intersection, S {\displaystyle S} , may be empty ; a collection of pairwise disjoint subsets is also a sunflower. Similarly, a collection of identical sets is also trivially a sunflower.
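For example, the family {1, 2, 3}, {1, 2, 4}, {1, 2, 5} is a sunflower with kernel {1, 2}. The short brute-force check below is an added sketch (the function name is our own, and at least two sets are assumed); it simply tests whether every pair of distinct sets in a family has the same intersection:
```python
from itertools import combinations

def sunflower_kernel(sets):
    """Return the kernel S if `sets` (at least two of them) forms a sunflower, else None."""
    sets = [frozenset(s) for s in sets]
    pairwise = {a & b for a, b in combinations(sets, 2)}
    return pairwise.pop() if len(pairwise) == 1 else None

print(sunflower_kernel([{1, 2, 3}, {1, 2, 4}, {1, 2, 5}]))  # frozenset({1, 2})
print(sunflower_kernel([{1, 2}, {3, 4}, {5, 6}]))           # frozenset() -- disjoint sets qualify
print(sunflower_kernel([{1, 2}, {2, 3}, {3, 4}]))           # None
```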
The study of sunflowers generally focuses on when set systems contain sunflowers, in particular, when a set system is sufficiently large to necessarily contain a sunflower.
Specifically, researchers analyze the function f ( k , r ) {\displaystyle f(k,r)} for nonnegative integers k , r {\displaystyle k,r} , which is defined to be the smallest nonnegative integer n {\displaystyle n} such that, for any set system W {\displaystyle W} such that every set S ∈ W {\displaystyle S\in W} has cardinality at most k {\displaystyle k} , if W {\displaystyle W} has more than n {\displaystyle n} sets, then W {\displaystyle W} contains a sunflower of r {\displaystyle r} sets. Though it is not obvious that such an n {\displaystyle n} must exist, a basic and simple result of Erdős and Rado , the Delta System Theorem, indicates that it does.
Erdős-Rado Delta System Theorem (corollary of the sunflower lemma):
For each k > 0 {\displaystyle k>0} , r > 0 {\displaystyle r>0} , there is an integer f ( k , r ) {\displaystyle f(k,r)} such that if a set system F {\displaystyle F} of k {\displaystyle k} -sets is of cardinality greater than f ( k , r ) {\displaystyle f(k,r)} , then F {\displaystyle F} contains a sunflower of size r {\displaystyle r} .
In the literature, W {\displaystyle W} is often assumed to be a set rather than a collection, so any set can appear in W {\displaystyle W} at most once. By adding dummy elements, it suffices to only consider set systems W {\displaystyle W} such that every set in W {\displaystyle W} has cardinality k {\displaystyle k} , so often the sunflower lemma is equivalently phrased as holding for " k {\displaystyle k} -uniform" set systems. [ 3 ]
Erdős & Rado (1960 , p. 86) proved the sunflower lemma , which states that [ 4 ] f ( k , r ) ≤ k ! ( r − 1 ) k . {\displaystyle f(k,r)\leq k!\,(r-1)^{k}.}
That is, if k {\displaystyle k} and r {\displaystyle r} are positive integers , then a set system W {\displaystyle W} of cardinality greater than or equal to k ! ( r − 1 ) k {\displaystyle k!(r-1)^{k}} of sets of cardinality k {\displaystyle k} contains a sunflower with at least r {\displaystyle r} sets.
The Erdős-Rado sunflower lemma can be proved directly through induction. First, f ( 1 , r ) ≤ r − 1 {\displaystyle f(1,r)\leq r-1} , since the set system W {\displaystyle W} must be a collection of distinct sets of size one, and so r {\displaystyle r} of these sets make a sunflower. In the general case, suppose W {\displaystyle W} has no sunflower with r {\displaystyle r} sets. Then consider A 1 , A 2 , … , A t ∈ W {\displaystyle A_{1},A_{2},\ldots ,A_{t}\in W} to be a maximal collection of pairwise disjoint sets (that is, A i ∩ A j {\displaystyle A_{i}\cap A_{j}} is the empty set unless i = j {\displaystyle i=j} , and every set in W {\displaystyle W} intersects with some A i {\displaystyle A_{i}} ). Because we assumed that W {\displaystyle W} had no sunflower of size r {\displaystyle r} , and a collection of pairwise disjoint sets is a sunflower, t < r {\displaystyle t<r} .
Let A = A 1 ∪ A 2 ∪ ⋯ ∪ A t {\displaystyle A=A_{1}\cup A_{2}\cup \cdots \cup A_{t}} . Since each A i {\displaystyle A_{i}} has cardinality k {\displaystyle k} , the cardinality of A {\displaystyle A} is bounded by k t ≤ k ( r − 1 ) {\displaystyle kt\leq k(r-1)} . Define W a {\displaystyle W_{a}} for some a ∈ A {\displaystyle a\in A} to be W a = { S ∖ { a } : S ∈ W , a ∈ S } . {\displaystyle W_{a}=\{S\setminus \{a\}:S\in W,\ a\in S\}.}
Then W a {\displaystyle W_{a}} is a set system, like W {\displaystyle W} , except that every element of W a {\displaystyle W_{a}} has k − 1 {\displaystyle k-1} elements. Furthermore, every sunflower of W a {\displaystyle W_{a}} corresponds to a sunflower of W {\displaystyle W} , simply by adding back a {\displaystyle a} to every set. This means that, by our assumption that W {\displaystyle W} has no sunflower of size r {\displaystyle r} , the size of W a {\displaystyle W_{a}} must be bounded by f ( k − 1 , r ) − 1 {\displaystyle f(k-1,r)-1} .
Since every set S ∈ W {\displaystyle S\in W} intersects with one of the A i {\displaystyle A_{i}} 's, it intersects with A {\displaystyle A} , and so it corresponds to at least one of the sets in a W a {\displaystyle W_{a}} : | W | ≤ ∑ a ∈ A | W a | ≤ | A | ( f ( k − 1 , r ) − 1 ) ≤ k ( r − 1 ) ( f ( k − 1 , r ) − 1 ) . {\displaystyle |W|\leq \sum _{a\in A}|W_{a}|\leq |A|\,(f(k-1,r)-1)\leq k(r-1)(f(k-1,r)-1).}
Hence, if | W | ≥ k ( r − 1 ) f ( k − 1 , r ) {\displaystyle |W|\geq k(r-1)f(k-1,r)} , then W {\displaystyle W} contains a sunflower of r {\displaystyle r} sets, each of cardinality k {\displaystyle k} . Therefore, f ( k , r ) ≤ k ( r − 1 ) f ( k − 1 , r ) {\displaystyle f(k,r)\leq k(r-1)f(k-1,r)} and the theorem follows. [ 2 ]
The sunflower conjecture is one of several variations of the conjecture of Erdős & Rado (1960 , p. 86) that for each r > 2 {\displaystyle r>2} , f ( k , r ) ≤ C k {\displaystyle f(k,r)\leq C^{k}} for some constant C > 0 {\displaystyle C>0} depending only on r {\displaystyle r} . The conjecture remains wide open even for fixed low values of r {\displaystyle r} ; for example r = 3 {\displaystyle r=3} ; it is not known whether f ( k , 3 ) ≤ C k {\displaystyle f(k,3)\leq C^{k}} for some C > 0 {\displaystyle C>0} . [ 5 ] A 2021 paper by Alweiss, Lovett, Wu, and Zhang gives the best progress towards the conjecture, proving that f ( k , r ) ≤ C k {\displaystyle f(k,r)\leq C^{k}} for C = O ( r 3 log ( k ) log log ( k ) ) {\displaystyle C=O(r^{3}\log(k)\log \log(k))} . [ 6 ] [ 7 ] A month after the release of the first version of their paper, Rao sharpened the bound to C = O ( r log ( r k ) ) {\displaystyle C=O(r\log(rk))} ; [ 8 ] the current best-known bound is C = O ( r log k ) {\displaystyle C=O(r\log k)} . [ 9 ]
Erdős and Rado proved the following lower bound on f ( k , r ) {\displaystyle f(k,r)} . It amounts to the statement that the original sunflower lemma is optimal in its dependence on r {\displaystyle r} .
Theorem. ( r − 1 ) k ≤ f ( k , r ) . {\displaystyle (r-1)^{k}\leq f(k,r).}
Proof.
For k = 1 {\displaystyle k=1} , a family of r − 1 {\displaystyle r-1} distinct singleton sets contains no sunflower of r {\displaystyle r} sets, so r − 1 ≤ f ( 1 , r ) {\displaystyle r-1\leq f(1,r)} .
Let h ( k − 1 , r ) {\displaystyle h(k-1,r)} denote the size of the largest family of ( k − 1 ) {\displaystyle (k-1)} -sets with no sunflower of r {\displaystyle r} sets, and let H {\displaystyle H} be such a family. Take r − 1 {\displaystyle r-1} disjoint copies of H {\displaystyle H} and r − 1 {\displaystyle r-1} additional new elements, one for each copy, and add the new element of each copy to every set in that copy. Denote the union of the modified copies by H ∗ {\displaystyle H^{*}} . The modified copies form a partition of H ∗ {\displaystyle H^{*}} into r − 1 {\displaystyle r-1} parts, and ( r − 1 ) | H | ≤ | H ∗ | {\displaystyle (r-1)|H|\leq |H^{*}|} . Furthermore, H ∗ {\displaystyle H^{*}} is sunflower-free: any r {\displaystyle r} sets taken from a single part contain no sunflower, because H {\displaystyle H} is sunflower-free; and any r {\displaystyle r} sets taken from more than one part must, since there are only r − 1 {\displaystyle r-1} parts, include two sets from the same part. Those two sets share the added element of their part, which the sets from other parts do not contain, so not all pairwise intersections are equal and the selection is not a sunflower of r {\displaystyle r} sets.
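For small parameters, a family achieving this lower bound can be built and checked exhaustively. In the added sketch below (names are ours), each member of the family is a k-set of (position, letter) pairs drawn from an alphabet of r − 1 letters, which is essentially the recursive construction above iterated k times, and a brute-force search confirms that no r of its (r − 1)^k sets form a sunflower:
```python
from itertools import combinations, product

def is_sunflower(sets):
    """True if all pairwise intersections of the (>= 2) given sets coincide."""
    pairwise = {a & b for a, b in combinations(sets, 2)}
    return len(pairwise) == 1

def lower_bound_family(k, r):
    """All k-letter words over an alphabet of size r - 1, encoded as sets of (position, letter) pairs."""
    return [frozenset(enumerate(word)) for word in product(range(r - 1), repeat=k)]

k, r = 3, 3
family = lower_bound_family(k, r)
assert len(family) == (r - 1) ** k            # 8 sets, each of cardinality k = 3
assert not any(is_sunflower(trio) for trio in combinations(family, r))
print(f"A family of {len(family)} {k}-sets with no sunflower of {r} sets.")
```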
A stronger result is the following theorem:
Theorem. f ( a + b , r ) ≥ ( f ( a , r ) − 1 ) ( f ( b , r ) − 1 ) {\displaystyle f(a+b,r)\geq (f(a,r)-1)(f(b,r)-1)}
Proof. Let F {\displaystyle F} and F ∗ {\displaystyle F^{*}} be two sunflower free families. For each set A {\displaystyle A} in F, append every set in F ∗ {\displaystyle F^{*}} to A {\displaystyle A} to produce | F ∗ | {\displaystyle |F^{*}|} many sets. Denote this family of sets F A {\displaystyle F_{A}} . Take the union of F A {\displaystyle F_{A}} over all A {\displaystyle A} in F {\displaystyle F} . This produces a family of | F ∗ | | F | {\displaystyle |F^{*}||F|} sets which is sunflower free.
The best existing lower bound for the Erdős-Rado sunflower problem for r = 3 {\displaystyle r=3} is 10 k / 2 ≤ f ( k , 3 ) {\displaystyle 10^{\frac {k}{2}}\leq f(k,3)} , due to Abbott, Hansen, and Sauer. [ 10 ] [ 11 ] This bound has not been improved in over 50 years.
The sunflower lemma has numerous applications in theoretical computer science . For example, in 1986, Razborov used the sunflower lemma to prove that the Clique language required n log ( n ) {\displaystyle n^{\log(n)}} (superpolynomial) size monotone circuits, a breakthrough result in circuit complexity theory at the time. Håstad, Jukna, and Pudlák used it to prove lower bounds on depth- 3 {\displaystyle 3} A C 0 {\displaystyle AC_{0}} circuits. It has also been applied in the parameterized complexity of the hitting set problem , to design fixed-parameter tractable algorithms for finding small sets of elements that contain at least one element from a given family of sets. [ 12 ]
A version of the Δ {\displaystyle \Delta } -lemma which is essentially equivalent to the Erdős-Rado Δ {\displaystyle \Delta } -system theorem states that a countable collection of k-sets contains a countably infinite sunflower or Δ {\displaystyle \Delta } -system.
The Δ {\displaystyle \Delta } -lemma states that every uncountable collection of finite sets contains an uncountable Δ {\displaystyle \Delta } -system.
The Δ {\displaystyle \Delta } -lemma is a combinatorial set-theoretic tool used in proofs to impose an upper bound on the size of a collection of pairwise incompatible elements in a forcing poset . It may for example be used as one of the ingredients in a proof showing that it is consistent with Zermelo–Fraenkel set theory that the continuum hypothesis does not hold. It was introduced by Shanin ( 1946 ).
If W {\displaystyle W} is an ω 2 {\displaystyle \omega _{2}} -sized collection of countable subsets of ω 2 {\displaystyle \omega _{2}} , and if the continuum hypothesis holds, then there is an ω 2 {\displaystyle \omega _{2}} -sized Δ {\displaystyle \Delta } -subsystem. Let ⟨ A α : α < ω 2 ⟩ {\displaystyle \langle A_{\alpha }:\alpha <\omega _{2}\rangle } enumerate W {\displaystyle W} . For cf ( α ) = ω 1 {\displaystyle \operatorname {cf} (\alpha )=\omega _{1}} , let f ( α ) = sup ( A α ∩ α ) {\displaystyle f(\alpha )=\sup(A_{\alpha }\cap \alpha )} . By Fodor's lemma , fix S {\displaystyle S} stationary in ω 2 {\displaystyle \omega _{2}} such that f {\displaystyle f} is constantly equal to β {\displaystyle \beta } on S {\displaystyle S} .
Build S ′ ⊆ S {\displaystyle S'\subseteq S} of cardinality ω 2 {\displaystyle \omega _{2}} such that whenever i < j {\displaystyle i<j} are in S ′ {\displaystyle S'} then A i ⊆ j {\displaystyle A_{i}\subseteq j} . Using the continuum hypothesis, there are only ω 1 {\displaystyle \omega _{1}} -many countable subsets of β {\displaystyle \beta } , so by further thinning we may stabilize the kernel. | https://en.wikipedia.org/wiki/Sunflower_(mathematics) |
Sunflower trypsin inhibitor (SFTI) is a small, circular peptide produced in sunflower seeds , and is a potent inhibitor of trypsin . It is the smallest known member of the Bowman-Birk family of serine protease inhibitors. [ 1 ]
One example of a sunflower trypsin inhibitor is sunflower trypsin inhibitor-1 (SFTI-1), a potent Bowman-Birk inhibitor. SFTI-1 is the simplest cysteine-rich peptide scaffold: it is a bicyclic 14-amino-acid peptide with only one disulfide bond. The disulfide bond divides the peptide into two loops, one of which is a functional trypsin-inhibitory loop and the other a nonfunctional loop. [ 2 ] The nonfunctional loop can be replaced by a bioactive loop. SFTI-1 is extracted from the seeds of the sunflower Helianthus annuus . The biosynthesis of SFTI is not fully known; however, it can be evolutionarily linked to a gene-coded product from classic Bowman-Birk inhibitors. [ 3 ] SFTI has been used in radiopharmaceutical, antimicrobial, and pro-angiogenic peptides. [ 2 ]
By modifying the amino acid sequence of sunflower trypsin inhibitor, more specifically, sunflower trypsin inhibitor-1 (SFTI-1), researchers have been able to develop synthetic serine protease inhibitors that have specificity and improved inhibitory activity towards certain serine proteases that are found in the human body, such as tissue kallikreins and human matriptase-1. For instance, researchers from the Institute of Child Health and the Department of Chemistry of the University College London , have created two SFTI-1 analogs (I10G and I10H) by substituting residue 10 of SFTI-1 ( isoleucine , I) with glycine (G) and histidine (H), respectively. Out of the two analogs, SFTI-I10H was found to be the more potent KLK5 inhibitor. [ 4 ] Another group of researchers from the previously mentioned institute and department of the University College London, conducted further research on the development of synthetic kallikrein inhibitors by modifying the amino acid sequence of SFTI-I10H. Out of the six SFTI-I10H variants that were constructed by modifying SFTI-I10H, the first and second variant (K5R_I10H and I10H_F12W) demonstrated improved KLK5 inhibition and the sixth variant (K5R_I10H_F12W) showed dual-inhibition of KLK5 and KLK7 , improved KLK5 inhibition potency, and specificity for KLK5 and KLK14 . The first variant (K5R_I10H) was made by replacing residue 5 of SFTI-I10H ( lysine , K) with arginine (R), and in order to get the second variant (I10H_F12W) residue 12 ( phenylalanine , F) was replaced with tryptophan (W). Lastly, the sixth variant (K5R_I10H_F12W) was developed by combining the amino acid substitutions of the first and second variants. [ 5 ]
Moreover, researchers from the Clemens-Schöpf Institute of Organic Chemistry and Biochemistry and Helmholtz-Institute for Pharmaceutical Research Saarland, developed potent synthetic human matriptase-1 inhibitors based on a different SFTI-1 variant, SDMI-1. SFTI-1 derived matriptase inhibitor-1 (SDMI-1) was previously developed by replacing residue 10 of SFTI-1 (isoleucine, I) with arginine (R) and residue 12 (phenylalanine, F) with histidine (H). Further modifications of SDMI-1 resulted in synthetic matriptase-1 inhibitors with improved inhibitory activity, matriptase binding, and inhibition potency. The SDMI-1 variant that resulted in enhanced inhibitory activity was developed by replacing residue 1 of SDMI-1 (glycine, G) with lysine (K) and by keeping it as a monocyclic structure. The SDMI-1 variant that resulted in improved matriptase binding was created by using the same amino acid substitutions of the previously mentioned SDMI-1 variant and by attaching a bulky fluorescein moiety to the side chain of lysine. Lastly, the SDMI-1 variant that had enhanced inhibition potency was developed by applying the same amino acid substitutions of the previous variants, cleaving the proline - aspartic acid sequence found at the C-terminus (PD-OH), and by making it a bicyclic compound via tail-to-side-chain cyclization. [ 6 ]
| https://en.wikipedia.org/wiki/Sunflower_trypsin_inhibitor |
In economics and business decision-making , a sunk cost (also known as retrospective cost ) is a cost that has already been incurred and cannot be recovered. [ 1 ] [ 2 ] Sunk costs are contrasted with prospective costs , which are future costs that may be avoided if action is taken. [ 3 ] In other words, a sunk cost is a sum paid in the past that is no longer relevant to decisions about the future. Even though economists argue that sunk costs are no longer relevant to future rational decision-making, people in everyday life often take previous expenditures in situations, such as repairing a car or house, into their future decisions regarding those properties.
According to classical economics and standard microeconomic theory, only prospective (future) costs are relevant to a rational decision. [ 4 ] At any moment in time, the best thing to do depends only on current alternatives. [ 5 ] The only things that matter are the future consequences. [ 6 ] Past mistakes are irrelevant. [ 5 ] Any costs incurred prior to making the decision have already been incurred no matter what decision is made. They may be described as "water under the bridge", [ 7 ] and making decisions on their basis may be described as "crying over spilt milk". [ 8 ] In other words, people should not let sunk costs influence their decisions; sunk costs are irrelevant to rational decisions. Thus, if a new factory was originally projected to cost $100 million, and yield $120 million in value, and after $30 million is spent on it the value projection falls to $65 million, the company should abandon the project rather than spending an additional $70 million to complete it. Conversely, if the value projection falls to $75 million, the company, as a rational actor, should continue the project. This is known as the bygones principle [ 6 ] [ 9 ] or the marginal principle . [ 10 ]
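A minimal sketch of this decision rule in Python (the function name and the use of millions of dollars as units are illustrative assumptions, not taken from the cited sources): the only comparison that matters is the remaining prospective cost against the updated value projection; the amount already spent never enters the calculation.
def should_continue(remaining_cost, projected_value):
    # Bygones principle: compare only prospective (future) cost and value.
    # Money already spent is sunk and is deliberately not a parameter here.
    return projected_value > remaining_cost
# Factory example from the text: $30 million is already sunk, $70 million remains to finish.
print(should_continue(remaining_cost=70, projected_value=65))   # False -> abandon the project
print(should_continue(remaining_cost=70, projected_value=75))   # True  -> complete the project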
The bygones principle is grounded in the branch of normative decision theory known as rational choice theory , particularly in expected utility hypothesis . Expected utility theory relies on a property known as cancellation , which says that it is rational in decision-making to disregard (cancel) any state of the world that yields the same outcome regardless of one's choice. [ 11 ] Past decisions—including sunk costs—meet that criterion.
The bygones principle can also be formalised as the notion of "separability". Separability requires agents to take decisions by comparing the available options in eventualities that can still occur, uninfluenced by how the current situation was reached or by eventualities that are precluded by that history. In the language of decision trees, it requires the agent's choice at a particular choice node to be independent of unreachable parts of the tree. This formulation makes clear how central the principle is to standard economic theory by, for example, founding the folding-back algorithm for individual sequential decisions and game-theoretical concepts such as sub-game perfection. [ 12 ]
Until a decision-maker irreversibly commits resources, the prospective cost is an avoidable future cost and is properly included in any decision-making process. [ 9 ] For instance, if someone is considering pre-ordering movie tickets, but has not actually purchased them yet, the cost remains avoidable.
Both retrospective and prospective costs could be either fixed costs (continuous for as long as the business is operating and unaffected by output volume) or variable costs (dependent on volume). [ 13 ] However, many economists consider it a mistake to classify sunk costs as "fixed" or "variable". For example, if a firm sinks $400 million on an enterprise software installation, that cost is "sunk" because it was a one-time expense and cannot be recovered once spent. A "fixed" cost would be monthly payments made as part of a service contract or licensing deal with the company that set up the software. The upfront irretrievable payment for the installation should not be deemed a "fixed" cost, with its cost spread out over time. Sunk costs should be kept separate. The "variable costs" for this project might include data centre power usage, for example.
There are cases in which taking sunk costs into account in decision-making, violating the bygones principle, is rational. [ 14 ] For example, for a manager who wishes to be perceived as persevering in the face of adversity, or to avoid blame for earlier mistakes, it may be rational to persist with a project for personal reasons even if that persistence is not to the benefit of their company. Or, if they hold private information about the undesirability of abandoning a project, it is fully rational to persist with a project that outsiders think displays the fallacy of sunk cost. [ 15 ]
The bygones principle does not always accord with real-world behavior. Sunk costs often influence people's decisions, [ 7 ] [ 14 ] with people believing that investments (i.e., sunk costs) justify further expenditures. [ 16 ] People demonstrate "a greater tendency to continue an endeavor once an investment in money, effort, or time has been made". [ 17 ] [ 18 ] This is the sunk cost fallacy , and such behavior may be described as "throwing good money after bad", [ 19 ] [ 14 ] while refusing to succumb to what may be described as "cutting one's losses". [ 14 ] People can remain in failing relationships because they "have already invested too much to leave". Other people are swayed by arguments that a war must continue because lives will have been sacrificed in vain unless victory is achieved. Individuals caught up in psychologically manipulative scams will continue investing time, money and emotional energy into the project, despite doubts or suspicions that something is not right. [ 20 ] These types of behaviour do not seem to accord with rational choice theory and are often classified as behavioural errors. [ 21 ]
Rego, Arantes, and Magalhães point out that the sunk cost effect exists in committed relationships. They devised two experiments: the first showed that people who had invested money and effort in a relationship were more likely to keep it going than to end it, and the second showed that people who had invested considerable time in a relationship tended to devote still more time to it. [ 22 ] This means people also fall into the sunk cost fallacy in relationships: although people should ignore sunk costs and make rational decisions when planning for the future, the time, money, and effort already invested often lead them to keep maintaining the relationship, which is equivalent to continuing to invest in a failing project.
According to evidence reported by De Bondt and Makhija (1988), managers of many utility companies in the United States have been overly reluctant to terminate economically unviable nuclear plant projects. [ 23 ] In the 1960s, the nuclear power industry promised "energy too cheap to meter". Nuclear power lost public support in the 1970s and 1980s, when public service commissions around the nation ordered prudency reviews. From these reviews, De Bondt and Makhija find evidence that the commissions denied many utility companies even partial recovery of nuclear construction costs on the grounds that they had been mismanaging the nuclear construction projects in ways consistent with throwing good money after bad. [ 24 ]
There is also evidence of government representatives failing to ignore sunk costs. [ 21 ] The term "Concorde fallacy" [ 25 ] derives from the fact that the British and French governments continued to fund the joint development of the costly Concorde supersonic airplane even after it became apparent that there was no longer an economic case for the aircraft. The British government privately regarded the project as a commercial disaster that should never have been started. Political and legal issues made it impossible for either government to pull out. [ 9 ]
The idea of sunk costs is often employed when analyzing business decisions. A common example of a sunk cost for a business is the promotion of a brand name. This type of marketing incurs costs that cannot normally be recovered. [ citation needed ] It is not typically possible to later "demote" one's brand names in exchange for cash. [ citation needed ] A second example is research and development (R&D) costs. Once spent, such costs are sunk and should have no effect on future pricing decisions. [ citation needed ] A pharmaceutical company's attempt to justify high prices because of the need to recoup R&D expenses would be fallacious. [ citation needed ] The company would charge a high price whether R&D cost one dollar or one million. [ citation needed ] R&D costs and the ability to recoup those costs are a factor in deciding whether to spend the money on R&D in the first place. [ 26 ]
Dijkstra and Hong proposed that part of a person's behavior is influenced by the person's current emotions. Their experiments showed that emotional responses contribute to the sunk cost fallacy, with negative affect making it more likely. For example, anxious people under stress are more motivated to keep investing in failed projects than to pursue alternative approaches. Their report shows that the sunk cost fallacy has a greater impact on people under high-load conditions, and that a person's psychological state and external environment are key influencing factors. [ 27 ]
The sunk cost effect may cause cost overrun . In business, an example of sunk costs may be an investment into a factory or research that now has a lower value or none. For example, $20 million has been spent on building a power plant; the value now is zero because it is incomplete (and no sale or recovery is feasible). The plant can be completed for an additional $10 million or abandoned and a different but equally valuable facility built for $5 million. Abandonment and construction of the alternative facility is the more rational decision, even though it represents a total loss of the original expenditure—the original sum invested is a sunk cost. If decision-makers are irrational or have the "wrong" (different) incentives, the completion of the project may be chosen. For example, politicians or managers may have more incentive to avoid the appearance of a total loss. In practice, there is considerable ambiguity and uncertainty in such cases, and decisions may in retrospect appear irrational that were, at the time, reasonable to the economic actors involved and in the context of their incentives. A decision-maker might make rational decisions according to their incentives, outside of efficiency or profitability. This is considered to be an incentive problem and is distinct from a sunk cost problem. Some research has also noted circumstances where the sunk cost effect is reversed; that is, where individuals appear irrationally eager to write off earlier investments in order to take up a new endeavor. [ 28 ]
A related phenomenon is plan continuation bias, [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ] which is recognised as a subtle cognitive bias that tends to force the continuation of a plan or course of action even in the face of changing conditions. In the field of aerospace it has been recognised as a significant causal factor in accidents, with a 2004 NASA study finding that in 9 out of the 19 accidents studied, aircrew exhibited this behavioural bias. [ 29 ]
This is a hazard for ships' captains or aircraft pilots who may stick to a planned course even when it is leading to fatal disaster and they should abort instead. A famous example is the Torrey Canyon oil spill in which a tanker ran aground when its captain persisted with a risky course rather than accepting a delay. [ 34 ] It has been a factor in numerous air crashes and an analysis of 279 approach and landing accidents (ALAs) found that it was the fourth most common cause, occurring in 11% of cases. [ 35 ] Another analysis of 76 accidents found that it was a contributory factor in 42% of cases. [ 36 ]
There are also two predominant factors that characterise the bias. The first is an overly optimistic estimate of probability of success, possibly to reduce cognitive dissonance having made a decision. The second is that of personal responsibility: when you are personally accountable, it is difficult for you to admit that you were wrong. [ 29 ]
Projects often suffer cost overruns and delays due to the planning fallacy and related factors including excessive optimism, an unwillingness to admit failure , groupthink and aversion to loss of sunk costs. [ 37 ]
Evidence from behavioral economics suggests that there are at least four specific psychological factors underlying the sunk cost effect: framing effects, an overoptimistic probability bias, the requisite of personal responsibility, and the desire not to appear wasteful.
Taken together, these results suggest that the sunk cost effect may reflect non-standard measures of utility , which is ultimately subjective and unique to the individual.
The framing effect which underlies the sunk cost effect builds upon the concept of extensionality, where the outcome is the same regardless of how the information is framed. This is in contradiction to the concept of intensionality, which is concerned with whether the presentation of information changes the situation in question.
Take two mathematical functions:
While these functions are framed differently, regardless of the input "x", the outcome is analytically equivalent. Therefore, if a rational decision maker were to choose between these two functions, the likelihood of each function being chosen should be the same. However, a framing effect places unequal biases towards preferences that are otherwise equal.
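The pair of functions itself is not reproduced in this extract; as an illustrative stand-in (an assumed example, not necessarily the pair used in the original), consider f ( x ) = ( x + 1 ) 2 {\displaystyle f(x)=(x+1)^{2}} and g ( x ) = x 2 + 2 x + 1 {\displaystyle g(x)=x^{2}+2x+1} : for every input x they return the same value, and they differ only in how the computation is framed.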
The most common type of framing effect was theorised by Kahneman & Tversky (1979) in the form of valence framing effects. [ 40 ] Valence framing comes in two types: in a positive frame the "sure thing" option highlights the gains, whereas in a negative frame the "sure thing" option highlights the losses, even though the two framings are analytically identical. For example, saving 200 people from a sinking ship of 600 is equivalent to letting 400 people drown; the former framing is positive and the latter is negative.
Ellingsen, Johannesson, Möllerström and Munkammar [ 41 ] have categorised framing effects in a social and economic orientation into three broad classes of theories. Firstly, the framing of options presented can affect internalised social norms or social preferences - this is called variable sociality hypothesis. Secondly, the social image hypothesis suggests that the frame in which the options are presented will affect the way the decision maker is viewed and will in turn affect their behaviour. Lastly, the frame may affect the expectations that people have about each other's behaviour and will in turn affect their own behaviour.
In 1968, Knox and Inkster [ 42 ] approached 141 horse bettors : 72 of the people had just finished placing a $2.00 bet within the past 30 seconds, and 69 people were about to place a $2.00 bet in the next 30 seconds. Their hypothesis was that people who had just committed themselves to a course of action (betting $2.00) would reduce post-decision dissonance by believing more strongly than ever that they had picked a winner. Knox and Inkster asked the bettors to rate their horse's chances of winning on a 7-point scale. What they found was that people who were about to place a bet rated the chance that their horse would win at an average of 3.48 which corresponded to a "fair chance of winning" whereas people who had just finished betting gave an average rating of 4.81 which corresponded to a "good chance of winning". Their hypothesis was confirmed: after making a $2.00 commitment, people became more confident their bet would pay off. Knox and Inkster performed an ancillary test on the patrons of the horses themselves and managed (after normalization) to repeat their finding almost identically. Other researchers have also found evidence of inflated probability estimations. [ 43 ] [ 44 ]
In a study of 96 business students, Staw and Fox [ 45 ] gave the subjects a choice between making an R&D investment either in an underperforming company department, or in other sections of the hypothetical company. Staw and Fox divided the participants into two groups: a low responsibility condition and a high responsibility condition. In the high responsibility condition, the participants were told that they, as manager, had made an earlier, disappointing R&D investment. In the low responsibility condition, subjects were told that a former manager had made a previous R&D investment in the underperforming division and were given the same profit data as the other group. In both cases, subjects were then asked to make a new $20 million investment. There was a significant interaction between assumed responsibility and average investment, with the high responsibility condition averaging $12.97 million and the low condition averaging $9.43 million. Similar results have been obtained in other studies. [ 46 ] [ 43 ] [ 47 ]
A ticket buyer who purchases a ticket in advance to an event they eventually turn out not to enjoy makes a semi-public commitment to watching it. To leave early is to make this lapse of judgment manifest to strangers, an appearance they might otherwise choose to avoid. As well, the person may not want to leave the event because they have already paid, so they may feel that leaving would waste their expenditure. Alternatively, they may take a sense of pride in having recognised the opportunity cost of the alternative use of time.
In recent years, there has been a resurgence in studies of how the brain processes information with respect to sunk costs. Measuring sensitivity to sunk costs in laboratory studies can be challenging, as it is often difficult to disentangle the influence of sunk costs from future returns on investment. In a cross-species study in humans, rats, and mice, Sweis et al [ 48 ] found that sensitivity to sunk costs is conserved across these species, suggesting a shared evolutionary history.
This has opened up more questions as to what might the evolutionary drivers be behind why the brain is capable of processing information in this way, what utility, if any, sensitivity to sunk costs may confer, and how might distinct circuits in the brain [ 49 ] give rise to this sort of valuation depending on the framing of the question, circumstances of the environment, or state of the individual. [ 50 ] [ 51 ] [ 52 ] Ongoing work is characterizing how neurons encode sensitivity to sunk costs, how sunk costs appear only after certain types of choices, and how sunk costs could contribute to mood burden. | https://en.wikipedia.org/wiki/Sunk_cost |
In photography, the sunny 16 rule (also known as the sunny f /16 rule ) is a method of estimating correct daylight exposures without a light meter . Apart from the advantage of independence from a light meter, the sunny 16 rule can also aid in achieving correct exposure of difficult subjects. As the rule is based on incident light, rather than reflected light as with most camera light meters, very bright or very dark subjects are compensated for. The rule serves as a mnemonic for the camera settings obtained on a sunny day using the exposure value (EV) system.
The basic rule is, "On a sunny day set aperture to f /16 and shutter speed to the [reciprocal of the] ISO film speed [or ISO setting] for a subject in direct sunlight." [ 1 ] In simplest terms: bright sun = f /16 at 1/(film-speed number) second (aperture and shutter speed, respectively).
For example, with ISO 100 film, set the aperture to f /16 and the shutter speed to 1/100 second (or the nearest available setting, such as 1/125 second).
Shutter speeds can be changed as long as the f-number is adjusted accordingly, e.g. 1 / 250 second at f /11 gives equivalent exposure to 1 / 125 second at f /16 . Exposure adjustments are done in a manner that retains the EV . As the aperture is opened by one stop at a time ( f /11, f /8, f /5.6, etc.) the shutter speed/exposure time is reduced by a factor of approximately one-half (1/250, 1/500, 1/1000, etc.). This follows the mathematical relationship between aperture and shutter speed where exposure is inversely proportional to the square of the aperture ratio and proportional to exposure time; thus, to maintain a constant level of exposure, a change in aperture diameter by a factor c requires a change in exposure time by a factor 1 / c 2 and vice versa. A change in the aperture of 1 stop always corresponds to a factor close to the square root of 2 , thus the above rule.
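The constant-exposure trade-off described above can be checked with a short calculation. The following Python sketch is illustrative only (the function and parameter names are assumptions, not from the cited sources); it holds exposure constant while trading aperture against shutter speed.
def equivalent_shutter(base_f, base_time, new_f):
    # Exposure is proportional to (exposure time) / (f-number)^2, so keeping
    # exposure constant requires: new_time = base_time * (new_f / base_f)**2
    return base_time * (new_f / base_f) ** 2
# Sunny 16 starting point for ISO 125 film: f/16 at 1/125 s.
t = equivalent_shutter(16, 1/125, 11)
print(1 / t)   # about 264; the nearest standard speed is 1/250 s at f/11, as in the text
The result does not land exactly on 1/250 s because the marked f-numbers (11, 16) are rounded values of the underlying square-root-of-two sequence.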
The sunny 16 rule can be used in varying light by setting the shutter speed nearest to the ISO film speed and f-number according to a generalized exposure table, as: [ 3 ] [ 4 ] | https://en.wikipedia.org/wiki/Sunny_16_rule |
The sunrise equation or sunset equation can be used to derive the time of sunrise or sunset for any solar declination and latitude in terms of local solar time when sunrise and sunset actually occur.
It is formulated as: cos ω ∘ = − tan ϕ × tan δ {\displaystyle \cos \omega _{\circ }=-\tan \phi \times \tan \delta }
where: ω ∘ {\displaystyle \omega _{\circ }} is the hour angle at either sunrise (when negative value is taken) or sunset (when positive value is taken); ϕ {\displaystyle \phi } is the latitude of the observer on the Earth; and δ {\displaystyle \delta } is the sun declination .
The Earth rotates at an angular velocity of 15°/hour. Therefore, the expression ω ∘ / 15 ∘ {\displaystyle \omega _{\circ }/\mathrm {15} ^{\circ }} , where ω ∘ {\displaystyle \omega _{\circ }} is in degree, gives the interval of time in hours from sunrise to local solar noon or from local solar noon to sunset .
The sign convention is typically that the observer latitude ϕ {\displaystyle \phi } is 0 at the equator , positive for the Northern Hemisphere and negative for the Southern Hemisphere , and the solar declination δ {\displaystyle \delta } is 0 at the vernal and autumnal equinoxes when the sun is exactly above the equator, positive during the Northern Hemisphere summer and negative during the Northern Hemisphere winter.
The expression above is always applicable for latitudes between the Arctic Circle and Antarctic Circle . North of the Arctic Circle or south of the Antarctic Circle, there is at least one day of the year with no sunrise or sunset. Formally, there is a sunrise or sunset when − 90 ∘ + δ < ϕ < 90 ∘ − δ {\displaystyle -90^{\circ }+\delta <\phi <90^{\circ }-\delta } during the Northern Hemisphere summer, and when − 90 ∘ − δ < ϕ < 90 ∘ + δ {\displaystyle -90^{\circ }-\delta <\phi <90^{\circ }+\delta } during the Northern Hemisphere winter. For locations outside these latitudes, it is either 24-hour daytime or 24-hour nighttime .
In the equation given at the beginning, the cosine function on the left side gives results in the range [-1, 1], but the value of the expression on the right side is in the range [ − ∞ , ∞ ] {\displaystyle [-\infty ,\infty ]} . An applicable expression for ω ∘ {\displaystyle \omega _{\circ }} in the format of Fortran 90 is as follows:
omegao = acos(max(min(-tan(delta*rpd)*tan(phi*rpd), 1.0), -1.0))*dpr
where omegao is ω ∘ {\displaystyle \omega _{\circ }} in degree, delta is δ {\displaystyle \delta } in degree, phi is ϕ {\displaystyle \phi } in degree, rpd is equal to π 180 {\displaystyle {\frac {\pi }{180}}} , and dpr is equal to 180 π {\displaystyle {\frac {180}{\pi }}} .
The above expression gives results in degree in the range [ 0 ∘ , 180 ∘ ] {\displaystyle [0^{\circ },180^{\circ }]} . When ω ∘ = 0 ∘ {\displaystyle \omega _{\circ }=0^{\circ }} , it means it is polar night, or 0-hour daylight; when ω ∘ = 180 ∘ {\displaystyle \omega _{\circ }=180^{\circ }} , it means it is polar day, or 24-hour daylight.
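As a minimal illustration (assumed function and variable names; a sketch, not code from the cited sources), the same clamped expression can be written in Python and used to obtain the day length directly:
import math
def day_length_hours(latitude_deg, declination_deg):
    # cos(omega0) = -tan(latitude) * tan(declination); the argument is clamped to
    # [-1, 1] so that polar night gives 0 hours and polar day gives 24 hours,
    # exactly as in the Fortran expression above.
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(declination_deg))
    omega0 = math.degrees(math.acos(max(min(x, 1.0), -1.0)))
    # The Earth rotates 15 degrees per hour, so twice omega0 divided by 15 is the day length.
    return 2.0 * omega0 / 15.0
print(day_length_hours(0.0, 23.44))    # equator at the June solstice: 12.0 hours
print(day_length_hours(51.5, 23.44))   # mid-northern latitude in summer: about 16.4 hours
print(day_length_hours(80.0, 23.44))   # above the Arctic Circle in summer: 24.0 hours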
Suppose ϕ N {\displaystyle \phi _{N}} is a given latitude in Northern Hemisphere, and ω ∘ N {\displaystyle \omega _{\circ N}} is the corresponding sunrise hour angle that has a negative value, and similarly, ϕ S {\displaystyle \phi _{S}} is the same latitude but in Southern Hemisphere, which means ϕ S = − ϕ N {\displaystyle \phi _{S}=-\phi _{N}} , and ω ∘ S {\displaystyle \omega _{\circ S}} is the corresponding sunrise hour angle, then it is apparent that cos ω ∘ S = − cos ω ∘ N {\displaystyle \cos \omega _{\circ S}=-\cos \omega _{\circ N}} ,
which means ω ∘ S + ω ∘ N = − 180 ∘ {\displaystyle \omega _{\circ S}+\omega _{\circ N}=-180^{\circ }} , i.e. ω ∘ S = − 180 ∘ − ω ∘ N {\displaystyle \omega _{\circ S}=-180^{\circ }-\omega _{\circ N}} .
The above relation implies that on the same day, the lengths of daytime from sunrise to sunset at ϕ N {\displaystyle \phi _{N}} and ϕ S {\displaystyle \phi _{S}} sum to 24 hours if ϕ S = − ϕ N {\displaystyle \phi _{S}=-\phi _{N}} , and this also applies to regions where polar days and polar nights occur. This further suggests that the global average of length of daytime on any given day is 12 hours without considering the effect of atmospheric refraction.
The equation above neglects the influence of atmospheric refraction (which lifts the solar disc — i.e. makes the solar disc appear higher in the sky — by approximately 0.6° when it is on the horizon) and the non-zero angle subtended by the solar disc — i.e. the apparent diameter of the sun — (about 0.5°). The times of the rising and the setting of the upper solar limb as given in astronomical almanacs correct for this by using the more general equation cos ω ∘ = ( sin a − sin ϕ sin δ ) / ( cos ϕ cos δ ) {\displaystyle \cos \omega _{\circ }={\frac {\sin a-\sin \phi \sin \delta }{\cos \phi \cos \delta }}}
with the altitude angle (a) of the center of the solar disc set to about −0.83° (or −50 arcminutes).
The above general equation can be also used for any other solar altitude. The NOAA provides additional approximate expressions for refraction corrections at these other altitudes. [ 1 ] There are also alternative formulations, such as a non-piecewise expression by G.G. Bennett used in the U.S. Naval Observatory's "Vector Astronomy Software". [ 2 ]
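For reference, one widely quoted non-piecewise refraction estimate attributed to Bennett — reproduced here from general knowledge as an assumed form, which may differ in detail from the expression used in the cited software — gives the refraction R in arcminutes from the apparent altitude h in degrees, as in this Python sketch:
import math
def bennett_refraction_arcmin(h_deg):
    # Assumed form of Bennett's formula: R [arcmin] = cot( h + 7.31 / (h + 4.4) ), h in degrees.
    return 1.0 / math.tan(math.radians(h_deg + 7.31 / (h_deg + 4.4)))
print(bennett_refraction_arcmin(0.0))   # roughly 34 arcminutes of lift for an object on the horizon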
The generalized equation relies on a number of other variables which need to be calculated before it can itself be calculated. These equations have the solar-earth constants substituted with angular constants expressed in degrees.
The chain of intermediate equations — culminating in the declination of the Sun — and their accompanying variable definitions are not reproduced in this extract.
Alternatively, the Sun's declination could be approximated [ 4 ] as:
where:
This is the equation from above with corrections for atmospheric refraction and solar disc diameter.
where:
For observations on a sea horizon needing an elevation-of-observer correction, add − 1.15 ∘ elevation in feet / 60 {\displaystyle -1.15^{\circ }{\sqrt {\text{elevation in feet}}}/60} , or − 2.076 ∘ elevation in metres / 60 {\displaystyle -2.076^{\circ }{\sqrt {\text{elevation in metres}}}/60} to the −0.833° in the numerator's sine term. This corrects for both apparent dip and terrestrial refraction. For example, for an observer at 10,000 feet, add (−115°/60) or about −1.92° to −0.833°. [ 5 ]
where: | https://en.wikipedia.org/wiki/Sunrise_equation |
In information technology (IT), to sunset a server , service, software feature, etc. is to plan to intentionally remove or discontinue it. In most cases, the term also connotes that this discontinuation is announced to users in advance, generally with an expected timeline. After sunsetting is announced, usually very few changes are made to the hardware or software in question, as such work would be counterproductive, when its termination is soon to follow. In some cases, however, individual features of an application, server, or service may be phased out at different times, leading up to the eventual full shutdown. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ]
| https://en.wikipedia.org/wiki/Sunset_(computing) |
Sunset yellow FCF (also known as orange yellow S , or C.I. 15985 ) is a petroleum -derived orange azo dye with a pH -dependent maximum absorption at about 480 nm at pH 1 and 443 nm at pH 13, with a shoulder at 500 nm. [ 1 ] [ 2 ] : 463 When added to foods sold in the United States, it is known as FD&C Yellow 6 ; when sold in Europe, it is denoted by E Number E110 . [ 3 ]
Sunset yellow is used in foods, cosmetics, and drugs. Sunset yellow FCF is used as an orange or yellow-orange dye. [ 4 ] [ 5 ] [ 6 ] [ 7 ] : 4 For example, it is used in candy, desserts, snacks, sauces, and preserved fruits. [ 1 ] : 463–465 Sunset yellow is often used in conjunction with E123, amaranth , to produce a brown colouring in both chocolates and caramel. [ 8 ]
The acceptable daily intake (ADI) is 0–4 mg/kg under both EU and WHO/FAO guidelines. [ 1 ] : 465 [ 9 ] Sunset yellow FCF has no carcinogenicity, genotoxicity, or developmental toxicity in the amounts at which it is used. [ 1 ] : 465 [ 9 ]
It has been claimed since the late 1970s, under the advocacy of Benjamin Feingold , that sunset yellow FCF causes food intolerance and ADHD -like behavior in children, but there is little scientific evidence to support these broad claims. [ 10 ] : 452 It is possible that certain food colorings may act as a trigger in those who are genetically predisposed, but the evidence is weak. [ 11 ] [ 12 ]
"European Parliament and Council Directive 94/36/EC of 30 June 1994 on colours for use in foodstuffs" harmonized rules and approved Sunset Yellow FCF for use in foodstuffs in the whole of the European Union. Before that time, approved amounts was up to each country, but naming and composition was standardized.
Sunset yellow FCF was not approved in Norway before 2001. That was the time when the 94/36/EC directive of 1994 was included in EFTA (now EEA) rules and came into effect, after years of delaying tactics from the Norwegian side and a heated political debate. [ 13 ]
In 2008, the Food Standards Agency of the UK called for food manufacturers to voluntarily stop using six food additive colours, tartrazine , allura red , ponceau 4R , quinoline yellow WS , sunset yellow and carmoisine (dubbed the "Southampton 6") by 2009, [ 14 ] and provided a document to assist in replacing the colors with other colors. [ 15 ]
An EU regulation came into effect in 2010 mandating that food manufacturers include a label on foods containing the Southampton 6 stating: "may have an adverse effect on activity and attention in children". [ 14 ]
Sunset yellow FCF is known as FD&C yellow No. 6 in the US and is approved for use in coloring food, drugs, and cosmetics with an acceptable daily intake of 3.75 mg/kg. [ 12 ] : 2, 7
Since the 1970s and the well-publicized advocacy of Benjamin Feingold , there has been public concern that food colorings may cause ADHD -like behavior in children. [ 12 ] These concerns have led the FDA and other food safety authorities to regularly review the scientific literature, and led the UK FSA to commission a study by researchers at Southampton University of the effect of a mixture of the "Southampton 6" and sodium benzoate (a preservative) on children in the general population who consumed them in beverages; the study was published in 2007. [ 12 ] [ 14 ] The study found "a possible link between the consumption of these artificial colours and a sodium benzoate preservative and increased hyperactivity" in the children; [ 12 ] [ 14 ] the advisory committee to the FSA that evaluated the study also determined that because of study limitations, the results could not be extrapolated to the general population, and further testing was recommended. [ 12 ]
The European regulatory community, with a stronger emphasis on the precautionary principle , required labelling and temporarily reduced the acceptable daily intake (ADI) for the food colorings; the UK FSA called for voluntary withdrawal of the colorings by food manufacturers. [ 12 ] [ 14 ] However, in 2009 the EFSA re-evaluated the data at hand and determined that "the available scientific evidence does not substantiate a link between the color additives and behavioral effects" [ 12 ] [ 16 ] and in 2014 after further review of the data, the EFSA restored the prior ADI levels. [ 9 ]
The US FDA did not make changes following the publication of the Southampton study, but following a citizen petition filed by the Center for Science in the Public Interest in 2008, requesting the FDA to ban several food additives, the FDA commenced a review of the available evidence, and still made no changes. [ 12 ] | https://en.wikipedia.org/wiki/Sunset_yellow_FCF |
The Sunshine Project was an international NGO dedicated to upholding prohibitions against biological warfare and, particularly, to preventing military abuse of biotechnology . It was directed by Edward Hammond.
With offices in Austin, Texas , and Hamburg, Germany , the Sunshine Project worked by exposing research on biological and chemical weapons . Typically, it accessed documents under the Freedom of Information Act and other open records laws, publishing reports and encouraging action to reduce the risk of biological warfare . It tracked the construction of high containment laboratory facilities and the dual-use activities of the U.S. biodefense program . Another focus was on documenting government-sponsored research and development of incapacitating " non-lethal " weapons, such as the chemical used by Russia to end the Moscow theater hostage crisis in 2002. The Sunshine Project was also active in meetings of the Biological Weapons Convention , the main international treaty prohibiting biological warfare.
An announcement was posted on The Sunshine Project website, "As of 1 February 2008, the Sunshine Project is suspending its operations", due to a lack of funding. [ 1 ] [ 2 ] Its website remained online for some time after this date and could be used as an archive of its activities and publications from 2000 through 2008. However, as of October 2013 the Sunshine Project website was offline. The domain for the website was then reappropriated by a Thai reforestation volunteer organization until September 2023. It now redirects to the internet pornography website 33porn.
| https://en.wikipedia.org/wiki/Sunshine_Project |
The Sunway BlueLight ( 神威蓝光 ) is a Chinese massively parallel supercomputer . It is the first publicly announced PFLOPS supercomputer using Sunway processors solely developed by the People's Republic of China. [ 1 ] [ 2 ]
It ranked #2 in the 2011 China HPC Top100, [ 3 ] [ 4 ] #14 on the November 2011 TOP500 list, [ 5 ] and #39 on the November 2011 Green500 List. [ 6 ] The machine was installed at National Supercomputing Jǐnán Center ( 国家超算济南中心 ) in September 2011 [ 1 ] [ 2 ] and was developed by National Parallel Computer Engineering Technology Research Center ( 国家并行计算机工程技术研究中心 ) and supported by Technology Department ( 科技部 ) 863 project. The water-cooled 9-rack system has 8704 ShenWei SW1600 processors (For the Top100 run 8575 CPUs were used, at 975 MHz each [ 3 ] ) organized as 34 super nodes (each consisting of 256 compute nodes), 150 TB main memory, 2 PB external storage, peak performance of 1.07016 PFLOPS, sustained performance of 795.9 TFLOPS, LINPACK efficiency 74.37%, and total power consumption 1074 kW. [ 3 ] [ 7 ]
As of the November 2015 TOP500 list, the Sunway BlueLight was ranked 103rd [ 8 ] (it ranked highest, at 14th, when it first appeared on the list in November 2011, and 65th on the November 2014 list).
| https://en.wikipedia.org/wiki/Sunway_BlueLight |
The SW26010 is a 260-core manycore processor designed by the Shanghai Integrated Circuit Technology and Industry Promotion Center (ICC for short)( Chinese : 上海集成电路技术与产业促进中心 (简称ICC)). It implements the Sunway architecture , a 64-bit reduced instruction set computing (RISC) architecture designed in China . [ 1 ] The SW26010 has four clusters of 64 Compute-Processing Elements (CPEs) which are arranged in an eight-by-eight array. The CPEs support SIMD instructions and are capable of performing eight double-precision floating-point operations per cycle. Each cluster is accompanied by a more conventional general-purpose core called the Management Processing Element (MPE) that provides supervisory functions. [ 1 ] Each cluster has its own dedicated DDR3 SDRAM controller and a memory bank with its own address space . [ 2 ] [ 3 ] The processor runs at a clock speed of 1.45 GHz. [ 4 ]
The CPE cores feature 64 KB of scratchpad memory for data and 16 KB for instructions , and communicate via a network on a chip , instead of having a traditional cache hierarchy . [ 5 ] The MPEs have a more traditional setup, with 32 KB L1 instruction and data caches and a 256 KB L2 cache . [ 1 ] Finally, the on-chip network connects to a single system interconnection interface that connects the chip to the outside world.
The SW26010 is used in the Sunway TaihuLight supercomputer , which from June 2016 until June 2018 was the world's fastest supercomputer as ranked by the TOP500 project. [ 6 ] The system uses 40,960 SW26010s to obtain 93.01 PFLOPS on the LINPACK benchmark .
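As a rough cross-check of these figures — a back-of-the-envelope Python sketch that assumes, purely for illustration, that all 260 cores per chip sustain the eight double-precision operations per cycle quoted above for the CPEs (the MPEs may differ) — the peak throughput and LINPACK efficiency can be estimated as follows:
cores_per_chip = 260
flops_per_core_per_cycle = 8      # stated above for the CPEs; assumed here for the MPEs as well
clock_hz = 1.45e9                 # 1.45 GHz
chips = 40960                     # SW26010 chips in Sunway TaihuLight
peak_per_chip = cores_per_chip * flops_per_core_per_cycle * clock_hz   # about 3.0 TFLOPS per chip
system_peak = peak_per_chip * chips                                    # about 124 PFLOPS estimated peak
linpack = 93.01e15
print(system_peak / 1e15)      # estimated peak in PFLOPS
print(linpack / system_peak)   # LINPACK efficiency of roughly 75% under these assumptions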
SW26010P includes 6 core groups (CGs), each of which includes one management processing element (MPE), and one 8×8 computing processing element (CPE) cluster. Each CG has its memory controller (MC), connecting to 16 GB of DDR4 memory with a bandwidth of 51.2 GB/s. The data exchange between every two CPEs in the same CPE cluster is achieved through the Remote Memory Access (RMA) interface (a replacement of the register communication feature in the previous generation). Each CPE has a fast local data memory (LDM) of 256 KB. Each SW26010P processor consists of 390 processing elements. [ 7 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Sunway_SW26010 |
Sunwise , sunward or deasil (sometimes spelled deosil ), are terms meaning to go clockwise or in the direction of the sun, as seen from the Northern Hemisphere . The opposite term is widdershins ( Middle Low German ), or tuathal ( Scottish Gaelic ). [ 1 ] In Scottish culture , this turning direction is also considered auspicious, while the converse is true for counter-clockwise motion.
During the days of Gaelic Ireland and of the Irish clans , the Psalter known as An Cathach was used as both a rallying cry and protector in battle by the Chiefs of Clan O'Donnell . Before a battle it was customary for a chosen monk or holy man (usually attached to the Clan McGroarty and who was in a state of grace ) to wear the Cathach and the cumdach , or book shrine, around his neck and then walk three times sunwise around the warriors of Clan O'Donnell. [ 2 ]
According to folklorist Kevin Danaher , on St. John's Eve in Ulster and Connaught , it was customary to light a bonfire at sunset and to walk sunwise around the fire while praying the rosary . Those who could not afford a rosary would keep tally by holding a small pebble during each prayer and throwing it into the bonfire as each prayer was completed. [ 3 ]
Similar praying of the rosary or other similar prayers while walking sunwise around Christian pilgrimage shrines or holy wells is also traditional in Irish culture during pattern days . [ 4 ]
This is descriptive of the ceremony observed by the druids , of walking round their temples by the south, in the course of their directions, always keeping their temples on their right. This course ( diasil or deiseal ) was deemed propitious, while the contrary course is perceived as fatal, or at least unpropitious. From this ancient superstition are derived several Gaelic customs which were still observed around the turn of the twentieth century, such as drinking over the left thumb, as Toland expresses it, or according to the course of the sun.
Similarly to the pre-battle use of the Cathach of St. Columba in Gaelic Ireland , the Brecbannoch of St Columba , a reliquary containing the partial human remains of the Saint, was traditionally carried three times sunwise around Scottish armies before they gave battle. The most famous example of this was during the Scottish Wars of Independence , shortly before the Scots under Robert the Bruce faced the English army at the Battle of Bannockburn in 1314. [ 5 ]
Martin Martin says:
Some of the poorer sort of people in the Western Isles retain the custom of performing these circles sunwise about the persons of their benefactors three times, when they bless them, and wish good success to all their enterprises. Some are very careful when they set out to sea, that the boat be first rowed sunwise, and if this be neglected, they are afraid their voyage may prove unfortunate. I had this ceremony paid me when in Islay by a poor woman, after I had given her an alms . I desired her to let alone that compliment, for that I did not care for it; but she insisted to make these three ordinary turns, and then prayed that God and MacCharmaig, the patron saint of the island, might bless and prosper me in all my affairs. When a Gael goes to drink out of a consecrated fountain, he approaches it by going round the place from east to west, and at funerals, the procession observes the same direction in drawing near the grave. Hence also is derived the old custom of describing sunwise a circle, with a burning brand, about houses, cattle, corn and corn-fields, to prevent their being burnt or in any way injured by evil spirits, or by witchcraft. The fiery circle was also made around women, as soon as possible after parturition, and also around newly-born babes. These circles were, in later times, described by midwives, and were described effectual against the intrusion of ‘daoine-sìth’ or ‘sìthichean’, who were particularly on the alert in times of childhood, and not infrequently carried infants away, according to vulgar legends, and restored them afterwards, but sadly altered in features and personal appearance. Infants stolen by fairies are said to have voracious appetites, constantly craving for food. In this case it was usual for those who believed their children had been taken away, to dig a grave in the fields on quarter-day and there to lay the fairy skeleton till next morning, at which time the parents went to the place, where they doubted not to find their own child in place of the skeleton. [ This quote needs a citation ]
The use of the sunwise circle was also traditional in the Highlands during Christian pilgrimages in honour of St Máel Ruba , particularly to the shrine where he is said to have established a hermitage upon Isle Maree . [ 6 ]
Wicca uses the spelling deosil , which violates the Gaelic orthography principle that a consonant must be surrounded by either broad vowels (a, o, u) or slender vowels (e, i). The Oxford English Dictionary gives precedence to the spelling "deasil", which violates the same principle, but acknowledges "deiseal", "deisal", and "deisul" as well.
This distinction exists in traditional Tibetan religion. Tibetan Buddhists go round their shrines sunwise, but followers of Bonpo go widdershins. The former consider Bonpo to be merely a perversion of their practice, but Bonpo adherents claim that their religion, as the indigenous one of Tibet, was doing this prior to the arrival of Buddhism in the country.
The Hindu pradakshina , the auspicious circumambulation of a temple, is also made clockwise. | https://en.wikipedia.org/wiki/Sunwise |
Super-LumiNova is a brand name under which strontium aluminate –based non- radioactive and nontoxic photoluminescent or afterglow pigments for illuminating markings on watch dials , hands and bezels , etc. in the dark are marketed. When activated with a suitable dopant ( Europium and Dysprosium ), it acts as a photoluminescent phosphor with long persistence of phosphorescence . This technology offers up to ten times higher brightness than previous zinc sulfide –based materials.
These types of phosphorescent pigments, often called lume , operate like a rechargeable light battery. After sufficient activation by sunlight, fluorescent, LED, UV (blacklight), incandescent and other light sources, they glow in the dark for hours. Electrons within the pigment are "excited" by ultraviolet light exposure—the excitation wavelengths for strontium aluminate range from 200 to 450 nm electromagnetic radiation —to a higher energetic state and, after the excitation source is removed, fall back to their normal energetic state by releasing the stored energy as visible light over a period of time. Although they fade over time, sufficiently thickly applied larger markings remain visible to dark-adapted human eyes for the whole night. This ultraviolet-light-induced activation and subsequent light emission process can be repeated again and again.
Nemoto & Co., Ltd. – a global manufacturer of phosphorescent pigments and other specialized phosphors – was founded by Kenzo Nemoto in December 1941 as a luminous paint processing company and has developed and supplied luminous paint to the watch, clock, and aviation-instrument industries ever since.
Super-LumiNova is based on LumiNova branded pigments, invented in 1993 by the Nemoto staff members Yoshihiko Murayama, Nobuyoshi Takeuchi, Yasumitsu Aoki and Takashi Matsuzawa as a safe replacement for radium -based luminous paints . [ 1 ] The invention was patented in 1994 by Nemoto & Co., Ltd. and licensed to other manufacturers and watch brands. [ 2 ]
In 1998 Nemoto & Co. established a joint venture with RC Tritec AG, called LumiNova AG, Switzerland, to manufacture 100 percent Swiss made afterglow pigments branded as Super-LumiNova. After that, the production of radioactive luminous compounds by RC Tritec AG was completely stopped. According to RC Tritec AG, the Swiss watch brands all use their Super-LumiNova pigments.
Over time, RC Tritec AG developed other afterglow color variations than the original Nemoto & Co. C3 green and higher grades of afterglow pigments.
Any Super-LumiNova emission color other than C3 is achieved by adding colorants that absorb light and hence limit the amount of light the afterglow pigment can absorb and emit. After the C3 variant (emission at 515 nm), which glows green and appears pale yellow-green in daylight, the BGW9 variant (emission at 485 nm, close to the turquoise wavelength), which glows blue-green and appears white in daylight, is the second most effective variant regarding pure afterglow brightness. Different colors can however be chosen to optimize (perceived) light emission, as dictated by the variation of the human eye luminous efficiency function . Maximal light emission around wavelengths of 555 nm ( green ) is important for obtaining optimal photopic vision using the eye cone cells for observation in – or just coming from – well-lit conditions. Maximal light emission around wavelengths of 498 nm ( cyan ) is important for obtaining optimal scotopic vision using the eye rod cells for observation in low-light conditions. Besides technical and human-eye-dictated reasons, esthetic or other reasons can also influence Super-LumiNova color choices. [ 3 ]
Super-LumiNova is offered in three grade levels: Standard, A and X1. The initial brightness of these grades does not significantly vary, but the light intensity decay over time of the A and X1 grades is significantly reduced. This means the X1 grade takes the longest to become too dim to be useful for the human eye. Not all Super-LumiNova color variations are available in all three grades. Super-LumiNova technology has since also introduced a Grade X2, further enhancing watch readability in low-light conditions. [ 4 ]
Due to the fact that no chemical change occurs after a charge-discharge cycle, the pigments theoretically retain their afterglow properties indefinitely. A reduction in light intensity only occurs very slowly, almost imperceptibly. This reduction increases with the degree of coloring of the pigments. Intensely colored types lose their intensity more quickly than neutral ones. High temperatures of up to several hundred degrees Celsius are not a problem. The only thing that needs to be avoided is prolonged contact with water or high humidity, as this creates a hydroxide layer that negatively affects the light emission intensity. [ 5 ] [ 6 ] [ 7 ] [ 8 ]
Besides being used in timepieces by industry and hobbyists, [ 9 ] Super-LumiNova is also marketed for application on:
Super-LumiNova granulated pigments are applied either by manual application, screen printing or pad printing . RC Tritec AG recommends up to 0.30 mm (0.012 in) application thickness in one or multiple layer(s). Over that, the ultraviolet light starts getting problems to effectively reach and activate the bottom of the deposited pigment, diminishing the returns for additional application thickness. The pigments and binders are produced separately, as there is no optimal binder for differing applications. This forces RC Tritec AG to offer many solvent and non-solvent based binder systems to maximally concentrate the granulated pigments in the mixture for application on various surfaces.
Alternatively, RC Tritec AG offers Lumicast pieces, which are highly concentrated luminous Super-LumiNova 3D-castings. According to RC Tritec AG these ceramic parts can be made in any customer desired shape and result in a higher light emission brightness when compared to the common application methods. Lumicast pieces can be glued or form fitted on various surfaces.
By the late 1960s, radium was phased out and replaced with safer alternatives. [ 10 ] Tritium was used on the original Panerai Luminor dive watch (the successor to the radium-based Radiomir ) and on almost all Swiss watches from 1960 to 1998, when it was banned. [ 11 ] [ 12 ] Tritium-based substances ceased to be used by Omega SA in 1997. [ 13 ]
In the 21st century, one radioluminescent alternative to afterglow pigments, which requires radiation protection, is being produced and used for watches and other uses. These are tritium -based devices called "gaseous tritium light source" ( GTLS ). GTLS are made using sturdy (often glass) containers internally coated with a phosphor layer and filled with tritium gas before the containers are permanently sealed. They have the advantage of being self-powered and producing a consistent luminosity that does not gradually fade during the night. However, GTLS contain radioactive tritium gas that has a half-life of slightly over 12.3 years. [ 14 ] Additionally, phosphor degradation will cause the brightness of a tritium container to drop by even more during that period. The more tritium that is initially inserted in the container, the brighter it is to begin with, and the longer its useful life. This means the intensity of the tritium-powered light source will slowly fade, generally becoming too dim to be useful for dark adapted human eyes after 20 to 30 years.
Cell differentiation in multicellular organisms with different cell types is determined, in each cell type, by the expression of genes under the regulatory control of typical enhancers and super-enhancers.
A typical enhancer (TE), as illustrated in the top panel of the Figure, is a several hundred base pair region of DNA [ 1 ] [ 2 ] that can bind transcription factors to sequence motifs on the enhancer. The typical enhancer can come in proximity to its target gene through a large chromosome loop. A Mediator complex (consisting of about 26 proteins in an interacting structure) communicates regulatory signals from the enhancer-located DNA-bound transcription factors to the promoter of a gene, regulating RNA transcription of the target gene.
A super-enhancer , illustrated in the lower panel of the Figure, is a region of the mammalian genome comprising multiple typical enhancers that is collectively bound by an array of transcription factor proteins to drive transcription of genes involved in cell identity , [ 3 ] [ 4 ] [ 5 ] or of genes involved in cancer. [ 6 ] Because super-enhancers frequently occur near genes important for controlling and defining cell identity, they may be used to quickly identify key nodes regulating cell identity. [ 5 ] [ 7 ] Super-enhancers are also central to mediating dysregulation of signaling pathways and promoting cancer cell growth. [ 6 ] [ 8 ] Super-enhancers differ from typical enhancers, however, in that they are strongly dependent on additional specialized proteins that create and maintain their formation, including BRD4 (shown in the lower panel of Figure) and co-factors including p300 . [ 9 ]
Enhancers have several quantifiable traits that have a range of values, and these traits are generally elevated at super-enhancers. Super-enhancers are bound by higher levels of transcription-regulating proteins and are associated with genes that are more highly expressed. [ 3 ] [ 10 ] [ 11 ] [ 12 ] Expression of genes associated with super-enhancers is particularly sensitive to perturbations, which may facilitate cell state transitions or explain sensitivity of super-enhancer—associated genes to small molecules that target transcription. [ 3 ] [ 10 ] [ 11 ] [ 13 ] [ 14 ]
In many cell types, only a minority of activated enhancers are located in Super-Enhancers (SEs). For specialized tissue, such as skeletal muscle, a reduced number of genes are expressed and a low number of specialized and activated super-enhancers are found. In human skeletal muscle , there are nine identified types of cells. On average, the number of expressed genes in these nine cell types is 1,331. [ 15 ] There are also about 22 super-enhancers specific to skeletal muscle cells among the nine types of skeletal muscle cells, indicating that specialized super-enhancers in these cells are about 1.7% of the number of typical enhancers (TEs). [ 16 ] In immune-system B cells, a study identified 140 SEs and 4,290 TEs in non-stimulated B cells (SEs were 3.2% of activated transcription areas). In stimulated B cells SEs were 3.6% of activated transcription areas. [ 17 ] Similarly, in mouse embryonic stem cells, 231 SEs were found, compared to 8,794 TEs, with SEs comprising 2.6% of activated chromatin regions. [ 18 ] A study of neural stem cells found 445 SEs and 9436 TEs, so that SEs were 4.7% of active enhancer regions. [ 19 ]
Hundreds of thousands of sites in the human genome can potentially act as enhancers. In one large 2020 study, 78 different types of human cells were examined for links between activated enhancers and genes coding for messenger RNA to produce gene products. Distributed among the 78 types of cells there were a total of 449,627 activated enhancers linked to 17,643 protein-coding genes. [ 20 ] With this large number of potentially active enhancers, there are some genome regions with a cluster of enhancers that, when all are activated they can all loop to the same promoter and produce a super-enhancer, driving a gene to have very high messenger RNA output.
One well-studied gene, MYC, has amplified expression in as many as 70% of all cancers. [ 21 ] While about 28% of its over-expressions are due to genetic focal amplifications or translocations, [ 22 ] the majority of cases of over-expression of MYC are due to activated super-enhancers. [ 23 ] There are more than 10 different super-enhancers that can cause MYC over-expression. For each of 4 tumor types of cells grown in culture (HCT-116, MCF7, K562 and Jurkat) there were three to five super-enhancers specific to each tumor cell type.
In one 2013 study, [ 24 ] the length of typical enhancers was found to be about 700 base pairs while in the case of super-enhancers the length was about 9,000 base pairs (encompassing multiple single enhancers). A later study, in 2020, indicated that typical enhancers were about 200 nucleotides long and that there may be as many as 3.6 million potentially active enhancers occupying 21.55% of the human genome. [ 25 ]
In the nucleus of mammalian cells, almost all the DNA is wrapped around regularly spaced protein complexes, called nucleosomes (see top panel in Figure "Chromatin"). [ 26 ] The protein complexes are composed of 4 pairs of histones , H2A, H2B, H3 and H4. The DNA plus these protein complexes is called chromatin (see Figure illustrating chromatin). Enhancer regions, as described above, are several hundred nucleotides long. To be activated, the enhancer region must have the nucleosomes evicted from the DNA so that the multiple transcription factors that bind to that enhancer DNA would have access to their binding sites (see bottom panel in Figure "Chromatin"). (To be an active enhancer, more than 10 different binding sites must be occupied by different transcription factors in the enhancer. [ 25 ] )
To evict nucleosomes from enhancer DNA, a pioneer transcription factor first loosens the attachment of the DNA to the nucleosome in the enhancer region. One such pioneer transcription factor is NF-kB . [ 28 ] Five steps follow: (1) NF-kB is acetylated by p300/CBP . (2) Acetylated NF-kB recruits BRD4 , which has histone acetyltransferase activity. [ 29 ] (3) BRD4 acetylates histone 3 at lysine 122 (see Figure “Nucleosome at enhancer with H3K122 acetylated”). (4) When histone 3 lysine 122 is acetylated, the nucleosome is evicted from the enhancer sequence. [ 30 ] (5) The opened enhancer DNA allows binding of the other transcription factors needed to form an activated enhancer. Presumably, when the activating signal for NF-kB is very strong, much more NF-kB is activated, and this larger pool of NF-kB can start the process of activating multiple nearby enhancers at the same time, forming a super-enhancer.
As described above, in forming a super-enhancer, BRD4 is complexed with NF-kB. This complex also recruits and forms a further complex with cyclin T1 and Cdk9 . Cyclin T1/Cdk9 is also known as P-TEFb . P-TEFb acts as a kinase that phosphorylates RNA polymerase II (RNAP II), which then activates (in conjunction with the Mediator complex described below) the polymerase on the promoter of a gene to initiate transcription and to continue transcription (instead of pausing). [ 31 ]
The transcription factors, bound to their sites on each enhancer within the super-enhancer, recruit the Mediator complex between each enhancer and the RNA polymerase II that will initiate transcription of the gene to be actively transcribed (see Figure at top of article illustrating a super-enhancer). The Mediator complex in humans is 1.4 MDa in size and includes 26 sub-units. [ 32 ] The tail modules of the Mediator complex sub-units interact with the activation domains of transcription factors bound at enhancers, while the head and middle modules interact with the pre-initiation complex (PIC) at gene promoters. [ 33 ] When certain of its sub-units are phosphorylated and activated by particular cyclin-dependent kinases (Cdk8, Cdk9, Cdk19, etc.), the Mediator complex promotes higher levels of transcription.
The regulation of transcription by enhancers has been studied since the 1980s. [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] Large or multi-component transcription regulators with a range of mechanistic properties, including locus control regions , clustered open regulatory elements, and transcription initiation platforms, were observed shortly thereafter. [ 39 ] [ 40 ] [ 41 ] [ 42 ] More recent research has suggested that these different categories of regulatory elements may represent subtypes of super-enhancer. [ 5 ] [ 43 ]
In 2013, two labs identified large enhancers near several genes especially important for establishing cell identities. While Richard A. Young and colleagues identified super-enhancers, Francis Collins and colleagues identified stretch enhancers. [ 3 ] [ 4 ] Both super-enhancers and stretch enhancers are clusters of enhancers that control cell-specific genes and may be largely synonymous. [ 4 ] [ 44 ]
As currently defined, the term “super-enhancer” was introduced by Young’s lab to describe regions identified in mouse embryonic stem cells (ESCs). [ 3 ] These particularly large, potent enhancer regions were found to control the genes that establish the embryonic stem cell identity, including Oct-4 , Sox2 , Nanog , Klf4 , and Esrrb . Perturbation of the super-enhancers associated with these genes showed a range of effects on their target genes’ expression. [ 44 ] Super-enhancers have since been identified near cell-identity regulators in a range of mouse and human tissues. [ 4 ] [ 5 ] [ 45 ] [ 46 ] [ 47 ] [ 48 ] [ 49 ] [ 50 ] [ 51 ] [ 52 ] [ 53 ] [ 54 ] [ 55 ] [ 56 ] [ 57 ] [ 58 ] [ 59 ] [ 60 ] [ 61 ]
The enhancers comprising super-enhancers share the functions of enhancers, including binding transcription factor proteins, looping to target genes, and activating transcription. [ 3 ] [ 5 ] [ 43 ] [ 44 ] Three notable traits of enhancers comprising super-enhancers are their clustering in genomic proximity, their exceptional signal of transcription-regulating proteins, and their high frequency of physical interaction with each other. Perturbing the DNA of enhancers comprising super-enhancers showed a range of effects on the expression of cell identity genes, suggesting a complex relationship between the constituent enhancers. [ 44 ] Super-enhancers separated by tens of megabases cluster in three-dimensions inside the nucleus of mouse embryonic stem cells. [ 62 ] [ 63 ]
High levels of many transcription factors and co-factors are seen at super-enhancers (e.g., CDK7 , BRD4 , and Mediator ). [ 3 ] [ 5 ] [ 10 ] [ 11 ] [ 13 ] [ 14 ] [ 43 ] This high concentration of transcription-regulating proteins may explain why their target genes tend to be more highly expressed than other classes of genes. However, housekeeping genes tend to be more highly expressed than super-enhancer-associated genes. [ 3 ]
Super-enhancers may have evolved at key cell identity genes to render the transcription of these genes responsive to an array of external cues. [ 44 ] The enhancers comprising a super-enhancer can each be responsive to different signals, which allows the transcription of a single gene to be regulated by multiple signaling pathways. [ 44 ] Pathways seen to regulate their target genes through super-enhancers include Wnt , TGFb , LIF , BDNF , and NOTCH . [ 44 ] [ 64 ] [ 65 ] [ 66 ] [ 67 ] The constituent enhancers of super-enhancers physically interact with each other and with their target genes over long genomic distances. [ 12 ] [ 46 ] [ 68 ] Super-enhancers that control the expression of major cell surface receptors with a crucial role in the function of a given cell lineage have also been defined. This is notably the case for B-lymphocytes, whose survival, activation and differentiation rely on the expression of membrane-form immunoglobulins (Ig). The Ig heavy chain locus super-enhancer is a very large (25 kb) cis-regulatory region, comprising multiple enhancers and controlling several major modifications of the locus (notably somatic hypermutation , class-switch recombination and locus suicide recombination).
Mutations in super-enhancers have been noted in various diseases, including cancers, type 1 diabetes, Alzheimer’s disease, lupus, rheumatoid arthritis, multiple sclerosis, systemic scleroderma, primary biliary cirrhosis, Crohn’s disease, Graves disease, vitiligo, and atrial fibrillation. [ 4 ] [ 5 ] [ 11 ] [ 49 ] [ 56 ] [ 59 ] [ 69 ] [ 70 ] [ 71 ] [ 72 ] [ 73 ] A similar enrichment in disease-associated sequence variation has also been observed for stretch enhancers. [ 4 ]
Super-enhancers may play important roles in the misregulation of gene expression in cancer. During tumor development, tumor cells acquire super-enhancers at key oncogenes, which drive higher levels of transcription of these genes than in healthy cells. [ 5 ] [ 10 ] [ 68 ] [ 69 ] [ 74 ] [ 75 ] [ 76 ] [ 77 ] [ 78 ] [ 79 ] [ 80 ] [ 81 ] [ 82 ] [ 83 ] Altered super-enhancer function is also induced by mutations of chromatin regulators. [ 84 ] Acquired super-enhancers may thus be biomarkers that could be useful for diagnosis and therapeutic intervention. [ 44 ]
Proteins enriched at super-enhancers include the targets of small molecules directed against transcription-regulating proteins that have been deployed against cancers. [ 10 ] [ 11 ] [ 49 ] [ 85 ] For instance, super-enhancers rely on exceptional amounts of CDK7, and, in cancer, multiple papers report the loss of expression of their target genes when cells are treated with the CDK7 inhibitor THZ1. [ 10 ] [ 13 ] [ 14 ] [ 86 ] Similarly, super-enhancers are enriched in BRD4, the target of the small molecule JQ1, so treatment with JQ1 causes exceptional losses in expression for super-enhancer-associated genes. [ 11 ]
Super-enhancers have been most commonly identified by locating genomic regions that are highly enriched in ChIP-Seq signal. ChIP-Seq experiments targeting master transcription factors and co-factors such as Mediator or BRD4 have been used, but the most frequently used signal is that of H3K27ac -marked nucleosomes . [ 3 ] [ 5 ] [ 11 ] [ 87 ] [ 88 ] [ 89 ] The program “ROSE” (Rank Ordering of Super-Enhancers) is commonly used to identify super-enhancers from ChIP-Seq data. This program stitches together previously identified enhancer regions and ranks the stitched enhancers by their ChIP-Seq signal. [ 3 ] The stitching distance used to combine multiple individual enhancers into larger domains can vary. Because some markers of enhancer activity are also enriched at promoters , regions within gene promoters can be disregarded. ROSE separates super-enhancers from typical enhancers by their exceptional enrichment in a mark of enhancer activity, as sketched below. Homer is another tool that can identify super-enhancers. [ 90 ] | https://en.wikipedia.org/wiki/Super-enhancer |
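As a rough, hypothetical sketch of the ranking logic described above (in Python; this is not the published ROSE code, and the 12.5 kb stitching gap and the slope-equals-one cutoff are assumptions based on common descriptions of ROSE-style analysis):

import numpy as np

def stitch_enhancers(peaks, max_gap=12_500):
    """Merge enhancer intervals on the same chromosome that lie within
    max_gap bp of each other. peaks: list of (chrom, start, end, signal)."""
    peaks = sorted(peaks)
    stitched = []
    for chrom, start, end, signal in peaks:
        if stitched and stitched[-1][0] == chrom and start - stitched[-1][2] <= max_gap:
            prev = stitched[-1]
            stitched[-1] = (chrom, prev[1], max(prev[2], end), prev[3] + signal)
        else:
            stitched.append((chrom, start, end, signal))
    return stitched

def super_enhancer_cutoff(signals):
    """Rank stitched enhancers by ChIP-Seq signal and return the rank at which
    the slope of the scaled rank-vs-signal curve reaches 1; entries ranked
    above this point would be called super-enhancers."""
    s = np.sort(np.asarray(signals, dtype=float))
    x = np.linspace(0.0, 1.0, len(s))
    y = (s - s.min()) / (s.max() - s.min())
    slope = np.gradient(y, x)
    return int(np.argmax(slope > 1.0))

In this sketch, everything ranked above the returned index would be reported as a super-enhancer; a real analysis would also exclude promoter-proximal regions, as noted above.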
Super-resolution dipole orientation mapping (SDOM) is a form of fluorescence polarization microscopy (FPM) that achieves super resolution through polarization demodulation. It was first described by Karl Zhanghao and others in 2016. [ 1 ] Fluorescence polarization (FP) is related to the dipole orientation of chromophores , which enables fluorescence polarization microscopy to reveal structures and functions of tagged cellular organelles and biological macromolecules. In addition to fluorescence intensity, wavelength, and lifetime, polarization, the fourth dimension of fluorescence, can also provide intensity modulation without being restricted to specific fluorophores; its use in super-resolution microscopy is still in its infancy.
In 2013, Hafi et al. [ 2 ] developed a super-resolution technique based on sparse deconvolution of polarization-modulated fluorescence images (SPoD). Because the fluorescence dipole is an inherent feature of fluorescence, and its polarized emission intensity can easily be modulated with rotating linearly polarized excitation, polarization-based super-resolution holds promise for a wide range of biological applications owing to its compatibility with conventional fluorescent specimen labeling. The SPoD data, consisting of sequences of diffraction-limited images illuminated with varying linearly polarized light, were reconstructed with a deconvolution algorithm termed SPEED (sparsity penalty – enhanced estimation by demodulation). Although super resolution can be achieved, the dipole orientation information is lost during SPoD reconstruction.
In 2016, Keller et al. [ 3 ] argued that the resolution improvement observed with the SPoD method is a deconvolution effect; that is, the super-resolution in the images shown by Hafi et al. is achieved by the SPEED algorithm, not by the polarization modulation itself, so the polarization information does not contribute substantially to the final image. They concluded that polarization cannot add further super-resolution information.
At the same time, Waller et al. [ 4 ] replied to the debate, acknowledging the question raised by Keller et al. They performed new experiments to support the claim that SPoD provides additional information, showing that the raw modulation information in SPoD separates sub-diffraction details even without SPEED. However, whether this holds for heterogeneously and densely labeled samples remained unclear and required further study.
Afterwards, Karl Zhanghao et al. [ 1 ] proposed a new approach called SDOM that resolves the effective dipole orientation from a much smaller number of fluorescent molecules within a sub-diffraction focal area. They also applied this method to resolve structural details in both fixed and live cells. Their results showed that polarization does provide further structural information on top of the super-resolution image, thereby providing a timely answer to the key question raised by the debate mentioned above.
As a fundamental physical dimension of fluorescence, polarization has been applied extensively in biological research. With fluorescence polarization microscopy (FPM), the dipole orientation as well as the intensity of fluorescent probes can be measured. Compared with X-ray crystallography or electron microscopy, which can resolve individual proteins or macromolecular assemblies at ultra-high resolution, FPM does not require complex sample preparation, which makes it suitable for live-cell imaging. Near-field imaging techniques such as atomic force microscopy (AFM) can also provide structural information, but only for samples at the surface. FPM is capable of imaging orientations in dynamic samples on the time scale of seconds or milliseconds, so it can serve as a complementary method for measuring subcellular organelle structures.
FPM has evolved over the past decades, from manual or mechanical switching of polarization detection or excitation to simultaneous detection and fast polarization modulation via electro-optic devices. With faster imaging speed and higher imaging quality, FPM has been incorporated into various imaging modalities, such as wide-field, [ 5 ] [ 6 ] confocal microscopy , [ 7 ] [ 8 ] two-photon confocal , [ 9 ] total internal reflection fluorescence microscopy , [ 10 ] FRAP, etc. However, as an optical imaging technique, fluorescence polarization microscopy (FPM) is constrained by the diffraction limit . Compared with the abundant super-resolution techniques for fluorescence intensity imaging, super-resolution techniques in FPM are still in their infancy.
Three forms of FPM have recently emerged that have been shown to achieve super-resolution: SPoD, SDOM and polar-dSTORM (polarization-resolved direct stochastic optical reconstruction microscopy). [ 11 ]
Polar-dSTORM [ 11 ] uses on-off modulation of the fluorescent probes and acquires enough frames to reconstruct a super-resolution image. The imaging resolution of polar-dSTORM is high, with localization precision of tens of nanometers. The average orientation of single dipoles is measured directly for each emitter, and the wobbling angle is estimated statistically from neighboring emitters. The drawback of polar-dSTORM is a long imaging time of 2–40 min, which requires the sample to remain stationary during acquisition. The sample preparation required for dSTORM also makes it difficult to apply to live-cell samples.
SDOM [ 1 ] has achieved super-resolution dipole orientation mapping with a spatial resolution of 150 nm and sub-second temporal resolution. It has been applied to both fixed-cell and live-cell imaging, showing clear advantages over diffraction-limited FPM techniques both in revealing sub-diffraction structures and in measuring local dipole orientations. In comparison with polar-dSTORM, SDOM still measures average dipoles and cannot separate the wobbling of single fluorophores from the variation of the orientation distribution of fluorophores within the resolvable area. As with SPoD, [ 2 ] the power of SDOM is weakened if the fluorescent probes are distributed too homogeneously or too densely.
Thanks to the intrinsic polarization of chromophores , fluorescence polarization reveals the structures and functions of biological macromolecules. Incorporated into various optical imaging modalities, FPM has played an important role in answering many biological questions. Fast and non-invasive imaging makes it a complementary tool to X-ray crystallography, which typically applies to individual proteins or sub-complexes, to EM, which requires invasive sample preparation, and to AFM, which measures only the surface of the sample. Compared with these methods, the specific labeling of fluorescent probes provides a sharper focus on the structure of interest.
As FPM techniques have developed, their reach has extended from uniformly oriented fluorophores to fluorescent dipoles with organized orientations on complex biological structures. Detection accuracy has improved from bulk-volume polarization measurements to sub-diffraction-area and single-dipole measurements. The imaging resolution of FPM matters not only for the intensity image but also for the accuracy of dipole orientation detection. Recently developed super-resolution FPM techniques still have their limitations despite demonstrating great success in their imaging results. Spatial 3D super-resolution FPM techniques and 3D orientation measurement of fluorescent dipoles are still missing. In the future, more developments are anticipated that achieve both high-resolution measurement and fast temporal resolution, allowing imaging of live cells. This may be done by introducing existing super-resolution principles into FPM, by better exploiting the intensity fluctuation under polarization modulation, or by other means.
Unlike other super-resolution methods, such as STED , SIM , PALM and STORM , SDOM can achieve super-resolution with a wide-field epi- fluorescence illumination microscope . The key element of SDOM is polarized excitation. The SDOM imaging system is shown in figure A. The rotating linearly polarized excitation is realized by continuously rotating a half-wave plate in front of a laser. The illumination beam is then focused onto the back focal plane of the objective to generate uniform illumination with rotating polarization. The series of fluorescence images excited at different angles of polarized excitation is collected by an EMCCD camera.
All organic fluorescent dyes and fluorescent proteins are dipoles, whose orientations are closely related to the structure of their labeled target proteins. Because both the excitation absorption and fluorescence emission of dipoles have polarization features, FPM has been widely used to study dipole orientation. As illustrated in the inset schematic figure B, the fluorophores (such as GFP) are linked to the target protein via the C terminus (connected to GFP's N terminus ), and the dipole angle of the fluorophore will reflect the orientation of the target protein. Therefore, the SDOM can be used to study the structure of the protein.
Figure C illustrates the principle of the SDOM super-resolution technique. Two neighboring fluorophores with 100 nm distance and different dipole orientations (pseudocolor in red and green) emit periodic signals excited by rotating polarized light. By rotating the polarization of excitation, the emission ratio between the two molecules is modulated accordingly, resulting in their separation in the polarization domain. The sparsity deconvolution can achieve a super-resolution image of effective dipole intensities under polarization modulation; with least-squares fitting, the dipole orientation can be determined. Arrows indicate the directions of dipole orientations. The super-resolution was achieved in the polarization domain.
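A toy numerical sketch of the modulation-and-fitting idea in figure C (purely illustrative: it omits the sparsity deconvolution step, and all values and function names are invented rather than taken from the SDOM implementation):

import numpy as np
from scipy.optimize import curve_fit

angles = np.linspace(0, np.pi, 36, endpoint=False)   # polarizer angles of the excitation

def emission(alpha, amplitude, theta, offset):
    # Malus-type modulation: absorption of a dipole oriented at angle theta
    return amplitude * np.cos(alpha - theta) ** 2 + offset

# Simulated signal from one pixel containing two dipoles (the red and green molecules)
signal = emission(angles, 1.0, np.deg2rad(20), 0.1) + emission(angles, 0.6, np.deg2rad(110), 0.1)
signal += np.random.normal(0, 0.02, angles.size)      # stand-in for shot noise

# Least-squares fit of a single effective dipole orientation for this pixel
popt, _ = curve_fit(emission, angles, signal, p0=[1.0, 0.5, 0.1])
print("fitted effective dipole orientation (deg):", np.rad2deg(popt[1]) % 180)

In the actual technique the fit is applied after the deconvolution has separated the emitters, yielding an orientation arrow per resolved position, as described above.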
The SDOM result for two intersecting lines is shown in figure D, with arrows on top of the super-resolution image illustrating the dipole orientations. Figure E shows the corresponding data in ( X , Y , θ ) coordinates, in which the XY plane is the super-resolved intensity image. From both D and E it can be seen that, because SDOM introduces a new dimension, molecules that cannot be resolved in the super-resolution intensity image can be completely separated in the dipole orientation domain. | https://en.wikipedia.org/wiki/Super-resolution_dipole_orientation_mapping |
Super-resolution microscopy is a series of techniques in optical microscopy that allow images to be acquired with resolutions higher than the one imposed by the diffraction limit , [ 1 ] [ 2 ] which is due to the diffraction of light. [ 3 ] Super-resolution imaging techniques rely on the near-field (photon-tunneling microscopy [ 4 ] as well as those that use the Pendry Superlens and near field scanning optical microscopy ) or on the far-field . Among techniques that rely on the latter are those that improve the resolution only modestly (up to about a factor of two) beyond the diffraction limit, such as confocal microscopy with closed pinhole or aided by computational methods such as deconvolution [ 5 ] or detector-based pixel reassignment (e.g. re-scan microscopy, [ 6 ] pixel reassignment [ 7 ] ), the 4Pi microscope , and structured-illumination microscopy technologies such as SIM [ 8 ] [ 9 ] and SMI .
There are two major groups of methods for super-resolution microscopy in the far-field that can improve the resolution by a much larger factor: [ 10 ] deterministic super-resolution, which exploits a nonlinear response of fluorophores to excitation (such as STED, GSD, RESOLFT and SSIM, described below), and stochastic super-resolution, which exploits the temporal behavior of individual emitters so that they can be localized one sparse subset at a time (single-molecule localization methods such as PALM and STORM, also described below).
On 8 October 2014, the Nobel Prize in Chemistry was awarded to Eric Betzig , W.E. Moerner and Stefan Hell for "the development of super-resolved fluorescence microscopy ", which brings " optical microscopy into the nanodimension ". [ 11 ] [ 12 ] The different modalities of super-resolution microscopy are increasingly being adopted by the biomedical research community, and these techniques are becoming indispensable tools to understanding biological function at the molecular level. [ 13 ]
By 1978, the first theoretical ideas had been developed to break the Abbe limit , which called for using a 4Pi microscope as a confocal laser-scanning fluorescence microscope where the light is focused from all sides to a common focus that is used to scan the object by 'point-by-point' excitation combined with 'point-by-point' detection. [ 14 ] However the publication from 1978 [ 15 ] had drawn an improper physical conclusion (i.e. a point-like spot of light) and had completely missed the axial resolution increase as the actual benefit of adding the other side of the solid angle. [ 16 ]
Some of the following information was gathered (with permission) from a chemistry blog's review of sub-diffraction microscopy techniques. [ 17 ] [ 18 ]
In 1986, a super-resolution optical microscope based on stimulated emission was patented by Okhonin. [ 19 ]
Near-field optical random mapping (NORM) microscopy is a method of optical near-field acquisition by a far-field microscope through the observation of nanoparticles' Brownian motion in an immersion liquid. [ 21 ] [ 22 ]
NORM uses object surface scanning by stochastically moving nanoparticles. Through the microscope, nanoparticles look like symmetric round spots. The spot width is equivalent to the point spread function (~ 250 nm) and is defined by the microscope resolution. Lateral coordinates of the given particle can be evaluated with a precision much higher than the resolution of the microscope. By collecting the information from many frames one can map out the near field intensity distribution across the whole field of view of the microscope. In comparison with NSOM and ANSOM this method does not require any special equipment for tip positioning and has a large field of view and a depth of focus. Due to the large number of scanning "sensors" one can achieve image acquisition in a shorter time.
A 4Pi microscope is a laser-scanning fluorescence microscope with an improved axial resolution . The typical value of 500–700 nm can be improved to 100–150 nm, which corresponds to an almost spherical focal spot with 5–7 times less volume than that of standard confocal microscopy .
The improvement in resolution is achieved by using two opposing objective lenses, both of which are focused to the same geometric location. Also, the difference in optical path length through each of the two objective lenses is carefully minimized. By this, molecules residing in the common focal area of both objectives can be illuminated coherently from both sides, and the reflected or emitted light can be collected coherently, i.e. coherent superposition of emitted light on the detector is possible. The solid angle Ω {\displaystyle \Omega } that is used for illumination and detection is increased and approaches the ideal case, where the sample is illuminated and detected from all sides simultaneously. [ 23 ] [ 24 ]
Up to now, the best quality in a 4Pi microscope has been reached in conjunction with STED microscopy in fixed cells [ 25 ] and RESOLFT microscopy with switchable proteins in living cells. [ 26 ]
Structured illumination microscopy (SIM) enhances spatial resolution by collecting information from frequency space outside the observable region. This process is done in reciprocal space: the Fourier transform (FT) of an SI image contains superimposed additional information from different areas of reciprocal space; with several frames in which the illumination is shifted by some phase , it is possible to computationally separate and reconstruct the FT image, which contains much more resolution information. The inverse FT then converts the reconstructed data back into a super-resolution image.
SIM could potentially replace electron microscopy as a tool for some medical diagnoses. These include diagnosis of kidney disorders, [ 27 ] kidney cancer, [ 28 ] and blood diseases. [ 29 ]
Although the term "structured illumination microscopy" was coined by others in later years, Guerra (1995) first published results [ 30 ] in which light patterned by a 50 nm pitch grating illuminated a second grating of pitch 50 nm, with the gratings rotated with respect to each other by the angular amount needed to achieve magnification. Although the illuminating wavelength was 650 nm, the 50 nm grating was easily resolved. This showed a nearly 5-fold improvement over the Abbe resolution limit of 232 nm that should have been the smallest obtained for the numerical aperture and wavelength used. In further development of this work, Guerra showed that super-resolved lateral topography is attained by phase-shifting the evanescent field. Several U.S. patents [ 31 ] were issued to Guerra individually, or with colleagues, and assigned to the Polaroid Corporation . Licenses to this technology were procured by Dyer Energy Systems, Calimetrics Inc., and Nanoptek Corp. for use of this super-resolution technique in optical data storage and microscopy.
One implementation of structured illumination is known as spatially modulated illumination (SMI). Like standard structured illumination, the SMI technique modifies the point spread function (PSF) of a microscope in a suitable manner. In this case however, "the optical resolution itself is not enhanced"; [ 32 ] instead structured illumination is used to maximize the precision of distance measurements of fluorescent objects, to "enable size measurements at molecular dimensions of a few tens of nanometers". [ 32 ]
The Vertico SMI microscope achieves structured illumination by using one or two opposing interfering laser beams along the axis. The object being imaged is then moved in high-precision steps through the wave field, or the wave field itself is moved relative to the object by phase shifts. This results in an improved axial size and distance resolution. [ 32 ] [ 33 ] [ 34 ]
SMI can be combined with other super resolution technologies, for instance with 3D LIMON or LSI- TIRF as a total internal reflection interferometer with laterally structured illumination (this last instrument and technique is essentially a phase-shifted photon tunneling microscope, which employs a total internal reflection light microscope with a phase-shifted evanescent field (Guerra, 1996)). [ 31 ] This SMI technique allows one to acquire light-optical images of autofluorophore distributions in sections from human eye tissue with previously unmatched optical resolution. Use of three different excitation wavelengths (488, 568, and 647 nm) enables one to gather spectral information about the autofluorescence signal. This has been used to examine human eye tissue affected by macular degeneration . [ 35 ]
Biosensing is crucial for understanding the activities of cellular components in cell biology. Genetically encoded sensors have transformed this field and typically consist of two parts: the sensing domain, which detects cellular activity or interactions, and the reporting domain, which produces measurable signals. There are two main types of sensors: FRET-based sensors using two fluorophores for precise quantification but with some limitations, and single-fluorophore biosensors that are smaller, faster, and allow for multiplexed experiments, but may have challenges in obtaining absolute values and detecting response saturation. Various microscopy methods, including super-resolution optical fluctuation imaging, have been used to quantify and monitor biological activities in real time. Examples include calcium, pH, and voltage sensing. Greenwald et al. offer a more comprehensive overview of these applications. [ 36 ]
REversible Saturable OpticaL Fluorescence Transitions (RESOLFT) microscopy is an optical microscopy with very high resolution that can image details in samples that cannot be imaged with conventional or confocal microscopy . Within RESOLFT the principles of STED microscopy [ 37 ] [ 38 ] and GSD microscopy are generalized. Also, there are techniques with other concepts than RESOLFT or SSIM. For example, fluorescence microscopy using the optical AND gate property of nitrogen-vacancy center , [ 39 ] or super-resolution by Stimulated Emission of Thermal Radiation (SETR), which uses the intrinsic super-linearities of the Black-Body radiation and expands the concept of super-resolution beyond microscopy. [ 40 ]
Stimulated emission depletion microscopy (STED) uses two laser pulses, the excitation pulse for excitation of the fluorophores to their fluorescent state and the STED pulse for the de-excitation of fluorophores by means of stimulated emission . [ 19 ] [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] In practice, the excitation laser pulse is first applied whereupon a STED pulse soon follows (STED without pulses using continuous wave lasers is also used). Furthermore, the STED pulse is modified in such a way so that it features a zero-intensity spot that coincides with the excitation focal spot. Due to the non-linear dependence of the stimulated emission rate on the intensity of the STED beam, all the fluorophores around the focal excitation spot will be in their off state (the ground state of the fluorophores). By scanning this focal spot, one retrieves the image. The full width at half maximum (FWHM) of the point spread function (PSF) of the excitation focal spot can theoretically be compressed to an arbitrary width by raising the intensity of the STED pulse, according to equation ( 1 ).
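Equation ( 1 ) is not reproduced in this excerpt; the commonly quoted form of the STED resolution scaling, presumably the relation referred to, is

\Delta r \approx \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I_{\mathrm{max}}/I_{s}}},

where λ is the wavelength, NA the numerical aperture of the objective, I_max the peak intensity of the STED beam and I_s the saturation intensity of the fluorophore; increasing I_max therefore shrinks the effective focal spot, in principle without bound.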
The main disadvantage of STED, which has prevented its widespread use, is that the machinery is complicated. On the one hand, the image acquisition speed is relatively slow for large fields of view because of the need to scan the sample in order to retrieve an image. On the other hand, it can be very fast for smaller fields of view: recordings of up to 80 frames per second have been shown. [ 46 ] [ 47 ] Due to a large I s value associated with STED, there is the need for a high-intensity excitation pulse, which may cause damage to the sample.
Ground state depletion microscopy (GSD microscopy) uses the triplet state of a fluorophore as the off-state and the singlet state as the on-state, whereby an excitation laser is used to drive the fluorophores at the periphery of the singlet state molecule to the triplet state. This is much like STED, where the off-state is the ground state of fluorophores, which is why equation ( 1 ) also applies in this case. The I s {\displaystyle I_{s}} value is smaller than in STED, making super-resolution imaging possible at a much smaller laser intensity. Compared to STED, though, the fluorophores used in GSD are generally less photostable; and the saturation of the triplet state may be harder to realize. [ 48 ]
Saturated structured-illumination microscopy (SSIM) exploits the nonlinear dependence of the emission rate of fluorophores on the intensity of the excitation laser. [ 49 ] By applying a sinusoidal illumination pattern [ 50 ] with a peak intensity close to that needed in order to saturate the fluorophores in their fluorescent state, one retrieves Moiré fringes. The fringes contain high order spatial information that may be extracted by computational techniques. Once the information is extracted, a super-resolution image is retrieved.
SSIM requires shifting the illumination pattern multiple times, effectively limiting the temporal resolution of the technique. In addition there is the need for very photostable fluorophores, due to the saturating conditions, which inflict radiation damage on the sample and restrict the possible applications for which SSIM may be used.
Examples of this microscopy are shown under section Structured illumination microscopy (SIM) : images of cell nuclei and mitotic stages recorded with 3D-SIM Microscopy.
Single-molecule localization microscopy (SMLM) summarizes all microscopical techniques that achieve super-resolution by isolating emitters and fitting their images with the point spread function (PSF). Normally, the width of the point spread function (~ 250 nm) limits resolution. However, given an isolated emitter, one is able to determine its location with a precision only limited by its intensity according to equation ( 2 ). [ 51 ]
This fitting process can only be performed reliably for isolated emitters (see Deconvolution ), and interesting biological samples are so densely labeled with emitters that fitting is impossible when all emitters are active at the same time. SMLM techniques solve this dilemma by activating only a sparse subset of emitters at the same time, localizing these few emitters very precisely, deactivating them and activating another subset.
Considering background and camera pixelation, and using a Gaussian approximation for the point spread function ( Airy disk ) of a typical microscope, the theoretical localization precision was derived by Thompson et al. [ 52 ] and fine-tuned by Mortensen et al., [ 53 ] as reconstructed below.
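The expression itself (presumably what the text refers to as equation ( 2 )) is not reproduced in this copy; the commonly quoted Thompson form of the lateral localization precision is

\langle (\Delta x)^{2}\rangle \approx \frac{s^{2} + a^{2}/12}{N} + \frac{8\pi s^{4} b^{2}}{a^{2} N^{2}},

where s is the standard deviation of the Gaussian-approximated PSF, a the pixel size, N the number of detected photons and b the background noise per pixel; the Mortensen et al. refinement adjusts the prefactors using a maximum-likelihood treatment.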
Generally, localization microscopy is performed with fluorophores. Suitable fluorophores (e.g. for STORM) reside in a non-fluorescent dark state for most of the time and are activated stochastically, typically with an excitation laser of low intensity. A readout laser stimulates fluorescence and bleaches or photoswitches the fluorophores back to a dark state, typically within 10–100 ms. In Points Accumulation for Imaging in Nanoscale Topography (PAINT), the fluorophores are nonfluorescent before binding and afterwards become fluorescent. The photons emitted during the fluorescent phase are collected with a camera and the resulting image of the fluorophore (which is distorted by the PSF) can be fitted with very high precision, even on the order of a few Angstroms. [ 54 ] Repeating the process several thousand times ensures that all fluorophores can go through the bright state and are recorded. A computer then reconstructs a super-resolved image.
The desirable traits of fluorophores used for these methods, in order to maximize the resolution, are that they should be bright. That is, they should have a high extinction coefficient and a high quantum yield . They should also possess a high contrast ratio (ratio between the number of photons emitted in the light state and the number of photons emitted in the dark state). Also, a densely labeled sample is desirable, according to the Nyquist criteria .
The multitude of localization microscopy methods differ mostly in the type of fluorophores used.
A single, tiny source of light can be located much better than the resolution of a microscope usually allows for: although the light will produce a blurry spot, computer algorithms can be used to accurately calculate the center of the blurry spot, taking into account the point spread function of the microscope, the noise properties of the detector, etc. However, this approach does not work when there are too many sources close to each other: the sources then all blur together.
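A toy illustration of this idea (an intensity-weighted centroid rather than a full PSF and noise model; the numbers are invented for the example):

import numpy as np

def localize_centroid(spot, background=0.0):
    """Estimate the sub-pixel centre of an isolated blurry spot by an
    intensity-weighted centroid after background subtraction."""
    img = np.clip(spot.astype(float) - background, 0.0, None)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Example: a noiseless Gaussian spot centred at (10.3, 7.8) on a 21 x 21 pixel grid
ys, xs = np.indices((21, 21))
spot = np.exp(-((xs - 10.3) ** 2 + (ys - 7.8) ** 2) / (2 * 2.5 ** 2))
print(localize_centroid(spot))   # close to (10.3, 7.8), well below one pixel of error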
Spectral precision distance microscopy (SPDM) is a family of localizing techniques in fluorescence microscopy which gets around the problem of there being many sources by measuring just a few sources at a time, so that each source is "optically isolated" from the others (i.e., separated by more than the microscope's resolution, typically ~200-250 nm). [ 55 ] [ 56 ] [ 57 ] This "optical isolation" requires that the particles under examination have different spectral signatures, so that it is possible to look at light from just a few molecules at a time by using the appropriate light sources and filters. This achieves an effective optical resolution several times better than the conventional optical resolution that is represented by the half-width of the main maximum of the effective point image function. [ 55 ]
The structural resolution achievable using SPDM can be expressed in terms of the smallest measurable distance between two punctiform particles of different spectral characteristics ("topological resolution"). Modeling has shown that under suitable conditions regarding the precision of localization, particle density, etc., the "topological resolution" corresponds to a " space frequency " that, in terms of the classical definition, is equivalent to a much improved optical resolution. Molecules can also be distinguished in even more subtle ways based on fluorescent lifetime and other techniques. [ 55 ]
An important application is in genome research (study of the functional organization of the genome ). Another important area of use is research into the structure of membranes.
Localization microscopy for many standard fluorescent dyes like GFP , Alexa dyes , and fluorescein molecules is possible if certain photo-physical conditions are present. With this so-called physically modifiable fluorophores (SPDMphymod) technology, a single laser wavelength of suitable intensity is sufficient for nanoimaging [ 58 ] in contrast to other localization microscopy technologies that need two laser wavelengths when special photo-switchable/photo-activatable fluorescence molecules are used. A further example of the use of SPDMphymod is an analysis of Tobacco mosaic virus (TMV) particles [ 59 ] or the study of virus–cell interaction . [ 60 ] [ 61 ]
SPDMphymod is based on singlet–triplet state transitions: it is crucial that this process is ongoing, so that a single molecule first enters a very long-lived reversible dark state (with a half-life of up to several seconds), from which it returns to a fluorescent state, emitting many photons for several milliseconds, before it switches into a very long-lived, so-called irreversible dark state. SPDMphymod microscopy uses fluorescent molecules that emit at the same spectral light frequency but with different spectral signatures based on their flashing characteristics. By combining two thousand images of the same cell, it is possible, using laser-optical precision measurements, to record localization images with significantly improved optical resolution. [ 62 ]
Standard fluorescent dyes already successfully used with the SPDMphymod technology are GFP , RFP , YFP , Alexa 488 , Alexa 568, Alexa 647, Cy2 , Cy3, Atto 488 and fluorescein .
Cryogenic Optical Localization in 3D (COLD) is a method that allows localizing multiple fluorescent sites within a single small- to medium-sized biomolecule with Angstrom-scale resolution. [ 54 ] The localization precision in this approach is enhanced because the slower photochemistry at low temperatures leads to a higher number of photons that can be emitted from each fluorophore before photobleaching. [ 63 ] [ 64 ] Consequently, cryogenic stochastic localization microscopy achieves the sub-molecular resolution required to resolve the 3D positions of several fluorophores attached to a small protein. By employing algorithms known from electron microscopy, the 2D projections of fluorophores are reconstructed into a 3D configuration. COLD brings fluorescence microscopy to its fundamental limit, depending on the size of the label. The method can also be combined with other structural biology techniques—such as X-ray crystallography, magnetic resonance spectroscopy, and electron microscopy—to provide valuable complementary information and specificity.
Binding-activated localization microscopy (BALM) is a general concept for single-molecule localization microscopy (SMLM): super-resolved imaging of DNA-binding dyes based on modifying the properties of the DNA and the dye. [ 65 ] By careful adjustment of the chemical environment, leading to local, reversible DNA melting and control of the fluorescence signal through hybridization, DNA-binding dye molecules can be introduced. Intercalating and minor-groove-binding DNA dyes can be used to register and optically isolate only a few DNA-binding dye signals at a time. DNA structure fluctuation-assisted BALM (fBALM) has been used to image nanoscale differences in nuclear architecture, with an anticipated structural resolution of approximately 50 nm. [ 66 ] Recently, the significant enhancement of the fluorescence quantum yield of NIAD-4 upon binding to an amyloid was exploited for BALM imaging of amyloid fibrils [ 67 ] and oligomers. [ 68 ]
Stochastic optical reconstruction microscopy (STORM), photo activated localization microscopy (PALM), and fluorescence photo-activation localization microscopy (FPALM) are super-resolution imaging techniques that use sequential activation and time-resolved localization of photoswitchable fluorophores to create high resolution images. During imaging, only an optically resolvable subset of fluorophores is activated to a fluorescent state at any given moment, such that the position of each fluorophore can be determined with high precision by finding the centroid positions of the single-molecule images of a particular fluorophore. One subset of fluorophores is subsequently deactivated, and another subset is activated and imaged. Iteration of this process allows numerous fluorophores to be localized and a super-resolution image to be constructed from the image data.
These three methods were published independently over a short period of time, and their principles are identical. STORM was originally described using Cy5 and Cy3 dyes attached to nucleic acids or proteins, [ 69 ] while PALM and FPALM were described using photoswitchable fluorescent proteins. [ 70 ] [ 71 ] In principle any photoswitchable fluorophore can be used, and STORM has been demonstrated with a variety of different probes and labeling strategies. Using stochastic photoswitching of single fluorophores, such as Cy5, [ 72 ] STORM can be performed with a single red laser excitation source. The red laser both switches the Cy5 fluorophore to a dark state by formation of an adduct [ 73 ] [ 74 ] and subsequently returns the molecule to the fluorescent state. Many other dyes have also been used with STORM. [ 75 ] [ 76 ] [ 77 ] [ 78 ] [ 79 ] [ 80 ]
In addition to single fluorophores, dye-pairs consisting of an activator fluorophore (such as Alexa 405, Cy2, or Cy3) and a photoswitchable reporter dye (such as Cy5, Alexa 647, Cy5.5, or Cy7) can be used with STORM. [ 69 ] [ 81 ] [ 82 ] In this scheme, the activator fluorophore, when excited near its absorption maximum, serves to reactivate the photoswitchable dye to the fluorescent state. Multicolor imaging has been performed by using different activation wavelengths to distinguish dye-pairs, depending on the activator fluorophore used, [ 81 ] [ 82 ] [ 83 ] or using spectrally distinct photoswitchable fluorophores, either with or without activator fluorophores. [ 75 ] [ 84 ] [ 85 ] Photoswitchable fluorescent proteins can be used as well. [ 70 ] [ 71 ] [ 85 ] [ 86 ] Highly specific labeling of biological structures with photoswitchable probes has been achieved with antibody staining, [ 81 ] [ 82 ] [ 83 ] [ 87 ] direct conjugation of proteins, [ 88 ] and genetic encoding. [ 70 ] [ 71 ] [ 85 ] [ 86 ]
STORM has also been extended to three-dimensional imaging using optical astigmatism, in which the elliptical shape of the point spread function encodes the x, y, and z positions for samples up to several micrometers thick, [ 82 ] [ 87 ] and has been demonstrated in living cells. [ 85 ] To date, the spatial resolution achieved by this technique is ~20 nm in the lateral dimensions and ~50 nm in the axial dimension, and the temporal resolution is as fast as 0.1–0.33 s. [ citation needed ]
Points accumulation for imaging in nanoscale topography (PAINT) is a single-molecule localization method that achieves stochastic single-molecule fluorescence by molecular adsorption/absorption and photobleaching/desorption. [ 89 ] [ 90 ] The first dye used was Nile red which is nonfluorescent in aqueous solution but fluorescent when inserted into a hydrophobic environment, such as micelles or living cell walls. Thus, the concentration of the dye is kept small, at the nanomolar level, so that the molecule's sorption rate to the diffraction-limited area is in the millisecond region. The stochastic binding of single-dye molecules (probes) to an immobilized target can be spatially and temporally resolved under a typical widefield fluorescence microscope. Each dye is photobleached to return the field to a dark state, so the next dye can bind and be observed. The advantage of this method, compared to other stochastic methods, is that in addition to obtaining the super-resolved image of the fixed target, it can measure the dynamic binding kinetics of the diffusing probe molecules, in solution, to the target. [ 91 ] [ 90 ]
Combining a 3D super-resolution technique (e.g. the double-helix point spread function developed in Moerner's group), photo-activated dyes, power-dependent active intermittency, and points accumulation for imaging in nanoscale topography, SPRAIPAINT (SPRAI = Super resolution by PoweR-dependent Active Intermittency [ 92 ] ) can super-resolve live-cell walls. [ 93 ] PAINT works by maintaining a balance between the dye adsorption/absorption and photobleaching/desorption rates. This balance can be estimated with statistical principles. [ 94 ] The adsorption or absorption rate of a dilute solute to a surface or interface in a gas or liquid solution can be calculated using Fick's laws of diffusion . The photobleaching/desorption rate can be measured for a given solution condition and illumination power density.
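A back-of-the-envelope illustration of such a rate estimate (the absorbing-disk approximation and all parameter values here are assumptions for illustration, not numbers from the cited work):

AVOGADRO = 6.022e23

D = 1e-10        # diffusion coefficient in m^2/s (roughly 100 um^2/s for a small dye)
C_molar = 1e-9   # dye concentration, 1 nM
a = 125e-9       # radius of the diffraction-limited spot in m

C = C_molar * 1e3 * AVOGADRO   # molecules per m^3 (1 mol/L = 1000 mol/m^3)
rate = 4 * D * C * a           # diffusion-limited arrivals per second at an absorbing disk
print(f"~{rate:.0f} arrivals per second, i.e. one candidate binding event every {1e3 / rate:.0f} ms")

With these assumed numbers the spot receives a dye molecule every few tens of milliseconds, which is the regime the paragraph above describes.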
DNA-PAINT has been further extended to use regular dyes, where the dynamic binding and unbinding of a dye-labeled DNA probe to a fixed DNA origami is used to achieve stochastic single-molecule imaging. [ 95 ] [ 96 ] DNA-PAINT is no longer limited to environment-sensitive dyes and can measure both the adsorption and the desorption kinetics of the probes to the target. The method uses the camera blurring effect of moving dyes. When a regular dye diffuses in solution, its image on a typical CCD camera is blurred because of its relatively fast motion and the relatively long camera exposure time, and it contributes only to the fluorescence background. However, when it binds to a fixed target, the dye stops moving, and a clear, fittable point spread function is obtained.
The term for this method is mbPAINT ("mb" standing for motion blur ). [ 97 ] When a total internal reflection fluorescence microscope (TIRF) is used for imaging, the excitation depth is limited to ~100 nm from the substrate, which further reduces the fluorescence background from the blurred dyes near the substrate and the background in the bulk solution. Very bright dyes can be used for mbPAINT which gives typical single-frame spatial resolutions of ~20 nm and single-molecule kinetic temporal resolutions of ~20 ms under relatively mild photoexcitation intensities, which is useful in studying molecular separation of single proteins. [ 98 ]
By using a secondary DNA strand that couples to the primary (antibody-conjugated) strand, the fluorescent label can be gently stripped, allowing multiplexed localization of 30 different proteins. This method, called SUM-PAINT, has been used to map the localization of synaptic proteins at 5 nm resolution, revealing differences in the architecture of excitatory, inhibitory and mixed synapses. [ 99 ]
The temporal resolution has been further improved (20 times) using a rotational phase mask placed in the Fourier plane during data acquisition and resolving the distorted point spread function that contains temporal information. The method was named Super Temporal-Resolved Microscopy (STReM). [ 100 ]
Optical resolution of cellular structures in the range of about 50 nm can be achieved, even in label-free cells, using localization microscopy SPDM .
By using two different laser wavelengths, SPDM reveals cellular objects which are not detectable under conventional fluorescence wide-field imaging conditions, besides providing a substantial resolution improvement for autofluorescent structures.
As a control, the positions of the detected objects in the localization image match those in the bright-field image. [ 101 ]
Label-free superresolution microscopy has also been demonstrated using the fluctuations of a surface-enhanced Raman scattering signal on a highly uniform plasmonic metasurface. [ 102 ]
dSTORM uses the photoswitching of a single fluorophore. In dSTORM, fluorophores are embedded in a reducing and oxidizing buffering system (ROXS) and fluorescence is excited. Sometimes, stochastically, a fluorophore will enter a triplet or some other dark state that is sensitive to the oxidation state of the buffer; from this state it can be made to fluoresce again, so that single-molecule positions can be recorded. [ 103 ] Development of the dSTORM method occurred in three independent laboratories at about the same time and was also called "reversible photobleaching microscopy" (RPM), [ 104 ] "ground state depletion microscopy followed by individual molecule return" (GSDIM), [ 105 ] as well as the now generally accepted moniker dSTORM. [ 106 ]
Localization microscopy depends heavily on software that can precisely fit the point spread function (PSF) to millions of images of active fluorophores within a few minutes. [ 107 ] Since the classical analysis methods and software suites used in the natural sciences are too slow to computationally solve these problems, often taking hours of computation for processing data measured in minutes, specialised software programs have been developed. Many of these localization software packages are open-source; they are listed at SMLM Software Benchmark. [ 108 ] Once molecule positions have been determined, the locations need to be displayed and several algorithms for display have been developed. [ 109 ]
Random Illumination Microscopy (RIM) is a super-resolution imaging technique that employs random or pseudo-random wide-field illuminations generated by a laser. This method enables the reconstruction of a high-resolution image from multiple low-resolution frames captured under varying, unknown illumination patterns, achieving resolutions down to 90 nanometers. RIM is particularly advantageous for imaging thick, living samples due to its minimal phototoxicity and robust z-sectioning capabilities. Additionally, its resistance to optical aberrations makes it a highly effective tool for biological research.
It is possible to circumvent the need for PSF fitting inherent in single molecule localization microscopy (SMLM) by directly computing the temporal autocorrelation of pixels. This technique is called super-resolution optical fluctuation imaging (SOFI) and has been shown to be more precise than SMLM when the density of concurrently active fluorophores is very high.
Omnipresent Localisation Microscopy (OLM) is an extension of Single Molecule Microscopy (SMLM) techniques that allow high-density single molecule imaging with an incoherent light source (such as a mercury-arc lamp) and a conventional epifluorescence microscope setup. [ 110 ] A short burst of deep-blue excitation (with a 350-380 nm, instead of a 405 nm, laser) enables a prolonged reactivation of molecules, for a resolution of 90 nm on test specimens. Finally, correlative STED and SMLM imaging can be performed on the same biological sample using a simple imaging medium, which can provide a basis for a further enhanced resolution. These findings can democratize super-resolution imaging and help any scientist to generate high-density single-molecule images even with a limited budget.
Resolution enhancement by sequential imaging (RESI) is an extension of DNA-PAINT that can achieve theoretically unlimited resolution. [ 111 ] Rather than using one label type to identify a given target species, copies of the same target are labeled with orthogonal DNA sequences. Upon sequential (i.e. separated) imaging, localization clouds that would overlap in conventional SMLM can be (1) resolved and (2) combined into a single "super" localization, the precision of which scales with the underlying number of localizations. As the number of achievable localizations in DNA-PAINT is unlimited, so is the theoretical resolution of RESI. Overlaying the RESI localizations from the underlying imaging rounds creates a composite, highly resolved image.
Light MicrOscopical Nanosizing microscopy (3D LIMON) images, using the Vertico SMI microscope, are made possible by the combination of SMI and SPDM , whereby first the SMI, and then the SPDM, process is applied.
The SMI process determines the center of particles and their spread in the direction of the microscope axis. While the center of particles/molecules can be determined with a precision of 1–2 nm, the spread around this point can be determined down to an axial diameter of approximately 30–40 nm.
Subsequently, the lateral position of the individual particle/molecule is determined using SPDM, achieving a precision of a few nanometers. [ 112 ]
As a biological application in the 3D dual-color mode, the spatial arrangement of Her2/neu and Her3 clusters was resolved. The positions of the protein clusters could be determined in all three directions with an accuracy of about 25 nm. [ 113 ]
Combining a super-resolution microscope with an electron microscope enables the visualization of contextual information, with the labelling provided by fluorescence markers. This overcomes the problem of the black backdrop that the researcher is left with when using only a light microscope. In an integrated system, the sample is measured by both microscopes simultaneously. [ 114 ]
Recently, owing to advances in artificial intelligence computing, deep-learning neural networks ( GANs ) have been used for super-resolution enhancement of photographic images from optical microscopes, [ 115 ] enhancing resolution from 40x to 100x. [ 116 ] Resolution increases from 20x with an optical microscope to 1500x, comparable to a scanning electron microscope, have been reported via a neural lens. [ 117 ] These techniques have applications in super-resolving images from positron-emission tomography and fluorescence microscopy. [ 118 ] | https://en.wikipedia.org/wiki/Super-resolution_microscopy |
Super-resolution optical fluctuation imaging ( SOFI ) is a post-processing method for the calculation of super-resolved images from recorded image time series that is based on the temporal correlations of independently fluctuating fluorescent emitters.
SOFI has been developed for super-resolution imaging of biological specimens labelled with independently fluctuating fluorescent emitters (organic dyes, fluorescent proteins ). In comparison with other super-resolution microscopy techniques such as STORM or PALM , which rely on single-molecule localization and hence allow only one active molecule per diffraction-limited area (DLA) and time point, [ 1 ] [ 2 ] SOFI does not require controlled photoswitching and/or photoactivation, nor long imaging times. [ 3 ] [ 4 ] Nevertheless, it still requires fluorophores that cycle between two distinguishable states, either real on/off states or states with different fluorescence intensities. In mathematical terms, SOFI imaging relies on the calculation of cumulants , for which two distinguishable approaches exist: an image can be calculated via auto-cumulants, [ 3 ] which by definition rely only on the information of each pixel itself, or via an improved method that utilizes the information of different pixels through the calculation of cross-cumulants. [ 5 ] Both methods can increase the final image resolution significantly, although the cumulant calculation has its limitations. SOFI is in fact able to increase the resolution in all three dimensions. [ 3 ]
Like other super-resolution methods, SOFI is based on recording an image time series on a CCD or CMOS camera. In contrast to other methods, the recorded time series can be substantially shorter, since a precise localization of emitters is not required and therefore a larger number of activated fluorophores per diffraction-limited area is allowed. The pixel values of a SOFI image of the n-th order are calculated from the values of the pixel time series in the form of an n-th order cumulant, where the final value assigned to a pixel can be thought of as the integral over a correlation function. The assigned pixel intensities are thus a measure of the brightness and the correlation of the fluorescence signal. Mathematically, the n-th order cumulant is related to the n-th order correlation function but exhibits some advantages concerning the resulting image resolution. Since in SOFI several emitters per DLA are allowed, the photon count at each pixel results from the superposition of the signals of all activated nearby emitters. The cumulant calculation filters the signal and retains only highly correlated fluctuations, which provides a contrast enhancement and therefore a background reduction.
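A minimal sketch of the second-order case (illustrative only, not a complete SOFI implementation; the file name is hypothetical):

import numpy as np

def sofi2(stack, lag=1):
    """Second-order SOFI image: for every pixel, the autocorrelation of its
    zero-mean intensity time trace at the given time lag."""
    delta = stack - stack.mean(axis=0, keepdims=True)   # zero-mean fluctuations per pixel
    return (delta[:-lag] * delta[lag:]).mean(axis=0)    # <dF(t) dF(t+lag)>_t

# stack = np.load("movie.npy")      # hypothetical image series of shape (frames, y, x)
# sofi_image = sofi2(stack, lag=1)  # uncorrelated background is strongly suppressed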
The fluorescence source distribution of the N emitters in the sample,

$$\rho({\vec {r}},t)=\sum _{k=1}^{N}\delta ({\vec {r}}-{\vec {r}}_{k})\,\varepsilon _{k}\,s_{k}(t),$$
is convolved with the system's point spread function (PSF) U ( r ). Hence the fluorescence signal at time t and position r → {\displaystyle {\vec {r}}} is given by

$$F({\vec {r}},t)=\sum _{k=1}^{N}U({\vec {r}}-{\vec {r}}_{k})\,\varepsilon _{k}\,s_{k}(t).$$
Within the above equations, N is the number of emitters, located at positions r → k {\displaystyle {\vec {r}}_{k}} with a time-dependent molecular brightness ε k ⋅ s k {\displaystyle \varepsilon _{k}\cdot s_{k}} , where ε k {\displaystyle \varepsilon _{k}} is the constant molecular brightness and s k ( t ) {\displaystyle s_{k}(t)} is a time-dependent fluctuation function. The molecular brightness is simply the average fluorescence count-rate divided by the number of molecules within a specific region. For simplification, the sample is assumed to be in a stationary equilibrium, so that the fluorescence signal can be expressed as a zero-mean fluctuation:

$$\delta F({\vec {r}},t)=F({\vec {r}},t)-\langle F({\vec {r}},t)\rangle _{t},$$
where ⟨ ⋯ ⟩ t {\displaystyle \langle \cdots \rangle _{t}} denotes time-averaging. The auto-correlation, here e.g. the second order, can then be written for a certain time-lag τ {\displaystyle \tau } as

$$G_{2}({\vec {r}},\tau )=\langle \delta F({\vec {r}},t+\tau )\,\delta F({\vec {r}},t)\rangle _{t}=\sum _{k=1}^{N}U^{2}({\vec {r}}-{\vec {r}}_{k})\,\varepsilon _{k}^{2}\,\langle \delta s_{k}(t+\tau )\,\delta s_{k}(t)\rangle _{t},$$

where the cross terms between different emitters vanish because the emitters fluctuate independently.
From these equations it follows that the PSF of the optical system is raised to the power of the order of the correlation. Thus in a second-order correlation the width of the PSF is reduced along all dimensions by a factor of 2 {\displaystyle {\sqrt {2}}} . As a result, the resolution of the SOFI images increases according to this factor.
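As a concrete illustration of the second-order calculation described above, the following minimal Python sketch computes a second-order SOFI image from a recorded image stack; the synthetic blinking data, array shapes and parameter values are illustrative assumptions made here, not part of any published SOFI implementation.

```python
import numpy as np

def sofi2(stack, lag=0):
    """Second-order SOFI image from an image time series.

    stack : ndarray of shape (T, Y, X), the recorded image time series
    lag   : time lag tau (in frames) used for the auto-correlation
    """
    # zero-mean fluctuations: dF(r, t) = F(r, t) - <F(r, t)>_t
    dF = stack - stack.mean(axis=0, keepdims=True)
    T = dF.shape[0]
    if lag == 0:
        # G2(r, 0) = <dF(r, t)^2>_t, the second-order auto-cumulant
        return (dF * dF).mean(axis=0)
    # G2(r, tau) = <dF(r, t + tau) * dF(r, t)>_t
    return (dF[lag:] * dF[:T - lag]).mean(axis=0)

# Illustrative use with a synthetic stack of three independently blinking emitters
rng = np.random.default_rng(0)
T, Y, X = 500, 64, 64
on_off = rng.random((T, 3)) < 0.3                          # independent on/off switching
frames = rng.poisson(5.0, size=(T, Y, X)).astype(float)    # uncorrelated background counts
yy, xx = np.mgrid[0:Y, 0:X]
for k, (y0, x0) in enumerate([(20, 20), (22, 24), (40, 45)]):
    psf = np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2.0 * 2.0 ** 2))
    frames += 100.0 * on_off[:, k, None, None] * psf        # emitter k, diffraction-limited spot

sofi_image = sofi2(frames, lag=1)                           # background-suppressed SOFI image
```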
Because the emitters fluctuate independently in time, no cross-correlation terms between different emitters contribute to the new pixel value when the simple correlation function is used for the reassignment of pixel values. Calculations of higher-order correlation functions, however, contain contributions from lower-order correlations; for this reason it is superior to calculate cumulants, in which all lower-order correlation terms vanish.
For computational reasons it is convenient to set all time-lags in higher-order cumulants to zero, so that a general expression for the n -th order auto-cumulant can be found: [ 3 ]

$$AC_{n}({\vec {r}})=\sum _{k=1}^{N}U^{n}({\vec {r}}-{\vec {r}}_{k})\,\varepsilon _{k}^{n}\,w_{k},$$

where
w k {\displaystyle w_{k}} is a specific correlation-based weighting function influenced by the order of the cumulant and mainly depending on the fluctuation properties of the emitters.
Although there is no fundamental limit to calculating very high cumulant orders and thereby shrinking the FWHM of the PSF, there are practical limitations arising from the weighting of the values assigned to the final image. Emitters with a higher molecular brightness, or with more pronounced fluctuations, show a strongly increased pixel cumulant value at higher orders. A wide intensity range of the resulting image can therefore be expected, and as a result dim emitters can get masked by bright emitters in higher-order images. [ 3 ] [ 5 ] The calculation of auto-cumulants can be realized in a mathematically very attractive way: the n -th order cumulant can be calculated with a basic recursion from the moments [ 6 ]

$$K_{n}=\mu _{n}-\sum _{i=1}^{n-1}{\binom {n-1}{i-1}}K_{i}\,\mu _{n-i},$$
where K is the cumulant of the order given by its index and μ {\displaystyle \mu } likewise represents the moments. The term within the brackets is a binomial coefficient. This way of computation is straightforward in comparison with calculating cumulants from the standard closed-form formulas. It requires only little computation time and, if well implemented, is suitable even for the calculation of high-order cumulants on large images.
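The recursion above lends itself to a direct implementation; the short Python sketch below computes the first few cumulants of a single pixel's time trace from its moments. Function names, the use of NumPy and the example trace are assumptions made for this sketch only.

```python
import numpy as np
from math import comb

def cumulants_from_moments(pixel_series, order):
    """Cumulants K_1 ... K_order of a 1-D pixel time series, via the recursion
    K_n = mu_n - sum_{i=1}^{n-1} C(n-1, i-1) * K_i * mu_{n-i}."""
    dF = pixel_series - pixel_series.mean()                    # zero-mean fluctuations
    mu = {n: np.mean(dF ** n) for n in range(1, order + 1)}    # moments of the fluctuations
    K = {}
    for n in range(1, order + 1):
        K[n] = mu[n] - sum(comb(n - 1, i - 1) * K[i] * mu[n - i]
                           for i in range(1, n))
    return K

# Example: cumulants up to 4th order for one pixel's (synthetic) intensity trace
trace = np.random.default_rng(1).poisson(10.0, size=1000).astype(float)
print(cumulants_from_moments(trace, order=4))
```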
In a more advanced approach, cross-cumulants are calculated by taking the information of several pixels into account. Cross-cumulants can be described as follows: [ 5 ] [ 7 ]
j , l and k are indices for contributing pixels, whereas i is the index for the current position. All other values and indices are used as before. The major difference between this equation and the equation for the auto-cumulants is the appearance of a weighting-factor U ( r j − r l / n ) {\displaystyle U(r_{j}-r_{l}/{\sqrt {n}})} . This weighting-factor (also termed distance-factor) is PSF-shaped and depends on the distance of the cross-correlated pixels, in the sense that the contribution of each pixel decays with distance in a PSF-shaped manner; the distance-factor is therefore smaller for pixels that are further apart. The cross-cumulant approach can be used to create new, virtual pixels revealing true information about the labelled specimen by reducing the effective pixel size. These pixels carry more information than pixels that arise from simple interpolation.
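For the second order, the cross-cumulant of two pixels reduces to the time-averaged product of their zero-mean fluctuations, and its value can be assigned to a virtual pixel located between the two contributing pixels. The Python sketch below illustrates this idea on a 2x finer virtual grid; the grid layout, the averaging of the two diagonal pairs and all variable names are illustrative choices, not a reproduction of the published cross-cumulant algorithm.

```python
import numpy as np

def xc2_virtual_grid(stack):
    """Second-order cross-cumulant SOFI on a 2x virtual pixel grid.

    stack : ndarray of shape (T, Y, X), the recorded image time series.
    Returns an array of shape (2*Y - 1, 2*X - 1): original pixel positions hold
    auto-cumulants, intermediate positions hold cross-cumulants of neighbours."""
    dF = stack - stack.mean(axis=0, keepdims=True)   # zero-mean fluctuations
    out = np.zeros((2 * dF.shape[1] - 1, 2 * dF.shape[2] - 1))
    # auto-cumulants on the original grid
    out[::2, ::2] = (dF * dF).mean(axis=0)
    # virtual pixels between horizontal neighbours: <dF_A dF_B>_t
    out[::2, 1::2] = (dF[:, :, :-1] * dF[:, :, 1:]).mean(axis=0)
    # virtual pixels between vertical neighbours
    out[1::2, ::2] = (dF[:, :-1, :] * dF[:, 1:, :]).mean(axis=0)
    # diagonal virtual pixels: average of the two diagonal neighbour pairs
    diag1 = (dF[:, :-1, :-1] * dF[:, 1:, 1:]).mean(axis=0)
    diag2 = (dF[:, 1:, :-1] * dF[:, :-1, 1:]).mean(axis=0)
    out[1::2, 1::2] = 0.5 * (diag1 + diag2)
    return out
```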
In addition, the cross-cumulant approach can be used to estimate the PSF of the optical system by making use of the intensity differences of the virtual pixels that are due to the "loss" in cross-correlation mentioned above. [ 5 ] Each virtual pixel can be re-weighted with the inverse of its distance-factor, leading to a restoration of the true cumulant value. Finally, the PSF can be used to achieve a resolution improvement scaling with n for the n th-order cumulant by re-weighting the "optical transfer function" (OTF). [ 5 ] This step can also be replaced by using the PSF for a deconvolution, which is associated with less computational cost.
Cross-cumulant calculation requires the use of a computationally much more expensive formula that comprises the calculation of sums over partitions. This is due to the combination of different pixels for assigning a new value, so no fast recursive approach is usable at this point. For the calculation of cross-cumulants the following equation can be used: [ 8 ]

$$\kappa _{n}(F_{1},\dots ,F_{n})=\sum _{P}(-1)^{|P|-1}\,(|P|-1)!\prod _{p\in P}{\Big \langle }\prod _{i\in p}F_{i}{\Big \rangle }_{t},$$
In this equation the sum runs over all possible partitions P , and p denotes the individual parts of each partition; i is the index of the different pixel positions taken into account in the calculation, and F is the image time series of the corresponding contributing pixel. As previously mentioned, the cross-cumulant approach facilitates the generation of virtual pixels, their arrangement depending on the order of the cumulant. For a 4th-order cross-cumulant image, these virtual pixels can be calculated in a particular pattern from the original image pixels A, B, C and D; the pattern arises simply from the calculation of all possible combinations with repetition of these original pixels. Virtual pixels exhibit a loss in intensity that is due to the cross-correlation itself, the loss growing with the distance between the contributing pixels. To restore meaningful pixel values, the image is corrected by a routine that assigns a PSF-shaped distance-factor to each pixel of the virtual pixel grid and applies its inverse to all image pixels that share the same distance-factor. [ 5 ] [ 7 ] | https://en.wikipedia.org/wiki/Super-resolution_optical_fluctuation_imaging |
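The sum over partitions in the cross-cumulant formula above can be implemented directly. The Python sketch below generates all set partitions of the contributing pixel indices and evaluates the joint cumulant of their zero-mean time traces; it is meant as an illustration of the combinatorial formula, not as an optimized SOFI routine, and all names are chosen for this sketch only.

```python
import numpy as np
from math import factorial

def set_partitions(items):
    """Yield all partitions of a list, each partition being a list of blocks."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        for i in range(len(partition)):          # put `first` into an existing block ...
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        yield [[first]] + partition               # ... or into a block of its own

def cross_cumulant(pixel_traces):
    """n-th order joint (cross-) cumulant at zero time lag of n pixel time series."""
    traces = [t - t.mean() for t in pixel_traces]  # zero-mean fluctuations per pixel
    total = 0.0
    for partition in set_partitions(list(range(len(traces)))):
        blocks = 1.0
        for block in partition:                    # time-averaged product within each block
            blocks *= np.mean(np.prod([traces[i] for i in block], axis=0))
        total += (-1) ** (len(partition) - 1) * factorial(len(partition) - 1) * blocks
    return total
```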
SuperBBS is a DOS Bulletin Board System (BBS) software package written by Aki Antman and Risto Virkkala. [ 1 ] It was born as a functional clone of RemoteAccess BBS (which in turn was a clone of QuickBBS ), but extended the functionality with several newer technologies in a different way from RA. SuperBBS offered news, email, file sharing, discussion forums, realtime chat etc. and was used in more than 40 countries. [ citation needed ] It was distributed as shareware .
SuperBBS supported a Hudson-type message base, a USERS.BBS-style user base, and flexible menu and textfile options that made the software highly customisable. It also supported several styles of doorway (external) programs and utilities written for QuickBBS, RemoteAccess and ProBoard.
Development ceased when Antman entered the Finnish army in 1993 and decided not to continue the project.
| https://en.wikipedia.org/wiki/SuperBBS |
The SuperCPU is a processor upgrade for the Commodore 64 and Commodore 128 personal computer platforms. It uses the W65C816S 8/16 bit microprocessor , and takes the form of an expansion port cartridge, rather than a replacement for the 6510 CPU.
The SuperCPU was developed by Creative Micro Designs , Inc and released on May 4, 1997. [ 1 ] It used a device called the RamCard to increase its capabilities. The card is no longer sold by Creative Micro Designs as of 2001; the distribution was taken over from 2001 to 2009 by the U.S. company Click Here Software Co., but it is unclear if any were manufactured after 2001.
The SuperCPU can have up to 16 MB of RAM installed and sports a " Turbo " switch which, when enabled, clocks a Commodore 64 or Commodore 128 at up to 20 MHz . [ 2 ] The SuperCPU requires 0.4 A (400 mA) and shadows its ROM in 128 KB of RAM; the internal ROM is 128 KB . [ 3 ] Using the RamCard, fast page mode SIMM memory modules of 1, 4, 8 or 16 MB can be used. [ 4 ]
| https://en.wikipedia.org/wiki/SuperCPU |
SuperCam is a suite of remote-sensing instruments for the Mars 2020 Perseverance rover mission that performs remote analyses of rocks and soils with a camera, two lasers and four spectrometers to seek organic compounds that could hold biosignatures of past microbial life on Mars , if it ever existed there.
SuperCam was developed in collaboration between the Research Institute in Astrophysics and Planetology ( IRAP [ fr ] ) of the University of Toulouse in France, the French Space Agency ( CNES ), Los Alamos National Laboratory , the University of Valladolid (Spain), the University of Hawaii and the University of the Basque Country and the University of Málaga in Spain. The Principal Investigator is Roger Wiens from Los Alamos National Laboratory . SuperCam is an improved version of the successful ChemCam instruments of the Curiosity rover that have been upgraded with two different lasers and detectors. [ 1 ] [ 2 ] [ 3 ] SuperCam is used in conjunction with the AEGIS (Autonomous Exploration for Gathering Increased Science) targeting system, a program which Vandi Verma , NASA roboticist and engineer, helped develop. [ 4 ]
In April 2018, SuperCam entered the final stages of assembly and testing. The flight model was installed to the rover in June 2019. The rover mission was launched on 30 July 2020. [ 5 ]
For measurements of chemical composition, the instrument suite uses a version of the successful ChemCam instruments of the Curiosity rover that have been upgraded with two different lasers and detectors. [ 1 ] [ 2 ] [ 3 ] SuperCam's instruments are able to identify the kinds of chemicals that could be evidence of past life on Mars . SuperCam is a suite of various instruments, and the collection of correlated measurements on a target can be used to determine directly the geochemistry and mineralogy of samples. [ 1 ] [ 7 ] [ 8 ]
The suite has several integrated instruments: Raman spectroscopy , time-resolved fluorescence (TRF) spectroscopy, and Visible and InfraRed (VISIR) reflectance spectroscopy to provide preliminary information about the mineralogy and molecular structure of samples under consideration, as well as being able to directly measure organic compounds . [ 3 ] [ 2 ] The total is four complementary spectrometers, making the suite sensitive enough to measure trace amounts of chemicals. [ 1 ] [ 7 ]
The remote laser-induced breakdown spectroscopy (LIBS) system emits a 1064-nm laser beam to investigate targets as small as a grain of rice from a distance of more than 7 meters, allowing the rover to study targets beyond the reach of its arm. [ 6 ] [ 7 ] [ 8 ] The beam vaporizes a tiny amount of rock, creating a hot plasma . SuperCam then measures the colors of light in the plasma, which provide clues to the target's elemental composition. [ 2 ] [ 7 ] Its laser is also capable of remotely clearing away surface dust, giving all of its instruments a clear view of the targets. [ 6 ] [ 7 ] The LIBS unit contains three spectrometers. Two of these handle the visible and violet portion of the VISIR spectra, while the IR portion is recorded in the mast. [ 9 ]
SuperCam's Raman spectrometer (at 532 nm) investigates targets up to 12 m from the rover. In the Raman spectroscopy technique, most of the green laser light reflects back at the same wavelength that was sent, but a small fraction of the light interacts with the target molecules, changing the wavelength in proportion to the vibrational energy of the molecular bonds. By spectrally observing the returned Raman light, the identity of the minerals can be determined. [ 10 ] [ 11 ]
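To give a sense of the size of the wavelength change involved, a short worked example (the 1000 cm −1 band is an assumed, merely typical mineral vibration, not a SuperCam specification): for Stokes scattering the scattered wavenumber is the laser wavenumber minus the vibrational wavenumber,

$$\tilde{\nu}_{\text{laser}}={\frac {1}{532\times 10^{-7}\ \mathrm {cm} }}\approx 18\,797\ \mathrm {cm^{-1}},\qquad \tilde{\nu}_{\text{scattered}}\approx 18\,797-1\,000=17\,797\ \mathrm {cm^{-1}}\;\Rightarrow \;\lambda _{\text{scattered}}\approx 562\ \mathrm {nm},$$

so the Raman-shifted light returns only a few tens of nanometres away from the 532 nm excitation, which is why a sensitive spectrometer is needed to separate it from the elastically scattered light.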
The infrared spectrometer , provided by France, operates in the near-infrared (1.3 to 2.6 micrometers) wavelengths and its photodiodes , or detectors, are cooled by small thermoelectric coolers to ensure that they operate between −100 °C and −50 °C at all times. [ 9 ] This instrument will analyze many of the clay minerals and help unravel the history of liquid water on Mars . [ 1 ] The types of clay minerals and their abundances give clues about the nature of the water that was present, whether fresh or salty, acidic or neutral pH , whether it might have been icy or warm, and whether the water was present for a long period of time. [ 1 ] These are key questions to understand how habitable the surface environment was in the distant past.
SuperCam's optical camera acquires high-resolution color images of samples under study, which also help determine the surface geology. This camera can also study how atmospheric water and dust absorb or reflect solar radiation, which may help to develop weather forecasts . [ 6 ] SuperCam is also equipped with a microphone to capture the first audio recordings from the surface of Mars. [ 1 ] The microphone is the same model (Knowles Corp EK) as the ones that flew to Mars on the 1998 Mars Polar Lander and the 2007 Phoenix lander . [ 7 ] However, neither mission was able to record sounds. [ 7 ]
The detectors of all four spectrometers are cooled to just below 0 °C by thermoelectric coolers. The photodiodes for the infrared (IR) spectrometer are further cooled to between −100 °C and −50 °C at all times. [ 9 ] | https://en.wikipedia.org/wiki/SuperCam |
A supergrid with hydrogen is a proposed combination of very long distance electric power transmission with liquid hydrogen distribution, intended to achieve superconductivity in the cables and hence nearly lossless power transmission. The hydrogen is both a distributed fuel and a cryogenic coolant for the power lines, rendering them superconducting . The concept's advocates describe it as being in a "visionary" stage, for which no new scientific breakthrough is required but which requires major technological innovations before it could progress to a practical system. [ 1 ] A system for the United States is projected to require "several decades" before it could be fully implemented. [ 1 ]
One proposed design for a superconducting cable includes a superconducting bipolar DC line operating at ±50 kV and 50 kA, transmitting about 2.5 GW over several hundred kilometers at zero resistance and with nearly no line loss. [ 2 ] High-voltage direct current (HVDC) lines can transmit similar power levels; for example, a 5 gigawatt HVDC system is being constructed along the southern provinces of China without the use of superconducting cables. [ 3 ]
In the United States , a Continental SuperGrid 4,000 kilometers long might carry 40,000 to 80,000 MW in a tunnel shared with long-distance high speed maglev trains, which at low pressure could allow cross-continental journeys of one hour. The liquid hydrogen pipeline would both store and deliver hydrogen. [ 4 ]
1.5% [ 5 ] of the energy transmitted on the British AC Supergrid is lost (transformer, heating and capacitive losses). Of this, a little under two-thirds (or 1% on the British supergrid) represents "DC" (resistive) heating losses. With superconductive power lines, the capacitive and transformer losses (in the unlikely event the transmission lines were still overhead AC lines) would remain the same. In addition, overhead lines do not lend themselves at all well physically to the incorporation of cryogenic hydrogen piping , due to the likely weight of the transmission medium and the considerable brittleness of supercooled materials. It would probably be necessary for a supercooled hydrogen-carrying transmission line to be subterranean, and this in turn means that for such a cable of any significant length (e.g. over 60 km), the power would have to be converted to DC and transmitted as such, since otherwise the capacitive losses would be too high. In this case, the power electronic losses in the AC/DC converter substations would negate part or all of the power savings from the superconductive line itself.
Even before comprehensive continental and (in the case of the proposed European Super Grid ) intercontinental backbones of electrical transmission may be realized, such cables could be used to efficiently interconnect regional power grids of conventional design. | https://en.wikipedia.org/wiki/SuperGrid_(hydrogen) |
The SuperNova Early Warning System (SNEWS) is a network of neutrino detectors designed to give early warning to astronomers in the event of a supernova in the Milky Way , our home galaxy, or in a nearby galaxy such as the Large Magellanic Cloud .
SNEWS has been operating since 2005. As of March 2021, [ 1 ] it has not issued any supernova alerts. This is unsurprising, as observable supernovae are rare: the most recent known Milky Way supernova, inferred from its remnant, occurred around the turn of the 20th century, and the most recent Milky Way supernova confirmed to have been observed was Kepler's Supernova in 1604.
In June 2019 a "SNEWS 2.0" workshop was held at Laurentian University of Sudbury in Canada, focused on plans for an update of SNEWS. [ 1 ] [ 2 ] As a result, an upgraded system was devised under the name "SNEWS 2.0". [ 3 ] [ 4 ]
Powerful bursts of electron neutrinos (ν e ) with typical energies of the order of 10 MeV and duration of the order of 10 seconds are produced in the core of a massive star as it collapses on itself, via the "neutronization" reaction, i.e. fusion of protons and electrons into neutrons and neutrinos: p + e − → n + ν e . The neutrinos are expected to be emitted well before the light from the supernova peaks, so in principle neutrino detectors could give warning to astronomers that a supernova has occurred and may soon be visible. The neutrino pulse from supernova 1987A arrived 3 hours before the associated photons – but SNEWS was not yet active and it was not recognised as a supernova event until after the photons arrived.
Directional precision of approximately 5° is expected. [ 5 ] SNEWS is not able to give warning of a type Ia supernova , as they are not expected to produce significant numbers of neutrinos. Type Ia supernovae, caused by a runaway nuclear fusion reaction in a white dwarf star, are thought to account for roughly one-third of all supernovae. [ 6 ]
There are currently seven neutrino detector members of SNEWS: Borexino , Daya Bay , KamLAND , HALO , IceCube , LVD , and Super-Kamiokande . [ 7 ] SNEWS began operation prior to 2004, with three members (Super-Kamiokande, LVD, and SNO). The Sudbury Neutrino Observatory is no longer active as it is being upgraded to its successor program SNO+ .
The detectors send reports of a possible supernova to a computer at Brookhaven National Laboratory . If the SNEWS computer identifies signals from two detectors within 10 seconds, it sends a supernova alert to observatories around the world so that they can study the event. [ 8 ] The SNEWS mailing list is open-subscription, and the general public is allowed to sign up; however, the SNEWS collaboration encourages amateur astronomers to instead use Sky and Telescope magazine's AstroAlert service, which is linked to SNEWS. | https://en.wikipedia.org/wiki/SuperNova_Early_Warning_System |
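The coincidence requirement described above for SNEWS, an alert only when at least two detectors report within a 10-second window, can be sketched in a few lines of Python; the data structures, variable names and the final print statement are illustrative assumptions, not the actual SNEWS server logic.

```python
from dataclasses import dataclass

COINCIDENCE_WINDOW_S = 10.0   # reports must fall within this many seconds
MIN_DETECTORS = 2             # distinct detectors required for an alert

@dataclass
class Report:
    detector: str             # e.g. "Super-Kamiokande", "IceCube"
    t_unix: float             # arrival time of the burst report (seconds)

def check_alert(reports):
    """Return True if any 10 s window contains reports from >= 2 distinct detectors."""
    reports = sorted(reports, key=lambda r: r.t_unix)
    for i, first in enumerate(reports):
        detectors = {first.detector}
        for later in reports[i + 1:]:
            if later.t_unix - first.t_unix > COINCIDENCE_WINDOW_S:
                break
            detectors.add(later.detector)
            if len(detectors) >= MIN_DETECTORS:
                return True
    return False

# Example: two detectors reporting 4 seconds apart would trigger an alert
print(check_alert([Report("LVD", 1000.0), Report("IceCube", 1004.0)]))  # True
```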
In physics , a Super Bloch oscillation describes a certain type of motion of a particle in a lattice potential under external periodic driving. The term super refers to the fact that the amplitude in position space of such an oscillation is several orders of magnitude larger than for 'normal' Bloch oscillations .
Normal Bloch oscillations and Super Bloch oscillations are closely connected. In general, Bloch oscillations are a consequence of the periodic structure of the lattice potential and the existence of a maximum value of the Bloch wave vector k max {\displaystyle k_{\text{max}}} . A constant force F 0 {\displaystyle F_{0}} results in the acceleration of the particle until the edge of the first Brillouin zone is reached. The following sudden change in velocity from + ℏ k max / m {\displaystyle +\hbar k_{\text{max}}/m} to − ℏ k max / m {\displaystyle -\hbar k_{\text{max}}/m} can be interpreted as a Bragg scattering of the particle by the lattice potential. As a result, the velocity of the particle never exceeds | ℏ k max / m | {\displaystyle |\hbar k_{\text{max}}/m|} but oscillates in a saw-tooth like manner with a corresponding periodic oscillation in position space. Surprisingly, despite the constant acceleration the particle does not translate, but just moves over very few lattice sites.
Super Bloch oscillations arise when an additional periodic driving force is added to F 0 {\displaystyle F_{0}} , resulting in: F ( t ) = F 0 + Δ F sin ( ω t + φ ) {\displaystyle F(t)=F_{0}+\Delta F\sin(\omega t+\varphi )} The details of the motion depend on the ratio between the driving frequency ω {\displaystyle \omega } and the Bloch frequency ω B {\displaystyle \omega _{B}} . A small detuning ω − ω B {\displaystyle \omega -\omega _{B}} results in a beat between the Bloch cycle and the drive, with a drastic change of the particle motion. On top of the Bloch oscillation, the motion shows a much larger oscillation in position space that extends over hundreds of lattice sites. Those Super Bloch oscillations directly correspond to the motion of normal Bloch oscillations, just rescaled in space and time.
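A minimal semiclassical sketch of this behaviour, under the common single-band tight-binding assumption E(k) = −2J cos(ka) (all parameter values below are illustrative choices for the sketch, not values from the cited experiments), shows the fast Bloch oscillation riding on the slow, large-amplitude super Bloch oscillation:

```python
import numpy as np

# Illustrative tight-binding and drive parameters (units with hbar = 1, lattice constant a = 1)
J, a = 1.0, 1.0
F0 = 0.2                          # static force, Bloch frequency w_B = F0 * a / hbar
dF, phi = 0.05, 0.0               # amplitude and phase of the periodic drive
w_B = F0 * a
detuning = 0.01 * w_B             # small detuning sets the slow super Bloch frequency
w = w_B + detuning

t = np.linspace(0.0, 20 * 2 * np.pi / detuning, 400_000)
dt = t[1] - t[0]

# Semiclassical equations of motion: hbar dk/dt = F(t), v(k) = dE/dk = 2 J a sin(k a)
F = F0 + dF * np.sin(w * t + phi)
k = np.cumsum(F) * dt             # quasimomentum; Brillouin-zone folding is implicit in sin()
x = np.cumsum(2.0 * J * a * np.sin(k * a)) * dt   # position in units of the lattice constant

# x(t) now shows fast, few-site Bloch oscillations superimposed on a much slower oscillation
# at the detuning frequency, whose amplitude is far larger than that of the bare Bloch cycle.
```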
A quantum mechanical description of the rescaling can be found in Ref. [ 1 ] Experimental realizations are demonstrated in Refs. [ 2 ] [ 3 ] [ 4 ] A theoretical analysis of the properties of super Bloch oscillations, including the dependence on the phase of the driving field, is given in Ref. [ 5 ] | https://en.wikipedia.org/wiki/Super_Bloch_oscillations |
In physics , the super Tonks–Girardeau gas represents an excited quantum gas phase with strong attractive interactions in a one-dimensional spatial geometry.
Usually, strongly attractive quantum gases are expected to form dense particle clusters and lose all gas-like properties. But in 2005, it was proposed by Stefano Giorgini and co-workers that there is a many-body state of attractively interacting bosons that does not decay in one-dimensional systems. [ 1 ] [ 2 ] [ 3 ] If prepared in a special way, this lowest gas-like state should be stable and show new quantum mechanical properties.
Particles in a super-Tonks gas should be strongly correlated and show long range order with a Luttinger liquid parameter K <1. Since each particle occupies a certain volume, the gas properties are similar to those of a classical gas of hard rods. Despite the mutual attraction, the single particle wave functions separate and the bosons behave similarly to fermions with a repulsive, long-range interaction.
To prepare the super-Tonks–Girardeau phase it is necessary to increase the repulsive interaction strength all the way through the Tonks–Girardeau regime up to infinity. Sudden switching from infinitely strong repulsive to infinitely attractive interactions stabilizes the gas against collapse and connects the ground state of the Tonks gas to the excited state of the super-Tonks gas.
The super-Tonks–Girardeau gas was experimentally observed in Ref. [ 4 ] using an ultracold gas of cesium atoms. Reducing the magnitude of the attractive interactions caused the gas to become unstable to collapse into cluster-like bound states. Repulsive dipolar interactions stabilize the gas when highly magnetic dysprosium atoms are used instead. [ 5 ] This enabled the creation of prethermal quantum many-body scar states via the topological pumping of these super-Tonks–Girardeau gases. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Super_Tonks–Girardeau_gas |
Super black is a surface treatment developed at the National Physical Laboratory (NPL) in the United Kingdom. It absorbs approximately 99.6% of visible light at normal incidence, while conventional black paint absorbs about 97.5%. At other angles of incidence, super black is even more effective: at an angle of 45°, it absorbs 99.9% of light.
The technology to create super black involves chemically etching a nickel - phosphorus alloy . [ 1 ] [ 2 ]
Applications of super black are in specialist optical instruments for reducing unwanted reflections. The disadvantage of this material is its low optical thickness, as it is a surface treatment. As a result, infrared light of a wavelength longer than a few micrometers penetrates through the dark layer and is reflected much more strongly. The reported reflectivity increases from about 1% at 3 μm to 50% at 20 μm. [ 3 ]
In 2009, a competitor to the super black material, Vantablack , was developed based on carbon nanotubes . It has a relatively flat reflectance in a wide spectral range. [ 4 ]
In 2011, NASA and the US Army began funding research in the use of nanotube-based super black coatings in sensitive optics. [ 5 ] Nanotube-based superblack arrays and coatings have recently become commercially available. [ 6 ] | https://en.wikipedia.org/wiki/Super_black |
A super soldier (or supersoldier ) is a concept soldier capable of operating beyond normal human abilities through technological augmentation or (in fictional depictions) genetic modification or cybernetic augmentation. Soldiers that obtain greater-than-normal physical abilities by wearing powered armor or a technological exoskeleton (such as the Mobile Infantry in Robert A. Heinlein 's Starship Troopers novella) are a distinct but related concept, and the two often overlap, as is the case in the Halo and Warhammer 40,000 universes, for example.
Super soldiers are common in military science fiction literature, films, and video games. Well-known examples include the novel The Forever War by Joe Haldeman and the Halo franchise. Super soldiers are also prevalent in the science fiction universe of Warhammer 40,000 and its prequel The Horus Heresy . Critic Mike Ryder has argued that the super soldiers depicted in these worlds serve as a mirror to present-day issues around sovereignty, military ethics and the law. [ 2 ] Marvel Comics , and by extension the Marvel Cinematic Universe , feature a wide array of heroes and villains whose powers are obtained through various competing attempts to create a super soldier, including Captain America , Hulk , the German Red Skull , and the Russian Red Guardian . [ 3 ]
Fictional super soldiers are usually heavily augmented , possibly through surgical means , eugenics , genetic engineering , drugs , brainwashing , traumatic events, an extreme training regimen or other scientific and pseudoscientific means, or a combination of some of these methods. Some depictions can be categorized as cyborgs or cybernetic organisms due to their augmentations taking the form of technology integrated into a living organism. [ 4 ] A few stories also use paranormal methods or technology, and science of extraterrestrial origin. The fictional masterminds of such programs are depicted often as mad scientists or stern military personnel depending on the needs of the plot, in stories that typically explore the ethical boundaries of the pursuit of science and victory.
In 2022, the People's Liberation Army Academy of Military Sciences reported that a team of military scientists inserted a gene from the tardigrade into human embryonic stem cells in an experiment with the stated possibility of creating soldiers resistant to acute radiation syndrome who could survive nuclear fallout . [ 5 ]
In the book The Men Who Stare at Goats (2004), Welsh journalist Jon Ronson documented how the U.S. military repeatedly tried and failed to train soldiers in the use of parascientific and pseudoscientific combat techniques during the Cold War , [ 6 ] experimenting with New Age methods and psychic phenomena such as remote viewing , astral projection , " death touch " and mind reading against various Soviet targets. The book also inspired a war comedy of the same name (2009) directed by Grant Heslov , starring George Clooney . [ 7 ]
| https://en.wikipedia.org/wiki/Super_soldier |
A superabsorbent polymer (SAP) (also called slush powder ) is a water-absorbing hydrophilic homopolymer or copolymer [ 1 ] that can absorb and retain extremely large amounts of a liquid relative to its own mass . [ 2 ]
Water-absorbing polymers , which are classified as hydrogels when mixed, [ 3 ] absorb aqueous solutions through hydrogen bonding with water molecules . A SAP's ability to absorb water depends on the ionic concentration of the aqueous solution . In deionized and distilled water, a SAP may absorb 300 times its weight [ 4 ] (from 30 to 60 times its own volume) and can become up to 99.9% liquid, but when put into a 0.9% saline solution the absorbency drops to approximately 50 times its weight. [ citation needed ] The presence of cations in the solution impedes the polymer's ability to bond with water molecules.
The SAP's total absorbency and swelling capacity are controlled by the type and degree of cross-linkers used to make the gel . Low-density cross-linked SAPs generally have a higher absorbent capacity and swell to a larger degree. These types of SAPs also have a softer and stickier gel formation. High cross-link density polymers exhibit lower absorbent capacity and swell, and the gel strength is firmer and can maintain particle shape even under modest pressure.
Superabsorbent polymer: Polymer that can absorb and retain extremely large amounts of a liquid relative to its own mass. [ 5 ]
Superabsorbent polymers are crosslinked in order to avoid dissolution. There are three main classes of SAPs:
1. Cross‐linked polyacrylates and polyacrylamides
2. Cellulose‐ or starch‐acrylonitrile graft copolymers
3. Cross‐linked maleic anhydride copolymers [ 1 ]
The largest use of SAPs is found in personal disposable hygiene products, such as baby diapers , adult diapers and sanitary napkins . [ 6 ] SAPs are also used for blocking water penetration in underground power or communications cable, in self-healing concrete, [ 7 ] [ 8 ] horticultural water retention agents, control of spill and waste aqueous fluid, and artificial snow for motion picture and stage production. The first commercial use was in 1978 for use in feminine napkins in Japan and disposable bed liners for nursing home patients in the United States. Early applications in the US market were with small regional diaper manufacturers as well as Kimberly Clark . [ 9 ]
Until the 1920s, water-absorbing materials were fiber-based products. Choices were tissue paper , cotton , sponge , and fluff pulp . The water-absorbing capacity of these types of materials is only up to eleven times their weight and most of it is lost under moderate pressure.
In the early 1960s, the United States Department of Agriculture (USDA) was conducting work on materials to improve water conservation in soils . They developed a resin based on the grafting of acrylonitrile polymer onto the backbone of starch molecules (i.e. starch-grafting). The hydrolyzed product of this starch- acrylonitrile co-polymer gave water absorption greater than 400 times its weight. Also, the gel did not release liquid water the way that fiber-based absorbents do.
The polymer came to be known as “Super Slurper”. [ 10 ] The USDA gave the technical know-how to several US companies for further development of the basic technology. A wide range of grafting combinations were attempted including work with acrylic acid , acrylamide and polyvinyl alcohol (PVA).
Recent research has demonstrated the ability of natural materials, e.g. polysaccharides and proteins, to exhibit superabsorbent properties in pure water and saline solution (0.9 wt%) within the same range as the synthetic polyacrylates used in current applications. [ 11 ] Soy protein /poly(acrylic acid) superabsorbent polymers with good mechanical strength have been prepared. [ 12 ] Polyacrylate / polyacrylamide copolymers were originally designed for use in conditions with high electrolyte/mineral content and a need for long-term stability including numerous wet/dry cycles; uses include agricultural and horticultural applications. With the added strength of the acrylamide monomer, they are also used for medical spill control and for wire and cable water blocking.
Superabsorbent polymers are now commonly made from the polymerization of acrylic acid blended with sodium hydroxide in the presence of an initiator to form a poly-acrylic acid sodium salt (sometimes referred to as sodium polyacrylate ). This polymer is the most common type of SAP made in the world today. According to the U.S. Food & Drug Administration , sodium polyacrylate is listed in Food Additive Status List, and there are strict limitations. [ 13 ] [ clarification needed ]
Other materials are also used to make a superabsorbent polymer, such as polyacrylamide copolymer, ethylene maleic anhydride copolymer, cross-linked carboxymethylcellulose , polyvinyl alcohol copolymers, cross-linked polyethylene oxide , and starch grafted copolymer of polyacrylonitrile to name a few. The latter is one of the oldest SAP forms created. [ citation needed ]
Today superabsorbent polymers are made using one of three primary methods: gel polymerization, suspension polymerization or solution polymerization . Each of the processes has its respective advantages, but all yield a consistent quality of product.
A mixture of acrylic acid, water, cross-linking agents and UV initiator chemicals are blended and placed either on a moving belt or in large tubs. The liquid mixture then goes into a "reactor" which is a long chamber with a series of strong UV lights . The UV radiation drives the polymerization and cross-linking reactions. The resulting "logs" are sticky gels containing 60 to 70% water. The logs are shredded or ground and placed in various types of driers. Additional cross-linking agents may be sprayed on the particles' surface; this "surface cross-linking" increases the product's ability to swell under pressure—a property measured as Absorbency Under Load (AUL) or Absorbency Against Pressure (AAP). The dried polymer particles are then screened for proper particle size distribution and packaging. The gel polymerization (GP) method is currently [ when? ] the most popular method for making the sodium polyacrylate superabsorbent polymers now used in baby diapers and other disposable hygienic articles.
Solution polymers offer the absorbency of a granular polymer supplied in solution form. Solutions can be diluted with water prior to application, and can coat or saturate most substrates. After drying at a specific temperature for a specific time, the result is a coated substrate with superabsorbency. For example, this chemistry can be applied directly onto wires and cables, though it is especially optimized for use on components such as rolled goods or sheeted substrates.
Solution-based polymerization is commonly used today for SAP manufacture of co-polymers, particularly those with the toxic acrylamide monomer. This process is efficient and generally has a lower capital cost base. The solution process uses a water-based monomer solution to produce a mass of reactant polymerized gel. The polymerization's own exothermic reaction energy is used to drive much of the process, helping reduce manufacturing cost. The reactant polymer gel is then chopped, dried and ground to its final granule size. Any treatments to enhance performance characteristics of the SAP are usually accomplished after the final granule size is created.
The suspension process is practiced by only a few companies because it requires a higher degree of production control and product engineering during the polymerization step. This process suspends the water-based reactant in a hydrocarbon -based solvent. The net result is that the suspension polymerization creates the primary polymer particle in the reactor rather than mechanically in post-reaction stages. Performance enhancements can also be made during, or just after, the reaction stage.
On 13 April 2010, Cathay Pacific flight 780 from Surabaya to Hong Kong encountered a dual engine stall whilst descending into Hong Kong International Airport ; the aircraft landed safely with no fatalities. The investigation concluded that superabsorbent polymer (SAP) spheres, a component of a fuel filter monitor installed in a fueling dispenser at Juanda International Airport , caused the main metering valves in the fuel metering unit to seize. It was discovered that salt water had contaminated the fuel supply at Juanda International Airport, which led to damage of the filter monitors and release of SAP spheres into the aircraft's fuel, eventually entering the main fuel supply lines. [ 14 ] | https://en.wikipedia.org/wiki/Superabsorbent_polymer |
A superalloy , or high-performance alloy , is an alloy with the ability to operate at a high fraction of its melting point. [ 1 ] Key characteristics of a superalloy include mechanical strength , thermal creep deformation resistance, surface stability, and corrosion and oxidation resistance.
The crystal structure is typically face-centered cubic (FCC) austenitic . Examples of such alloys are Hastelloy , Inconel , Waspaloy , Rene alloys , Incoloy , MP98T, TMS alloys, and CMSX single crystal alloys.
Superalloy development relies on chemical and process innovations. Superalloys develop high temperature strength through solid solution strengthening and precipitation strengthening from secondary phase precipitates such as gamma prime and carbides . Oxidation or corrosion resistance is provided by elements such as aluminium and chromium . Superalloys are often cast as a single crystal in order to eliminate grain boundaries , trading in strength at low temperatures for increased resistance to thermal creep.
The primary application for such alloys is in aerospace and marine turbine engines . Creep is typically the lifetime-limiting factor in gas turbine blades. [ 2 ]
Superalloys have made much of very-high-temperature engineering technology possible. [ 1 ]
Because these alloys are intended for high temperature applications their creep and oxidation resistance are of primary importance. Nickel (Ni)-based superalloys are the material of choice for these applications because of their unique γ' precipitates. [ 1 ] [ 3 ] [ page needed ] The properties of these superalloys can be tailored to a certain extent through the addition of various other elements, common or exotic, including not only metals , but also metalloids and nonmetals ; chromium , iron , cobalt , molybdenum , tungsten , tantalum , aluminium , titanium , zirconium , niobium , rhenium , yttrium , vanadium , carbon , boron or hafnium are some examples of the alloying additions used. Each addition serves a particular purpose in optimizing properties.
Creep resistance is dependent, in part, on slowing the speed of dislocation motion within a crystal structure. In modern Ni-based superalloys, the γ'-Ni 3 (Al,Ti) phase acts as a barrier to dislocation motion. For this reason, this γ' intermetallic phase, when present in high volume fractions, increases the strength of these alloys due to its ordered nature and high coherency with the γ matrix. The chemical additions of aluminum and titanium promote the creation of the γ' phase. The γ' phase size can be precisely controlled by careful precipitation strengthening heat treatments. Many superalloys are produced using a two-phase heat treatment that creates a dispersion of cuboidal γ' particles known as the primary phase, with a fine dispersion between these known as secondary γ'. In order to improve the oxidation resistance of these alloys, Al , Cr , B , and Y are added. The Al and Cr form oxide layers that passivate the surface and protect the superalloy from further oxidation while B and Y are used to improve the adhesion of this oxide scale to the substrate. [ 4 ] Cr , Fe , Co , Mo and Re all preferentially partition to the γ matrix while Al , Ti , Nb , Ta , and V preferentially partition to the γ' precipitates and solid solution strengthen the matrix and precipitates respectively. In addition to solid solution strengthening, if grain boundaries are present, certain elements are chosen for grain boundary strengthening. B and Zr tend to segregate to the grain boundaries which reduces the grain boundary energy and results in better grain boundary cohesion and ductility. [ 5 ] Another form of grain boundary strengthening is achieved through the addition of C and a carbide former, such as Cr , Mo , W , Nb , Ta , Ti , or Hf , which drives precipitation of carbides at grain boundaries and thereby reduces grain boundary sliding.
Adding elements is usually helpful because of solid solution strengthening, but can result in unwanted precipitation. Precipitates can be classified as geometrically close-packed (GCP), topologically close-packed (TCP) , or carbides. GCP phases usually benefit mechanical properties, but TCP phases are often deleterious. Because TCP phases are not truly close packed, they have few slip systems and are brittle. Also they "scavenge" elements from GCP phases. Many elements that are good for forming γ' or have great solid solution strengthening may precipitate TCPs. The proper balance promotes GCPs while avoiding TCPs.
TCP phase formation areas are weak because they are brittle, with few available slip systems, and because their formation depletes the surrounding matrix of strengthening elements. [ 8 ] [ 9 ]
The main GCP phase is γ'. Almost all superalloys are Ni-based because of this phase. γ' has the ordered L1 2 structure (pronounced L-one-two), which means it has a certain atom on the faces of the unit cell and a certain atom on the corners of the unit cell. Ni-based superalloys usually present Ni on the faces and Ti or Al on the corners.
Another "good" GCP phase is γ''. It is also coherent with γ, but it dissolves at high temperatures.
The United States became interested in gas turbine development around 1905. [ 1 ] From 1910-1915, austenitic ( γ phase) stainless steels were developed to survive high temperatures in gas turbines. By 1929, 80Ni-20Cr alloy was the norm, with small additions of Ti and Al. Although early metallurgists did not know it yet, they were forming small γ' precipitates in Ni-based superalloys. These alloys quickly surpassed Fe- and Co-based superalloys, which were strengthened by carbides and solid solution strengthening.
Although Cr was great for protecting the alloys from oxidation and corrosion up to 700 °C, metallurgists began decreasing Cr in favor of Al, which had oxidation resistance at much higher temperatures. The lack of Cr caused issues with hot corrosion, so coatings needed to be developed.
Around 1950, vacuum melting became commercialized, which allowed metallurgists to create higher purity alloys with more precise composition.
In the 1960s and 1970s, metallurgists changed focus from alloy chemistry to alloy processing. Directional solidification was developed to allow columnar or even single-crystal turbine blades. Oxide dispersion strengthening could produce very fine grains and superplasticity .
Co-based superalloys depend on carbide precipitation and solid solution strengthening for mechanical properties. While these strengthening mechanisms are inferior to gamma prime (γ') precipitation strengthening, [ 1 ] cobalt has a higher melting point than nickel and superior hot corrosion and thermal fatigue resistance. As a result, carbide-strengthened Co-based superalloys are used in lower stress, higher temperature applications such as stationary vanes in gas turbines. [ 14 ]
Co's γ/γ' microstructure was rediscovered and published in 2006 by Sato et al. [ 15 ] That γ' phase was Co 3 (Al, W). Mo, Ti, Nb, V, and Ta partition to the γ' phase, while Fe, Mn, and Cr partition to the matrix γ.
The next family of Co-based superalloys was discovered in 2015 by Makineni et al. This family has a similar γ/γ' microstructure, but is W-free and has a γ' phase of Co 3 (Al,Mo,Nb). [ 16 ] Since W is heavy, its elimination makes Co-based alloys increasingly viable in turbines for aircraft, where low density is especially valued.
The most recently discovered family of superalloys was computationally predicted by Nyshadham et al. in 2017, [ 17 ] and demonstrated by Reyes Tirado et al. in 2018. [ 18 ] This γ' phase is W-free and has the compositions Co 3 (Nb,V) and Co 3 (Ta,V).
Steel superalloys are of interest because some present creep and oxidation resistance similar to Ni-based superalloys, at far less cost.
Gamma (γ): Fe-based alloys feature a matrix phase of austenite iron (FCC). Alloying elements include: Al, B, C, Co, Cr, Mo, Ni, Nb, Si, Ti, W, and Y. [ 22 ] Al (oxidation benefits) must be kept at low weight fractions (wt.%) because Al stabilizes a ferritic (BCC) primary phase matrix, which is undesirable, as it is inferior to the high temperature strength exhibited by an austenitic (FCC) primary phase matrix. [ 23 ]
Gamma-prime (γ'): This phase is introduced as precipitates to strengthen the alloy. γ'-Ni3Al precipitates can be introduced with the proper balance of Al, Ni, Nb, and Ti additions.
The two major types of austenitic stainless steels are characterized by the oxide layer that forms on the steel surface: either chromia-forming or alumina-forming. Cr-forming stainless steel is the most common type. However, Cr-forming steels do not exhibit high creep resistance at high temperatures, especially in environments with water vapor. Exposure to water vapor at high temperatures can increase internal oxidation in Cr-forming alloys and rapid formation of volatile Cr (oxy)hydroxides, both of which can reduce durability and lifetime. [ 23 ]
Al-forming austenitic stainless steels feature a single-phase matrix of austenite iron (FCC) with an Al-oxide at the surface of the steel. The Al-oxide is more thermodynamically stable than the Cr-oxide. More commonly, however, precipitate phases are introduced to increase strength and creep resistance. In Al-forming steels, NiAl precipitates are introduced to act as Al reservoirs to maintain the protective alumina layer. In addition, Nb and Cr additions help form and stabilize the alumina layer by increasing precipitate volume fractions of NiAl. [ 23 ]
At least 5 grades of alumina-forming austenitic (AFA) alloys, with different operating temperatures at oxidation in air + 10% water vapor have been realized: [ 24 ]
Operating temperatures with oxidation in air and no water vapor are expected to be higher. In addition, an AFA superalloy grade exhibits creep strength approaching that of nickel alloy UNS N06617.
In pure Ni 3 Al phase Al atoms are placed at the vertices of the cubic cell and form sublattice A. Ni atoms are located at centers of the faces and form sublattice B. The phase is not strictly stoichiometric . An excess of vacancies in one of the sublattices may exist, which leads to deviations from stoichiometry. Sublattices A and B of the γ' phase can solute a considerable proportion of other elements. The alloying elements are dissolved in the γ phase. The γ' phase hardens the alloy through the yield strength anomaly . Dislocations dissociate in the γ' phase, leading to the formation of an anti-phase boundary .
To give an example, consider a dislocation with a burgers vector of a 2 [ 1 1 ¯ 0 ] {\displaystyle {\frac {a}{2}}\left[1{\bar {1}}0\right]} traveling along a { 111 } {\displaystyle \left\{111\right\}} slip plane initially in the γ phase, where it is a perfect dislocation in that FCC structure. Since the γ' phase is primitive cubic instead of FCC due to the substitution of aluminum into the vertices of the unit cell, the perfect burgers vector along that direction in γ' is twice that of γ. For the a 2 [ 1 1 ¯ 0 ] {\displaystyle {\frac {a}{2}}\left[1{\bar {1}}0\right]} dislocation to enter the γ' phase, it will have to create a high energy anti-phase boundary , which will need another such dislocation along the plane to restore order (as the sum of the two dislocations would have the perfect a [ 1 1 ¯ 0 ] {\displaystyle a\left[1{\bar {1}}0\right]} burgers vector). [ 25 ]
It is thus energetically prohibitive for the dislocation to enter the γ' phase unless there are two of them in close proximity along the same plane. [ 26 ] However, the Peach-Koehler force between identical dislocations along the same plane is repulsive, [ 27 ] which makes this a less favorable configuration. One possible mechanism involves one of the dislocations being pinned against the γ' phase while the other dislocation in the γ phase cross-slips into close proximity of the pinned dislocation from another plane, allowing the pair of dislocations to push into the γ' phase. [ 28 ] [ 29 ]
Furthermore, the burgers vector a 2 ⟨ 110 ⟩ {\displaystyle {\frac {a}{2}}\left\langle 110\right\rangle } family of dislocations are likely to decompose into partial dislocations in this alloy due to its low stacking fault energy , such as dislocations with burgers vector of the a 6 ⟨ 211 ⟩ {\displaystyle {\frac {a}{6}}\left\langle 211\right\rangle } family ( Shockley partial dislocations ). [ 25 ] [ 29 ] The stacking faults between these partial dislocations can further provide another obstacle to the movement of other dislocations, further contributing to the strength of the material. There are also more slip systems that can be involved beyond the { 111 } {\displaystyle \left\{111\right\}} slip plane and ⟨ 110 ⟩ {\displaystyle \left\langle 110\right\rangle } slip direction. [ 30 ]
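To make the bookkeeping explicit, the pairing and dissociation reactions on the {111} plane can be written out as follows (a standard textbook decomposition for FCC/L1 2 crystals, not specific to any particular alloy):

$$\underbrace{{\tfrac {a}{2}}[1{\bar {1}}0]}_{\text{leading superpartial}}\;+\;\text{APB}\;+\;\underbrace{{\tfrac {a}{2}}[1{\bar {1}}0]}_{\text{trailing superpartial}}\;\longrightarrow \;a[1{\bar {1}}0]\quad {\text{(perfect dislocation in }}\gamma '{\text{)}},$$

$${\tfrac {a}{2}}[1{\bar {1}}0]\;\longrightarrow \;{\tfrac {a}{6}}[2{\bar {1}}{\bar {1}}]+{\tfrac {a}{6}}[1{\bar {2}}1]\quad {\text{(Shockley partials bounding a stacking fault on (111))}}.$$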
At elevated temperature, the free energy associated with the anti-phase boundary (APB) is considerably reduced if it lies on a particular plane, which by coincidence is not a permitted slip plane. One set of partial dislocations bounding the APB cross-slips so that the APB lies on the low-energy plane, and, since this low-energy plane is not a permitted slip plane, the dissociated dislocation is effectively locked. By this mechanism, the yield strength of γ' phase Ni 3 Al increases with temperature up to about 1000 °C.
Initial material selection for blade applications in gas turbine engines included alloys like the Nimonic series alloys in the 1940s. [ 3 ] [ page needed ] The early Nimonic series incorporated γ' Ni 3 (Al,Ti) precipitates in a γ matrix, as well as various metal-carbon carbides (e.g. Cr 23 C 6 ) at the grain boundaries [ 31 ] for additional grain boundary strength. Turbine blade components were forged until vacuum induction casting technologies were introduced in the 1950s. [ 3 ] [ page needed ] This process significantly improved cleanliness, reduced defects, and increased the strength and temperature capability.
Modern superalloys were developed in the 1980s. First generation superalloys incorporated increased Al, Ti, Ta, and Nb content in order to increase the γ' volume fraction. Examples include: PWA1480, René N4 and SRR99. Additionally, the volume fraction of the γ' precipitates increased to about 50–70% with the advent of monocrystal solidification techniques that enable grain boundaries to be entirely eliminated. Because the material contains no grain boundaries, carbides are unnecessary as grain boundary strengtheners and were thus eliminated. [ 3 ] [ page needed ]
Second and third generation superalloys introduce about 3 and 6 weight percent rhenium , for increased temperature capability. Re is a slow diffuser and typically partitions to the γ matrix, decreasing the rate of diffusion (and thereby high temperature creep ) and improving high temperature performance, increasing service temperatures by 30 °C and 60 °C in second and third generation superalloys, respectively. [ 32 ] Re promotes the formation of rafts of the γ' phase (as opposed to cuboidal precipitates). The presence of rafts can decrease creep rate in the power-law regime (controlled by dislocation climb), but can also potentially increase the creep rate if the dominant mechanism is particle shearing. Re tends to promote the formation of brittle TCP phases, which has led to the strategy of reducing Co, W, Mo, and particularly Cr. Later generations of Ni-based superalloys significantly reduced Cr content for this reason, however with the reduction in Cr comes a reduction in oxidation resistance . Advanced coating techniques offset the loss of oxidation resistance accompanying the decreased Cr contents. [ 13 ] [ 33 ] Examples of second generation superalloys include PWA1484, CMSX-4 and René N5.
Third generation alloys include CMSX-10, and René N6. Fourth, fifth, and sixth generation superalloys incorporate ruthenium additions, making them more expensive than prior Re-containing alloys. The effect of Ru on the promotion of TCP phases is not well-determined. Early reports claimed that Ru decreased the supersaturation of Re in the matrix and thereby diminished the susceptibility to TCP phase formation. [ 34 ] Later studies noted an opposite effect. Chen, et al., found that in two alloys differing significantly only in Ru content (USTB-F3 and USTB-F6) that the addition of Ru increased both the partitioning ratio as well as supersaturation in the γ matrix of Cr and Re, and thereby promoted the formation of TCP phases. [ 35 ]
The current trend is to avoid very expensive and very heavy elements. An example is Eglin steel , a budget material with compromised temperature range and chemical resistance. It does not contain rhenium or ruthenium and its nickel content is limited. To reduce fabrication costs, it was chemically designed to melt in a ladle (though with improved properties in a vacuum crucible). Conventional welding and casting is possible before heat-treatment. The original purpose was to produce high-performance, inexpensive bomb casings, but the material has proven widely applicable to structural applications, including armor.
Single-crystal superalloys (SX or SC superalloys) are formed as a single crystal using a modified version of the directional solidification technique, leaving no grain boundaries . The mechanical properties of most other alloys depend on the presence of grain boundaries, but at high temperatures grain boundaries participate in creep, so other strengthening mechanisms are required. In many such alloys, islands of an ordered intermetallic phase sit in a matrix of disordered phase, all with the same crystal lattice . This approximates the dislocation -pinning behavior of grain boundaries, without introducing any amorphous solid into the structure.
Single crystal (SX) superalloys have wide application in the high-pressure turbine section of aero- and industrial gas turbine engines due to the unique combination of properties and performance. Since introduction of single crystal casting technology, SX alloy development has focused on increased temperature capability, and major improvements in alloy performance are associated with rhenium (Re) and ruthenium (Ru). [ 36 ]
The creep deformation behavior of superalloy single crystals is strongly temperature-, stress-, orientation- and alloy-dependent. For a single-crystal superalloy, three modes of creep deformation occur under regimes of different temperature and stress: rafting, tertiary, and primary. [ 37 ] [ page needed ] At low temperature (~750 °C), SX alloys exhibit mostly primary creep behavior. Matan et al. concluded that the extent of primary creep deformation depends strongly on the angle between the tensile axis and the <001>/<011> symmetry boundary. [ 38 ] At temperatures above 850 °C, tertiary creep dominates and promotes strain softening behavior. [ 3 ] [ page needed ] When temperature exceeds 1000 °C, the rafting effect is prevalent, where cubic particles transform into flat shapes under tensile stress. [ 39 ] The rafts form perpendicular to the tensile axis, since the γ phase is transported out of the vertical channels and into the horizontal ones. Reed et al. studied uniaxial creep deformation of <001> oriented CMSX-4 single crystal superalloy at 1105 °C and 100 MPa. They reported that rafting is beneficial to creep life since it delays evolution of creep strain. In addition, rafting occurs quickly and suppresses the accumulation of creep strain until a critical strain is reached. [ 40 ]
For superalloys operating at high temperatures and exposed to corrosive environments, oxidation behavior is a concern. Oxidation involves chemical reactions of the alloying elements with oxygen to form new oxide phases, generally at the alloy surface. If unmitigated, oxidation can degrade the alloy over time in a variety of ways, including: [ 41 ] [ 42 ]
Selective oxidation is the primary strategy used to limit these deleterious processes. The ratio of alloying elements promotes formation of a specific oxide phase that then acts as a barrier to further oxidation. Most commonly, aluminum and chromium are used in this role, because they form relatively thin and continuous oxide layers of alumina (Al 2 O 3 ) and chromia (Cr 2 O 3 ), respectively. They offer low oxygen diffusivities , effectively halting further oxidation beneath this layer. In the ideal case, oxidation proceeds through two stages. First, transient oxidation involves the conversion of various elements, especially the majority elements (e.g. nickel or cobalt). Transient oxidation proceeds until the selective oxidation of the sacrificial element forms a complete barrier layer. [ 41 ]
The protective effect of selective oxidation can be undermined. The continuity of the oxide layer can be compromised by mechanical disruption due to stress or may be disrupted as a result of oxidation kinetics (e.g. if oxygen diffuses too quickly). If the layer is not continuous, its effectiveness as a diffusion barrier to oxygen is compromised. The stability of the oxide layer is strongly influenced by the presence of other minority elements. For example, the addition of boron , silicon , and yttrium to superalloys promotes oxide layer adhesion , reducing spalling and maintaining continuity. [ 43 ]
Oxidation is the most basic form of chemical degradation superalloys may experience. More complex corrosion processes are common when operating environments include salts and sulfur compounds, or under chemical conditions that change dramatically over time. These issues are also often addressed through comparable coatings.
One of the main strengths of superalloys is their superior creep resistance compared to most conventional alloys. γ'-strengthened superalloys have the benefit of requiring dislocations to move in pairs, because cutting the ordered γ' phase creates a high-energy antiphase boundary (APB); a second dislocation must follow the first to remove the APB that the first leaves behind. [ 25 ] This significantly reduces the mobility of dislocations in the material, which inhibits dislocation-mediated creep. These dislocation pairs (also called superdislocations [ 44 ] ) are described as either weakly or strongly coupled, with the spacing between the two dislocations relative to the particle diameter being the determining factor. A weakly coupled pair has a spacing that is large compared to the particle diameter, while a strongly coupled pair has a spacing comparable to the particle diameter. In practice this is set by the size of the γ' particles: weak coupling occurs when the particles are relatively small, while strong coupling occurs when the particles are relatively large (such as when a superalloy has been aged for too long). Weakly coupled dislocations exhibit pinning and bowing of the dislocation line on the γ' particles. Strongly coupled dislocation behavior depends greatly on the dislocation line lengths, and the resistance benefits disappear once the particle size becomes large enough.
Additionally, superalloys exhibit comparatively superior high-temperature creep resistance due to thermally activated cross-slip of dislocations. [ 25 ] When one of the dislocations in a pair cross-slips onto another plane, the pair becomes pinned, since the two dislocations can no longer move together. This pinning further limits dislocation-mediated creep and improves the creep resistance of the material.
Increasing the lattice misfit between γ and γ' has also been shown to be beneficial for creep resistance, [ 45 ] primarily because a high lattice misfit between the two phases presents a higher barrier to dislocation motion than a low misfit.
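As an illustrative aside (not taken from the cited source), the lattice misfit referred to above is commonly defined as δ = 2(aγ′ − aγ)/(aγ′ + aγ); the short Python sketch below evaluates this definition for assumed, hypothetical lattice parameters.

```python
# Sketch of the commonly used definition of gamma/gamma-prime lattice misfit:
# delta = 2 * (a_gamma_prime - a_gamma) / (a_gamma_prime + a_gamma).
# The lattice parameters below are assumed, illustrative values only.

def lattice_misfit(a_gamma: float, a_gamma_prime: float) -> float:
    """Unconstrained lattice misfit between the gamma matrix and gamma-prime phase."""
    return 2.0 * (a_gamma_prime - a_gamma) / (a_gamma_prime + a_gamma)

a_matrix = 3.570       # assumed gamma lattice parameter, angstroms
a_precipitate = 3.585  # assumed gamma-prime lattice parameter, angstroms
print(f"misfit = {lattice_misfit(a_matrix, a_precipitate) * 100:.2f} %")  # ~0.42 %
```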
For Ni-based single-crystal superalloys, upwards of ten different alloying additions improve creep resistance and overall mechanical properties. [ 46 ] Alloying elements include Cr, Co, Al, Mo, W, Ti, Ta, Re, and Ru. Elements such as Co, Re, and Ru improve creep resistance by reducing the stacking fault energy and thereby facilitating the formation of stacking faults; a higher density of stacking faults inhibits dislocation motion. Other elements (Al, Ti, Ta) partition favorably into the γ' phase and improve its nucleation.
Diffusion is also a creep mechanism, and there are a few ways to limit diffusional creep. One primary way that superalloys limit diffusional creep is by manipulating the grain structure to reduce grain boundaries, which tend to be pathways for easy diffusion. [ 47 ] Typically this is done by manufacturing the superalloys as single crystals oriented parallel to the direction of the applied force.
Superalloys were originally iron-based and cold wrought prior to the 1940s when investment casting of cobalt base alloys significantly raised operating temperatures. The 1950s development of vacuum melting allowed for fine control of the chemical composition of superalloys and reduction in contamination and in turn led to a revolution in processing techniques such as directional solidification of alloys and single crystal superalloys. [ 48 ] [ page needed ]
Processing methods vary widely depending on the required properties of each item.
Casting and forging are traditional metallurgical processing techniques that can be used to generate both polycrystalline and monocrystalline products. Polycrystalline casts offer higher fracture resistance, while monocrystalline casts offer higher creep resistance.
Jet turbine engines employ both crystalline component types to take advantage of their individual strengths. The disks of the high-pressure turbine, which are near the central hub of the engine, are polycrystalline. The turbine blades, which extend radially into the engine housing, experience a much greater centripetal force, necessitating creep resistance; they are therefore typically monocrystalline, or polycrystalline with a preferred crystal orientation.
Investment casting is a metallurgical processing technique in which a wax form is fabricated and used as a template for a ceramic mold. A ceramic mold is poured around the wax form and solidifies, the wax form is melted out of the ceramic mold, and molten metal is poured into the void left by the wax. This leads to a metal form in the same shape as the original wax form. Investment casting leads to a polycrystalline final product, as nucleation and growth of crystal grains occurs at numerous locations throughout the solid matrix. Generally, the polycrystalline product has no preferred grain orientation.
Directional solidification uses a thermal gradient to promote nucleation of metal grains on a low temperature surface, as well as to promote their growth along the temperature gradient. This leads to grains elongated along the temperature gradient, and significantly greater creep resistance parallel to the long grain direction. In polycrystalline turbine blades, directional solidification is used to orient the grains parallel to the centripetal force. It is also known as dendritic solidification.
Single crystal growth starts with a seed crystal that is used to template growth of a larger crystal. The overall process is lengthy, and machining is necessary after the single crystal is grown.
Powder metallurgy is a class of modern processing techniques in which metals are first powdered, and then formed into the desired shape by heating below the melting point. This is in contrast to casting, which occurs with molten metal. Superalloy manufacturing often employs powder metallurgy because of its material efficiency (typically much less waste metal must be machined away from the final product) and its ability to facilitate mechanical alloying. Mechanical alloying is a process by which reinforcing particles are incorporated into the superalloy matrix material by repeated fracture and welding. [ 49 ] [ failed verification ]
Sintering and hot isostatic pressing are processing techniques used to densify materials from a loosely packed " green body " into a solid object with physically merged grains. Sintering occurs below the melting point and causes adjacent particles to merge at their boundaries, creating a strong bond between them. In hot isostatic pressing, a sintered material is placed in a pressure vessel and compressed from all directions (isostatically) in an inert atmosphere to effect densification. [ 50 ]
Selective laser melting (also known as powder bed fusion ) is an additive manufacturing procedure used to create intricately detailed forms from a CAD file. A shape is designed and then converted into slices. These slices are sent to a laser writer to print the final product. In brief, a bed of metal powder is prepared, and a slice is formed in the powder bed by a high energy laser sintering the particles together. The powder bed moves downwards, and a new batch of metal powder is rolled over the top. This layer is then sintered with the laser, and the process is repeated until all slices have been processed. [ 51 ] Additive manufacturing can leave pores behind. Many products undergo a heat treatment or hot isostatic pressing procedure to densify the product and reduce porosity. [ 52 ]
In modern gas turbines, the turbine entry temperature (~1750 K) exceeds the incipient melting temperature of superalloys (~1600 K); this is possible only with the help of surface engineering. [ 53 ] [ page needed ]
The three types of coatings are diffusion coatings, overlay coatings, and thermal barrier coatings. Diffusion coatings, mainly composed of aluminide or platinum-aluminide, are the most common. MCrAlX-based overlay coatings (M = Ni or Co, X = Y, Hf, Si) enhance resistance to corrosion and oxidation. Compared to diffusion coatings, overlay coatings are more expensive but less dependent on substrate composition; they must be applied by air or vacuum plasma spraying (APS/VPS) [ 54 ] [ page needed ] or electron beam physical vapour deposition (EB-PVD). [ 55 ] Thermal barrier coatings provide by far the best enhancement in working temperature and coating life. It is estimated that a modern TBC of thickness 300 μm, used in conjunction with a hollow component and cooling air, has the potential to lower metal surface temperatures by a few hundred degrees. [ 56 ]
Thermal barrier coatings (TBCs) are used extensively in gas turbine engines to increase component life and engine performance. [ 57 ] A coating of about 1–200 μm can reduce the temperature at the superalloy surface by up to 200 K. TBCs are a system of coatings consisting of a bond coat, a thermally grown oxide (TGO), and a thermally insulating ceramic top coat. In most applications, the bond coat is either a MCrAlY (where M = Ni or NiCo) or a Pt-modified aluminide coating. A dense bond coat is required to protect the superalloy substrate from oxidation and hot corrosion attack and to form an adherent, slow-growing surface TGO. The TGO is formed by oxidation of the aluminum contained in the bond coat. The current (first generation) thermal insulation layer is composed of 7 wt% yttria-stabilized zirconia (7YSZ) with a typical thickness of 100–300 μm. Yttria-stabilized zirconia is used due to its low thermal conductivity (2.6 W/m·K for fully dense material), relatively high coefficient of thermal expansion, and high temperature stability. The electron beam directed vapor deposition (EB-DVD) process used to apply the TBC to turbine airfoils produces a columnar microstructure with multiple porosity levels. Inter-column porosity is critical to providing strain tolerance (via a low in-plane modulus), as the coating would otherwise spall on thermal cycling due to thermal expansion mismatch with the superalloy substrate. This porosity also reduces the coating's thermal conductivity.
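As a rough, illustrative sketch of why a thin ceramic layer can produce a large temperature drop, the following Python snippet applies steady-state one-dimensional conduction (ΔT = q·t/k). The heat flux and the effective (porous) conductivity used here are assumed values chosen only to show the order of magnitude, not figures from the cited sources.

```python
# Rough 1-D steady-state conduction estimate of the temperature drop
# across a thermal barrier coating: dT = q * t / k.
# The heat flux q and the effective conductivity k are assumed,
# illustrative values, not measured data.

def tbc_temperature_drop(heat_flux_w_m2: float,
                         thickness_m: float,
                         conductivity_w_mk: float) -> float:
    """Return the temperature drop (K) across the coating."""
    return heat_flux_w_m2 * thickness_m / conductivity_w_mk

q = 1.0e6        # assumed heat flux through the airfoil wall, W/m^2
t = 300e-6       # coating thickness, m (upper end of the 100-300 um range)
k_porous = 1.5   # assumed effective conductivity of columnar 7YSZ, W/(m*K),
                 # lower than the 2.6 W/(m*K) quoted for fully dense material
print(f"Estimated drop: {tbc_temperature_drop(q, t, k_porous):.0f} K")  # ~200 K
```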
The bond coat adheres the thermal barrier to the substrate. Additionally, the bond coat provides oxidation protection and functions as a diffusion barrier against the motion of substrate atoms towards the environment. The five major types of bond coats are the aluminides , the platinum-aluminides, MCrAlY, cobalt- cermets , and nickel-chromium. For aluminide bond coatings, the coating's final composition and structure depend on the substrate composition. Aluminides lack ductility below 750 °C and exhibit limited thermomechanical fatigue strength. Pt-aluminides are similar to the aluminide bond coats except for a layer of Pt (5–10 μm) deposited on the blade. The Pt aids in oxide adhesion and contributes to hot corrosion resistance, increasing blade lifespan. MCrAlY does not strongly interact with the substrate. Normally applied by plasma spraying, MCrAlY coatings form secondary aluminum oxides: the coatings form an outer chromia layer and a secondary alumina layer underneath. These oxide formations occur at high temperatures in the range of those that superalloys usually encounter. [ 58 ] The chromia provides oxidation and hot-corrosion resistance. The alumina controls oxidation mechanisms by limiting oxide growth through self-passivation. The yttrium enhances oxide adherence to the substrate and limits the growth of grain boundaries (which can lead to coat flaking). [ 59 ] Addition of rhenium and tantalum increases oxidation resistance. Cobalt -cermet-based coatings consisting of materials such as tungsten carbide /cobalt can be used due to excellent resistance to abrasion, corrosion, erosion, and heat. [ 60 ] [ full citation needed ] These cermet coatings perform well in situations where temperature and oxidation damage are significant concerns, such as boilers. One of cobalt cermet's unique advantages is minimal loss of coating mass over time, due to the strength of the carbides. Overall, cermet coatings are useful in situations where mechanical demands are equal to chemical demands. Nickel-chromium coatings are used most frequently in boilers fed by fossil fuels , electric furnaces , and waste incineration furnaces, where the danger of oxidizing agents and corrosive compounds in the vapor must be addressed. [ 61 ] The specific method of spray-coating depends on the coating composition. Nickel-chromium coatings that also contain iron or aluminum provide better corrosion resistance when they are sprayed and laser glazed, while pure nickel-chromium coatings perform better when thermally sprayed exclusively. [ 62 ]
Several kinds of coating process are available: pack cementation, gas phase coating (both types of chemical vapor deposition (CVD)), thermal spraying , and physical vapor deposition. In most cases, after the coating process, near-surface regions of parts are enriched with aluminium in a matrix of nickel aluminide .
Pack cementation is a widely used CVD technique that consists of immersing the components to be coated in a mixture of metal powder and ammonium halide activators and sealing them in a retort . The entire apparatus is placed inside a furnace and heated in a protective atmosphere to a lower-than-normal temperature that still allows diffusion, because the chemical reaction of the halide salts causes a eutectic bond between the two metals. The surface alloy formed by thermally driven diffusion of ions has a metallurgical bond to the substrate and an intermetallic layer found in the gamma layer of the surface alloy.
The traditional pack consists of four components and is used at temperatures below 750 °C:
This process includes:
Pack cementation has reemerged when combined with other chemical processes to lower the temperatures of metal combinations and give intermetallic properties to different alloy combinations for surface treatments.
Thermal spraying involves heating a feedstock of precursor material and spraying it on a surface. Specific techniques depend on desired particle size, coat thickness, spray speed, desired area, etc. [ 63 ] [ full citation needed ] Thermal spraying relies on adhesion to the surface. As a result, the surface of the superalloy must be cleaned and prepared, and usually polished, before application. [ 64 ]
Plasma spraying offers versatility of usable coatings and high-temperature performance. [ 65 ] Plasma spraying can accommodate a wider range of materials than other techniques. As long as the difference between the melting and decomposition temperatures is greater than 300 K, plasma spraying is viable. [ 66 ] [ page needed ]
Gas phase coating is carried out at higher temperatures, about 1080 °C. The coating material is usually loaded onto trays without physical contact with the parts to be coated. The coating mixture contains active coating material and activator, but usually not thermal ballast. As in the pack cementation process, gaseous aluminium chloride (or fluoride) is transferred to the surface of the part. However, in this case the diffusion is outwards. This kind of coating also requires diffusion heat treatment.
Failure of thermal barrier coating usually manifests as delamination, which arises from the temperature gradient during thermal cycling between ambient temperature and working conditions coupled with the difference in thermal expansion coefficient of substrate and coating. It is rare for the coating to fail completely – some pieces remain intact, and significant scatter is observed in the time to failure if testing is repeated under identical conditions. [ 3 ] [ page needed ] Various degradation mechanisms affect thermal barrier coating, [ 67 ] [ 68 ] and some or all of these must operate before failure finally occurs:
Additionally, TBC life is sensitive to the combination of materials (substrate, bond coat, ceramic) and processes (EB-PVD, plasma spraying) used.
Nickel-based superalloys are used in load-bearing structures requiring the highest homologous temperature of any common alloy system (T/Tm = 0.9, i.e. 90% of their melting point). Among the most demanding applications for a structural material are those in the hot sections of turbine engines (e.g. turbine blade ). They comprise over 50% of the weight of advanced aircraft engines. The widespread use of superalloys in turbine engines, coupled with the fact that the thermodynamic efficiency of turbine engines is a function of increasing turbine inlet temperatures, has provided part of the motivation for increasing the maximum-use temperature of superalloys. From 1990 to 2020, turbine airfoil temperature capability increased on average by about 2.2 °C/year. Two major factors have made this increase possible: [ citation needed ]
About 60% of the temperature increase has come from advanced cooling, while 40% has resulted from material improvements. State-of-the-art turbine blade surface temperatures approach 1,150 °C. The most severe combinations of stress and temperature correspond to an average bulk metal temperature approaching 1,000 °C.
Although Ni-based superalloys retain significant strength to 980 °C, they tend to be susceptible to environmental attack because of the presence of reactive alloying elements. Surface attack includes oxidation, hot corrosion, and thermal fatigue. [ 10 ]
High temperature materials are valuable for energy conversion and energy production applications. Maximum energy conversion efficiency is desired in such applications, in accordance with the Carnot cycle . Because Carnot efficiency is limited by the temperature difference between the hot and cold reservoirs, higher operating temperatures increase energy conversion efficiency (illustrated in the sketch below). Operating temperatures are limited by the superalloys themselves, restricting applications to around 1000–1400 °C. Energy applications include: [ 81 ]
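As a minimal illustration of the Carnot argument above, the sketch below compares ideal efficiencies at the two ends of the superalloy-limited operating range; the sink temperature is an assumed value and the comparison is idealized, not a figure from the cited sources.

```python
# Illustrative Carnot-limit comparison: raising the hot-reservoir temperature
# raises the maximum possible conversion efficiency. Temperatures are assumed.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Ideal (Carnot) efficiency between hot and cold reservoirs, in kelvin."""
    return 1.0 - t_cold_k / t_hot_k

t_cold = 300.0                    # assumed ambient sink, K
for t_hot_c in (1000.0, 1400.0):  # superalloy-limited operating range, deg C
    t_hot = t_hot_c + 273.15
    print(f"T_hot = {t_hot_c:.0f} C -> Carnot limit = "
          f"{carnot_efficiency(t_hot, t_cold):.1%}")
# ~76% at 1000 C versus ~82% at 1400 C in this idealized comparison.
```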
Alumina-forming stainless steel is weldable and has potential for use in automotive applications, such as for high temperature exhaust piping and in heat capture and reuse.
Sandia National Laboratories is studying radiolysis for making superalloys. The process uses nanoparticle synthesis to create alloys and superalloys and holds promise as a universal method of nanoparticle formation. By developing an understanding of the basic material science , it might be possible to expand research into other aspects of superalloys. Radiolysis produces polycrystalline alloys, which suffer from an unacceptable level of creep.
Stainless steel alloys remain a research target because of lower production costs, as well as the need for an austenitic stainless steel with high-temperature corrosion resistance in environments with water vapor. Research focuses on increasing high-temperature tensile strength, toughness, and creep resistance to compete with Ni-based superalloys. [ 24 ]
Oak Ridge National Laboratory is researching austenitic alloys, achieving similar creep and corrosion resistance at 800 °C to that of other austenitic alloys, including Ni-based superalloys. [ 24 ]
Development of AFA superalloys with a 35 wt.% Ni base has shown potential for use at operating temperatures up to 1,100 °C. [ 24 ]
Researchers at Sandia Labs, Ames National Laboratory and Iowa State University reported a 3D-printed superalloy composed of 42% aluminum, 25% titanium, 13% niobium, 8% zirconium, 8% molybdenum and 4% tantalum. Most alloys are made chiefly of one primary element combined with small amounts of other elements. In contrast, multi-principal-element alloys (MPEAs) contain substantial amounts of three or more elements. [ 82 ]
Such alloys promise improvements in high-temperature performance, strength-to-weight ratio, fracture toughness, corrosion and radiation resistance, wear resistance, and other properties. They reported a hardness-to-density ratio of 1.8–2.6 GPa·cm³/g, which surpasses all known alloys, including intermetallic compounds, titanium aluminides, refractory MPEAs, and conventional Ni-based superalloys. This represents a 300% improvement over Inconel 718 (0.55 GPa·cm³/g), based on a measured peak hardness of 4.5 GPa and density of 8.2 g/cm³. [ 82 ]
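The hardness-to-density comparison above can be checked with simple arithmetic; the sketch below uses only the figures quoted in the text (the Inconel 718 hardness and density and the reported ratio range).

```python
# Reproducing the hardness-to-density comparison quoted above.
inconel_718_ratio = 4.5 / 8.2    # GPa*cm^3/g, from 4.5 GPa and 8.2 g/cm^3 (~0.55)
mpea_ratio_range = (1.8, 2.6)    # GPa*cm^3/g, reported for the printed alloy

low, high = (r / inconel_718_ratio for r in mpea_ratio_range)
print(f"Inconel 718 ratio: {inconel_718_ratio:.2f} GPa*cm^3/g")
print(f"Improvement factor: {low:.1f}x to {high:.1f}x")
# Roughly 3x to 5x, consistent with the ~300% improvement cited above.
```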
The material is stable at 800 °C, hotter than the 570+ °C found in typical coal-based power plants. [ 82 ]
The researchers acknowledged that the 3D printing process produces microscopic cracks when forming large parts, and that the feedstock includes metals that limit applicability in cost-sensitive applications. [ 82 ] | https://en.wikipedia.org/wiki/Superalloy |
An immediate inference is an inference which can be made from only one statement or proposition . [ 1 ] For instance, from the statement "All toads are green", the immediate inference can be made that "no toads are not green" or "no toads are non-green" (Obverse). There are a number of immediate inferences which can validly be made using logical operations. There are also invalid immediate inferences which are syllogistic fallacies .
Cases of the incorrect application of the contrary, subcontrary and subalternation relations (these hold in the traditional square of opposition , not the modern square of opposition) are syllogistic fallacies called illicit contrary , illicit subcontrary , and illicit subalternation , respectively. Cases of incorrect application of the contradictory relation (this relation holds in both the traditional and modern squares of opposition) are so infrequent that an "illicit contradictory" fallacy is usually not recognized. Examples of these cases are shown below. | https://en.wikipedia.org/wiki/Superaltern
Superantigens ( SAgs ) are a class of antigens that result in excessive activation of the immune system . Specifically, they cause non-specific activation of T-cells , resulting in polyclonal T cell activation and massive cytokine release. Superantigens act by binding to the MHC proteins on antigen-presenting cells (APCs) and to the TCRs on their adjacent helper T-cells, bringing the signaling molecules together and thus leading to the activation of the T-cells, regardless of the peptide displayed on the MHC molecule. [ 1 ] SAgs are produced by some pathogenic viruses and bacteria, most likely as a defense mechanism against the immune system. [ 2 ] Compared to a normal antigen -induced T-cell response, where 0.0001–0.001% of the body's T-cells are activated, these SAgs are capable of activating up to 20% of the body's T-cells. [ 3 ] Furthermore, anti- CD3 and anti- CD28 antibodies ( CD28-SuperMAB ) have also been shown to be highly potent superantigens (and can activate up to 100% of T cells).
The large number of activated T-cells generates a massive immune response which is not specific to any particular epitope on the SAg thus undermining one of the fundamental strengths of the adaptive immune system , that is, its ability to target antigens with high specificity. More importantly, the large number of activated T-cells secrete large amounts of cytokines , the most important of which is Interferon gamma . This excess amount of IFN-gamma in turn activates the macrophages . The activated macrophages, in turn, over-produce proinflammatory cytokines such as IL-1 , IL-6 and TNF-alpha . TNF-alpha is particularly important as a part of the body's inflammatory response. In normal circumstances it is released locally in low levels and helps the immune system defeat pathogens. However, when it is systemically released in the blood and in high levels (due to mass T-cell activation resulting from the SAg binding), it can cause severe and life-threatening symptoms, including septic shock and multiple organ failure .
SAgs are produced intracellularly by bacteria and are released upon infection as extracellular mature toxins. [ 4 ]
The sequences of these bacterial toxins are relatively conserved among the different subgroups. More important than sequence homology, the 3D structure is very similar among different SAgs resulting in similar functional effects among different groups. [ 5 ] [ 6 ] There are at least 5 groups of superantigens with different binding preferences. [ 7 ]
Crystal structures of the enterotoxins reveals that they are compact, ellipsoidal proteins sharing a characteristic two- domain folding pattern comprising an NH2-terminal β barrel globular domain known as the oligosaccharide / oligonucleotide fold, a long α-helix that diagonally spans the center of the molecule, and a COOH-terminal globular domain. [ 5 ]
The domains have binding regions for the major histocompatibility complex class II ( MHC class II ) and the T-cell receptor (TCR), respectively. By bridging these two together, the SAg causes nonspecific activation. [ 8 ]
Superantigens bind first to the MHC class II and then coordinate to the variable alpha- or beta chain of T-cell Receptors (TCR) [ 6 ] [ 9 ] [ 10 ]
SAgs show preference for the HLA-DQ form of the molecule. [ 10 ] Binding to the α-chain puts the SAg in the appropriate position to coordinate to the TCR.
Less commonly, SAgs attach to the polymorphic MHC class II β-chain in an interaction mediated by a zinc ion coordination complex between three SAg residues and a highly conserved region of the HLA-DR β chain. [ 6 ] The use of a zinc ion in binding leads to a higher affinity interaction. [ 5 ] Several staphylococcal SAgs are capable of cross-linking MHC molecules by binding to both the α and β chains. [ 5 ] [ 6 ] This mechanism stimulates cytokine expression and release in antigen presenting cells as well as inducing the production of costimulatory molecules that allow the cell to bind to and activate T cells more effectively. [ 6 ]
The T-cell binding region of the SAg interacts with the variable region on the beta chain (Vβ region) of the T-cell receptor . A given SAg can activate a large proportion of the T-cell population because the human T-cell repertoire comprises only about 50 types of Vβ elements and some SAgs are capable of binding to multiple types of Vβ regions. This interaction varies slightly among the different groups of SAgs. [ 8 ] Variability among different people in the types of T-cell regions that are prevalent explains why some people respond more strongly to certain SAgs. Group I SAgs contact the Vβ at the CDR2 and framework region of the molecule. [ 11 ] [ 12 ] SAgs of Group II interact with the Vβ region using mechanisms that are conformation -dependent. These interactions are for the most part independent of specific Vβ amino acid side-chains. Group IV SAgs have been shown to engage all three CDR loops of certain Vβ forms. [ 11 ] [ 12 ] The interaction takes place in a cleft between the small and large domains of the SAg and allows the SAg to act as a wedge between the TCR and MHC. This displaces the antigenic peptide away from the TCR and circumvents the normal mechanism for T-cell activation. [ 6 ] [ 13 ]
The biological strength of the SAg (its ability to stimulate) is determined by its affinity for the TCR. SAgs with the highest affinity for the TCR elicit the strongest response. [ 14 ] SPMEZ-2 is the most potent SAg discovered to date. [ 14 ]
The SAg cross-links the MHC and the TCR inducing a signaling pathway that results in the proliferation of the cell and production of cytokines. This occurs because a cognate antigen activates a T cell not because of its structure per se , but because its affinity allows it to bind the TCR for a lengthy enough time period, and the SAg mimics this temporal bonding. Low levels of Zap-70 have been found in T-cells activated by SAgs, indicating that the normal signaling pathway of T-cell activation is impaired. [ 15 ]
It is hypothesized that Fyn rather than Lck is activated by a tyrosine kinase , leading to the adaptive induction of anergy. [ 16 ]
Both the protein kinase C pathway and the protein tyrosine kinase pathways are activated, resulting in upregulating production of proinflammatory cytokines. [ 17 ]
This alternative signaling pathway impairs the calcium/calcineurin and Ras/MAPkinase pathways slightly, [ 16 ] but allows for a focused inflammatory response.
SAg stimulation of antigen presenting cells and T-cells elicits a response that is mainly inflammatory, focused on the action of Th1 T-helper cells. Some of the major products are IL-1 , IL-2 , IL-6 , TNF-α , gamma interferon (IFN-γ), macrophage inflammatory protein 1α (MIP-1α), MIP-1β, and monocyte chemoattractant protein 1 ( MCP-1 ). [ 17 ]
This excessive uncoordinated release of cytokines (especially TNF-α) overloads the body and results in rashes and fever, and can lead to multi-organ failure, coma and death. [ 10 ] [ 12 ]
Deletion or anergy of activated T-cells follows infection. This results from production of IL-4 and IL-10 from prolonged exposure to the toxin. The IL-4 and IL-10 downregulate production of IFN-gamma, MHC Class II, and costimulatory molecules on the surface of APCs. These effects produce memory cells that are unresponsive to antigen stimulation. [ 18 ] [ 19 ]
One mechanism by which this is possible involves cytokine-mediated suppression of T-cells. MHC crosslinking also activates a signaling pathway that suppresses hematopoiesis and upregulates Fas-mediated apoptosis . [ 20 ]
IFN-α is another product of prolonged SAg exposure. This cytokine is closely linked with induction of autoimmunity, [ 21 ] and the autoimmune disease Kawasaki disease is known to be caused by SAg infection. [ 14 ]
SAg activation in T-cells leads to production of CD40 ligand, which activates isotype switching in B cells to IgG , IgM , and IgE . [ 22 ]
To summarize, the T-cells are stimulated and produce excess amounts of cytokine resulting in cytokine-mediated suppression of T-cells and deletion of the activated cells as the body returns to homeostasis. The toxic effects of the microbe and SAg also damage tissue and organ systems, a condition known as toxic shock syndrome . [ 22 ]
If the initial inflammation is survived, the host cells become anergic or are deleted, resulting in a severely compromised immune system.
Apart from their mitogenic activity, SAgs are able to cause symptoms that are characteristic of infection. [ 2 ]
One such effect is vomiting . This effect is felt in cases of food poisoning , when SAg-producing bacteria release the toxin, which is highly resistant to heat. There is a distinct region of the molecule that is active in inducing gastrointestinal toxicity. [ 2 ] This activity is also highly potent , and quantities as small as 20-35 μg of SAg are able to induce vomiting. [ 10 ]
SAgs are able to stimulate recruitment of neutrophils to the site of infection in a way that is independent of T-cell stimulation. This effect is due to the ability of SAgs to activate monocytic cells, stimulating the release of the cytokine TNF-α, leading to increased expression of adhesion molecules that recruit leukocytes to infected regions. This causes inflammation in the lungs, intestinal tissue, and any place that the bacteria have colonized . [ 23 ] While small amounts of inflammation are natural and helpful, excessive inflammation can lead to tissue destruction.
One of the more dangerous indirect effects of SAg infection concerns the ability of SAgs to augment the effects of endotoxins in the body. This is accomplished by reducing the threshold for endotoxicity. Schlievert demonstrated that, when administered conjunctively, the effects of SAg and endotoxin are magnified as much as 50,000 times. [ 9 ] This could be due to the reduced immune system efficiency induced by SAg infection. Aside from the synergistic relationship between endotoxin and SAg, the “double hit” effect of the activity of the endotoxin and the SAg results in effects more deleterious than those seen in a typical bacterial infection. This also implicates SAgs in the progression of sepsis in patients with bacterial infections. [ 22 ]
The primary goals of medical treatment are to hemodynamically stabilize the patient and, if present, to eliminate the microbe that is producing the SAgs. This is accomplished through the use of vasopressors , fluid resuscitation and antibiotics . [ 2 ]
The body naturally produces antibodies to some SAgs, and this effect can be augmented by stimulating B-cell production of these antibodies. [ 26 ]
Pooled immunoglobulins are able to neutralize specific SAgs and prevent T-cell activation. Synthetic antibodies and peptides have been created to mimic SAg-binding regions on the MHC class II, blocking the interaction and preventing T cell activation. [ 2 ]
Immunosuppressants are also employed to prevent T-cell activation and the release of cytokines. Corticosteroids are used to reduce inflammatory effects. [ 22 ]
SAg production effectively corrupts the immune response, allowing the microbe secreting the SAg to be carried and transmitted unchecked. One mechanism by which this is done is through inducing anergy of the T-cells to antigens and SAgs. [ 15 ] [ 18 ] Lussow and MacDonald demonstrated this by systematically exposing animals to a streptococcal antigen. They found that exposure to other antigens after SAg infection failed to elicit an immune response. [ 18 ] In another experiment, Watson and Lee discovered that memory T-cells created by normal antigen stimulation were anergic to SAg stimulation and that memory T-cells created after a SAg infection were anergic to all antigen stimulation. The mechanism by which this occurred was undetermined. [ 15 ] The genes that regulate SAg expression also regulate mechanisms of immune evasion such as M protein and Bacterial capsule expression, supporting the hypothesis that SAg production evolved primarily as a mechanism of immune evasion. [ 27 ]
When the structure of individual SAg domains is compared to that of other immunoglobulin-binding streptococcal proteins and of toxins produced by E. coli , it is found that the domains separately resemble members of these families. This homology suggests that the SAgs evolved through the recombination of two smaller β-strand motifs. [ 28 ]
"Staphylococcal Superantigen-Like" (SSL) toxins are a group of secreted proteins structurally similar to SAgs. Instead of binding to MHC and TCR, they target diverse components of innate immunity such as complement , Fc receptors , and myeloid cells . One way SSL targets myeloid cells is by binding the siallylactosamine glycan on surface glycoproteins. [ 29 ] In 2017, a superantigen was found to also have a glycan-binding ability. [ 30 ]
Minor lymphocyte stimulating (Mls; P03319 ) exotoxins were originally discovered in the thymic stromal cells of mice. These toxins are encoded by SAg genes that were incorporated into the mouse genome from the mouse mammary tumour virus (MMTV). The presence of these genes in the mouse genome allows the mouse to express the antigen in the thymus as a means of negatively selecting for lymphocytes with a variable Beta region that is susceptible to stimulation by the viral SAg. The result is that these mice are immune to infection by the virus later in life. [ 2 ]
Similar endogenous SAg-dependent selection has yet to be identified in the human genome, but endogenous SAgs have been discovered and are suspected of playing an integral role in viral infection. Infection by the Epstein–Barr virus , for example, is known to cause production of a SAg in infected cells, yet no gene for the toxin has been found on the genome of the virus. The virus manipulates the infected cell to express its own SAg genes, and this helps it to evade the host immune system. Similar results have been found with rabies , cytomegalovirus , and HIV . [ 2 ] In 2001, it was found that EBV actually transactivates a superantigen encoded by the env gene ( O42043 ) of HERV-K18 . In 2006, it was found that EBV does so by docking to CD2 . [ 31 ]
The two viral superantigens have no homology to aforementioned bacterial superantigens, nor are they homologous to each other.
Rasooly, R., Do, P. and Hernlem, B. (2011) Auto-presentation of Staphylococcal enterotoxin A by mouse CD4+ T cells. Open Journal of Immunology, 1, 8-14. | https://en.wikipedia.org/wiki/Superantigen |
In organic chemistry , aromaticity is a chemical property describing the way in which a conjugated ring of unsaturated bonds , lone pairs , or empty orbitals exhibits a stabilization stronger than would be expected from conjugation alone. The earliest use of the term was in an article by August Wilhelm Hofmann in 1855. [ 1 ] There is no general relationship between aromaticity as a chemical property and the olfactory properties of such compounds.
Aromaticity can also be considered a manifestation of cyclic delocalization and of resonance . [ 2 ] [ 3 ] [ 4 ] This is usually considered to be because electrons are free to cycle around circular arrangements of atoms that are alternately single- and double- bonded to one another. This commonly seen model of aromatic rings, namely the idea that benzene was formed from a six-membered carbon ring with alternating single and double bonds (cyclohexatriene), was developed by Kekulé (see History section below). Each bond may be seen as a hybrid of a single bond and a double bond, every bond in the ring identical to every other. The model for benzene consists of two resonance forms, which corresponds to the double and single bonds superimposing to give rise to six one-and-a-half bonds. Benzene is a more stable molecule than would be expected without accounting for charge delocalization.
As is standard for resonance diagrams , a double-headed arrow is used to indicate that the two structures are not distinct entities, but merely hypothetical possibilities. Neither is an accurate representation of the actual compound, which is best represented by a hybrid (average) of these structures, which can be seen at right. A C=C bond is shorter than a C−C bond, but benzene is perfectly hexagonal—all six carbon-carbon bonds have the same length , intermediate between that of a single and that of a double bond .
A better representation is that of the circular π bond (Armstrong's inner cycle ), in which the electron density is evenly distributed through a π-bond above and below the ring. This model more correctly represents the location of electron density within the aromatic ring.
The single bonds are formed with electrons in line between the carbon nuclei — these are called σ-bonds . Double bonds consist of a σ-bond and a π-bond. The π-bonds are formed from overlap of atomic p-orbitals above and below the plane of the ring. The following diagram shows the positions of these p-orbitals:
Since they are out of the plane of the atoms, these orbitals can interact with each other freely, and become delocalized. This means that, instead of being tied to one atom of carbon, each electron is shared by all six in the ring. Thus, there are not enough electrons to form double bonds on all the carbon atoms, but the "extra" electrons strengthen all of the bonds on the ring equally. The resulting molecular orbital has π symmetry.
The first known use of the word "aromatic" as a chemical term — namely, to apply to compounds that contain the phenyl radical — occurs in an article by August Wilhelm Hofmann in 1855. [ 1 ] If this is indeed the earliest introduction of the term, it is curious that Hofmann says nothing about why he introduced an adjective indicating olfactory character to apply to a group of chemical substances only some of which have notable aromas . Also, many of the most odoriferous organic substances known are terpenes , which are not aromatic in the chemical sense. But terpenes and benzenoid substances do have a chemical characteristic in common, namely higher unsaturation indices than many aliphatic compounds , and Hofmann may not have been making a distinction between the two categories.
In the 19th century, chemists found it puzzling that benzene could be so unreactive toward addition reactions, given its presumed high degree of unsaturation. The cyclohexatriene structure for benzene was first proposed by August Kekulé in 1865. Over the next few decades, most chemists readily accepted this structure, since it accounted for most of the known isomeric relationships of aromatic chemistry.
Between 1897 and 1906, J. J. Thomson , the discoverer of the electron, proposed three equivalent electrons between each carbon atom in benzene.
An explanation for the exceptional stability of benzene is conventionally attributed to Sir Robert Robinson , who was apparently the first (in 1925) [ 6 ] to coin the term aromatic sextet as a group of six electrons that resists disruption.
In fact, this concept can be traced further back, via Ernest Crocker in 1922, [ 7 ] to Henry Edward Armstrong , who in 1890 wrote "the (six) centric affinities act within a cycle ... benzene may be represented by a double ring ( sic ) ... and when an additive compound is formed, the inner cycle of affinity suffers disruption, the contiguous carbon-atoms to which nothing has been attached of necessity acquire the ethylenic condition". [ 8 ] [ verification needed ]
Here, Armstrong is describing at least four modern concepts. First, his "affinity" is better known nowadays as the electron , which was to be discovered only seven years later by J. J. Thomson. Second, he is describing electrophilic aromatic substitution , proceeding (third) through a Wheland intermediate , in which (fourth) the conjugation of the ring is broken. He introduced the symbol C centered on the ring as a shorthand for the inner cycle , thus anticipating Erich Clar 's notation. It is argued that he also anticipated the nature of wave mechanics , since he recognized that his affinities had direction, not merely being point particles, and collectively having a distribution that could be altered by introducing substituents onto the benzene ring ( much as the distribution of the electric charge in a body is altered by bringing it near to another body ).
The quantum mechanical origins of this stability, or aromaticity, were first modelled by Hückel in 1931. He was the first to separate the bonding electrons into sigma and pi electrons.
An aromatic (or aryl ) compound contains a set of covalently bound atoms with specific characteristics:
Whereas benzene is aromatic (6 electrons, from 3 double bonds), cyclobutadiene is not, since the number of delocalized π electrons is 4, of the form 4n rather than 4n + 2. The cyclobutadienide (2−) ion, however, is aromatic (6 electrons). An atom in an aromatic system can have other electrons that are not part of the system, and these are ignored for the 4n + 2 rule. In furan , the oxygen atom is sp² hybridized. One lone pair is in the π system and the other in the plane of the ring (analogous to the C–H bond in the other positions). There are 6 π electrons, so furan is aromatic.
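As an illustrative sketch (not part of the cited material), the following Python function applies the 4n + 2 electron-counting rule to the π-electron counts mentioned above; it checks only the count, not planarity or conjugation.

```python
# Minimal sketch of the Hueckel 4n + 2 electron-counting rule for planar,
# fully conjugated monocyclic systems. Only the electron count is checked.

def satisfies_huckel_rule(pi_electrons: int) -> bool:
    """True if the pi-electron count equals 4n + 2 for some integer n >= 0."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

examples = {
    "benzene": 6,               # aromatic
    "cyclobutadiene": 4,        # 4n electrons -> not aromatic
    "cyclobutadienide(2-)": 6,  # aromatic
    "furan": 6,                 # one oxygen lone pair counted in the pi system
}
for name, count in examples.items():
    verdict = "satisfies" if satisfies_huckel_rule(count) else "fails"
    print(f"{name}: {count} pi electrons -> {verdict} 4n + 2")
```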
Aromatic molecules typically display enhanced chemical stability, compared to similar non-aromatic molecules. A molecule that can be aromatic will tend to alter its electronic or conformational structure to be in this situation. This extra stability changes the chemistry of the molecule. Aromatic compounds undergo electrophilic aromatic substitution and nucleophilic aromatic substitution reactions, but not electrophilic addition reactions as happens with carbon-carbon double bonds.
Many of the earliest-known examples of aromatic compounds, such as benzene and toluene, have distinctive pleasant smells. This property led to the term "aromatic" for this class of compounds, and hence the term "aromaticity" for the eventually discovered electronic property.
The circulating π electrons in an aromatic molecule produce ring currents that oppose the applied magnetic field in NMR . [ 9 ] The NMR signals of protons in the plane of an aromatic ring are shifted substantially further down-field than those on non-aromatic sp² carbons. This is an important way of detecting aromaticity. By the same mechanism, the signals of protons located near the ring axis are shifted up-field.
Aromatic molecules are able to interact with each other in so-called π-π stacking : the π systems of two parallel rings overlap in a "face-to-face" orientation. Aromatic molecules are also able to interact with each other in an "edge-to-face" orientation: the slight positive charge of the substituents on the ring atoms of one molecule is attracted to the slight negative charge of the aromatic system of another molecule.
Planar monocyclic molecules containing 4n π electrons are called antiaromatic and are, in general, destabilized. Molecules that could be antiaromatic will tend to alter their electronic or conformational structure to avoid this situation, thereby becoming non-aromatic. For example, cyclooctatetraene (COT) distorts itself out of planarity, breaking π overlap between adjacent double bonds. Relatively recently, cyclobutadiene was discovered to adopt an asymmetric, rectangular configuration in which single and double bonds indeed alternate; there is no resonance and the single bonds are markedly longer than the double bonds, reducing unfavorable p-orbital overlap. This reduction of symmetry lifts the degeneracy of the two formerly non-bonding molecular orbitals, which by Hund's rule forces the two unpaired electrons into a new, weakly bonding orbital (and also creates a weakly antibonding orbital). Hence, cyclobutadiene is non-aromatic; the strain of the asymmetric configuration outweighs the anti-aromatic destabilization that would afflict the symmetric, square configuration.
Aromatic compounds play key roles in the biochemistry of all living things. The four aromatic amino acids histidine , phenylalanine , tryptophan , and tyrosine each serve as one of the 20 basic building-blocks of proteins. Further, all 5 nucleobases ( adenine , thymine , cytosine , guanine , and uracil ) that make up the sequence of the genetic code in DNA and RNA are aromatic purines or pyrimidines . The molecule heme contains an aromatic system with 22 π electrons. Chlorophyll also has a similar aromatic system.
Aromatic compounds are important in industry. Key aromatic hydrocarbons of commercial interest are benzene , toluene , ortho -xylene and para -xylene . About 35 million tonnes are produced worldwide every year. They are extracted from complex mixtures obtained by the refining of oil or by distillation of coal tar, and are used to produce a range of important chemicals and polymers, including styrene , phenol , aniline , polyester and nylon .
The overwhelming majority of aromatic compounds are compounds of carbon, but they need not be hydrocarbons.
Benzene , as well as most other annulenes ( cyclodecapentaene excepted) with the formula C n H n where n ≥ 4 and is an even number, such as cyclotetradecaheptaene .
In heterocyclic aromatics ( heteroaromats ), one or more of the atoms in the aromatic ring is of an element other than carbon. This can lessen the ring's aromaticity, and thus (as in the case of furan ) increase its reactivity. Other examples include pyridine , pyrazine , imidazole , pyrazole , oxazole , thiazole , thiophene , and their benzannulated analogs ( benzimidazole , for example).
Polycyclic aromatic hydrocarbons are molecules containing two or more simple aromatic rings fused together by sharing two neighboring carbon atoms (see also simple aromatic rings ). Examples are naphthalene , anthracene , and phenanthrene .
Many chemical compounds are aromatic rings with other functional groups attached. Examples include trinitrotoluene (TNT), acetylsalicylic acid (aspirin), paracetamol , and the nucleotides of DNA .
Aromaticity is found in ions as well: the cyclopropenyl cation (2e system), the cyclopentadienyl anion (6e system), the tropylium ion (6e), and the cyclooctatetraene dianion (10e). Aromatic properties have been attributed to non-benzenoid compounds such as tropone . Aromatic properties are tested to the limit in a class of compounds called cyclophanes .
A special case of aromaticity is found in homoaromaticity where conjugation is interrupted by a single sp ³ hybridized carbon atom.
When carbon in benzene is replaced by other elements in borabenzene , silabenzene , germanabenzene , stannabenzene , phosphorine or pyrylium salts, the aromaticity is still retained. Aromaticity also occurs in compounds that are not carbon-based at all. Inorganic 6-membered-ring compounds analogous to benzene have been synthesized. Hexasilabenzene (Si 6 H 6 ) and borazine (B 3 N 3 H 6 ) are structurally analogous to benzene, with the carbon atoms replaced by another element or elements. In borazine, the boron and nitrogen atoms alternate around the ring. Quite recently, the aromaticity of planar Si₅⁶⁻ rings occurring in the Zintl phase Li 12 Si 7 was experimentally evidenced by Li solid-state NMR. [ 10 ]
Metal aromaticity is believed to exist in certain metal clusters of aluminium. [ citation needed ]
Möbius aromaticity occurs when a cyclic system of molecular orbitals, formed from p π atomic orbitals and populated in a closed shell by 4n (n is an integer) electrons, is given a single half-twist to correspond to a Möbius strip . A π system with 4n electrons in a flat (non-twisted) ring would be anti-aromatic, and therefore highly unstable, due to the symmetry of the combinations of p atomic orbitals. By twisting the ring, the symmetry of the system changes and becomes allowed (see also Möbius–Hückel concept for details). Because the twist can be left-handed or right-handed , the resulting Möbius aromatics are dissymmetric or chiral .
As of 2012, there is no proof that a Möbius aromatic molecule has been synthesized. [ 11 ] [ 12 ] Aromatics with two half-twists corresponding to the paradromic topologies were first suggested by Johann Listing . [ 13 ] In carbo-benzene the ring bonds are extended with alkyne and allene groups.
Y-aromaticity is a concept which was developed to explain the extraordinary stability and high basicity of the guanidinium cation. Guanidinium does not have a ring structure but has six π-electrons which are delocalized over the molecule. However, this concept is controversial and some authors have stressed different effects. [ 14 ] [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Superaromaticity |
In chemistry , a superatom is any cluster of atoms that seem to exhibit some of the properties of elemental atoms. [ 1 ] One example of a superatom is the cluster Al 13 − . [ 2 ]
Sodium atoms, when cooled from vapor , naturally condense into clusters, preferentially containing a magic number of atoms (2, 8, 20, 40, 58, etc.), with the outermost electron of each atom entering an orbital encompassing all the atoms in the cluster. Superatoms tend to behave chemically in a way that will allow them to have a closed shell of electrons, in this new counting scheme. [ citation needed ]
Certain aluminium clusters have superatom properties. These aluminium clusters are generated as anions ( Al − n with n = 1, 2, 3, … ) in helium gas and reacted with a gas containing iodine. When analyzed by mass spectrometry one main reaction product turns out to be Al 13 I − . [ 3 ] These clusters of 13 aluminium atoms with an extra electron added do not appear to react with oxygen when it is introduced in the same gas stream, indicating a halide-like character and a magic number of 40 free electrons. Such a cluster is known as a superhalogen . [ 4 ] [ 5 ] [ 6 ] [ 7 ] The cluster component in Al 13 I − ion is similar to an iodide ion or better still a bromide ion. The related Al 13 I − 2 cluster is expected to behave chemically like the triiodide ion. [ 3 ]
Similarly it has been noted that Al 14 clusters with 42 electrons (2 more than the magic numbers) appear to exhibit the properties of an alkaline earth metal which typically adopt +2 valence states. This is only known to occur when there are at least 3 iodine atoms attached to an Al − 14 cluster, Al 14 I − 3 . The anionic cluster has a total of 43 itinerant electrons, but the three iodine atoms each remove one of the itinerant electrons to leave 40 electrons in the jellium shell. [ 8 ] [ 9 ]
It is particularly easy and reliable to study atomic clusters of inert gas atoms by computer simulation because interaction between two atoms can be approximated very well by the Lennard-Jones potential . Other methods are readily available and it has been established that the magic numbers are 13, 19, 23, 26, 29, 32, 34, 43, 46, 49, 55, etc. [ 10 ]
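As a minimal illustration of the kind of calculation mentioned above, the sketch below sums pairwise Lennard-Jones energies for a small cluster in reduced units; the four-atom geometry is an arbitrary assumed configuration, and locating magic-number clusters would additionally require a global energy minimization over cluster geometries.

```python
# Minimal sketch: total pairwise Lennard-Jones energy of an atomic cluster,
# in reduced units (epsilon = sigma = 1). Magic-number studies would minimize
# this energy over the atomic coordinates for each cluster size.
from itertools import combinations
import math

def lj_pair(r: float) -> float:
    """Lennard-Jones pair energy at separation r (reduced units)."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

def cluster_energy(coords) -> float:
    """Sum of pair energies over all distinct atom pairs."""
    return sum(lj_pair(math.dist(a, b)) for a, b in combinations(coords, 2))

# Arbitrary assumed configuration: a regular tetrahedron with edge 2**(1/6),
# which places every pair at the minimum of the pair potential.
edge = 2.0 ** (1.0 / 6.0)
tetrahedron = [
    (0.0, 0.0, 0.0),
    (edge, 0.0, 0.0),
    (edge / 2, edge * math.sqrt(3) / 2, 0.0),
    (edge / 2, edge * math.sqrt(3) / 6, edge * math.sqrt(2.0 / 3.0)),
]
print(f"Total energy: {cluster_energy(tetrahedron):.3f} epsilon")  # -6.000 (6 pairs)
```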
Superatom complexes are a special group of superatoms that incorporate a metal core which is stabilized by organic ligands. In thiolate-protected gold cluster complexes, a simple electron counting rule can be used to determine the total number of electrons ( n e ) which corresponds to a magic number :
n e = N v A − M − z {\displaystyle n_{e}=Nv_{A}-M-z}
where N is the number of metal atoms (A) in the core, v A is the atomic valence, M is the number of electron-withdrawing ligands, and z is the overall charge on the complex. [ 19 ] For example, Au 102 (p-MBA) 44 has 58 electrons, corresponding to a closed-shell magic number. [ 20 ] | https://en.wikipedia.org/wiki/Superatom
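As an illustrative check of the electron-counting rule above, the short sketch below evaluates n e = N·v A − M − z for the Au 102 (p-MBA) 44 example using only the values stated in the text.

```python
# Sketch of the superatom electron-counting rule: n_e = N * v_A - M - z.
# The Au102(p-MBA)44 values are taken from the text above.

def superatom_electron_count(n_metal_atoms: int, valence: int,
                             n_withdrawing_ligands: int, charge: int) -> int:
    """Delocalized electron count of a ligand-protected metal cluster."""
    return n_metal_atoms * valence - n_withdrawing_ligands - charge

# 102 Au atoms (valence 1), 44 thiolate ligands, overall neutral complex.
print(superatom_electron_count(102, 1, 44, 0))  # 58, a closed-shell magic number
```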
Supercavitation is the use of a cavitation bubble to reduce skin friction drag on a submerged object and enable high speeds . Applications include torpedoes and propellers , but in theory, the technique could be extended to an entire underwater vessel.
Cavitation is the formation of vapour bubbles in liquid caused by flow around an object. Bubbles form when water accelerates around sharp corners and the pressure drops below the vapour pressure . Pressure increases upon deceleration, and the water generally reabsorbs the vapour; however, vapour bubbles can implode and apply small concentrated impulses that may damage surfaces like ship propellers and pump impellers.
The potential for vapour bubbles to form in a liquid is given by the nondimensional cavitation number . It equals local pressure minus vapour pressure, divided by dynamic pressure . At increasing depths (or pressures in piping), the potential for cavitation is lower because the difference between local pressure and vapour pressure is greater.
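The definition in the paragraph above can be written as σ = (p − p v ) / (½ ρ v²); the short sketch below evaluates it for assumed, illustrative flow conditions (the numerical values are not taken from the cited sources).

```python
# Cavitation number: sigma = (p_local - p_vapour) / (0.5 * rho * v^2).
# All numerical values below are assumed, illustrative conditions.

def cavitation_number(p_local_pa: float, p_vapour_pa: float,
                      density_kg_m3: float, speed_m_s: float) -> float:
    """Nondimensional cavitation number for a submerged flow."""
    return (p_local_pa - p_vapour_pa) / (0.5 * density_kg_m3 * speed_m_s ** 2)

rho = 1000.0                           # water density, kg/m^3
p_v = 2.3e3                            # vapour pressure of water near 20 C, Pa
p_10m = 101.3e3 + rho * 9.81 * 10.0    # absolute pressure at ~10 m depth, Pa

for v in (10.0, 50.0, 100.0):          # assumed object speeds, m/s
    sigma = cavitation_number(p_10m, p_v, rho, v)
    print(f"v = {v:5.1f} m/s -> sigma = {sigma:.3f}")
# Lower sigma at higher speed indicates a greater tendency to cavitate;
# greater depth raises the local pressure and hence raises sigma.
```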
A supercavitating object is a high-speed submerged object that is designed to initiate a cavitation bubble at its nose. The bubble extends (either naturally or augmented with internally generated gas) past the aft end of the object and prevents contact between the sides of the object and the liquid. This separation substantially reduces the skin friction drag on the supercavitating object.
A key feature of the supercavitating object is the nose, which typically has a sharp edge around its perimeter to form the cavitation bubble. [ 1 ] The nose may be articulated and shaped as a flat disk or cone. The shape of the supercavitating object is generally slender so the cavitation bubble encompasses the object. If the bubble is not long enough to encompass the object, especially at slower speeds, the bubble can be enlarged and extended by injecting high-pressure gas near the object's nose. [ 1 ]
The very high speed required for supercavitation can be temporarily reached by underwater-fired projectiles and projectiles entering water. For sustained supercavitation, rocket propulsion is used, and the high-pressure rocket gas can be routed to the nose to enhance the cavitation bubble. In principle, supercavitating objects can be maneuvered using various methods, including the following:
The Russian Navy developed the VA-111 Shkval supercavitation torpedo , [ 3 ] [ 4 ] which uses rocket propulsion and exceeds the speed of conventional torpedoes by at least a factor of five. NII-24 began development in 1960 under the code name "Шквал" (Squall). The VA-111 Shkval has been in service (exclusively in the Russian Navy) since 1977 with mass production starting in 1978. Several models were developed, with the most successful, the M-5, completed by 1972. From 1972 to 1977, over 300 test launches were conducted (95% of them on Issyk Kul lake ). [ citation needed ]
In 2006, German weapons manufacturer Diehl BGT Defence announced their own supercavitating torpedo , the Barracuda, now officially named Superkavitierender Unterwasserlaufkörper (English: supercavitating underwater projectile ). According to Diehl, it reaches speeds greater than 400 kilometres per hour (250 mph). [ 5 ]
In 1994, the United States Navy began development of the Rapid Airborne Mine Clearance System (RAMICS), a sea mine clearance system invented by C Tech Defense Corporation. The system is based on a supercavitating projectile stable in both air and water. RAMICS projectiles have been produced in diameters of 12.7 millimetres (0.50 in), 20 millimetres (0.79 in), and 30 millimetres (1.2 in). [ 6 ] The projectile's terminal ballistic design enables the explosive destruction of sea mines as deep as 45 meters (148 ft) with a single round. [ 7 ] In 2000 at Aberdeen Proving Ground , RAMICS projectiles fired from a hovering Sea Cobra gunship successfully destroyed a range of live underwater mines. As of March 2009, Northrop Grumman completed the initial phase of RAMICS testing for introduction into the fleet. [ 8 ]
Iran claimed to have successfully tested its first supercavitation torpedo, the Hoot (Whale), on 2–3 April 2006. Some sources have speculated it is based on the Russian VA-111 Shkval supercavitation torpedo, which travels at the same speed. [ 9 ] Russian Foreign Minister Sergey Lavrov denied supplying Iran with the technology. [ 10 ]
In 2004, DARPA announced the Underwater Express program, a research and evaluation program to demonstrate the use of supercavitation for a high-speed underwater craft application. The US Navy's ultimate goal is a new class of underwater craft for littoral missions that can transport small groups of navy personnel or specialized military cargo at speeds up to 100 knots. DARPA awarded contracts to Northrop Grumman and General Dynamics Electric Boat in late 2006. [ citation needed ] In 2009, DARPA announced progress on a new class of submarine:
The submarine's designer, Electric Boat, is working on a one-quarter scale model for sea trials off the coast of Rhode Island. If the trials are successful, Electric Boat will begin production on a full-scale 100-foot submarine. Currently, the Navy's fastest submarine can only travel at 25 to 30 knots while submerged. But if everything goes according to plan, the Underwater Express will speed along at 100 knots, allowing the delivery of men and materiel faster than ever. [ 11 ]
A prototype ship named the Ghost uses supercavitation to propel itself atop two struts with sharpened edges. It was designed for stealth operations by Gregory Sancoff of Juliet Marine Systems . The vessel rides smoothly in choppy water and has reached speeds of 29 knots. [ 12 ]
The Chinese Navy [ 13 ] [ 14 ] [ 15 ] and US Navy [ 16 ] are reportedly working on their own supercavitating submarines using technical information obtained on the Russian VA-111 Shkval supercavitation torpedo.
A supercavitating propeller uses supercavitation to reduce water skin friction and increase propeller speed. The design is used in military applications, high-performance racing boats , and model racing boats. It operates fully submerged with wedge-shaped blades to force cavitation on the entire forward face, starting at the leading edge. Since the cavity collapses well behind the blade, the supercavitating propeller avoids spalling damage caused by cavitation, which is a problem with conventional propellers.
Supercavitating ammunition is used with German and Russian [ 17 ] underwater firearms , and other similar weapons. [ 18 ]
The Kursk submarine disaster was initially thought to have been caused by a faulty Shkval supercavitating torpedo, [ 19 ] though later evidence points to a faulty 65-76 torpedo . | https://en.wikipedia.org/wiki/Supercavitation |
In solid-state physics and crystallography , a crystal structure is described by a unit cell repeating periodically over space. There are an infinite number of choices for unit cells, with different shapes and sizes, which can describe the same crystal, and different choices can be useful for different purposes.
Say that a crystal structure is described by a unit cell U . Another unit cell S is a supercell of unit cell U if S is a cell which describes the same crystal, but has a larger volume than cell U . Many methods which use a supercell perturb it in some way to determine properties which cannot be determined by the initial cell. For example, during phonon calculations by the small displacement method, phonon frequencies in crystals are calculated using force values on slightly displaced atoms in the supercell. Another very important example of a supercell is the conventional cell of body-centered (bcc) or face-centered (fcc) cubic crystals .
The basis vectors of unit cell U ( a → , b → , c → ) {\textstyle ({\vec {a}},{\vec {b}},{\vec {c}})} can be transformed to basis vectors of supercell S ( a → ′ , b → ′ , c → ′ ) {\textstyle ({\vec {a}}',{\vec {b}}',{\vec {c}}')} by linear transformation [ 1 ]
( a → ′ b → ′ c → ′ ) = ( a → b → c → ) P ^ = ( a → b → c → ) ( P 11 P 12 P 13 P 21 P 22 P 23 P 31 P 32 P 33 ) {\displaystyle {\begin{pmatrix}{\vec {a}}'&{\vec {b}}'&{\vec {c}}'\\\end{pmatrix}}={\begin{pmatrix}{\vec {a}}&{\vec {b}}&{\vec {c}}\\\end{pmatrix}}{\hat {P}}={\begin{pmatrix}{\vec {a}}&{\vec {b}}&{\vec {c}}\\\end{pmatrix}}{\begin{pmatrix}P_{11}&P_{12}&P_{13}\\P_{21}&P_{22}&P_{23}\\P_{31}&P_{32}&P_{33}\\\end{pmatrix}}} where P ^ {\textstyle {\hat {P}}} is a transformation matrix . All elements P i j {\textstyle P_{ij}} should be integers with det ( P ^ ) > 1 {\textstyle \det({\hat {P}})>1} (with det ( P ^ ) = 1 {\textstyle \det({\hat {P}})=1} the transformation preserves volume). [ 2 ] For example, the matrix P P → I = ( 0 1 1 1 0 1 1 1 0 ) {\displaystyle P_{P\rightarrow I}={\begin{pmatrix}0&1&1\\1&0&1\\1&1&0\\\end{pmatrix}}} transforms a primitive cell to a body-centered one. Another particular case of the transformation is a diagonal matrix (i.e., P i ≠ j = 0 {\textstyle P_{i\neq j}=0} ). This is called diagonal supercell expansion and can be represented as a repetition of the initial cell along its crystallographic axes.
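The transformation can be applied numerically in a few lines. The sketch below (Python with NumPy) applies the P P → I matrix quoted above to an assumed simple cubic primitive cell with unit lattice constant and reads off the volume multiple from the determinant.

```python
import numpy as np

# Columns are the primitive basis vectors a, b, c (assumed simple cubic, a = 1.0).
primitive = np.eye(3)

# Transformation matrix quoted in the text (primitive -> body-centered cell).
P = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

supercell = primitive @ P        # columns are the new basis vectors a', b', c'
volume_ratio = np.linalg.det(P)  # number of primitive cells contained in the supercell

print(supercell)
print(f"det(P) = {volume_ratio:.0f}")  # -> 2
```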
Supercells are also commonly used in computational models of crystal defects to allow the use of periodic boundary conditions . [ 3 ]
| https://en.wikipedia.org/wiki/Supercell_(crystal) |
A supercentenarian , sometimes hyphenated as super-centenarian , is a person who is 110 or older. This age is achieved by about one in 1,000 centenarians . [ 1 ] [ obsolete source ] Supercentenarians typically live a life free of significant age-related diseases until shortly before the maximum human lifespan is reached. [ 2 ] [ 3 ]
The term "supercentenarian" has been used since 1832 or earlier. [ 4 ] Norris McWhirter , editor of The Guinness Book Of Records , used the term in association with age claims researcher A. Ross Eckler Jr. in 1976, and the term was further popularised in 1991 by William Strauss and Neil Howe in their book Generations .
The term "semisupercentenarian", has been used to describe someone aged 105–109. Originally the term "supercentenarian" was used to mean someone well over the age of 100, but 110 years and over became the cutoff point of accepted criteria for demographers. [ 2 ] [ 5 ]
The Gerontology Research Group maintains a top 30–40 list of oldest verified living people. The researchers estimate, based on a 0.15% to 0.25% survival rate of centenarians until the age of 110, that there should be between 300 and 450 living supercentenarians in the world. [ 6 ] [ 7 ] A study conducted in 2010 by the Max Planck Institute for Demographic Research found 663 validated supercentenarians, living and dead, and showed that the countries with the highest total number (not frequency) of supercentenarians (in decreasing order) were the United States, [ 8 ] Japan, England plus Wales, France, and Italy. [ 1 ] [ 9 ] The first verified supercentenarian in human history was Dutchman Geert Adriaans Boomgaard (1788–1899), [ 10 ] and it was not until the 1980s that the oldest verified age surpassed 115.
While claims of extreme age have persisted from the earliest times in history, the earliest supercentenarian accepted by Guinness World Records is Dutchman Thomas Peters (reportedly c. 1745–1857). [ citation needed ] However, Peters's age cannot be reliably verified due to an absence of any documents recording his early life. [ 11 ] Other scholars, such as French demographer Jean-Marie Robine , consider Geert Adriaans Boomgaard , also of the Netherlands, who turned 110 in 1898, to be the first verifiable case, as the alleged evidence for Peters has apparently been lost. The evidence for the 112 years of Englishman William Hiseland (reportedly 1620–1732) does not meet the standards required by Guinness World Records. [ 12 ]
Church of Norway records, the accuracy of which is subject to dispute, also show what appear to be several supercentenarians who lived in the south-central part of present-day Norway during the 16th and 17th centuries, including Johannes Torpe (1549–1664), [ citation needed ] and Knud Erlandson Etun (1659–1770), [ citation needed ] both residents of Valdres , Oppland . [ citation needed ]
In 1902, Margaret Ann Neve , born in 1792, became the first verified female supercentenarian. [ 13 ]
Jeanne Calment of France, who died in 1997 aged 122 years, 164 days, had the longest human lifespan documented. The oldest man ever verified is Jiroemon Kimura of Japan, who died in 2013 aged 116 years and 54 days. [ 14 ]
Ethel Caterham (born 21 August 1909) of the United Kingdom is the world's oldest living person, aged 115 years, 266 days. João Marinho Neto (born 5 October 1912) of Brazil is the world's oldest living man, aged 112 years, 221 days. [ 15 ] [ 16 ]
Research into centenarians helps scientists understand how an ordinary person might live longer. [ 17 ] [ 18 ] [ 19 ]
Organisations that research centenarians and supercentenarians include the GRG, LongeviQuest , and the Supercentenarian Research Foundation . [ 20 ]
In May 2021, whole genome sequencing analysis of 81 Italian semi-supercentenarians and supercentenarians were published, along with 36 control group people from the same region who were simply of advanced age. [ 21 ]
Research on the morbidity of supercentenarians has found that they remain free of major age-related diseases (e.g., stroke, cardiovascular disease , dementia , cancer , Parkinson's disease and diabetes ) until the very end of life when they die of exhaustion of organ reserve, which is the ability to return organ function to homeostasis . [ 2 ] About 10% of supercentenarians survive until the last three months of life without major age-related diseases, as compared to only 4% of semi-supercentenarians and 3% of centenarians. [ 2 ]
By measuring the biological age of various tissues from supercentenarians, researchers may be able to identify the nature of those that are protected from ageing effects. According to a study of 30 different body parts from a 112-year-old female supercentenarian, along with younger controls, the cerebellum is protected from ageing, according to an epigenetic biomarker of tissue age known as the epigenetic clock —the reading is about 15 years younger than expected in a centenarian. [ 22 ] These findings could explain why the cerebellum exhibits fewer neuropathological hallmarks of age-related dementia as compared to other brain regions.
A 2021 genomic study identified genetic characteristics that protect against age-related diseases, particularly variants that improve DNA repair . Five variants were found to be significant, affecting STK17A (increased expression) and COA1 (reduced expression) genes. Supercentenarians also had an unexpectedly low level of somatic mutations . [ 23 ] | https://en.wikipedia.org/wiki/Supercentenarian |
Modern biological research has revealed strong evidence that the enzymes of the mitochondrial respiratory chain assemble into larger, supramolecular structures called supercomplexes , instead of the traditional fluid model of discrete enzymes dispersed in the inner mitochondrial membrane . These supercomplexes are functionally active and necessary for forming stable respiratory complexes. [ 1 ]
One supercomplex of complex I , III , and IV make up a unit known as a respirasome . Respirasomes have been found in a variety of species and tissues, including rat brain, [ 2 ] liver, [ 2 ] kidney, [ 2 ] skeletal muscle, [ 2 ] [ 3 ] heart, [ 2 ] bovine heart, [ 4 ] human skin fibroblasts , [ 5 ] fungi, [ 6 ] plants, [ 7 ] [ 8 ] and C. elegans . [ 9 ]
In 1955, biologists Britton Chance and G. R. Williams were the first to propose the idea that respiratory enzymes assemble into larger complexes, [ 10 ] although the fluid state model remained the standard. However, as early as 1985, researchers had begun isolating complex III / complex IV supercomplexes from bacteria [ 11 ] [ 12 ] [ 13 ] and yeast . [ 14 ] [ 15 ] Finally, in 2000 Hermann Schägger and Kathy Pfeiffer used Blue Native PAGE to isolate bovine mitochondrial membrane proteins, showing Complex I , III, and IV arranged in supercomplexes. [ 16 ]
The most common supercomplexes observed are Complex I/III, Complex I/III/IV, and Complex III/IV. Most of Complex II is found in a free-floating form in both plant and animal mitochondria. Complex V can be found co-migrating as a dimer with other supercomplexes, but scarcely as part of the supercomplex unit. [ 1 ]
Supercomplex assembly appears to be dynamic and respiratory enzymes are able to alternate between participating in large respirasomes and existing in a free state. It is not known what triggers changes in complex assembly, but research has revealed that the formation of supercomplexes is heavily dependent upon the lipid composition of the mitochondrial membrane, and in particular requires the presence of cardiolipin , a unique mitochondrial lipid. [ 17 ] In yeast mitochondria lacking cardiolipin, the number of enzymes forming respiratory supercomplexes was significantly reduced. [ 17 ] [ 18 ] According to Wenz et al. (2009), cardiolipin stabilizes the supercomplex formation by neutralizing the charges of lysine residues in the interaction domain of Complex III with Complex IV. [ 19 ] In 2012, Bazan et al. were able to reconstitute trimer and tetramer Complex III/IV supercomplexes from purified complexes isolated from Saccharomyces cerevisiae and exogenous cardiolipin liposomes . [ 20 ]
Another hypothesis for respirasome formation is that membrane potential may initiate changes in the electrostatic / hydrophobic interactions mediating the assembly/disassembly of supercomplexes. [ 21 ]
The functional significance of respirasomes is not entirely clear but more recent research is beginning to shed some light on their purpose. It has been hypothesized that the organization of respiratory enzymes into supercomplexes reduces oxidative damage and increases metabolism efficiency. Schäfer et al. (2006) demonstrated that supercomplexes comprising Complex IV had higher activities in Complex I and III, indicating that the presence of Complex IV modifies the conformation of the other complexes to enhance catalytic activity. [ 22 ] Evidence has also been accumulated to show that the presence of respirasomes is necessary for the stability and function of Complex I. [ 21 ] In 2013, Lapuente-Brun et al. demonstrated that supercomplex assembly is "dynamic and organizes electron flux to optimize the use of available substrates." [ 23 ] | https://en.wikipedia.org/wiki/Supercomplex |
The high performance supercomputing program started in the mid-to-late 1980s in Pakistan . [ 1 ] Supercomputing is a recent area of computer science in which Pakistan has made progress, driven in part by the growth of the information technology age in the country. The indigenous supercomputer program began in the 1980s, after the deployment of Cray supercomputers was initially denied. [ 2 ] The fastest supercomputer currently in use in Pakistan is developed and hosted by the Pak-Austria Fachhochschule: Institute of Applied Sciences and Technology (PAF-IAST) in Haripur, which maintains the largest high-performance computing cluster in the country. As of November 2012, there are no supercomputers from Pakistan on the Top500 list. [ 3 ]
But what about supercomputer exports to India or Pakistan? Will they be used to advance the nations' economies or to speed development of nuclear weapons?
The initial interest of Pakistan in the research and development of supercomputing began during the early 1980s, at several high-powered institutions of the country. During this time, senior scientists at the Pakistan Atomic Energy Commission (PAEC) were the first to engage in research on high performance computing , while calculating and determining exact values involving fast-neutron calculations . [ 5 ] According to one scientist involved in the development of the supercomputer, a team of the leading scientists at PAEC developed powerful computerized electronic codes, acquired powerful high performance computers to design this system and came up with the first design that was to be manufactured, as part of the atomic bomb project . [ 5 ] However, the most productive and pioneering research was carried out by physicist M.S. Zubairy at the Institute of Physics of Quaid-e-Azam University . [ 6 ] Zubairy published two important books on quantum computers and high-performance computing during his career that are presently taught worldwide. [ 7 ] In the 1980s and 1990s, scientific research and mathematical work on supercomputers was also carried out by mathematician Dr. Tasneem Shah at the Kahuta Research Laboratories while trying to solve additive problems in Computational mathematics and Statistical physics using the Monte Carlo method . [ citation needed ] In the 1990s, the Khan Research Laboratories deployed a series of supercomputer systems at its site, which were among the nation's fastest computers at that time. [ 8 ] In the 1990s, technological imports of supercomputers were denied to Pakistan, as well as India, due to an arms embargo, as foreign powers feared that such imports and enhancements to supercomputing development were a dual use of technology that could be used for developing nuclear weapons.
During the Bush administration, in an effort to help US-based companies gain competitive ground in developing information technology-based markets, the U.S. government eased regulations that applied to exporting high-performance computers to Pakistan and four other technologically developing countries. The new regulations allowed these countries to import supercomputer systems that were capable of processing information at a speed of 190,000 million theoretical operations per second (MTOPS); the previous limit had been 85,000 MTOPS. [ 4 ]
A supercomputer was developed in the CCMS, Department of Physics, University of Malakand. It is heavily used by graduate students, PhD scholars and faculty members of UOM, as well as researchers from other organizations. It has been operational since 2016.
It has two servers used as head nodes and 24 machines used as compute nodes.
It has mostly been used for simulation and modelling by researchers of the materials science and chemistry departments.
The Ghulam Ishaq Khan Institute of Engineering Sciences and Technology (GIKI) runs one of the nation's notable supercomputer programmes.
This facility was funded by the Directorate of Science and Technology (DoST), Government of Khyber Pakhtunkhwa, Pakistan, in 2012 under the supervision of Dr. Masroor Hussain. [ 9 ] The system provides a test bed for shared memory systems, distributed memory systems and array processing using the OpenMP, MPI-2 and CUDA specifications, respectively. It is a compute-intensive platform and consists of the following hardware components: [ 10 ]
The COMSATS Institute of Information Technology (CIIT) has been actively involved in research in the areas of parallel computing and computer cluster systems. [ 11 ] In 2004, CIIT built a cluster-based supercomputer for research purposes. The project was funded by the Higher Education Commission of Pakistan . [ 11 ] The Linux -based computing cluster, which was tested and configured for optimization, achieved a performance of 158 GFLOPS . The packaging of the cluster was locally designed. [ 11 ]
The National University of Sciences and Technology (NUST) in Islamabad has developed the fastest supercomputing facility in Pakistan to date. The supercomputer, which operates at the university's Research Centre for Modeling and Simulation (RCMS), was inaugurated in September 2012. [ 12 ] The supercomputer has parallel computation capabilities and a performance of 132 teraflops (i.e. 132 trillion floating-point operations per second), making it the fastest graphics processing unit (GPU) parallel computing system currently in operation in Pakistan. [ 12 ]
It has multi-core processors and graphics co-processors, with an inter-process communication speed of 40 gigabits per second . According to the available specifications of the system, the cluster consists of a "66 NODE supercomputer with 30,992 processor cores , 2 head nodes (16 processor cores), 32 dual quad core computer nodes (256 processor cores) and 32 Nvidia computing processors. Each processor has 960 processor cores (30,720 processor cores), QDR InfiniBand interconnection and 21.6 TB SAN storage." [ 12 ]
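The quoted core counts are mutually consistent, as the short check below illustrates; the per-node breakdown of 8 cores per head node and per compute node is inferred from the quoted totals rather than stated explicitly.

```python
# Core counts for the NUST/RCMS cluster, taken from the quoted specification.
head_node_cores    = 2 * 8        # 2 head nodes, 16 cores in total
compute_node_cores = 32 * 8       # 32 dual quad-core nodes, 256 cores in total
gpu_cores          = 32 * 960     # 32 Nvidia processors, 960 cores each

print(head_node_cores + compute_node_cores + gpu_cores)  # -> 30992
```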
In the 1990s, the Kahuta Research Laboratories (KRL) became the nation's first site and home to a number of high-performance supercomputing and parallel computing systems, installed at the facility by a team of mathematicians. [ 2 ] A parallel Computational Fluid Dynamics (CFD) division was established, which specialized in conducting high performance computations on shock waves and blast effects from the outer surface to the inner core, using the difficult differential equations of state of the materials under high pressure. [ 2 ]
The Kohat University of Science and Technology installed a supercomputing facility with cluster specifications. [ 13 ]
On 22 January 2016, Riphah International University, based in Islamabad, announced that its team of engineers had developed a supercomputer architecture. The system supports the CUDA, MPI/LAM, OpenMP, OpenCL and OpenACC programming models. It can also be applied to large-scale algorithms, numerical techniques, big data, data mining, bioinformatics and genomics, business intelligence and analytics, and climate, weather and ocean related problems. [ 14 ]
UCERD Private Limited proposed and developed Pakistan's first FPGA-powered supercomputer. [ 15 ] [ 16 ]
In 2019, the UCERD team won an HEC Technology Development Fund grant of Rs. 16 million [ 17 ] for the project "Development of Scalable Heterogeneous Supercomputing System". [ 18 ] | https://en.wikipedia.org/wiki/Supercomputing_in_Pakistan |
The superconducting camera , SCAM, is an ultra-fast photon-counting camera developed by the European Space Agency . It is cooled to just 0.3 K (three-tenths of a degree Celsius above absolute zero ). This enables its sensitive electronic detectors, known as superconducting tunnel junction detectors, to register almost every photon of light that falls onto it.
Its advantage over a charge-coupled device (CCD) is that it can measure both the brightness (rate) of the incoming photon stream and the color ( wavelength or energy ) of each individual photon .
The number of free primary electrons generated per photon event is proportional to the photon energy and amounts to ~18,000 per electronvolt . As a result, if the device is operated in single-photon counting mode, the energy of each captured photon can be calculated even in the visible-light range, where photons have energies of only a few electronvolts, each generating >20,000 electrons. In a normal CCD, only one primary electron is generated per photon, except for very energetic photons, like X-rays , where a normal CCD can operate in a similar way to a SCAM.
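Given the quoted yield of roughly 18,000 primary electrons per electronvolt, the photon energy can be estimated directly from the measured charge. The sketch below is a rough illustration; the example electron count is an assumed value.

```python
ELECTRONS_PER_EV = 18_000   # approximate yield quoted above

def photon_energy_ev(primary_electrons):
    """Estimate the photon energy (in eV) from the number of primary electrons."""
    return primary_electrons / ELECTRONS_PER_EV

# A visible-light photon producing ~45,000 primary electrons corresponds to ~2.5 eV.
print(f"{photon_energy_ev(45_000):.2f} eV")
```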
In 2006 the SCAM instrument was mounted on the ESA's Optical Ground Station telescope in order to observe the disintegration of Comet 73P/Schwassmann-Wachmann 3 . [ 1 ]
| https://en.wikipedia.org/wiki/Superconducting_camera |
In superconductivity , the superconducting coherence length , usually denoted as ξ {\displaystyle \xi } (Greek lowercase xi ), is the characteristic length scale of the exponential variations of the density of the superconducting component.
The superconducting coherence length is one of two parameters in the Ginzburg–Landau theory of superconductivity. It is given by: [ 1 ] ξ = ℏ 2 2 m | α | {\displaystyle \xi ={\sqrt {\frac {\hbar ^{2}}{2m|\alpha |}}}}
where α ( T ) {\displaystyle \alpha (T)} is a parameter in the Ginzburg–Landau equation for ψ {\displaystyle \psi } with the form α 0 ( T − T c ) {\displaystyle \alpha _{0}(T-T_{c})} , where α 0 {\displaystyle \alpha _{0}} is a constant.
In Landau mean-field theory, at temperatures T {\displaystyle T} near the superconducting critical temperature T c {\displaystyle T_{c}} , ξ ( T ) ∝ ( 1 − T / T c ) − 1 2 {\displaystyle \xi (T)\propto (1-T/T_{c})^{-{\frac {1}{2}}}} . Up to a factor of 2 {\displaystyle {\sqrt {2}}} , it is equivalent to the characteristic exponent describing a recovery of the order parameter away from a perturbation in the theory of the second order phase transitions.
In some special limiting cases , for example in the weak-coupling BCS theory of isotropic s-wave superconductor it is related to characteristic Cooper pair size: [ 2 ] ξ B C S = ℏ v f π Δ {\displaystyle \xi _{BCS}={\frac {\hbar v_{f}}{\pi \Delta }}}
where ℏ {\displaystyle \hbar } is the reduced Planck constant , m {\displaystyle m} is the mass of a Cooper pair (twice the electron mass ), v f {\displaystyle v_{f}} is the Fermi velocity, and Δ {\displaystyle \Delta } is the superconducting energy gap . The superconducting coherence length is a measure of the size of a Cooper pair (distance between the two electrons) and is of the order of 10 − 4 {\displaystyle 10^{-4}} cm. The electron near or at the Fermi surface moving through the lattice of a metal produces behind itself an attractive potential of range of the order of 3 × 10 − 6 {\displaystyle 3\times 10^{-6}} cm, the lattice distance being of order 10 − 8 {\displaystyle 10^{-8}} cm. For a very authoritative explanation based on physical intuition see the CERN article by V.F. Weisskopf. [ 3 ]
The ratio κ = λ / ξ {\displaystyle \kappa =\lambda /\xi } , where λ {\displaystyle \lambda } is the London penetration depth , is known as the Ginzburg–Landau parameter. Type-I superconductors are those with 0 < κ < 1 / 2 {\displaystyle 0<\kappa <1/{\sqrt {2}}} , and type-II superconductors are those with κ > 1 / 2 {\displaystyle \kappa >1/{\sqrt {2}}} .
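The type-I/type-II classification is easy to illustrate numerically. In the Python sketch below the penetration depth and coherence length values are rough, order-of-magnitude assumptions for an aluminium-like and a niobium–tin-like material, not authoritative material data.

```python
import math

def gl_parameter(penetration_depth_nm, coherence_length_nm):
    """Ginzburg-Landau parameter kappa = lambda / xi."""
    return penetration_depth_nm / coherence_length_nm

def classify(kappa):
    return "type-II" if kappa > 1 / math.sqrt(2) else "type-I"

# Rough illustrative values in nanometres (order of magnitude only).
for name, lam, xi in [("aluminium-like", 16, 1600), ("niobium-tin-like", 200, 4)]:
    kappa = gl_parameter(lam, xi)
    print(f"{name}: kappa = {kappa:.3f} -> {classify(kappa)}")
```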
In strong-coupling, anisotropic and multi-component theories these expressions are modified. [ 4 ] | https://en.wikipedia.org/wiki/Superconducting_coherence_length |
A superconducting magnet is an electromagnet made from coils of superconducting wire . They must be cooled to cryogenic temperatures during operation. In its superconducting state the wire has no electrical resistance and therefore can conduct much larger electric currents than ordinary wire, creating intense magnetic fields. Superconducting magnets can produce stronger magnetic fields than all but the strongest non-superconducting electromagnets , and large superconducting magnets can be cheaper to operate because no energy is dissipated as heat in the windings. They are used in MRI instruments in hospitals, and in scientific equipment such as NMR spectrometers, mass spectrometers , fusion reactors and particle accelerators . They are also used for levitation, guidance and propulsion in a magnetic levitation (maglev) railway system being constructed in Japan .
During operation, the magnet windings must be cooled below their critical temperature , the temperature at which the winding material changes from the normal resistive state and becomes a superconductor , which is in the cryogenic range far below room temperature. The windings are typically cooled to temperatures significantly below their critical temperature, because the lower the temperature, the better superconductive windings work—the higher the currents and magnetic fields they can stand without returning to their non-superconductive state. Two types of cooling systems are commonly used to maintain magnet windings at temperatures sufficient to maintain superconductivity:
Liquid helium is used as a coolant for many superconductive windings. It has a boiling point of 4.2 K, far below the critical temperature of most winding materials. The magnet and coolant are contained in a thermally insulated container ( dewar ) called a cryostat . To keep the helium from boiling away, the cryostat is usually constructed with an outer jacket containing (significantly cheaper) liquid nitrogen at 77 K. Alternatively, a thermal shield made of conductive material and maintained in 40 K – 60 K temperature range, cooled by conductive connections to the cryocooler cold head, is placed around the helium-filled vessel to keep the heat input to the latter at acceptable level. One of the goals of the search for high temperature superconductors is to build magnets that can be cooled by liquid nitrogen alone. At temperatures above about 20 K cooling can be achieved without boiling off cryogenic liquids. [ citation needed ]
Because of increasing cost and the dwindling availability of liquid helium, many superconducting systems are cooled using two stage mechanical refrigeration. In general two types of mechanical cryocoolers are employed which have sufficient cooling power to maintain magnets below their critical temperature. The Gifford–McMahon cryocooler has been commercially available since the 1960s and has found widespread application. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The G-M regenerator cycle in a cryocooler operates using a piston type displacer and heat exchanger. Alternatively, 1999 marked the first commercial application using a pulse tube cryocooler . This design of cryocooler has become increasingly common due to low vibration and long service interval as pulse tube designs use an acoustic process in lieu of mechanical displacement. In a typical two-stage refrigerator, the first stage will offer higher cooling capacity but at higher temperature (≈ 77 K) with the second stage reaching ≈ 4.2 K and < 2.0 W of cooling power. In use, the first stage is used primarily for ancillary cooling of the cryostat with the second stage used primarily for cooling the magnet.
The maximal magnetic field achievable in a superconducting magnet is limited by the field at which the winding material ceases to be superconducting, its "critical field", H c , which for type-II superconductors is its upper critical field . Another limiting factor is the "critical current", I c , at which the winding material also ceases to be superconducting. Advances in magnets have focused on creating better winding materials.
The superconducting portions of most current magnets are composed of niobium–titanium . This material has a critical temperature of 10 K and can superconduct at up to about 15 T . More expensive magnets can be made of niobium–tin (Nb 3 Sn). These have a T c of 18 K. When operating at 4.2 K they are able to withstand a much higher magnetic field intensity , up to 25 to 30 T. Unfortunately, it is far more difficult to make the required filaments from this material. This is why sometimes a combination of Nb 3 Sn for the high-field sections and NbTi for the lower-field sections is used. Vanadium–gallium is another material used for the high-field inserts.
High-temperature superconductors (e.g. BSCCO or YBCO ) may be used for high-field inserts when required magnetic fields are higher than Nb 3 Sn can manage. [ citation needed ] BSCCO, YBCO or magnesium diboride may also be used for current leads, conducting high currents from room temperature into the cold magnet without an accompanying large heat leak from resistive leads. [ citation needed ]
The coil windings of a superconducting magnet are made of wires or tapes of Type II superconductors (e.g. niobium–titanium or niobium–tin ). The wire or tape itself may be made of tiny filaments (about 20 micrometres thick) of superconductor in a copper matrix. The copper is needed to add mechanical stability, and to provide a low resistance path for the large currents in case the temperature rises above T c or the current rises above I c and superconductivity is lost. These filaments need to be this small because in this type of superconductor the current only flows in a surface layer whose thickness is limited to the London penetration depth (see Skin effect ). The coil must be carefully designed to withstand (or counteract) magnetic pressure and Lorentz forces that could otherwise cause wire fracture or crushing of insulation between adjacent turns.
The current to the coil windings is provided by a high current, very low voltage DC power supply , since in steady state the only voltage across the magnet is due to the resistance of the feeder wires. Any change to the current through the magnet must be done very slowly, first because electrically the magnet is a large inductor and an abrupt current change will result in a large voltage spike across the windings, and more importantly because fast changes in current can cause eddy currents and mechanical stresses in the windings that can precipitate a quench (see below). So the power supply is usually microprocessor-controlled, programmed to accomplish current changes gradually, in gentle ramps. It usually takes several minutes to energize or de-energize a laboratory-sized magnet.
An alternate operating mode used by most superconducting magnets is to short-circuit the windings with a piece of superconductor once the magnet has been energized. The windings become a closed superconducting loop, the power supply can be turned off, and persistent currents will flow for months, preserving the magnetic field. The advantage of this persistent mode is that stability of the magnetic field is better than is achievable with the best power supplies, and no energy is needed to power the windings. The short circuit is made by a 'persistent switch', a piece of superconductor inside the magnet connected across the winding ends, attached to a small heater. [ 5 ] When the magnet is first turned on, the switch wire is heated above its transition temperature, so it is resistive. Since the winding itself has no resistance, no current flows through the switch wire. To go to persistent mode, the supply current is adjusted until the desired magnetic field is obtained, then the heater is turned off. The persistent switch cools to its superconducting temperature, short-circuiting the windings. Then the power supply can be turned off. The winding current, and the magnetic field, will not actually persist forever, but will decay slowly according to a normal inductive time constant ( L / R ):
I ( t ) = I 0 e − ( R / L ) t {\displaystyle I(t)=I_{0}\,e^{-(R/L)t}} where R {\displaystyle R} is a small residual resistance in the superconducting windings due to joints or a phenomenon called flux motion resistance. Nearly all commercial superconducting magnets are equipped with persistent switches.
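Because the persistent-mode decay follows an ordinary L / R law, the time constant can be estimated directly. The values in the sketch below (a 100 H winding with a nanoohm of total joint resistance) are assumptions chosen only to show the scale involved, not specifications of any real magnet.

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def field_retention(inductance_h, resistance_ohm, elapsed_s):
    """Fraction of the persistent current (and hence field) remaining after elapsed_s."""
    return math.exp(-(resistance_ohm / inductance_h) * elapsed_s)

# Assumed values: a 100 H winding with 1 nanoohm of total residual (joint) resistance.
L, R = 100.0, 1e-9
tau_years = (L / R) / SECONDS_PER_YEAR
print(f"time constant ~ {tau_years:.0f} years")
print(f"field remaining after one year: {field_retention(L, R, SECONDS_PER_YEAR):.6f}")
```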
A quench is an abnormal termination of magnet operation that occurs when part of the superconducting coil enters the normal ( resistive ) state. This can occur because the field inside the magnet is too large, the rate of change of field is too large (causing eddy currents and resultant heating in the copper support matrix), or a combination of the two. More rarely a defect in the magnet can cause a quench. When this happens, that particular spot is subject to rapid Joule heating from the enormous current, which raises the temperature of the surrounding regions. This pushes those regions into the normal state as well, which leads to more heating in a chain reaction. The entire magnet rapidly becomes normal (this can take several seconds, depending on the size of the superconducting coil). This is accompanied by a loud bang as the energy in the magnetic field is converted to heat, and rapid boil-off of the cryogenic fluid. The abrupt decrease of current can result in kilovolt inductive voltage spikes and arcing. Permanent damage to the magnet is rare, but components can be damaged by localized heating, high voltages, or large mechanical forces. In practice, magnets usually have safety devices to stop or limit the current when the beginning of a quench is detected. If a large magnet undergoes a quench, the inert vapor formed by the evaporating cryogenic fluid can present a significant asphyxiation hazard to operators by displacing breathable air.
A large section of the superconducting magnets in CERN 's Large Hadron Collider unexpectedly quenched during start-up operations in 2008, necessitating the replacement of a number of magnets. [ 6 ] In order to mitigate against potentially destructive quenches, the superconducting magnets that form the LHC are equipped with fast-ramping heaters that are activated once a quench event is detected by the complex quench protection system. As the dipole bending magnets are connected in series, each power circuit includes 154 individual magnets, and should a quench event occur, the entire combined stored energy of these magnets must be dumped at once. This energy is transferred into massive blocks of metal which heat up to several hundred degrees Celsius due to the resistive heating, in a matter of seconds. Although undesirable, a magnet quench is a "fairly routine event" during the operation of a particle accelerator. [ 7 ]
In certain cases, superconducting magnets designed for very high currents require extensive bedding in, to enable the magnets to function at their full planned currents and fields. This is known as "training" the magnet, and involves a type of material memory effect. One situation this is required in is the case of particle colliders such as CERN 's Large Hadron Collider . [ 8 ] [ 9 ] The magnets of the LHC were planned to run at 8 TeV (2 × 4 TeV) on its first run and 14 TeV (2 × 7 TeV) on its second run, but were initially operated at a lower energy of 3.5 TeV and 6.5 TeV per beam respectively. Because of initial crystallographic defects in the material, they will initially lose their superconducting ability ("quench") at a lower level than their design current. CERN states that this is due to electromagnetic forces causing tiny movements in the magnets, which in turn cause superconductivity to be lost when operating at the high precision needed for their planned current. [ 9 ] By repeatedly running the magnets at a lower current and then slightly increasing the current until they quench under control, the magnet will gradually both gain the required ability to withstand the higher currents of its design specification without quenches occurring, and have any such issues "shaken" out of them, until they are eventually able to operate reliably at their full planned current without experiencing quenches. [ 9 ]
Although the idea of making electromagnets with superconducting wire was proposed by Heike Kamerlingh Onnes shortly after he discovered superconductivity in 1911, a practical superconducting electromagnet had to await the discovery of superconducting materials that could support large critical supercurrent densities in high magnetic fields. The first successful superconducting magnet was built by G.B. Yntema in 1955 using niobium wire and achieved a field of 0.7 T at 4.2 K. [ 10 ] Then, in 1961, J.E. Kunzler , E. Buehler, F.S.L. Hsu, and J.H. Wernick made the discovery that a compound of niobium and tin could support critical-supercurrent densities greater than 100,000 amperes per square centimetre in magnetic fields of 8.8 teslas. [ 11 ] Despite its brittle nature, niobium–tin has since proved extremely useful in supermagnets generating magnetic fields up to 20 T.
The persistent switch was invented in 1960 by Dwight Adams while a postdoctoral associate at Stanford University. The second persistent switch was constructed at the University of Florida by M.S. student R.D. Lichti in 1963. It has been preserved in a showcase in the UF Physics Building.
In 1962, T.G. Berlincourt and R.R. Hake [ 12 ] discovered the high-critical-magnetic-field, high-critical-supercurrent-density properties of niobium–titanium alloys. Although niobium–titanium alloys possess less spectacular superconducting properties than niobium–tin, they are highly ductile, easily fabricated, and economical. Useful in supermagnets generating magnetic fields up to 10 teslas, niobium–titanium alloys are the most widely used supermagnet materials.
In 1986, the discovery of high temperature superconductors by Georg Bednorz and Karl Müller energized the field, raising the possibility of magnets that could be cooled by liquid nitrogen instead of the more difficult-to-work-with helium.
In 2007, a magnet with windings of YBCO achieved a world record field of 26.8 T . [ 13 ] The US National Research Council had a goal of creating a 30-tesla superconducting magnet.
Globally in 2014, almost six billion US dollars worth of economic activity resulted for which superconductivity was indispensable. MRI systems, most of which employ niobium–titanium, accounted for about 80% of that total. [ 14 ]
In 2016, Yoon et al. reported a 26 T no-insulation superconducting magnet that they built out of GdBa 2 Cu 3 O 7– x , [ 15 ] using a technique which was previously reported in 2013. [ 16 ]
In 2017, a YBCO magnet created by the National High Magnetic Field Laboratory (NHMFL) broke the previous world record with a strength of 32 T. This is an all superconducting user magnet, designed to last for many decades. They held the record as of March 2018.
In 2019, a new world-record of 32.35 T with all-superconducting magnet was achieved by the Institute of Electrical Engineering, Chinese Academy of Sciences (IEE, CAS). [ 17 ] No-insulation technique for the HTS insert magnet is also used.
In 2019, the NHMFL also developed a non-insulated YBCO test coil combined with a resistive magnet and broke the lab's own world record for highest continuous magnetic field for any configuration of magnet at 45.5 T. [ 18 ] [ 19 ]
A 1.2 GHz (28.2 T) NMR magnet [ 20 ] was achieved in 2020 using an HTS magnet. [ 21 ]
In 2022, the Hefei Institutes of Physical Science, Chinese Academy of Sciences (HFIPS, CAS) claimed a new world record for the strongest steady magnetic field, reaching 45.22 T, [ 22 ] [ 23 ] whereas the nominally higher NHMFL figure of 45.5 T from 2019 was reached only momentarily, as the magnet failed immediately in a quench .
Superconducting magnets have a number of advantages over resistive electromagnets. They can generate much stronger magnetic fields than ferromagnetic-core electromagnets , which are limited to fields of around 2 T. The field is generally more stable, resulting in less noisy measurements. They can be smaller, and the area at the center of the magnet where the field is created is empty rather than being occupied by an iron core. Large magnets can consume much less power. In the persistent state (above), the only power the magnet consumes is that needed for refrigeration equipment. Higher fields can be achieved with cooled resistive electromagnets, as superconducting coils enter the non-superconducting state at high fields. Steady fields of over 40 T can be achieved, usually by combining a Bitter electromagnet with a superconducting magnet (often as an insert).
Superconducting magnets are widely used in MRI scanners, NMR equipment, mass spectrometers , magnetic separation processes, and particle accelerators .
In Japan, after decades of research and development into superconducting maglev by Japanese National Railways and later Central Japan Railway Company (JR Central), the Japanese government gave permission to JR Central to build the Chūō Shinkansen , linking Tokyo to Nagoya and later to Osaka. [ citation needed ]
One of the most challenging uses of superconducting magnets is in the LHC particle accelerator. [ 24 ] Its niobium–titanium (Nb–Ti) magnets operate at 1.9 K to allow them to run safely at 8.3 T. Each magnet stores 7 MJ. In total the magnets store 10.4 GJ . Once or twice a day, as protons are accelerated from 450 GeV to 7 TeV, the field of the superconducting bending magnets is increased from 0.54 T to 8.3 T.
The central solenoid and toroidal field superconducting magnets designed for the ITER fusion reactor use niobium–tin (Nb 3 Sn) as a superconductor. The central solenoid coil carries a current of 46 kA and produces a magnetic field of 13.5 T. The 18 toroidal field coils at a maximum field of 11.8 T store an energy of 41 GJ (total?). [ clarification needed ] They have been tested at a record current of 80 kA. Other lower field ITER magnets (PF and CC) [ clarification needed ] use niobium–titanium. Most of the ITER magnets have their field varied many times per hour.
One high-resolution mass spectrometer planned to use a 21-tesla SC magnet. [ 25 ] | https://en.wikipedia.org/wiki/Superconducting_magnet |
Superconducting magnetic energy storage (SMES) systems store energy in the magnetic field created by the flow of direct current in a superconducting coil that has been cryogenically cooled to a temperature below its superconducting critical temperature . This use of superconducting coils to store magnetic energy was invented by M. Ferrier in 1970. [ 2 ]
A typical SMES system includes three parts: superconducting coil , power conditioning system and cryogenically cooled refrigerator. Once the superconducting coil is energized, the current will not decay and the magnetic energy can be stored indefinitely.
The stored energy can be released back to the network by discharging the coil. The power conditioning system uses an inverter / rectifier to transform alternating current (AC) power to direct current or convert DC back to AC power. The inverter/rectifier accounts for about 2–3% energy loss in each direction. SMES loses the least amount of electricity in the energy storage process compared to other methods of storing energy. SMES systems are highly efficient; the round-trip efficiency is greater than 95%. [ 3 ]
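The quoted per-pass converter loss and the round-trip efficiency figure are consistent with each other, as the short check below shows; the 2.5% value is simply the midpoint of the quoted 2–3% range.

```python
loss_per_direction = 0.025                          # midpoint of the quoted 2-3% range
round_trip_efficiency = (1 - loss_per_direction) ** 2
print(f"round-trip efficiency ~ {round_trip_efficiency:.1%}")  # ~95.1%
```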
Due to the energy requirements of refrigeration and the high cost of superconducting wire , SMES is currently used for short duration energy storage. Therefore, SMES is most commonly devoted to improving power quality .
There are several reasons for using superconducting magnetic energy storage instead of other energy storage methods. The most important advantage of SMES is that the time delay during charge and discharge is quite short. Power is available almost instantaneously and very high power output can be provided for a brief period of time. Other energy storage methods, such as pumped hydro or compressed air , have a substantial time delay associated with the energy conversion of stored mechanical energy back into electricity. Thus if demand is immediate, SMES is a viable option. Another advantage is that the loss of power is less than other storage methods because electric currents encounter almost no resistance . Additionally the main parts in a SMES are motionless, which results in high reliability.
There are several small SMES units available for commercial use and several larger test bed projects. Several 1 MW·h units are used for power quality control in installations around the world, especially to provide power quality at manufacturing plants requiring ultra-clean power, such as microchip fabrication facilities. [ 4 ]
These facilities have also been used to provide grid stability in distribution systems. [ 5 ] SMES is also used in utility applications. In northern Wisconsin , a string of distributed SMES units were deployed to enhance stability of a transmission loop. [ 6 ] The transmission line is subject to large, sudden load changes due to the operation of a paper mill, with the potential for uncontrolled fluctuations and voltage collapse.
The Engineering Test Model is a large SMES with a capacity of approximately 20 MW·h, capable of providing 40 MW of power for 30 minutes or 10 MW of power for 2 hours. [ 7 ]
A SMES system typically consists of four parts
This system includes the superconducting coil, a magnet and the coil protection. Here the energy is stored by disconnecting the coil from the larger system and then using electromagnetic induction from the magnet to induce a current in the superconducting coil. This coil then preserves the current until the coil is reconnected to the larger system, after which the coil partly or fully discharges.
The refrigeration system maintains the superconducting state of the coil by cooling the coil to the operating temperature.
The power conditioning system typically contains a power conversion system that converts DC to AC current and the other way around.
The control system monitors the power demand of the grid and controls the power flow from and to the coil. The control system also manages the condition of the SMES coil by controlling the refrigerator.
As a consequence of Faraday's law of induction , any loop of wire in which the magnetic field changes in time also generates an electric field. This process takes energy out of the wire through the electromotive force (EMF). EMF is defined as the electromagnetic work done on a unit charge when it has traveled one round of a conductive loop. The energy can then be seen as stored in the magnetic field. The power drawn from the wire is equal to the electric potential times the total charge divided by time, P = ℰ Q t {\displaystyle P={\frac {{\mathcal {E}}Q}{t}}}
where ℰ is the voltage or EMF. From the power we can calculate the work that is needed to create such a field. Due to energy conservation this amount of work also has to be equal to the energy stored in the field.
This formula can be rewritten in terms of the more easily measured electric current by the substitution I = Q / t {\displaystyle I=Q/t} , which gives P = ℰ I {\displaystyle P={\mathcal {E}}I}
where I is the electric current in amperes. The EMF ℰ across an inductor is proportional to the rate of change of the current and can thus be rewritten as ℰ = L d I d t {\displaystyle {\mathcal {E}}=L{\frac {dI}{dt}}}
Substitution now gives P = L I d I d t {\displaystyle P=LI{\frac {dI}{dt}}}
where L is a constant of proportionality called the inductance, measured in henries. Now that the power is known, all that is left to do is to integrate it over time to find the work: W = ∫ 0 t P d t ′ = 1 2 L I 2 {\displaystyle W=\int _{0}^{t}P\,dt'={\tfrac {1}{2}}LI^{2}}
As said earlier, this work has to be equal to the energy stored in the field. This entire calculation is based on a single looped wire. For wires that are looped multiple times the inductance L increases, as L is simply defined as the ratio between the voltage and the rate of change of the current. In conclusion, the stored energy in the coil is equal to: [ 8 ] E = 1 2 L I 2 {\displaystyle E={\tfrac {1}{2}}LI^{2}}
where E is the stored magnetic energy in joules, L is the inductance of the coil in henries, and I is the current through the coil in amperes.
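The result E = ½ L I² derived above can be turned into a small helper function; the coil inductance and current below are illustrative assumptions roughly on the scale of a large laboratory magnet, not parameters of any specific SMES installation.

```python
def stored_energy_joules(inductance_h, current_a):
    """Magnetic energy stored in a coil: E = 0.5 * L * I^2."""
    return 0.5 * inductance_h * current_a ** 2

# Illustrative values: a 0.1 H coil carrying 12 kA stores about 7.2 MJ.
print(f"{stored_energy_joules(0.1, 12_000) / 1e6:.1f} MJ")
```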
Consider a cylindrical coil with conductors of a rectangular cross section . The mean radius of coil is R . a and b are width and depth of the conductor. f is called form function, which is different for different shapes of coil. ξ (xi) and δ (delta) are two parameters to characterize the dimensions of the coil. We can therefore write the magnetic energy stored in such a cylindrical coil as shown below. This energy is a function of coil dimensions, number of turns and carrying current.
where
Besides the properties of the wire, the configuration of the coil itself is an important issue from a mechanical engineering aspect. There are three factors that affect the design and the shape of the coil – they are: Inferior strain tolerance, thermal contraction upon cooling and Lorentz forces in an energized coil. Among them, the strain tolerance is crucial not because of any electrical effect, but because it determines how much structural material is needed to keep the SMES from breaking. For small SMES systems, the optimistic value of 0.3% strain tolerance is selected. Toroidal geometry can help to lessen the external magnetic forces and therefore reduces the size of mechanical support needed. Also, due to the low external magnetic field, toroidal SMES can be located near a utility or customer load.
For small SMES, solenoids are usually used because they are easy to coil and no pre-compression is needed. In toroidal SMES, the coil is always under compression by the outer hoops and two disks, one of which is on the top and the other is on the bottom to avoid breakage. Currently, there is little need for toroidal geometry for small SMES, but as the size increases, mechanical forces become more important and the toroidal coil is needed.
The older large SMES concepts usually featured a low aspect ratio solenoid approximately 100 m in diameter buried in earth. At the low extreme of size is the concept of micro-SMES solenoids, for energy storage range near 1 MJ.
Under steady state conditions and in the superconducting state, the coil resistance is negligible. However, the refrigerator necessary to keep the superconductor cool requires electric power and this refrigeration energy must be considered when evaluating the efficiency of SMES as an energy storage device.
Although high-temperature superconductors (HTS) have higher critical temperature, flux lattice melting takes place in moderate magnetic fields around a temperature lower than this critical temperature. The heat loads that must be removed by the cooling system include conduction through the support system, radiation from warmer to colder surfaces, AC losses in the conductor (during charge and discharge), and losses from the cold–to-warm power leads that connect the cold coil to the power conditioning system. Conduction and radiation losses are minimized by proper design of thermal surfaces. Lead losses can be minimized by good design of the leads. AC losses depend on the design of the conductor, the duty cycle of the device and the power rating.
The refrigeration requirements for HTSC and low-temperature superconductor (LTSC) toroidal coils for the baseline temperatures of 77 K, 20 K, and 4.2 K, increases in that order. The refrigeration requirements here is defined as electrical power to operate the refrigeration system. As the stored energy increases by a factor of 100, refrigeration cost only goes up by a factor of 20. Also, the savings in refrigeration for an HTSC system is larger (by 60% to 70%) than for an LTSC systems.
Whether HTSC or LTSC systems are more economical depends on other major components that determine the cost of SMES: the conductor, consisting of superconductor and copper stabilizer, and the cold support structure are major costs in themselves. They must be judged against the overall efficiency and cost of the device. Other components, such as vacuum vessel insulation , have been shown to be a small part compared to the large coil cost. The combined costs of conductors, structure and refrigerator for toroidal coils are dominated by the cost of the superconductor. The same trend is true for solenoid coils. HTSC coils cost more than LTSC coils by a factor of 2 to 4. HTSC was expected to be cheaper due to lower refrigeration requirements, but this is not the case.
To gain some insight into costs, consider a breakdown by major components of both HTSC and LTSC coils corresponding to three typical stored energy levels, 2, 20 and 200 MW·h. The conductor cost dominates the three costs for all HTSC cases and is particularly important at small sizes. The principal reason lies in the comparative current density of LTSC and HTSC materials. The critical current of HTSC wire is generally lower than that of LTSC wire in the typical operating magnetic field of about 5 to 10 teslas (T). Assume the wire costs are the same by weight. Because HTSC wire has a lower critical current density ( J c ) than LTSC wire, it takes much more wire to create the same inductance. Therefore, the cost of HTSC wire is much higher than that of LTSC wire. Also, as the SMES size goes up from 2 to 20 to 200 MW·h, the LTSC conductor cost also goes up by about a factor of 10 at each step. The HTSC conductor cost rises a little slower but is still by far the costliest item.
The structure costs of either HTSC or LTSC go up uniformly (a factor of 10) with each step from 2 to 20 to 200 MW·h. But HTSC structure cost is higher because the strain tolerance of HTSC (ceramics cannot carry much tensile load) is less than that of LTSC materials such as NbTi or Nb3Sn, which demands more structural material. Thus, in the very large cases, the HTSC cost cannot be offset by simply reducing the coil size at a higher magnetic field.
It is worth noting that the refrigerator cost in all cases is so small that there is very little percentage saving associated with reduced refrigeration demands at high temperature. This means that if an HTSC, BSCCO for instance, works better at a low temperature, say 20 K, it will certainly be operated there. For very small SMES, the reduced refrigerator cost will have a more significant positive impact.
Clearly, the volume of superconducting coils increases with the stored energy. Also, the torus maximum diameter is always smaller for an HTSC magnet than for an LTSC magnet, due to its higher-field operation. In the case of solenoid coils, the height or length is also smaller for HTSC coils, but still much greater than in a toroidal geometry (due to the low external magnetic field).
An increase in peak magnetic field yields a reduction in both volume (higher energy density) and cost (reduced conductor length). There is an optimum value of the peak magnetic field, about 7 T in this case. If the field is increased past the optimum, further volume reductions are possible with minimal increase in cost. The limit to which the field can be increased is usually not economic but physical, and it relates to the impossibility of bringing the inner legs of the toroid any closer together while still leaving room for the bucking cylinder.
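The trade-off among peak field, volume, and cost follows from the magnetic energy density u = B²/2μ₀. The sketch below is a minimal illustration with assumed numbers (a 20 MW·h unit and a uniform field filling the coil bore); a real toroidal coil stores energy non-uniformly, so the result is only a lower bound on the field volume.

```python
from math import pi

MU_0 = 4 * pi * 1e-7  # vacuum permeability, H/m


def field_volume_for_energy(energy_joules: float, peak_field_tesla: float) -> float:
    """Volume (m^3) a uniform field B must fill to store the given energy,
    using the magnetic energy density u = B^2 / (2 * mu_0)."""
    energy_density = peak_field_tesla ** 2 / (2 * MU_0)  # J/m^3
    return energy_joules / energy_density


# Assumed example: a 20 MW-h SMES unit near the ~7 T optimum quoted above.
stored_energy = 20e6 * 3600.0  # 20 MW-h in joules
for b in (5.0, 7.0, 10.0):
    v = field_volume_for_energy(stored_energy, b)
    print(f"B = {b:4.1f} T -> energy density {b**2 / (2 * MU_0) / 1e6:5.1f} MJ/m^3, "
          f"minimum field volume {v:6.0f} m^3")
```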
The superconductor material is a key issue for SMES. Superconductor development efforts focus on increasing Jc and strain range and on reducing the wire manufacturing cost .
The energy density, efficiency and high discharge rate make SMES systems useful to incorporate into modern energy grids and green energy initiatives. SMES applications fall into three categories: power supply systems, control systems and emergency/contingency systems.
FACTS ( flexible AC transmission system ) devices are static devices that can be installed in electricity grids . They are used to enhance the controllability and power transfer capability of an electric power grid. The application of SMES in FACTS devices was the first application of SMES systems. The first SMES-based FACTS device was installed by the Bonneville Power Administration in 1980. It used SMES to damp low-frequency oscillations, contributing to the stabilization of the power grid. [ 9 ] [ 6 ] [ 10 ] In 2000, SMES-based FACTS systems were installed at key points in the northern Wisconsin power grid to enhance the stability of the grid.
The use of electric power requires a stable energy supply that delivers constant power. This stability depends on the amount of power used and the amount of power generated. Power usage varies throughout the day and across the seasons. SMES systems can store energy when the generated power exceeds the demand (load) and release it when the load exceeds the generated power, thereby compensating for power fluctuations. [ 11 ] Using these systems makes it possible for conventional generating units to operate at a constant output, which is more efficient and convenient. [ 12 ] However, when the power imbalance between supply and demand lasts for a long time, the SMES may become completely discharged. [ 13 ]
When the load does not match the generated power output, due to a load perturbation, the load may exceed the rated power output of the generators. This can happen, for example, when wind generators stop spinning due to a sudden lack of wind. Such a load perturbation can cause a load-frequency control problem, which may be amplified in DFIG -based wind power generators. [ 14 ] This load disparity can be compensated by power output from SMES systems that store energy when generation exceeds the load. [ 15 ] SMES-based load frequency control systems have the advantage of a fast response compared to contemporary control systems.
Uninterruptible Power Supplies (UPS) are used to protect against power surges and shortfalls by supplying continuous power. This compensation is done by switching from the failing power supply to an SMES system that can almost instantaneously supply the power needed to continue the operation of essential systems. SMES-based UPS are most useful in systems that need to be kept running at certain critical loads. [ 16 ] [ 17 ]
When the power angle difference across a circuit breaker is too large, protective relays prevent the reclosing of the circuit breaker. SMES systems can be used in these situations to reduce the power angle difference across the circuit breaker, thereby allowing it to reclose. These systems allow the quick restoration of system power after major transmission line outages. [ 12 ]
Spinning reserve is the extra generating capacity that is available by increasing the power output of systems already connected to the grid. This capacity is reserved by the system operator to compensate for disruptions in the power grid. Because of the fast recharge times and fast alternating current to direct current conversion of SMES systems, they can be used as spinning reserve when a major part of the grid or a transmission line is out of service. [ 18 ] [ 19 ]
Superconducting fault current limiters (SFCL) are used to limit the current under a fault in the grid. In this system a superconductor is quenched (raised in temperature) when a fault in the grid line is detected. By quenching the superconductor, its resistance rises and the current is diverted to other grid lines. This is done without interrupting the larger grid. Once the fault is cleared, the SFCL is cooled back down and becomes effectively invisible to the larger grid. [ 20 ] [ 15 ]
Electromagnetic launchers are electric projectile weapons that use a magnetic field to accelerate projectiles to very high velocity. These launchers require high-power pulse sources in order to work. Such pulse sources can be realised using the quick-release capability and high power density of SMES systems. [ 21 ]
Future developments in the components of SMES systems could make them more viable for other applications; specifically, superconductors with higher critical temperatures and critical current densities. These are the same limits faced in other industrial uses of superconductors. Recent development of HTS wire made of YBCO with a superconducting transition temperature of around 90 K shows promise. Typically, the higher the superconducting transition temperature, the higher the maximum current density the superconductor can sustain before Cooper pair breakdown. A substance with a high critical temperature will generally have a higher critical current at low temperature than a superconductor with a lower critical temperature. This higher critical current will raise the energy storage quadratically, which may make SMES and other industrial applications of superconductors cost-effective. [ 22 ]
The energy content of current SMES systems is usually quite small. Methods to increase the energy stored in SMES often resort to large-scale storage units. As with other superconducting applications, cryogenics are a necessity. A robust mechanical structure is usually required to contain the very large Lorentz forces generated by and on the magnet coils. The dominant cost for SMES is the superconductor, followed by the cooling system and the rest of the mechanical structure.
A robust structure is needed because of the large Lorentz forces generated by the strong magnetic field acting on the coil, and by the coil's field acting on the larger structure.
To achieve commercially useful levels of storage, around 5 GW·h (18 TJ ), an SMES installation would need a loop of around 800 m. This is traditionally pictured as a circle, though in practice it could be more like a rounded rectangle. In either case it would require access to a significant amount of land to house the installation.
There are two manufacturing issues around SMES. The first is the fabrication of bulk cable suitable to carry the current. The HTSC superconducting materials found to date are relatively delicate ceramics, making it difficult to use established techniques to draw extended lengths of superconducting wire. Much research has focused on layer deposition techniques, applying a thin film of material onto a stable substrate, but this is currently only suitable for small-scale electrical circuits.
The second problem is the infrastructure required for an installation. Until room-temperature superconductors are found, the 800 m loop of wire would have to be contained within a vacuum flask of liquid nitrogen . This in turn would require stable support, most commonly envisioned by burying the installation.
Above a certain field strength, known as the critical field, the superconducting state is destroyed. This means that there exists a maximum charging rate for the superconducting material, given that the magnitude of the magnetic field determines the flux captured by the superconducting coil.
In general power systems look to maximize the current they are able to handle. This makes any losses due to inefficiencies in the system relatively insignificant. Unfortunately, large currents may generate magnetic fields greater than the critical field due to Ampere's Law . Current materials struggle, therefore, to carry sufficient current to make a commercial storage facility economically viable.
Several issues at the onset of the technology have hindered its proliferation:
These still pose problems for superconducting applications but are improving over time. Advances have been made in the performance of superconducting materials. Furthermore, the reliability and efficiency of refrigeration systems has improved significantly.
At the moment it takes four months to cool the coil from room temperature to its operating temperature . This also means that the SMES takes equally long to return to operating temperature after maintenance and when restarting after operating failures. [ 23 ]
Due to the large amount of energy stored, certain measures need to be taken to protect the coils from damage in the case of coil failure. The rapid release of energy in case of coil failure might damage surrounding systems. Some conceptual designs propose incorporating a superconducting cable into the design with the goal of absorbing the energy released after a coil failure. [ 6 ] [ 18 ] The system also needs to be kept well insulated electrically in order to prevent loss of energy. [ 6 ] | https://en.wikipedia.org/wiki/Superconducting_magnetic_energy_storage
The superconducting nanowire single-photon detector ( SNSPD or SSPD ) is a type of optical and near-infrared single- photon detector based on a current-biased superconducting nanowire . [ 1 ] It was first developed by scientists at Moscow State Pedagogical University and at the University of Rochester in 2001. [ 2 ] [ 3 ] The first fully operational prototype was demonstrated in 2005 by the National Institute of Standards and Technology (Boulder), and BBN Technologies as part of the DARPA Quantum Network . [ 4 ] [ 5 ] [ 6 ] [ 7 ]
As of 2023, a superconducting nanowire single-photon detector is the fastest single-photon detector (SPD) for photon counting . [ 8 ] [ 9 ] [ 10 ] It is a key enabling technology for quantum optics and optical quantum technologies . SNSPDs are available with very high detection efficiency, very low dark count rate and very low timing jitter, compared to other types of single-photon detectors. SNSPDs are covered by International Electrotechnical Commission (IEC) international standards. [ 11 ] As of 2023, commercial SNSPD devices are available in multichannel systems in a price range of 100,000 euros.
It was recently discovered that superconducting wires as wide as 1.5 μm can detect single infra-red photons. [ 12 ] [ 13 ] [ 14 ] This is important because optical lithography rather than electron lithography can be used in their construction. This reduces the cost for applications that require large photodetector areas. One application is in dark matter detection experiments, where the target is a scintillating GaAs crystal. GaAs suitably doped with silicon and boron is a luminous cryogenic scintillator that has no apparent afterglow and is available commercially in the form of large, high-quality crystals. [ 15 ] [ 16 ] [ 17 ]
The SNSPD consists of a thin (≈ 5 nm) and narrow (≈ 100 nm) superconducting nanowire . The length is typically hundreds of micrometers , and the nanowire is patterned in a compact meander geometry to create a square or circular pixel with high detection efficiency. The nanowire is cooled well below its superconducting critical temperature and biased with a DC current that is close to but less than the superconducting critical current of the nanowire. A photon incident on the nanowire breaks Cooper pairs and reduces the local critical current below that of the bias current. This results in the formation of a localized non-superconducting region, or hotspot, with finite electrical resistance . This resistance is typically larger than the 50 ohm input impedance of the readout amplifier, and hence most of the bias current is shunted to the amplifier. This produces a measurable voltage pulse that is approximately equal to the bias current multiplied by 50 ohms. With most of the bias current flowing through the amplifier, the non-superconducting region cools and returns to the superconducting state. The time for the current to return to the nanowire is typically set by the inductive time constant of the nanowire, equal to the kinetic inductance of the nanowire divided by the impedance of the readout circuit. [ 18 ] Proper self-resetting of the device requires that this inductive time constant be slower than the intrinsic cooling time of the nanowire hotspot. [ 19 ]
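As a rough numerical illustration of the pulse amplitude and recovery time described above, the sketch below uses assumed, representative device values (a 15 μA bias current and a 500 nH kinetic inductance, neither taken from the text) together with a 50 Ω readout impedance.

```python
def snspd_pulse_amplitude(bias_current_a: float, load_ohms: float = 50.0) -> float:
    """Approximate pulse height: the bias current diverted into the readout impedance."""
    return bias_current_a * load_ohms


def snspd_reset_time(kinetic_inductance_h: float, load_ohms: float = 50.0) -> float:
    """Inductive recovery time constant: kinetic inductance over readout impedance."""
    return kinetic_inductance_h / load_ohms


# Assumed, representative values (not from the text):
i_bias = 15e-6      # 15 uA bias current
l_kinetic = 500e-9  # 500 nH kinetic inductance of the meander

print(f"pulse height ~ {snspd_pulse_amplitude(i_bias) * 1e3:.2f} mV")                 # ~0.75 mV
print(f"current recovery time constant ~ {snspd_reset_time(l_kinetic) * 1e9:.0f} ns")  # ~10 ns
```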
While the SNSPD does not match the intrinsic energy or photon-number resolution of the superconducting transition edge sensor , the SNSPD is significantly faster than conventional transition edge sensors and operates at higher temperatures. A degree of photon-number resolution can be achieved in SNSPD arrays, [ 20 ] through time-binning [ 21 ] or advanced readout schemes. [ 22 ] Most SNSPDs are made of sputtered niobium nitride (NbN), which offers a relatively high superconducting critical temperature (≈ 10 K ) which enables SNSPD operation in the temperature range 1 K to 4 K (compatible with liquid helium or modern closed-cycle cryocoolers ). The intrinsic thermal time constants of NbN are short, giving very fast cooling time after photon absorption (<100 picoseconds). [ 23 ]
The absorption in the superconducting nanowire can be boosted by a variety of strategies: integration with an optical cavity , [ 24 ] integration with a photonic waveguide [ 25 ] or addition of nanoantenna structures. [ 26 ] SNSPD cavity devices in NbN, NbTiN, WSi & MoSi have demonstrated fibre-coupled device detection efficiencies greater than 98% at 1550 nm wavelength [ 27 ] with count rates in the tens of MHz. [ 28 ] The detection efficiencies are optimized for a specific wavelength range in each detector. They vary widely, however, due to highly localized regions of the nanowires where the effective cross-sectional area for superconducting current is reduced. [ 29 ] SNSPD devices have also demonstrated exceptionally low jitter – the uncertainty in the photon arrival time – as low as 3 picoseconds at visible wavelengths. [ 30 ] [ 31 ] Timing jitter increases as photon energy drops and has been verified out to 3.5 micrometres wavelength. [ 32 ] Timing jitter is an extremely important property for time-correlated single-photon counting (TCSPC) [ 33 ] applications. Furthermore, SNSPDs have extremely low rates of dark counts, i.e. the occurrence of voltage pulses in the absence of a detected photon. [ 34 ] In addition, the deadtime (time interval following a detection event during which the detector is not sensitive) is on the order of a few nanoseconds, this short deadtime translates into very high saturation count rates and enables antibunching measurements with a single detector. [ 35 ]
For the detection of longer wavelength photons, however, the detection efficiency of standard SNSPDs decreases significantly. [ 36 ] Recent efforts to improve the detection efficiency at near-infrared and mid-infrared wavelengths include studies of narrower (20 nm and 30 nm wide) NbN nanowires [ 37 ] as well as extensive studies of alternative superconducting materials [ 38 ] with lower superconducting critical temperatures than NbN ( tungsten silicide , [ 39 ] niobium silicide, [ 40 ] molybdenum silicide [ 41 ] and tantalum nitride [ 42 ] ). Single photon sensitivity up to 10 micrometer wavelength has recently been demonstrated in a tungsten silicide SNSPD. [ 43 ] Alternative thin film deposition techniques such as atomic layer deposition are of interest for extending the spectral range and scalability of SNSPDs to large areas. [ 44 ] High temperature superconductors have been investigated for SNSPDs [ 45 ] [ 46 ] with some encouraging recent reports. [ 47 ] [ 48 ] SNSPDs have been created from magnesium diboride with some single photon sensitivity in the visible and near infrared. [ 49 ] [ 50 ]
There is considerable interest and effort in scaling up SNSPDs to large multipixel arrays and cameras. [ 51 ] [ 52 ] A kilopixel SNSPD array has recently been reported. [ 53 ] A key challenge is readout, [ 54 ] which can be addressed via multiplexing [ 55 ] [ 56 ] or digital readout using superconducting single flux quantum logic. [ 57 ]
Many of the initial application demonstrations of SNSPDs have been in the area of quantum information , [ 58 ] such as quantum key distribution [ 59 ] and optical quantum computing . [ 60 ] [ 61 ] Other current and emerging applications include imaging of infrared photoemission for defect analysis in CMOS circuitry, [ 62 ] single photon emitter characterization , [ 63 ] LIDAR , [ 64 ] [ 65 ] [ 66 ] on-chip quantum optics , [ 67 ] [ 68 ] optical neuromorphic computing , [ 69 ] [ 70 ] fibre optic temperature sensing , [ 71 ] optical time domain reflectometry , [ 72 ] readout for ion trap qubits, [ 73 ] quantum plasmonics, [ 74 ] [ 75 ] single electron detection, [ 76 ] single α and β particle detection, [ 77 ] singlet oxygen luminescence detection, [ 78 ] deep space optical communication , [ 79 ] [ 80 ] dark matter searches [ 81 ] and exoplanet detection. [ 82 ] A number of companies worldwide are successfully commercializing complete single-photon detection systems based on superconducting nanowires, including Munich Quantum Instruments , Single Quantum , Photon Spot , Scontel , Quantum Opus , ID Quantique , PhoTec and Pixel Photonics . Wider adoption of SNSPD technology is closely linked to advances in cryocoolers for 4 K and below, and SNSPDs have recently been demonstrated in miniaturized systems. [ 83 ] [ 84 ]
SNSPDs have also been demonstrated to have applicability for high-energy proton detection. [ 85 ] | https://en.wikipedia.org/wiki/Superconducting_nanowire_single-photon_detector |
Superconducting radio frequency (SRF) science and technology involves the application of electrical superconductors to radio frequency devices. The ultra-low electrical resistivity of a superconducting material allows an RF resonator to obtain an extremely high quality factor , Q . For example, it is commonplace for a 1.3 GHz niobium SRF resonant cavity at 1.8 kelvins to obtain a quality factor of Q = 5×10¹⁰. Such a very high Q resonator stores energy with very low loss and narrow bandwidth . These properties can be exploited for a variety of applications, including the construction of high-performance particle accelerator structures.
The amount of loss in an SRF resonant cavity is so minute that it is often explained with the following comparison: Galileo Galilei (1564–1642) was one of the first investigators of pendulous motion, a simple form of mechanical resonance . Had Galileo experimented with a 1 Hz resonator with a quality factor Q typical of today's SRF cavities and left it swinging in an entombed lab since the early 17th century, that pendulum would still be swinging today with about half of its original amplitude.
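This comparison can be checked with a one-line estimate: a resonator's stored energy decays as exp(−ωt/Q), so its amplitude decays as exp(−ωt/2Q). The sketch below assumes the Q = 5×10¹⁰ figure quoted above and roughly four centuries of elapsed time.

```python
from math import exp, pi

Q = 5e10     # quality factor quoted above for a 1.3 GHz niobium cavity at 1.8 K
f = 1.0      # pendulum frequency, Hz
years = 400  # roughly from Galileo's era to today
t = years * 365.25 * 24 * 3600  # seconds

# Energy decays as exp(-omega*t/Q); amplitude decays as exp(-omega*t/(2Q)).
amplitude_fraction = exp(-2 * pi * f * t / (2 * Q))
print(f"remaining amplitude after {years} years: {amplitude_fraction:.2f} of the original")
# -> about 0.45, i.e. roughly half, consistent with the comparison above
```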
The most common application of superconducting RF is in particle accelerators . Accelerators typically use resonant RF cavities formed from or coated with superconducting materials. Electromagnetic fields are excited in the cavity by coupling in an RF source with an antenna. When the frequency of the RF fed by the antenna is the same as that of a cavity mode, the resonant fields build to high amplitudes. Charged particles passing through apertures in the cavity are then accelerated by the electric fields and deflected by the magnetic fields. The resonant frequency driven in SRF cavities typically ranges from 200 MHz to 3 GHz, depending on the particle species to be accelerated.
The most common fabrication technology for such SRF cavities is to form thin walled (1–3 mm) shell components from high purity niobium sheets by stamping . These shell components are then welded together to form cavities.
A simplified diagram of the key elements of an SRF cavity setup is shown below. The cavity is immersed in a saturated liquid helium bath. Pumping removes helium vapor boil-off and controls the bath temperature. The helium vessel is often pumped to a pressure below helium's superfluid lambda point to take advantage of the superfluid's thermal properties. Because superfluid has very high thermal conductivity, it makes an excellent coolant. In addition, superfluids boil only at free surfaces, preventing the formation of bubbles on the surface of the cavity, which would cause mechanical perturbations. An antenna is needed in the setup to couple RF power to the cavity fields and, in turn, any passing particle beam. The cold portions of the setup need to be extremely well insulated, which is best accomplished by a vacuum vessel surrounding the helium vessel and all ancillary cold components. The full SRF cavity containment system, including the vacuum vessel and many details not discussed here, is a cryomodule .
Entry into superconducting RF technology can incur more complexity, expense, and time than normal-conducting RF cavity strategies. SRF requires chemical facilities for harsh cavity treatments, a low-particulate cleanroom for high-pressure water rinsing and assembly of components, and complex engineering for the cryomodule vessel and cryogenics. A vexing aspect of SRF is the as-yet elusive ability to consistently produce high Q cavities in high volume production, which would be required for a large linear collider . Nevertheless, for many applications the capabilities of SRF cavities provide the only solution for a host of demanding performance requirements.
Several extensive treatments of SRF physics and technology are available, many of them free of charge and online. There are the proceedings of CERN accelerator schools, [ 2 ] [ 3 ] [ 4 ] a scientific paper giving a thorough presentation of the many aspects of an SRF cavity to be used in the International Linear Collider , [ 5 ] biennial International Conferences on RF Superconductivity held at varying global locations in odd-numbered years, [ 6 ] and tutorials presented at the conferences. [ 7 ]
A large variety of RF cavities are used in particle accelerators. Historically most have been made of copper – a good electrical conductor – and operated near room temperature with exterior water cooling to remove the heat generated by the electrical loss in the cavity. In the past two decades, however, accelerator facilities have increasingly found superconducting cavities to be more suitable (or necessary) for their accelerators than normal-conducting copper versions. The motivation for using superconductors in RF cavities is not to achieve a net power saving, but rather to increase the quality of the particle beam being accelerated. Though superconductors have little AC electrical resistance, the small amount of power they do dissipate is deposited at very low temperatures, typically in a liquid helium bath at 1.6 K to 4.5 K, and maintaining such low temperatures takes a lot of energy. The refrigeration power required to maintain the cryogenic bath at low temperature in the presence of heat from small RF power dissipation is dictated by the Carnot efficiency , and can easily be comparable to the normal-conductor power dissipation of a room-temperature copper cavity. The principal motivations for using superconducting RF cavities are:
When future advances in superconducting material science allow higher superconducting critical temperatures T c and consequently higher SRF bath temperatures, then the reduced thermocline between the cavity and the surrounding environment could yield a significant net power savings by SRF over the normal conducting approach to RF cavities. Other issues will need to be considered with a higher bath temperature, though, such as the fact that superfluidity (which is presently exploited with liquid helium) would not be present with (for example) liquid nitrogen. At present, none of the "high T c " superconducting materials are suitable for RF applications. Shortcomings of these materials arise due to their underlying physics as well as their bulk mechanical properties not being amenable to fabricating accelerator cavities. However, depositing films of promising materials onto other mechanically amenable cavity materials may provide a viable option for exotic materials serving SRF applications. At present, the de facto choice for SRF material is still pure niobium, which has a critical temperature of 9.3 K and functions as a superconductor nicely in a liquid helium bath of 4.2 K or lower, and has excellent mechanical properties.
The physics of Superconducting RF can be complex and lengthy. A few simple approximations derived from the complex theories, though, can serve to provide some of the important parameters of SRF cavities.
By way of background, some of the pertinent parameters of RF cavities are itemized as follows. A resonator's quality factor is defined by Q o = ω U / P d , where ω is the resonant frequency in rad/s, U is the energy stored in the cavity in joules, and P d is the power dissipated in the cavity walls in watts to maintain the stored energy U.
The energy stored in the cavity is given by the integral of field energy density over its volume, U = (μ₀/2) ∫|H|² dV , where H is the magnetic field in the cavity and μ₀ is the permeability of free space.
The power dissipated is given by the integral of resistive wall losses over its surface, P d = (R s /2) ∮|H|² dS , where R s is the surface resistance discussed below.
The integrals of the electromagnetic field in the above expressions are generally not solved analytically, since the cavity boundaries rarely lie along axes of common coordinate systems. Instead, the calculations are performed by any of a variety of computer programs that solve for the fields for non-simple cavity shapes, and then numerically integrate the above expressions.
An RF cavity parameter known as the Geometry Factor ranks the cavity's effectiveness of providing accelerating electric field due to the influence of its shape alone, which excludes specific material wall loss. The Geometry Factor is given by G = ω μ₀ ∫|H|² dV / ∮|H|² dS , and then Q o = G / R s .
The geometry factor is quoted for cavity designs to allow comparison to other designs independent of wall loss, since wall loss for SRF cavities can vary substantially depending on material preparation, cryogenic bath temperature, electromagnetic field level, and other highly variable parameters. The Geometry Factor is also independent of cavity size; it is constant as a cavity shape is scaled to change its frequency.
As an example of the above parameters, a typical 9-cell SRF cavity for the International Linear Collider [ 5 ] (a.k.a. a TESLA cavity) would have G = 270 Ω and R s = 10 nΩ, giving Q o = 2.7×10¹⁰.
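A quick numerical check of this example, together with the resulting resonance bandwidth f/Q o (a standard resonator relation, not stated above):

```python
G = 270.0    # geometry factor of the TESLA 9-cell cavity, ohms (from the text)
R_s = 10e-9  # surface resistance, ohms (from the text)
f = 1.3e9    # operating frequency, Hz

Q_0 = G / R_s        # unloaded quality factor
bandwidth = f / Q_0  # resonator -3 dB bandwidth
print(f"Q_0 = {Q_0:.1e}")                                  # 2.7e10, matching the text
print(f"resonance bandwidth ~ {bandwidth * 1e3:.0f} mHz")  # a few tens of millihertz
```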
The critical parameter for SRF cavities in the above equations is the surface resistance R s , and is where the complex physics comes into play. For normal-conducting copper cavities operating near room temperature, R s is simply determined by the empirically measured bulk electrical conductivity σ by R s = √( μ₀ ω / 2σ ) .
For copper at 300 K, σ = 5.8×10⁷ (Ω·m)⁻¹ and at 1.3 GHz, R s copper = 9.4 mΩ.
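The copper figure can be reproduced directly from the skin-effect expression above:

```python
from math import pi, sqrt

MU_0 = 4 * pi * 1e-7  # vacuum permeability, H/m
sigma_cu = 5.8e7      # copper conductivity at 300 K, (ohm*m)^-1 (from the text)
f = 1.3e9             # frequency, Hz
omega = 2 * pi * f

R_s = sqrt(omega * MU_0 / (2 * sigma_cu))  # normal skin-effect surface resistance
print(f"R_s(copper, 1.3 GHz, 300 K) = {R_s * 1e3:.1f} mOhm")  # ~9.4 mOhm, as quoted
```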
For Type II superconductors in RF fields, R s can be viewed as the sum of the superconducting BCS resistance and temperature-independent "residual resistances", R s = R BCS + R res .
The BCS resistance derives from BCS theory . One way to view the nature of the BCS RF resistance is that the superconducting Cooper pairs , which have zero resistance for DC current, have finite mass and momentum which has to alternate sinusoidally for the AC currents of RF fields, thus giving rise to a small energy loss. The BCS resistance for niobium can be approximated when the temperature is less than half of niobium's superconducting critical temperature , T < T c /2, by R BCS ≈ (2×10⁻⁴ Ω) (1/T) (f/1.5)² e^(−17.67/T) , where T is the cavity temperature in kelvins and f is the RF frequency in gigahertz.
Note that for superconductors, the BCS resistance increases quadratically with frequency, ~f², whereas for normal conductors the surface resistance increases as the root of frequency, ~√f. For this reason, the majority of superconducting cavity applications favor lower frequencies, <3 GHz, and normal-conducting cavity applications favor higher frequencies, >0.5 GHz, there being some overlap depending on the application.
The superconductor's residual resistance arises from several sources, such as random material defects, hydrides that can form on the surface due to hot chemistry and slow cool-down, and others that are yet to be identified. One of the quantifiable residual resistance contributions is due to an external magnetic field pinning magnetic fluxons in a Type II superconductor. The pinned fluxon cores create small normal-conducting regions in the niobium that can be summed to estimate their net resistance. For niobium, the magnetic field contribution to R s can be approximated by
where:
The Earth's nominal magnetic flux density of 0.5 gauss (50 μT ) translates to a magnetic field of 0.5 Oe (40 A/m) and would produce a residual surface resistance in a superconductor that is orders of magnitude greater than the BCS resistance, rendering the superconductor too lossy for practical use. For this reason, superconducting cavities are surrounded by magnetic shielding to reduce the field permeating the cavity to typically <10 mOe (0.8 A/m).
Using the above approximations for a niobium SRF cavity at 1.8 K, 1.3 GHz, and assuming a magnetic field of 10 mOe (0.8 A/m), the surface resistance components would be
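Only the BCS term can be evaluated directly from the approximation given above; the residual contributions add to it. A minimal sketch for the example parameters (1.8 K, 1.3 GHz), also using the G = 270 Ω geometry factor quoted earlier to indicate the Q o that the BCS term alone would allow:

```python
from math import exp


def r_bcs_niobium(T_kelvin: float, f_ghz: float) -> float:
    """Approximate BCS surface resistance of niobium in ohms, valid for T < Tc/2."""
    return 2e-4 * (1.0 / T_kelvin) * (f_ghz / 1.5) ** 2 * exp(-17.67 / T_kelvin)


T, f = 1.8, 1.3
r = r_bcs_niobium(T, f)
print(f"R_BCS({T} K, {f} GHz) ~ {r * 1e9:.1f} nOhm")  # a few nano-ohms

# Residual resistance (trapped flux, defects, hydrides) adds to R_BCS;
# with G = 270 ohm the BCS term alone would cap Q_0 near G / R_BCS:
print(f"BCS-limited Q_0 ~ {270.0 / r:.1e}")
```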
The Q o just described can be further improved by up to a factor of 2 by performing a mild vacuum bake of the cavity. Empirically, the bake seems to reduce the BCS resistance by 50%, but increases the residual resistance by 30%. The plot below shows the ideal Q o values for a range of residual magnetic field for a baked and unbaked cavity.
In general, much care and attention to detail must be exercised in the experimental setup of SRF cavities so that there is not Q o degradation due to RF losses in ancillary components, such as stainless steel vacuum flanges that are too close to the cavity's evanescent fields. However, careful SRF cavity preparation and experimental configuration have achieved the ideal Q o not only for low field amplitudes, but up to cavity fields that are typically 75% of the magnetic field quench limit. Few cavities make it to the magnetic field quench limit since residual losses and vanishingly small defects heat up localized spots, which eventually exceed the superconducting critical temperature and lead to a thermal quench .
When using superconducting RF cavities in particle accelerators, the field level in the cavity should generally be as high as possible to most efficiently accelerate the beam passing through it. The Q o values described by the above calculations tend to degrade as the fields increase, which is plotted for a given cavity as a " Q vs E " curve, where " E " refers to the accelerating electric field of the TM 01 mode. Ideally, the cavity Q o would remain constant as the accelerating field is increased all the way up to the point of a magnetic quench field, as indicated by the "ideal" dashed line in the plot below. In reality, though, even a well prepared niobium cavity will have a Q vs E curve that lies beneath the ideal, as shown by the "good" curve in the plot.
There are many phenomena that can occur in an SRF cavity to degrade its Q vs E performance, such as impurities in the niobium, hydrogen contamination due to excessive heat during chemistry, and a rough surface finish. After a couple decades of development, a necessary prescription for successful SRF cavity production is emerging. This includes:
There remains some uncertainty as to the root cause of why some of these steps lead to success, such as the electropolish and vacuum bake. However, if this prescription is not followed, the Q vs E curve often shows an excessive degradation of Q o with increasing field, as shown by the " Q slope" curve in the plot below. Finding the root causes of Q slope phenomena is the subject of ongoing fundamental SRF research. The insight gained could lead to simpler cavity fabrication processes as well as benefit future material development efforts to find higher T c alternatives to niobium.
In 2012, a Q(E) dependence of SRF cavities in which the quality factor increases with increasing accelerating field (the Q-rise phenomenon) was observed for the first time, in a Ti-doped SRF cavity. [ 9 ] The effect was explained by the presence of sharper peaks in the electronic density of states at the gap edges in doped cavities, with such peaks being broadened by the RF current. [ 10 ] A similar phenomenon was later observed with nitrogen doping, which has since become the state-of-the-art cavity preparation for high performance. [ 11 ]
One of the main reasons for using SRF cavities in particle accelerators is that their large apertures result in low beam impedance and higher thresholds of deleterious beam instabilities. As a charged particle beam passes through a cavity, its electromagnetic radiation field is perturbed by the sudden increase of the conducting wall diameter in the transition from the small-diameter beampipe to the large hollow RF cavity. A portion of the particle's radiation field is then "clipped off" upon re-entrance into the beampipe and left behind as wakefields in the cavity. The wakefields are simply superimposed upon the externally driven accelerating fields in the cavity. The spawning of electromagnetic cavity modes as wakefields from the passing beam is analogous to a drumstick striking a drumhead and exciting many resonant mechanical modes.
The beam wakefields in an RF cavity excite a subset of the spectrum of the many electromagnetic modes , including the externally driven TM 01 mode. There are then a host of beam instabilities that can occur as the repetitive particle beam passes through the RF cavity, each time adding to the wakefield energy in a collection of modes.
For a particle bunch with charge q , a length much shorter than the wavelength of a given cavity mode, and traversing the cavity at time t =0, the amplitude of the wakefield voltage left behind in the cavity in a given mode is given by [ 12 ]
where:
The shunt impedance R can be calculated from the solution of the electromagnetic fields of a mode, typically by a computer program that solves for the fields. In the equation for V wake , the ratio R / Q o serves as a good comparative measure of wakefield amplitude for various cavity shapes, since the other terms are typically dictated by the application and are fixed. Mathematically,
where relations defined above have been used. R / Q o is then a parameter that factors out cavity dissipation and is viewed as a measure of the cavity geometry's effectiveness at producing accelerating voltage per unit of stored energy in its volume. The wakefield being proportional to R / Q o can be seen intuitively, since a cavity with small beam apertures concentrates the electric field on axis and has high R / Q o , but also clips off more of the particle bunch's radiation field as deleterious wakefields.
The calculation of electromagnetic field buildup in a cavity due to wakefields can be complex and depends strongly on the specific accelerator mode of operation. For the straightforward case of a storage ring with repetitive particle bunches spaced by time interval T b and a bunch length much shorter than the wavelength of a given mode, the long term steady state wakefield voltage presented to the beam by the mode is given by [ 12 ] V ss wake = V wake / (1 − e^(−T b /τ) e^(iδ)) , where τ = 2 Q L / ω o is the voltage decay time constant of the mode, Q L is the loaded quality factor of the mode, ω o is the mode's angular frequency, and δ is the phase slip of the mode between bunch passages.
As an example calculation, let the phase shift δ = 0, which would be close to the case for the TM 01 mode by design and unfortunately likely to occur for a few HOMs. Having δ = 0 (or an integer multiple of an RF mode's period, δ = n2π ) gives the worst-case wakefield build-up, where successive bunches are maximally decelerated by previous bunches' wakefields and give up even more energy than with only their "self wake". Then, taking ω o = 2π × 500 MHz, T b = 1 μs, and Q L = 10⁶, the buildup of wakefields would be V ss wake = 637 × V wake . A pitfall for any accelerator cavity would be the presence of what is termed a "trapped mode". This is an HOM that does not leak out of the cavity and consequently has a Q L that can be orders of magnitude larger than used in this example. In this case, the buildup of wakefields of the trapped mode would likely cause a beam instability. The beam instability implications of the V ss wake wakefields are thus addressed differently for the fundamental accelerating mode TM 01 and all other RF modes, as described next.
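The buildup factor of 637 quoted in this example can be reproduced from the steady-state expression above with δ = 0:

```python
from math import exp, pi

f0 = 500e6  # mode frequency, Hz
omega0 = 2 * pi * f0
Q_L = 1e6   # loaded quality factor of the mode
T_b = 1e-6  # bunch spacing, s

tau = 2 * Q_L / omega0                   # voltage decay time constant of the mode
buildup = 1.0 / (1.0 - exp(-T_b / tau))  # steady-state enhancement for delta = 0
print(f"decay time constant tau = {tau * 1e6:.0f} us")
print(f"steady-state buildup ~ {buildup:.0f} x the single-pass wakefield")  # ~637
```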
The complex calculations treating wakefield-related beam stability for the TM 010 mode in accelerators show that there are specific regions of phase between the beam bunches and the driven RF mode that allow stable operation at the highest possible beam currents. At some point of increasing beam current, though, just about any accelerator configuration will become unstable. As pointed out above, the beam wakefield amplitude is proportional to the cavity parameter R / Q o , so this is typically used as a comparative measure of the likelihood of TM 01 related beam instabilities. A comparison of R / Q o and R for a 500 MHz superconducting cavity and a 500 MHz normal-conducting cavity is shown below. The accelerating voltage provided by both cavities is comparable for a given net power consumption when including refrigeration power for SRF. The R / Q o for the SRF cavity is 15 times less than the normal-conducting version, and thus less susceptible to beam instability. This is one of the main reasons such SRF cavities are chosen for use in high-current storage rings.
In addition to the fundamental accelerating TM 010 mode of an RF cavity, numerous higher frequency modes and a few lower-frequency dipole modes are excited by charged particle beam wakefields, all generally denoted higher order modes (HOMs). These modes serve no useful purpose for accelerator particle beam dynamics, only giving rise to beam instabilities, and are best heavily damped to have as low a Q L as possible. The damping is accomplished by preferentially allowing dipole and all HOMs to leak out of the SRF cavity, and then coupling them to resistive RF loads. The leaking out of undesired RF modes occurs along the beampipe, and results from a careful design of the cavity aperture shapes. The aperture shapes are tailored to keep the TM 01 mode "trapped" with high Q o inside of the cavity and allow HOMs to propagate away. The propagation of HOMs is sometimes facilitated by having a larger diameter beampipe on one side of the cavity, beyond the smaller diameter cavity iris, as seen in the SRF cavity CAD cross-section at the top of this wiki page. The larger beampipe diameter allows the HOMs to easily propagate away from the cavity to an HOM antenna or beamline absorber.
The resistive load for HOMs can be implemented by having loop antennas located at apertures on the side of the beampipe, with coaxial lines routing the RF to outside of the cryostat to standard RF loads. Another approach is to place the HOM loads directly on the beampipe as hollow cylinders with RF lossy material attached to the interior surface, as shown in the adjacent image. This "beamline load" approach can be more technically challenging, since the load must absorb high RF power while preserving a high-vacuum beamline environment in close proximity to a contamination-sensitive SRF cavity. Further, such loads must sometimes operate at cryogenic temperatures to avoid large thermal gradients along the beampipe from the cold SRF cavity. The benefit of the beamline HOM load configuration, however, is a greater absorptive bandwidth and HOM attenuation as compared to antenna coupling. This benefit can be the difference between a stable vs. an unstable particle beam for high current accelerators.
A significant part of SRF technology is cryogenic engineering. The SRF cavities tend to be thin-walled structures immersed in a bath of liquid helium having temperature 1.6 K to 4.5 K. Careful engineering is then required to insulate the helium bath from the room-temperature external environment. This is accomplished by:
The major cryogenic engineering challenge is the refrigeration plant for the liquid helium. The small power that is dissipated in an SRF cavity and the heat leak to the vacuum vessel are both heat loads at very low temperature. The refrigerator must replenish this loss with an inherently poor efficiency, given by the product of the Carnot efficiency η C and a "practical" efficiency η p . The Carnot efficiency derives from the second law of thermodynamics and can be quite low. It is given by η C = T cold / (T warm − T cold ) , where T cold is the temperature of the cold load (the helium bath) and T warm is the temperature of the refrigerator's heat sink, usually room temperature.
In most cases T warm = 300 K, so for T cold ≥ 150 K the Carnot efficiency is unity or greater. The practical efficiency is a catch-all term that accounts for the many mechanical non-idealities that come into play in a refrigeration system aside from the fundamental physics of the Carnot efficiency. For a large refrigeration installation there is some economy of scale, and it is possible to achieve η p in the range of 0.2–0.3. The wall-plug power consumed by the refrigerator is then P warm = P cold / (η C η p ) , where P cold is the power dissipated at the cold temperature T cold .
As an example, if the refrigerator delivers 1.8 K helium to the cryomodule where the cavity and heat leak dissipate P cold =10 W, then the refrigerator having T warm =300 K and η p =0.3 would have η C =0.006 and a wall-plug power of P warm =5.5 kW. Of course, most accelerator facilities have numerous SRF cavities, so the refrigeration plants can get to be very large installations.
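A short sketch reproducing this example from the two expressions above:

```python
def carnot_efficiency(t_cold: float, t_warm: float = 300.0) -> float:
    """Carnot efficiency for lifting heat from t_cold to t_warm (kelvin)."""
    return t_cold / (t_warm - t_cold)


def wall_plug_power(p_cold: float, t_cold: float, eta_practical: float,
                    t_warm: float = 300.0) -> float:
    """Electrical power needed to remove p_cold watts at t_cold kelvin."""
    return p_cold / (carnot_efficiency(t_cold, t_warm) * eta_practical)


# Example from the text: 10 W dissipated at 1.8 K, practical efficiency 0.3
print(f"eta_C = {carnot_efficiency(1.8):.4f}")                            # ~0.006
print(f"wall-plug power = {wall_plug_power(10, 1.8, 0.3) / 1e3:.1f} kW")  # ~5.5 kW
```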
The temperature of operation of an SRF cavity is typically selected as a minimization of wall-plug power for the entire SRF system. The plot to the right then shows the pressure to which the helium vessel must be pumped to obtain the desired liquid helium temperature. Atmospheric pressure is 760 Torr (101.325 kPa), corresponding to 4.2 K helium. The superfluid λ point occurs at about 38 Torr (5.1 kPa), corresponding to 2.18 K helium. Most SRF systems either operate at atmospheric pressure, 4.2 K, or below the λ point at a system efficiency optimum usually around 1.8 K, corresponding to about 12 Torr (1.6 kPa). | https://en.wikipedia.org/wiki/Superconducting_radio_frequency |
The superconducting tunnel junction ( STJ ) – also known as a superconductor–insulator–superconductor tunnel junction ( SIS ) – is an electronic device consisting of two superconductors separated by a very thin layer of insulating material. Current passes through the junction via the process of quantum tunneling . The STJ is a type of Josephson junction , though not all the properties of the STJ are described by the Josephson effect.
These devices have a wide range of applications, including high-sensitivity detectors of electromagnetic radiation , magnetometers , high speed digital circuit elements, and quantum computing circuits.
All currents flowing through the STJ pass through the insulating layer via the process of quantum tunneling . There are two components to the tunneling current. The first is from the tunneling of Cooper pairs . This supercurrent is described by the ac and dc Josephson relations , first predicted by Brian David Josephson in 1962. [ 1 ] For this prediction, Josephson received the Nobel Prize in Physics in 1973. The second is the quasiparticle current, which, in the limit of zero temperature, arises when the energy from the bias voltage e V {\displaystyle eV} exceeds twice the value of superconducting energy gap Δ. At finite temperature, a small quasiparticle tunneling current – called the subgap current – is present even for voltages less than twice the energy gap due to the thermal promotion of quasiparticles above the gap.
If the STJ is irradiated with photons of frequency f {\displaystyle f} , the dc current-voltage curve will exhibit both Shapiro steps and steps due to photon-assisted tunneling. Shapiro steps arise from the response of the supercurrent and occur at voltages equal to n h f / ( 2 e ) {\displaystyle nhf/(2e)} , where h {\displaystyle h} is the Planck constant , e {\displaystyle e} is the electron charge, and n {\displaystyle n} is an integer . [ 2 ] Photon-assisted tunneling arises from the response of the quasiparticles and gives rise to steps displaced in voltage by n h f / e {\displaystyle nhf/e} relative to the gap voltage. [ 3 ]
The device is typically fabricated by first depositing a thin film of a superconducting metal such as aluminum on an insulating substrate such as silicon . The deposition is performed inside a vacuum chamber . Oxygen gas is then introduced into the chamber, resulting in the formation of an insulating layer of aluminum oxide (Al₂O₃) with a typical thickness of several nanometres . After the vacuum is restored, an overlapping layer of superconducting metal is deposited, completing the STJ. To create a well-defined overlap region, a procedure known as the Niemeyer-Dolan technique is commonly used. This technique uses a suspended bridge of resist with a double-angle deposition to define the junction.
Aluminum is widely used for making superconducting tunnel junctions because of its unique ability to form a very thin (2–3 nm) insulating oxide layer with no defects that short-circuit the insulating layer. The superconducting critical temperature of aluminum is approximately 1.2 K . For many applications, it is convenient to have a device that is superconducting at a higher temperature, in particular at a temperature above the boiling point of liquid helium , which is 4.2 K at atmospheric pressure. One approach to achieving this is to use niobium , which has a superconducting critical temperature in bulk form of 9.3 K. Niobium, however, does not form an oxide that is suitable for making tunnel junctions. To form an insulating oxide, the first layer of niobium can be coated with a very thin layer (approximately 5 nm) of aluminum, which is then oxidized to form a high quality aluminum oxide tunnel barrier before the final layer of niobium is deposited. The thin aluminum layer is proximitized by the thicker niobium, and the resulting device has a superconducting critical temperature above 4.2 K. [ 4 ] Early work used lead -lead oxide-lead tunnel junctions. [ 5 ] Lead has a superconducting critical temperature of 7.2 K in bulk form, but lead oxide tends to develop defects (sometimes called pinhole defects) that short-circuit the tunnel barrier when the device is thermally cycled between cryogenic temperatures and room temperature, so lead is no longer widely used to make STJs.
STJs are the most sensitive heterodyne receivers in the 100 GHz to 1000 GHz frequency range, and hence are used for radio astronomy at these frequencies. [ 6 ] In this application, the STJ is dc biased at a voltage just below the gap voltage ( | V | = 2 Δ / e {\displaystyle |V|=2\Delta /e} ). A high frequency signal from an astronomical object of interest is focused onto the STJ, along with a local oscillator source. Photons absorbed by the STJ allow quasiparticles to tunnel via the process of photon-assisted tunneling. This photon-assisted tunneling changes the current-voltage curve, creating a nonlinearity that produces an output at the difference frequency of the astronomical signal and the local oscillator. This output is a frequency down-converted version of the astronomical signal. [ 7 ] These receivers are so sensitive that an accurate description of the device performance must take into account the effects of quantum noise . [ 8 ]
In addition to heterodyne detection, STJs can also be used as direct detectors. In this application, the STJ is biased with a dc voltage less than the gap voltage. A photon absorbed in the superconductor breaks Cooper pairs and creates quasiparticles . The quasiparticles tunnel across the junction in the direction of the applied voltage, and the resulting tunneling current is proportional to the photon energy. STJ devices have been employed as single-photon detectors for photon frequencies ranging from X-rays to the infrared . [ 9 ]
The superconducting quantum interference device or SQUID is based on a superconducting loop containing Josephson junctions. SQUIDs are the world's most sensitive magnetometers , capable of measuring a single magnetic flux quantum .
Superconducting quantum computing utilizes STJ-based circuits, including charge qubits , flux qubits and phase qubits .
The STJ is the primary active element in rapid single flux quantum or RSFQ fast logic circuits. [ 10 ]
When a high frequency current is applied to a Josephson junction, the ac Josephson current will synchronize with the applied frequency giving rise to regions of constant voltage in the I–V curve of the device (Shapiro steps). For the purpose of voltage standards, these steps occur at the voltages n f / K J {\displaystyle nf/K_{\text{J}}} where n {\displaystyle n} is an integer, f {\displaystyle f} is the applied frequency and the Josephson constant K J {\displaystyle K_{\text{J}}} = 483 597.8484... × 10⁹ Hz⋅V⁻¹ [ 11 ] is a constant that is equal to 2 e / h {\displaystyle 2e/h} . These steps provide an exact conversion from frequency to voltage. Because frequency can be measured with very high precision, this effect is used as the basis of the Josephson voltage standard, which implements the SI definition of the volt . [ 12 ] [ 13 ]
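As an illustration, the sketch below computes the constant-voltage step positions for an assumed drive frequency of 70 GHz, a value typical of Josephson voltage standards (the drive frequency is an assumption for illustration, not taken from the text):

```python
K_J = 483597.8484e9  # Josephson constant 2e/h in Hz per volt (value quoted above)


def shapiro_step_voltage(n: int, f_hz: float) -> float:
    """Voltage of the n-th constant-voltage (Shapiro) step under irradiation at f_hz."""
    return n * f_hz / K_J


f_drive = 70e9  # assumed microwave drive frequency, Hz
for n in (1, 2, 10):
    print(f"n = {n:2d}: V = {shapiro_step_voltage(n, f_drive) * 1e6:9.3f} uV")
# n = 1 gives about 145 uV; practical standards use series arrays of many junctions
```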
In the case that the STJ shows asymmetric Josephson tunneling, the junction can become a Josephson diode . [ 14 ] | https://en.wikipedia.org/wiki/Superconducting_tunnel_junction |
Superconducting wires are electrical wires made of superconductive material. When cooled below their transition temperatures , they have zero electrical resistance . Most commonly, conventional superconductors such as niobium–titanium are used, [ 1 ] but high-temperature superconductors such as YBCO are entering the market.
Superconducting wire's advantages over copper or aluminum include higher maximum current densities and zero power dissipation . Its disadvantages include the cost of refrigeration of the wires to superconducting temperatures (often requiring cryogens such as liquid nitrogen or liquid helium ), the danger of the wire quenching (a sudden loss of superconductivity), the inferior mechanical properties of some superconductors, and the cost of wire materials and construction. [ 2 ]
Its main application is in superconducting magnets , which are used in scientific and medical equipment where high magnetic fields are necessary.
The construction and operating temperature will typically be chosen to maximise:
Superconducting wires/tapes/cables usually consist of two key features:
The current sharing temperature T cs is the temperature at which the current transported through the superconductor also starts to flow through the stabilizer. [ 5 ] [ 6 ] However, T cs is not the same as the quench temperature (or critical temperature) T c ; in the former case, there is partial loss of superconductivity, while in the latter case, the superconductivity is entirely lost. [ 7 ]
Low-temperature superconductor (LTS) wires are made from superconductors with low critical temperature , such as Nb 3 Sn ( niobium–tin ) and NbTi ( niobium–titanium ). Often the superconductor is in filament form in a copper or aluminium matrix which carries the current should the superconductor quench for any reason. The superconductor filaments can form a third of the total volume of the wire.
The normal wire-drawing process can be used for malleable alloys such as niobium–titanium.
Vanadium–gallium (V 3 Ga) can be prepared by surface diffusion where the high temperature component as a solid is bathed in the other element as liquid or gas. [ 8 ] When all components remain in the solid state during high temperature diffusion this is known as the bronze process. [ 9 ]
High-temperature superconductor (HTS) wires are made from superconductors with high critical temperature ( high-temperature superconductivity ), such as YBCO and BSCCO .
The powder-in-tube (PIT, or oxide powder in tube , OPIT) process is an extrusion process often used for making electrical conductors from brittle superconducting materials such as niobium–tin [ 10 ] or magnesium diboride , [ 11 ] and ceramic cuprate superconductors such as BSCCO . [ 12 ] [ 13 ] It has been used to form wires of the iron pnictides . [ 14 ] (PIT is not used for yttrium barium copper oxide as it does not have the weak layers required to generate adequate ' texture ' (alignment) in the PIT process.)
This process is used because the high-temperature superconductors are too brittle for normal wire forming processes . The tubes are metal, often silver . Often the tubes are heated to react the mix of powders. Once reacted the tubes are sometimes flattened to form a tape-like conductor. The resulting wire is not as flexible as conventional metal wire, but is sufficient for many applications.
There are in situ and ex situ variants of the process, as well a 'double core' method that combines both. [ 15 ]
These wires are in the form of a metal tape about 10 mm wide and about 100 micrometers thick, coated with superconductor materials such as YBCO . A few years after the discovery of high-temperature superconducting materials such as YBCO, it was demonstrated that epitaxial YBCO thin films grown on lattice-matched single crystals such as magnesium oxide ( MgO ), strontium titanate (SrTiO 3 ) and sapphire had high critical current densities of 10–40 kA/mm 2 . [ 16 ] [ 17 ] However, a lattice-matched flexible material was needed for producing a long tape. YBCO films deposited directly on metal substrate materials exhibit poor superconducting properties. It was demonstrated that a c-axis oriented yttria-stabilized zirconia (YSZ) intermediate layer on a metal substrate can yield YBCO films of higher quality, which still had a critical current density one to two orders of magnitude lower than that produced on the single crystal substrates. [ 18 ] [ 19 ]
The breakthrough came with the invention of the ion beam-assisted deposition (IBAD) technique to produce biaxially aligned yttria-stabilized zirconia (YSZ) thin films on metal tapes, and the Rolling-Assisted Biaxially-Textured Substrates (RABiTS) process to produce biaxially textured metallic substrates via thermomechanical processing. [ 20 ] [ 21 ]
In the IBAD process, the biaxially-textured YSZ film provided a single-crystal-like template for the epitaxial growth of the YBCO films. These YBCO films achieved critical current densities of more than 1 MA/cm². Other buffer layers such as cerium oxide (CeO 2 ) and magnesium oxide (MgO) were also produced using the IBAD technique for the superconductor films. [ 22 ] [ 23 ] Details of the IBAD substrates and technology were reviewed by Arendt. [ 24 ] The process of LMO-enabled IBAD–MgO was invented and developed at the Oak Ridge National Laboratory and won an R&D 100 Award in 2007. [ 25 ] This LMO-enabled substrate process is now used by essentially all manufacturers of HTS wire based on the IBAD substrate.
In the RABiTS substrates, the metallic template itself was biaxially textured, and heteroepitaxial buffer layers of Y 2 O 3 , YSZ and CeO 2 were then deposited on the metallic template, followed by heteroepitaxial deposition of the superconductor layer. Details of the RABiTS substrates and technology were reviewed by Goyal. [ 26 ]
As of 2015, YBCO coated superconductor tapes capable of carrying more than 500 A/cm-width at 77 K and 1000 A/cm-width at 30 K under high magnetic field have been demonstrated. [ 27 ] [ 28 ] [ 29 ] [ 30 ] In 2021 YBCO coated superconductor tapes capable of carrying more than 250 A/cm-width at 77 K and 2500 A/cm-width at 20 K were reported for commercially produced wires. [ 31 ] In 2021 an experimental demonstration of an over-doped YBCO film reported 90 MA/cm² at 5 K and 6 MA/cm² at 77 K in a 7 T magnetic field. [ 32 ]
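The per-cm-width ratings and the areal current densities quoted above are related through the thickness of the superconducting film. The sketch below assumes a 1 μm thick YBCO layer (an illustrative assumption; actual film thicknesses vary).

```python
def amps_per_cm_width(jc_a_per_cm2: float, film_thickness_um: float) -> float:
    """Critical current per centimetre of tape width for a film of given Jc and thickness."""
    thickness_cm = film_thickness_um * 1e-4
    width_cm = 1.0
    return jc_a_per_cm2 * thickness_cm * width_cm


# Assumed 1 um thick YBCO layer (illustration only)
for jc in (1e6, 5e6):
    print(f"Jc = {jc / 1e6:.0f} MA/cm^2 -> {amps_per_cm_width(jc, 1.0):.0f} A per cm of width")
```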
Metal organic chemical vapor deposition (MOCVD) is one of the deposition processes used for fabrication of YBCO coated conductor tapes. Ignatiev provides an overview of the MOCVD processes used to deposit YBCO films. [ 33 ]
The superconducting layer in second-generation superconducting wires can also be grown by thermal evaporation of the constituent metals: a rare-earth element , barium , and copper . Prusseit provides an overview of the thermal evaporation process used to deposit high-quality YBCO films. [ 34 ]
The superconducting layer in second-generation superconducting wires can also be grown by pulsed laser deposition (PLD). Christen provides an overview of the PLD process used to deposit high-quality YBCO films. [ 35 ]
There are several IEC ( International Electrotechnical Commission ) standards related to superconducting wires under TC90. | https://en.wikipedia.org/wiki/Superconducting_wire |
Superconductivity is a set of physical properties observed in superconductors : materials where electrical resistance vanishes and magnetic fields are expelled from the material. Unlike an ordinary metallic conductor , whose resistance decreases gradually as its temperature is lowered, even down to near absolute zero , a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. [ 1 ] [ 2 ] An electric current through a loop of superconducting wire can persist indefinitely with no power source. [ 3 ] [ 4 ] [ 5 ] [ 6 ]
The superconductivity phenomenon was discovered in 1911 by Dutch physicist Heike Kamerlingh Onnes . Like ferromagnetism and atomic spectral lines , superconductivity is a phenomenon which can only be explained by quantum mechanics . It is characterized by the Meissner effect , the complete cancellation of the magnetic field in the interior of the superconductor during its transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics .
In 1986, it was discovered that some cuprate - perovskite ceramic materials have a critical temperature above 35 K (−238 °C). [ 7 ] It was shortly found (by Ching-Wu Chu ) that replacing the lanthanum with yttrium , i.e. making YBCO , raised the critical temperature to 92 K (−181 °C), which was important because liquid nitrogen could then be used as a refrigerant. Such a high transition temperature is theoretically impossible for a conventional superconductor , leading the materials to be termed high-temperature superconductors . The cheaply available coolant liquid nitrogen boils at 77 K (−196 °C) and thus the existence of superconductivity at higher temperatures than this facilitates many experiments and applications that are less practical at lower temperatures.
Superconductivity was discovered on April 8, 1911, by Heike Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently produced liquid helium as a refrigerant . [ 8 ] At the temperature of 4.2 K, he observed that the resistance abruptly disappeared. [ 9 ] In the same experiment, he also observed the superfluid transition of helium at 2.2 K, without recognizing its significance. The precise date and circumstances of the discovery were only reconstructed a century later, when Onnes's notebook was found. [ 10 ] In subsequent decades, superconductivity was observed in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K.
Great efforts have been devoted to finding out how and why superconductivity works; the important step occurred in 1933, when Meissner and Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon which has come to be known as the Meissner effect. [ 11 ] In 1935, Fritz and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current. [ 12 ]
The theoretical model that was first conceived for superconductivity was completely classical: it is summarized by London constitutive equations . It was put forward by the brothers Fritz and Heinz London in 1935, shortly after the discovery that magnetic fields are expelled from superconductors. A major triumph of the equations of this theory is their ability to explain the Meissner effect, [ 11 ] wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold. By using the London equation, one can obtain the dependence of the magnetic field inside the superconductor on the distance to the surface. [ 13 ]
The two constitutive equations for a superconductor by London are:
∂ j ∂ t = n e 2 m E , ∇ × j = − n e 2 m B . {\displaystyle {\frac {\partial \mathbf {j} }{\partial t}}={\frac {ne^{2}}{m}}\mathbf {E} ,\qquad \mathbf {\nabla } \times \mathbf {j} =-{\frac {ne^{2}}{m}}\mathbf {B} .}
The first equation follows from Newton's second law for superconducting electrons.
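Combining the second London equation with Ampère's law gives the London penetration depth λ_L = √(m/(μ0 n e²)), the length scale over which fields decay inside the superconductor. A minimal numerical sketch, assuming an illustrative superfluid electron density (the value of n below is a rough order-of-magnitude assumption, not a measured quantity):

```python
import math

# Estimate the London penetration depth lambda_L = sqrt(m / (mu0 * n * e^2)).
# The superfluid density n is an illustrative assumption (~1e28 m^-3).
MU0 = 4e-7 * math.pi           # vacuum permeability, T*m/A
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def london_penetration_depth(n_superfluid: float) -> float:
    """Return lambda_L in metres for a given superfluid electron density (m^-3)."""
    return math.sqrt(M_ELECTRON / (MU0 * n_superfluid * E_CHARGE**2))

n = 1e28  # m^-3, assumed superfluid density
print(f"lambda_L ≈ {london_penetration_depth(n) * 1e9:.0f} nm")
```

With this assumed density the result is a few tens of nanometres, consistent with the ~100 nm order of magnitude quoted later in the text.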
During the 1950s, theoretical condensed matter physicists arrived at an understanding of "conventional" superconductivity, through a pair of remarkable and important theories: the phenomenological Ginzburg–Landau theory (1950) and the microscopic BCS theory (1957). [ 14 ] [ 15 ]
In 1950, the phenomenological Ginzburg–Landau theory of superconductivity was devised by Landau and Ginzburg . [ 16 ] This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger -like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Abrikosov showed that Ginzburg–Landau theory predicts the division of superconductors into the two categories now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau had received the 1962 Nobel Prize for other work, and died in 1968). The four-dimensional extension of the Ginzburg–Landau theory, the Coleman-Weinberg model , is important in quantum field theory and cosmology .
Also in 1950, Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element. [ 17 ] [ 18 ] This important discovery pointed to the electron – phonon interaction as the microscopic mechanism responsible for superconductivity.
The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen , Cooper and Schrieffer . [ 15 ] This BCS theory explained the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972.
The BCS theory was set on a firmer footing in 1958, when N. N. Bogolyubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian . [ 19 ] In 1959, Lev Gor'kov showed that the BCS theory reduced to the Ginzburg–Landau theory close to the critical temperature. [ 20 ] [ 21 ]
Generalizations of BCS theory for conventional superconductors form the basis for the understanding of the phenomenon of superfluidity , because they fall into the lambda transition universality class. The extent to which such generalizations can be applied to unconventional superconductors is still controversial.
The first practical application of superconductivity was developed in 1954 with Dudley Allen Buck 's invention of the cryotron . [ 22 ] Two superconductors with greatly different values of the critical magnetic field are combined to produce a fast, simple switch for computer elements.
Soon after discovering superconductivity in 1911, Kamerlingh Onnes attempted to make an electromagnet with superconducting windings but found that relatively low magnetic fields destroyed superconductivity in the materials he investigated. Much later, in 1955, G. B. Yntema [ 23 ] succeeded in constructing a small 0.7-tesla iron-core electromagnet with superconducting niobium wire windings. Then, in 1961, J. E. Kunzler , E. Buehler, F. S. L. Hsu, and J. H. Wernick [ 24 ] made the startling discovery that, at 4.2 kelvin, niobium–tin , a compound consisting of three parts niobium and one part tin, was capable of supporting a current density of more than 100,000 amperes per square centimeter in a magnetic field of 8.8 tesla. The alloy was brittle and difficult to fabricate, but niobium–tin proved useful for generating magnetic fields as high as 20 tesla.
In 1962, T. G. Berlincourt and R. R. Hake [ 25 ] [ 26 ] discovered that more ductile alloys of niobium and titanium are suitable for applications up to 10 tesla. Commercial production of niobium–titanium supermagnet wire immediately commenced at Westinghouse Electric Corporation and at Wah Chang Corporation . Although niobium–titanium boasts less-impressive superconducting properties than those of niobium–tin, niobium–titanium became the most widely used "workhorse" supermagnet material, in large measure a consequence of its very high ductility and ease of fabrication. However, both niobium–tin and niobium–titanium found wide application in MRI medical imagers, bending and focusing magnets for enormous high-energy-particle accelerators, and other applications. Conectus, a European superconductivity consortium, estimated that in 2014, global economic activity for which superconductivity was indispensable amounted to about five billion euros, with MRI systems accounting for about 80% of that total.
In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. [ 27 ] This phenomenon, now called the Josephson effect , is exploited by superconducting devices such as SQUIDs . It is used in the most accurate available measurements of the magnetic flux quantum Φ 0 = h /(2 e ), where h is the Planck constant . Coupled with the quantum Hall resistivity , this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973. [ 28 ]
In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance . [ 29 ] The first development and study of superconducting Bose–Einstein condensate (BEC) in 2020 suggested a "smooth transition between" BEC and Bardeen–Cooper–Schrieffer regimes. [ 30 ] [ 31 ]
Multiple types of superconductivity are reported in devices made of single-layer materials . Some of these materials can switch between conducting, insulating, and other behaviors. [ 32 ]
Twisting materials imbues them with a “ moiré ” pattern involving tiled hexagonal cells that act like atoms and host electrons. In this environment, the electrons move slowly enough for their collective interactions to guide their behavior. When each cell has a single electron, the electrons take on an antiferromagnetic arrangement; each electron can have a preferred location and magnetic orientation. Their intrinsic magnetic fields tend to alternate between pointing up and down. Adding electrons allows superconductivity by causing Cooper pairs to form. Fu and Schrade argued that electron–electron interactions alone were responsible for both the antiferromagnetic and superconducting states. [ 33 ]
The first success with 2D materials involved a twisted bilayer graphene sheet (2018, Tc ~1.7 K, 1.1° twist). A twisted three-layer graphene device was later shown to superconduct (2021, Tc ~2.8 K). Then an untwisted trilayer graphene device was reported to superconduct (2022, Tc 1–2 K). The latter was later shown to be tunable, easily reproducing behavior found in millions of other configurations. Directly observing what happens when electrons are added to a material, or when its electric field is slightly weakened, lets physicists quickly try out an unprecedented number of recipes to see which lead to superconductivity. [ 32 ]
These devices have applications in quantum computing .
2D materials other than graphene have also been made to superconduct. Transition metal dichalcogenide (TMD) sheets twisted at 5 degrees intermittently achieved superconduction by creating a Josephson junction. The device used thin layers of palladium to connect to the sides of a tungsten telluride layer surrounded and protected by boron nitride . [ 34 ] Another group demonstrated superconduction in molybdenum telluride (MoTe₂), a 2D van der Waals material, using ferroelectric domain walls. The Tc was implied to be higher than that of typical TMDs (~5–10 K). [ 35 ]
A Cornell group added a 3.5-degree twist to an insulator, which allowed electrons to slow down and interact strongly; with one electron per cell, the device exhibited superconduction. Existing theories do not explain this behavior.
Fu and collaborators proposed that the electrons arrange themselves into a repeating crystal that can float independently of the background atomic nuclei, allowing the electron grid to relax. Its ripples pair electrons the way phonons do, although this is unconfirmed.
Superconductors are classified according to many criteria. The most common are:
A superconductor can be Type I , meaning it has a single critical field , above which superconductivity is lost and below which the magnetic field is completely expelled from the superconductor; or Type II , meaning it has two critical fields, between which it allows partial penetration of the magnetic field through isolated points [ 36 ] called vortices . [ 37 ] Furthermore, in multicomponent superconductors it is possible to combine the two behaviours. In that case the superconductor is of Type-1.5 . [ 38 ]
A superconductor is conventional if it is driven by electron–phonon interaction and explained by the BCS theory or its extension, the Eliashberg theory. Otherwise, it is unconventional . [ 39 ] [ 40 ] Alternatively, a superconductor is called unconventional if the superconducting order parameter transforms according to a non-trivial irreducible representation of the system's point group or space group . [ 41 ]
A superconductor is generally considered high-temperature if it reaches a superconducting state above a temperature of 30 K (−243.15 °C); [ 42 ] as in the initial discovery by Georg Bednorz and K. Alex Müller . [ 7 ] It may also reference materials that transition to superconductivity when cooled using liquid nitrogen – that is, at only T c > 77 K, although this is generally used only to emphasize that liquid nitrogen coolant is sufficient. Low temperature superconductors refer to materials with a critical temperature below 30 K, and are cooled mainly by liquid helium ( T c > 4.2 K). One exception to this rule is the iron pnictide group of superconductors that display behaviour and properties typical of high-temperature superconductors, yet some of the group have critical temperatures below 30 K.
Superconductor material classes include chemical elements (e.g. mercury or lead ), alloys (such as niobium–titanium , germanium–niobium , and niobium nitride ), ceramics ( YBCO and magnesium diboride ), superconducting pnictides (like fluorine-doped LaOFeAs), single-layer materials such as graphene and transition metal dichalcogenides , [ 44 ] or organic superconductors ( fullerenes and carbon nanotubes ; though perhaps these examples should be included among the chemical elements, as they are composed entirely of carbon ). [ 45 ] [ 46 ]
Several physical properties of superconductors vary from material to material, such as the critical temperature, the value of the superconducting gap , the critical magnetic field, and the critical current density at which superconductivity is destroyed. On the other hand, there is a class of properties that are independent of the underlying material. The Meissner effect, the quantization of the magnetic flux, and persistent currents (i.e. the state of zero resistance) are the most important examples. The existence of these "universal" properties is rooted in the nature of the broken symmetry of the superconductor and the emergence of off-diagonal long range order . Superconductivity is a thermodynamic phase , and thus possesses certain distinguishing properties which are largely independent of microscopic details. Off-diagonal long range order is closely connected to the formation of Cooper pairs .
The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source I and measure the resulting voltage V across the sample. The resistance of the sample is given by Ohm's law as R = V / I . If the voltage is zero, this means that the resistance is zero.
Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a lifetime of at least 100,000 years. Theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature. [ 5 ] In practice, currents injected in superconducting coils persisted for 28 years, 7 months, 27 days in a superconducting gravimeter in Belgium, from August 4, 1995 until March 31, 2024. [ 47 ] [ 48 ] In such instruments, the measurement is based on the monitoring of the levitation of a superconducting niobium sphere with a mass of four grams.
In a normal conductor, an electric current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat , which is essentially the vibrational kinetic energy of the lattice ions. As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance and Joule heating .
The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons. Instead, it consists of bound pairs of electrons known as Cooper pairs . This pairing is caused by an attractive force between electrons from the exchange of phonons . This pairing is very weak, and small thermal vibrations can fracture the bond. Due to quantum mechanics , the energy spectrum of this Cooper pair fluid possesses an energy gap , meaning there is a minimum amount of energy Δ E that must be supplied in order to excite the fluid. Therefore, if Δ E is larger than the thermal energy of the lattice, given by kT , where k is the Boltzmann constant and T is the temperature , the fluid will not be scattered by the lattice. [ 49 ] The Cooper pair fluid is thus a superfluid , meaning it can flow without energy dissipation.
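In the weak-coupling BCS limit the zero-temperature gap is often estimated as Δ(0) ≈ 1.764 k_B T_c, so the comparison between the gap and the thermal energy kT can be made explicit. A minimal sketch using an illustrative conventional superconductor (the T_c value below is an assumption chosen for the example, roughly that of niobium):

```python
# Compare the BCS weak-coupling gap estimate Delta(0) ~ 1.764 * kB * Tc
# with the thermal energy kB*T at the operating temperature.
K_B = 1.380649e-23        # Boltzmann constant, J/K
EV = 1.602176634e-19      # J per electronvolt

def bcs_gap_estimate(tc: float) -> float:
    """Zero-temperature gap (J) in the weak-coupling BCS approximation."""
    return 1.764 * K_B * tc

tc = 9.3    # K, roughly niobium's critical temperature (illustrative)
t_op = 4.2  # K, liquid-helium operating temperature

gap = bcs_gap_estimate(tc)
thermal = K_B * t_op
print(f"Delta(0) ≈ {gap / EV * 1e3:.2f} meV, kT ≈ {thermal / EV * 1e3:.2f} meV")
print(f"Gap exceeds thermal energy by a factor of {gap / thermal:.1f}")
```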
In the class of superconductors known as type II superconductors , including all known high-temperature superconductors , an extremely low but non-zero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, the vortices are stationary, and the resistivity vanishes. The resistance due to this effect is minuscule compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen into a disordered but stationary phase known as a "vortex glass". Below this vortex glass transition temperature, the resistance of the material becomes truly zero.
In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature T c . The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from around 20 K to less than 1 K. Solid mercury , for example, has a critical temperature of 4.2 K. As of 2015, the highest critical temperature found for a conventional superconductor is 203 K for H 2 S, although high pressures of approximately 90 gigapascals were required. [ 50 ] Cuprate superconductors can have much higher critical temperatures: YBa 2 Cu 3 O 7 , one of the first cuprate superconductors to be discovered, has a critical temperature above 90 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The basic physical mechanism responsible for the high critical temperature is not yet clear. However, it is clear that a two-electron pairing is involved, although the nature of the pairing ( s {\displaystyle s} wave vs. d {\displaystyle d} wave) remains controversial. [ 51 ]
Similarly, at a fixed temperature below the critical temperature, superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the critical magnetic field . This is because the Gibbs free energy of the superconducting phase increases quadratically with the magnetic field while the free energy of the normal phase is roughly independent of the magnetic field. If the material superconducts in the absence of a field, then the superconducting phase free energy is lower than that of the normal phase and so for some finite value of the magnetic field (proportional to the square root of the difference of the free energies at zero magnetic field) the two free energies will be equal and a phase transition to the normal phase will occur. More generally, a higher temperature and a stronger magnetic field lead to a smaller fraction of electrons that are superconducting and consequently to a longer London penetration depth of external magnetic fields and currents. The penetration depth becomes infinite at the phase transition.
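The temperature dependence of the critical field is frequently summarized by the empirical parabolic approximation H_c(T) ≈ H_c(0)[1 − (T/T_c)²]; this is a common rule of thumb rather than a result stated in the text above. A minimal sketch with illustrative parameters (both values are assumptions, roughly those of lead):

```python
# Empirical parabolic approximation for the thermodynamic critical field,
# Hc(T) ~ Hc(0) * (1 - (T/Tc)^2).  Parameters below are illustrative.
def critical_field(t: float, tc: float, hc0: float) -> float:
    """Critical field at temperature t (same units as hc0); zero at or above Tc."""
    if t >= tc:
        return 0.0
    return hc0 * (1.0 - (t / tc) ** 2)

tc = 7.2     # K, roughly lead's critical temperature (assumed)
hc0 = 0.080  # tesla, roughly lead's zero-temperature critical field (assumed)
for t in (0.0, 2.0, 4.2, 6.0, 7.2):
    print(f"T = {t:4.1f} K  ->  Hc ≈ {critical_field(t, tc, hc0) * 1e3:5.1f} mT")
```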
The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition . For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition, it suffers a discontinuous jump and thereafter ceases to be linear. At low temperatures, it varies instead as e − α / T for some constant, α . This exponential behavior is one of the pieces of evidence for the existence of the energy gap .
The order of the superconducting phase transition was long a matter of debate. Experiments indicate that the transition is second-order, meaning there is no latent heat . However, in the presence of an external magnetic field there is latent heat, because the superconducting phase has a lower entropy below the critical temperature than the normal phase. It has been experimentally demonstrated [ 52 ] that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material.
Calculations in the 1970s suggested that it may actually be weakly first-order due to the effect of long-range fluctuations in the electromagnetic field. In the 1980s it was shown theoretically with the help of a disorder field theory , in which the vortex lines of the superconductor play a major role, that the transition is of second order within the type II regime and of first order (i.e., latent heat ) within the type I regime, and that the two regions are separated by a tricritical point . [ 53 ] The results were strongly supported by Monte Carlo computer simulations. [ 54 ]
When a superconductor is placed in a weak external magnetic field H , and cooled below its transition temperature, the magnetic field is ejected. The Meissner effect does not cause the field to be completely ejected; instead, the field penetrates the superconductor only to a very small distance, characterized by a parameter λ , called the London penetration depth , decaying exponentially to zero within the bulk of the material. The Meissner effect is a defining characteristic of superconductivity. For most superconductors, the London penetration depth is on the order of 100 nm.
The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law , when a changing magnetic field is applied to a conductor, it will induce an electric current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field.
The Meissner effect is distinct from this – it is the spontaneous expulsion that occurs during transition to superconductivity. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law.
The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London , who showed that the electromagnetic free energy in a superconductor is minimized provided ∇ 2 H = λ − 2 H {\displaystyle \nabla ^{2}\mathbf {H} =\lambda ^{-2}\mathbf {H} \,} where H is the magnetic field and λ is the London penetration depth.
This equation, which is known as the London equation , predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.
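For a field applied parallel to a flat surface at x = 0, the one-dimensional solution of the London equation is H(x) = H(0) e^{−x/λ}. A short sketch evaluating this profile, assuming the ~100 nm penetration depth quoted above as an order-of-magnitude value:

```python
import math

# Exponential decay of the magnetic field inside a superconductor occupying x > 0,
# H(x) = H(0) * exp(-x / lambda), as predicted by the London equation.
def field_profile(h_surface: float, depth_nm: float, lambda_nm: float = 100.0) -> float:
    """Field at a given depth (nm) below the surface; lambda defaults to ~100 nm."""
    return h_surface * math.exp(-depth_nm / lambda_nm)

for depth in (0, 50, 100, 300, 500):
    print(f"x = {depth:3d} nm  ->  H/H(0) = {field_profile(1.0, depth):.3f}")
```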
A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value H c . Depending on the geometry of the sample, one may obtain an intermediate state [ 55 ] consisting of a baroque pattern [ 56 ] of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value H c1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength H c2 , superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized . Most pure elemental superconductors, except niobium and carbon nanotubes , are Type I, while almost all impure and compound superconductors are Type II.
Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axis. The effect, the London moment, was put to good use in Gravity Probe B . This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere.
High-temperature superconductivity (high- T c or HTS) is superconductivity in materials with a critical temperature (the temperature below which the material behaves as a superconductor) above 77 K (−196.2 °C; −321.1 °F), the boiling point of liquid nitrogen . [ 57 ] They are "high-temperature" only relative to previously known superconductors, which function only closer to absolute zero. The first high-temperature superconductor was discovered in 1986 by IBM researchers Georg Bednorz and K. Alex Müller . [ 58 ] [ 59 ] Although the critical temperature is around 35.1 K (−238.1 °C; −396.5 °F), this material was modified by Ching-Wu Chu to make the first high-temperature superconductor with critical temperature 93 K (−180.2 °C; −292.3 °F). [ 60 ] Bednorz and Müller were awarded the Nobel Prize in Physics in 1987 "for their important break-through in the discovery of superconductivity in ceramic materials". [ 61 ] Most high- T c materials are type-II superconductors .
The major advantage of high-temperature superconductors is that they can be cooled using liquid nitrogen, [ 58 ] in contrast to previously known superconductors, which require expensive and hard-to-handle coolants, primarily liquid helium . A second advantage of high- T c materials is they retain their superconductivity in higher magnetic fields than previous materials. This is important when constructing superconducting magnets , a primary application of high- T c materials.
The majority of high-temperature superconductors are ceramics , rather than the previously known metallic materials. Ceramic superconductors are suitable for some practical uses but encounter manufacturing issues. For example, most ceramics are brittle , which complicates wire fabrication. [ 62 ]
Superconductors are promising candidate materials for devising fundamental circuit elements of electronic, spintronic, and quantum technologies. One such example is the superconducting diode, [ 65 ] in which supercurrent flows along one direction only, promising dissipationless superconducting and semiconducting–superconducting hybrid technologies.
Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI / NMR machines, mass spectrometers , the beam-steering magnets used in particle accelerators and plasma confining magnets in some tokamaks . They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less or non-magnetic particles, as in the pigment industries. They can also be used in large wind turbines to overcome the restrictions imposed by high electrical currents, with an industrial grade 3.6 megawatt superconducting windmill generator having been tested successfully in Denmark. [ 66 ]
In the 1950s and 1960s, superconductors were used to build experimental digital computers using cryotron switches. [ 67 ] More recently, superconductors have been used to make digital circuits based on rapid single flux quantum technology and RF and microwave filters for mobile phone base stations.
Superconductors are used to build Josephson junctions which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. SQUIDs are used in scanning SQUID microscopes and magnetoencephalography . Series of Josephson devices are used to realize the SI volt . Superconducting photon detectors [ 68 ] can be realised in a variety of device configurations. Depending on the particular mode of operation, a superconductor–insulator–superconductor Josephson junction can be used as a photon detector or as a mixer . The large resistance change at the transition from the normal to the superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors . The same effect is used in ultrasensitive bolometers made from superconducting materials. Superconducting nanowire single-photon detectors offer high speed, low noise single-photon detection and have been employed widely in advanced photon-counting applications. [ 69 ]
Other early markets are arising where the relative efficiency, size and weight advantages of devices based on high-temperature superconductivity outweigh the additional costs involved. For example, in wind turbines the lower weight and volume of superconducting generators could lead to savings in construction and tower costs, offsetting the higher costs for the generator and lowering the total levelized cost of electricity (LCOE). [ 70 ]
Promising future applications include high-performance smart grid , electric power transmission , transformers , power storage devices , compact fusion power devices , electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains ), magnetic levitation devices , fault current limiters , enhancing spintronic devices with superconducting materials, [ 71 ] and superconducting magnetic refrigeration . However, superconductivity is sensitive to moving magnetic fields, so applications that use alternating current (e.g. transformers) will be more difficult to develop than those that rely upon direct current . Compared to traditional power lines, superconducting transmission lines are more efficient and require only a fraction of the space, which would not only lead to a better environmental performance but could also improve public acceptance for expansion of the electric grid. [ 72 ] Another attractive industrial aspect is the ability for high power transmission at lower voltages. [ 73 ] Advancements in the efficiency of cooling systems and use of cheap coolants such as liquid nitrogen have also significantly decreased cooling costs needed for superconductivity.
As of 2022, there have been five Nobel Prizes in Physics for superconductivity-related subjects. | https://en.wikipedia.org/wiki/Superconductivity |
Superconductors can be classified in accordance with several criteria that depend on physical properties, current understanding, and the expense of cooling them or their material.
This criterion is useful as BCS theory has successfully explained the properties of conventional superconductors since 1957, yet there have been no satisfactory theories to fully explain unconventional superconductors. In most cases conventional superconductors are type I, but there are exceptions such as niobium , which is both conventional and type II.
77 K is used as the demarcation point to emphasize whether or not superconductivity in the materials can be achieved with liquid nitrogen (whose boiling point is 77 K), which is much more feasible than liquid helium (an alternative to achieve the temperatures needed to get low-temperature superconductors). | https://en.wikipedia.org/wiki/Superconductor_classification |
The superconductor–insulator transition is an example of a quantum phase transition , whereupon tuning some parameter in the Hamiltonian , a dramatic change in the behavior of the electrons occurs. The nature of how this transition occurs is disputed, and many studies seek to understand how the order parameter, Ψ = Δ exp ( i θ ) {\displaystyle \Psi =\Delta \exp(i\theta )} , changes. Here Δ {\displaystyle \Delta } is the amplitude of the order parameter, and θ {\displaystyle \theta } is the phase. Most theories involve either the destruction of the amplitude of the order parameter (by a reduction in the density of states at the Fermi surface ) or the destruction of phase coherence, which results from the proliferation of vortices.
In two dimensions, the subject of superconductivity becomes very interesting because the existence of true long-range order is not possible . In the 1970s, J. Michael Kosterlitz and David J. Thouless (along with Vadim Berezinskii ) showed that a different kind of long-range order could exist - topological order - which shows power-law correlations, meaning that the two-point correlation function ⟨ Ψ ( 0 ) Ψ ( r ) ⟩ ∝ r − γ {\displaystyle \langle \Psi (0)\Psi (r)\rangle \propto r^{-\gamma }} decays algebraically.
This picture changes if disorder is included. Kosterlitz-Thouless behavior can be obtained, but the fluctuations of the order parameter are greatly enhanced, and the transition temperature is suppressed.
The model to keep in mind in the understanding of how superconductivity occurs in a two-dimensional disordered superconductor is the following. At high temperatures, the system is in the normal state. As the system is cooled towards its transition temperature, superconducting grains begin to fluctuate in and out of existence. When one of these grains "pops" into existence, it is accelerated without dissipation for a time τ {\displaystyle \tau } before decaying back into the normal state. This has the effect of increasing the conductivity even before the system has condensed into the superconducting state. This increased conductivity above T c 0 {\displaystyle T_{c0}} is referred to as paraconductivity, or fluctuation conductivity, and was first correctly described by Lev G. Aslamazov and Anatoly Larkin . As the system is cooled further, the lifetime of these fluctuations increases and becomes comparable to the Ginzburg-Landau time
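The displayed expression for this time appears to have been lost in extraction. In its commonly quoted form (a standard result from the fluctuation-superconductivity literature, not reconstructed from this document), together with the Aslamazov–Larkin fluctuation sheet conductivity of a thin film, it reads:

```latex
% Ginzburg–Landau relaxation time of the fluctuations above T_{c0} (standard form)
\tau_{\mathrm{GL}} = \frac{\pi\hbar}{8\,k_{\mathrm{B}}\,(T - T_{c0})}

% Aslamazov–Larkin fluctuation (para)conductivity of a two-dimensional film, per square
\sigma'_{\mathrm{AL}} = \frac{e^{2}}{16\hbar}\,\frac{T_{c0}}{T - T_{c0}}
```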
Eventually, the amplitude Δ {\displaystyle \Delta } of the order parameter becomes well defined (it is non-zero wherever there are superconducting patches), and it can begin to support phase fluctuations. These phase fluctuations set in at a lower temperature, and are caused by vortices - which are topological defects in the order parameter. It is the motion of vortices that gives rise to a finite resistance below T c 0 {\displaystyle T_{\mathrm {c} 0}} . As the system is cooled further, below the Kosterlitz-Thouless temperature T c {\displaystyle T_{\mathrm {c} }} , all of the free vortices become bound into vortex-antivortex pairs, and the system attains a state with zero resistance.
Cooling the system to T = 0 {\displaystyle T=0} and turning on a magnetic field has certain effects. For very small fields ( B < B c 1 {\displaystyle B<B_{c1}} ) the magnetic field is shielded from the interior of the sample. Above B c 1 {\displaystyle B_{c1}} however, the energy cost to keep out the external field becomes too great, and the superconductor allows the field to penetrate in quantized fluxons. Now the superconductor has transitioned into the "mixed state", in which there is a superfluid along with vortices - which now have only one circulation.
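Each vortex carries one flux quantum Φ0 = h/(2e), so the areal vortex density in the mixed state is simply B/Φ0. A minimal numerical sketch (the applied field value is an arbitrary example, not a value from the text):

```python
# Flux quantum and vortex density in the mixed state.
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C

PHI_0 = H_PLANCK / (2.0 * E_CHARGE)  # flux quantum, Wb (~2.07e-15)

def vortex_density(b_tesla: float) -> float:
    """Number of vortices per square metre for an average flux density B."""
    return b_tesla / PHI_0

b = 0.5  # tesla, arbitrary example field
n_v = vortex_density(b)
print(f"Phi_0 ≈ {PHI_0:.3e} Wb")
print(f"Vortex density at {b} T ≈ {n_v:.2e} m^-2 "
      f"(mean spacing ≈ {(1.0 / n_v) ** 0.5 * 1e9:.0f} nm)")
```

When the vortex spacing becomes comparable to the coherence length, the normal cores overlap and superconductivity is destroyed, which is the picture described in the following paragraph.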
Increasing the field adds vortices to the system. Eventually the density of vortices becomes so large that they overlap. The core of the vortex contains normal electrons (i.e. the amplitude of the superconducting order parameter is zero), so when they overlap, superconductivity is killed by destroying the amplitude of the order parameter. Increasing the field further leads to a very interesting possibility - in two-dimensions where the fluctuations are enhanced - that the vortices may condense into a Bose-condensate, which localizes the superconducting pairs. | https://en.wikipedia.org/wiki/Superconductor–insulator_transition |
Supercooling , [ 1 ] also known as undercooling , [ 2 ] [ 3 ] is the process of lowering the temperature of a liquid below its freezing point without it becoming a solid. Per the established international definition, supercooling means ‘cooling a substance below the normal freezing point without solidification’. [ 4 ] [ 5 ] While it can be achieved by different physical means, the postponed solidification is most often due to the absence of seed crystals or nuclei around which a crystal structure can form. The supercooling of water can be achieved without any special techniques other than chemical demineralization, down to −48.3 °C (−54.9 °F). Supercooled water can occur naturally, for example in the atmosphere, animals or plants.
This phenomenon was first identified in 1724 by Daniel Gabriel Fahrenheit , while developing Fahrenheit scale. [ 6 ] [ 7 ]
A liquid crossing its standard freezing point will crystallize in the presence of a seed crystal or nucleus around which a crystal structure can form, creating a solid. Lacking any such nuclei , the liquid phase can be maintained all the way down to the temperature at which crystal homogeneous nucleation occurs. [ 8 ]
Homogeneous nucleation can occur above the glass transition temperature , but if homogeneous nucleation has not occurred above that temperature, an amorphous (non-crystalline) solid will form.
Water normally freezes at 273.15 K (0.0 °C; 32 °F), but it can be "supercooled" at standard pressure down to its crystal homogeneous nucleation at almost 224.8 K (−48.3 °C; −55.0 °F). [ 9 ] [ 10 ] The process of supercooling requires water to be pure and free of nucleation sites, which can be achieved by processes like reverse osmosis or chemical demineralization , but the cooling itself does not require any specialised technique. If water is cooled at a rate on the order of 10⁶ K/s, the crystal nucleation can be avoided and water becomes a glass —that is, an amorphous (non-crystalline) solid. Its glass transition temperature is much colder and harder to determine, but studies estimate it at about 136 K (−137 °C; −215 °F). [ 11 ] Glassy water can be heated up to approximately 150 K (−123 °C; −190 °F) without nucleation occurring. [ 10 ] In the range of temperatures between 150 and 231 K (−123 and −42.2 °C; −190 and −43.9 °F), experiments find only crystal ice.
Droplets of supercooled water often exist in stratus and cumulus clouds . An aircraft flying through such a cloud sees an abrupt crystallization of these droplets, which can result in the formation of ice on the aircraft's wings or blockage of its instruments and probes, unless the aircraft is equipped with an appropriate ice protection system . Freezing rain is also caused by supercooled droplets.
The process opposite to supercooling, the melting of a solid above the freezing point, is much more difficult, and a solid will almost always melt at the same temperature for a given pressure . For this reason, it is the melting point which is usually identified, using melting point apparatus ; even when the subject of a paper is "freezing-point determination", the actual methodology is "the principle of observing the disappearance rather than the formation of ice". [ 12 ] It is possible, at a given pressure, to superheat a liquid above its boiling point without it becoming gaseous.
Supercooling should not be confused with freezing-point depression . Supercooling is the cooling of a liquid below its freezing point without it becoming solid. Freezing point depression is when a solution can be cooled below the freezing point of the corresponding pure liquid due to the presence of the solute ; an example of this is the freezing point depression that occurs when salt is added to pure water.
Constitutional supercooling, which occurs during solidification, is due to compositional changes in the solid, and results in the liquid ahead of the solid–liquid interface being cooled below its freezing point. When solidifying a liquid, the interface is often unstable, and the velocity of the solid–liquid interface must be small in order to avoid constitutional supercooling.
Constitutional supercooling is observed when the liquidus temperature gradient at the interface (the position x=0) is larger than the imposed temperature gradient:
The liquidus slope from the binary phase diagram is given by m = ∂ T L / ∂ C L {\displaystyle m=\partial T_{L}/\partial C_{L}} , so the constitutional supercooling criterion for a binary alloy can be written in terms of the concentration gradient at the interface:
The concentration gradient ahead of a planar interface is given by
where v {\displaystyle v} is the interface velocity, D {\displaystyle D} the diffusion coefficient , and C L S {\displaystyle C^{LS}} and C S L {\displaystyle C^{SL}} are the compositions of the liquid and solid at the interface, respectively (i.e., C L S = C L ( x = 0 ) {\displaystyle C^{LS}=C_{L}(x=0)} ).
For the steady-state growth of a planar interface, the composition of the solid is equal to the nominal alloy composition, C S L = C 0 {\displaystyle C^{SL}=C_{0}} , and the partition coefficient , k = C S L / C L S {\displaystyle k=C^{SL}/C^{LS}} , can be assumed constant. Therefore, the minimum thermal gradient necessary to create a stable solid front is given by
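The displayed equations for this derivation appear to have been lost in extraction. In their standard textbook form (a reconstruction from solidification theory using the definitions above, not recovered from this document), the criterion and the resulting minimum gradient read:

```latex
% Constitutional supercooling occurs when the liquidus gradient at the
% interface exceeds the imposed thermal gradient G:
\left.\frac{\partial T_L}{\partial x}\right|_{x=0}
  = m \left.\frac{\partial C_L}{\partial x}\right|_{x=0} > G,
\qquad
\left.\frac{\partial C_L}{\partial x}\right|_{x=0}
  = -\frac{v}{D}\bigl(C^{LS}-C^{SL}\bigr).

% With C^{SL} = C_0 and k = C^{SL}/C^{LS}, the minimum imposed gradient
% for a stable planar front is
\frac{G}{v} \;\ge\; \frac{m\,C_0\,(k-1)}{D\,k}
  \;=\; \frac{\lvert m\rvert\,C_0\,(1-k)}{D\,k}
  \qquad (\text{for } m<0,\ k<1).
```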
For more information, see Chapter 3 of [ 13 ]
In order to survive extreme low temperatures in certain environments, some animals use the phenomenon of supercooling, which allows them to remain unfrozen and avoid cell damage and death. There are many techniques that aid in maintaining a liquid state, such as the production of antifreeze proteins , or AFPs, which bind to ice crystals to prevent water molecules from binding and spreading the growth of ice. [ 14 ] The winter flounder is one such fish that utilizes these proteins to survive in its frigid environment. The liver secretes noncolligative proteins into the bloodstream. [ 15 ] Other animals use colligative antifreezes, which increase the concentration of solutes in their bodily fluids, thus lowering their freezing point. Fish that rely on supercooling for survival must also live well below the water surface, because if they came into contact with ice nuclei they would freeze immediately. Animals that undergo supercooling to survive must also remove ice-nucleating agents from their bodies because they act as a starting point for freezing. Supercooling is also a common feature in some insect, reptile, and other ectotherm species. Potato cyst nematode larvae ( Globodera rostochiensis ) can survive inside their cysts in a supercooled state at temperatures as low as −38 °C (−36 °F), even with the cyst encased in ice.
As an animal's internal fluids get farther and farther below their melting point, the chance of spontaneous freezing increases dramatically, as this is a thermodynamically unstable state. The fluids eventually reach the supercooling point, which is the temperature at which the supercooled solution freezes spontaneously due to being so far below its normal freezing point. [ 16 ] Animals unintentionally undergo supercooling and are only able to decrease the odds of freezing once supercooled. Even though supercooling is essential for survival, there are many risks associated with it.
Plants can also survive extreme cold conditions brought forth during the winter months. Many plant species located in northern climates can acclimate to these cold conditions by supercooling, and thus survive temperatures as low as −40 °C (−40 °F). [ 17 ] Although this supercooling phenomenon is poorly understood, it has been recognized through infrared thermography . Ice nucleation occurs in certain plant organs and tissues, debatably beginning in the xylem tissue and spreading throughout the rest of the plant. [ 18 ] [ 19 ] Infrared thermography allows for droplets of water to be visualized as they crystallize in extracellular spaces. [ 20 ]
Supercooling inhibits the formation of ice within the tissue by ice nucleation and allows the cells to maintain water in a liquid state and further allows the water within the cell to stay separate from extracellular ice. [ 20 ] Cellular barriers such as lignin , suberin and the cuticle inhibit ice nucleators and force water into the supercooled tissue. [ 21 ] The xylem and primary tissue of plants are very susceptible to cold temperatures because of the large proportion of water in the cell. Many boreal hardwood species in northern climates have the ability to prevent ice spreading into the shoots allowing the plant to tolerate the cold. [ 22 ] Supercooling has been identified in the evergreen shrubs Rhododendron ferrugineum and Vaccinium vitis-idaea as well as Abies , Picea and Larix species. [ 22 ] Freezing outside of the cell and within the cell wall does not affect the survival of the plant. [ 23 ] However, the extracellular ice may lead to plant dehydration. [ 19 ]
The presence of salt in seawater affects the freezing point. For that reason, it is possible for seawater to remain in the liquid state at temperatures below its melting point. This is "pseudo-supercooling" because the phenomenon is the result of freezing-point lowering caused by the presence of salt, not supercooling. This condition is most commonly observed in the oceans around Antarctica, where melting of the undersides of ice shelves at high pressure results in liquid melt-water that can be below the freezing temperature. It is supposed that the water does not immediately refreeze due to a lack of nucleation sites. [ 24 ] This provides a challenge to oceanographic instrumentation as ice crystals will readily form on the equipment, potentially affecting the data quality. [ 25 ] Ultimately the presence of extremely cold seawater will affect the growth of sea ice .
One commercial application of supercooling is in refrigeration . Freezers can cool drinks to a supercooled level [ 26 ] so that when they are opened, they form a slush . Another example is a product that can supercool the beverage in a conventional freezer. [ 27 ] The Coca-Cola Company briefly marketed special vending machines containing Sprite in the UK, and Coke in Singapore, which stored the bottles in a supercooled state so that their content would turn to slush upon opening. [ 28 ]
Supercooling was successfully applied to organ preservation at Massachusetts General Hospital/ Harvard Medical School . Livers that were later transplanted into recipient animals were preserved by supercooling for up to 4 days, quadrupling the limits of what could be achieved by conventional liver preservation methods. The livers were supercooled to a temperature of −6 °C (21 °F) in a specialized solution that protected against freezing and injury from the cold temperature. [ 29 ]
Another potential application is drug delivery. In 2015, researchers crystallized membranes at a specific time. Liquid-encapsulated drugs could be delivered to the site and, with a slight environmental change, the liquid rapidly changes into a crystalline form that releases the drug. [ 30 ]
In 2016, a team at Iowa State University proposed a method for "soldering without heat" by using encapsulated droplets of supercooled liquid metal to repair heat sensitive electronic devices. [ 31 ] [ 32 ] In 2019, the same team demonstrated the use of undercooled metal to print solid metallic interconnects on surfaces ranging from polar (paper and Jello) to superhydrophobic (rose petals), with all the surfaces being lower modulus than the metal. [ 33 ] [ 34 ]
Eftekhari et al. proposed an empirical theory explaining that supercooling of ionic liquid crystals can build ordered channels for diffusion for energy storage applications. In this case, the electrolyte has a rigid structure comparable to a solid electrolyte, but the diffusion coefficient can be as large as in liquid electrolytes. Supercooling increases the medium viscosity but keeps the directional channels open for diffusion. [ 35 ] | https://en.wikipedia.org/wiki/Supercooling |
Supercritical adsorption, also referred to as the adsorption of supercritical fluids , is adsorption at above-critical temperatures. There are different tacit understandings of supercritical fluids. For example, “a fluid is considered to be ‘supercritical’ when its temperature and pressure exceed the temperature and pressure at the critical point”. In the studies of supercritical extraction, however, “supercritical fluid” is applied to a narrow temperature region of 1–1.2 T c {\displaystyle T_{c}} or T c {\displaystyle T_{c}} to T c {\displaystyle T_{c}} +10 K, which is called the supercritical region. ( T c {\displaystyle T_{c}} is the critical temperature)
Observations of supercritical adsorption reported before 1930 were covered in studies by McBain and Britton. All of the important articles on this subject published between 1930 and 1966 have been reviewed by Menon. During the last 20 years, interest in supercritical adsorption research has grown under the impetus of the quest for clean alternative fuels. Considerable progress has been made in both adsorption measurement techniques and molecular simulation of adsorption on computers, rendering new insights into the nature of supercritical adsorption.
According to the adsorption behavior, the adsorption of gases on solids can be classified into three temperature ranges relative to T c {\displaystyle T_{c}} :
1. Subcritical region (T < T c {\displaystyle T_{c}} )
2. Near-critical region ( T c {\displaystyle T_{c}} < T < T c {\displaystyle T_{c}} + 10)
3. The region T > T c {\displaystyle T_{c}} + 10
Isotherms in the first region show the features of subcritical adsorption. Isotherms in the second region show the features of a mechanism transition. Isotherms in the third region show the features of supercritical adsorption. The transition is continuous if the isotherms on both sides of the critical temperature belong to the same type, as for adsorption on microporous activated carbon . However, a discontinuous transition may be observed for isotherms in the second region if there is a transformation of isotherm types, as for adsorption on mesoporous silica gel . The decisive factor in such a classification of adsorption is merely temperature, irrespective of pressure. This is because a fluid cannot undergo a transition to a liquid phase at above-critical temperature, regardless of the pressure applied. This fundamental law determines the different adsorption mechanisms for the subcritical and supercritical regions. For the subcritical region, the highest equilibrium pressure of adsorption is the saturation pressure P s {\displaystyle P_{s}} of the adsorbate . Beyond P s {\displaystyle P_{s}} , condensation happens. Adsorbate in the adsorbed phase is largely in a liquid state, and on this basis different adsorption and thermodynamic theories as well as their applications were developed. For the supercritical region, condensation cannot happen, no matter how great the pressure is.
An adsorption isotherm depicts the relation between the quantity of adsorbate and the bulk-phase pressure (or density) at equilibrium for a constant temperature. It is a dataset of specified adsorption equilibria. Such equilibrium data are required for the optimal design of processes relying on adsorption and are considered fundamental information for theoretical studies.
The volumetric method was used in the early days of adsorption studies by Langmuir, Dubinin and others. It basically comprises a gas expansion process from a storage vessel (reference cell) to an adsorption chamber containing the adsorbent (adsorption cell) through a controlling valve C, as schematically shown in Figure 1. The reference cell with volume V r e f {\displaystyle V_{ref}} is kept at a constant temperature T r e f {\displaystyle T_{ref}} . The value of V r e f {\displaystyle V_{ref}} includes the volume of the tube between the reference cell and valve C. The adsorption cell is kept at the specified equilibrium temperature T a d {\displaystyle T_{ad}} . The volume of the connecting tube between the adsorption cell and the valve is divided into two parts: one part, with volume V t {\displaystyle V_{t}} , is at the same temperature as the reference cell; the other part is immersed in an atmosphere of temperature T a d {\displaystyle T_{ad}} , and its volume is added to the volume of the adsorption cell V a d {\displaystyle V_{ad}} .
The amount adsorbed can be calculated from the pressure readings before and after opening valve C based on the p-V-T relationship of real gases. A dry and degassed adsorbent sample of known weight was enclosed in the adsorption cell. An amount of gas is let into V r e f {\displaystyle V_{ref}} to maintain a pressure p 1 {\displaystyle p_{1}} . The moles of gas confined in V r e f {\displaystyle V_{ref}} are calculated as:
n 1 = p 1 V r e f z f 1 R T r e f {\displaystyle n_{1}={\frac {p_{1}V_{ref}}{z_{f1}RT_{ref}}}}
The pressure drops to p 2 {\displaystyle p_{2}} after opening valve C. The amount of gas maintained in V r e f {\displaystyle V_{ref}} , V t {\displaystyle V_{t}} , and V a d {\displaystyle V_{ad}} are respectively:
n 2 = p 2 V r e f z f 2 R T r e f {\displaystyle n_{2}={\frac {p_{2}V_{ref}}{z_{f2}RT_{ref}}}}
n 3 = p 2 V t z f 2 R T r e f {\displaystyle n_{3}={\frac {p_{2}V_{t}}{z_{f2}RT_{ref}}}}
n 4 = p 2 V a d z d 2 R T a d {\displaystyle n_{4}={\frac {p_{2}V_{ad}}{z_{d2}RT_{ad}}}}
The amount adsorbed or the excess adsorption N is then obtained:
N = n 1 + n 3 ′ + n 4 ′ − n 2 − n 3 − n 4 {\displaystyle N=n_{1}+n_{3}'+n_{4}'-n_{2}-n_{3}-n_{4}}
where n 3 ′ {\displaystyle n_{3}'} and n 4 ′ {\displaystyle n_{4}'} are the moles of the gas remaining in V t {\displaystyle V_{t}} and V a d {\displaystyle V_{ad}} before opening valve C. All of the compressibility factor values are calculated by a proper equation of state, which can generate appropriate z values for temperatures not close to the critical zone.
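A minimal numerical sketch of this bookkeeping, using the real-gas relation n = pV/(zRT) with user-supplied compressibility factors; all numerical values below are placeholders for illustration, not measured data:

```python
# Excess adsorption from a single volumetric dose, following
# N = (n1 + n3' + n4') - (n2 + n3 + n4), with n = p*V / (z*R*T).
# All numerical values below are illustrative placeholders.
R = 8.314  # J/(mol*K)

def moles(p_pa: float, v_m3: float, z: float, t_k: float) -> float:
    """Moles of gas in a volume from the real-gas relation pV = znRT."""
    return p_pa * v_m3 / (z * R * t_k)

def excess_adsorbed(p1, p2, v_ref, v_t, v_ad, t_ref, t_ad,
                    z_f1, z_f2, z_d2, n3_prev=0.0, n4_prev=0.0):
    """Excess amount adsorbed (mol) for one expansion step."""
    n1 = moles(p1, v_ref, z_f1, t_ref)  # dosed into the reference cell
    n2 = moles(p2, v_ref, z_f2, t_ref)  # left in the reference cell
    n3 = moles(p2, v_t,   z_f2, t_ref)  # in the connecting tube (at T_ref)
    n4 = moles(p2, v_ad,  z_d2, t_ad)   # in the adsorption-cell void space
    return (n1 + n3_prev + n4_prev) - (n2 + n3 + n4)

# Example with placeholder values (SI units): a 5 MPa dose dropping to 3 MPa.
N = excess_adsorbed(p1=5e6, p2=3e6, v_ref=50e-6, v_t=2e-6, v_ad=30e-6,
                    t_ref=298.0, t_ad=298.0, z_f1=0.99, z_f2=0.99, z_d2=0.99)
print(f"Excess adsorption for this step ≈ {N * 1000:.2f} mmol")
```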
The main advantages of this method are its procedural simplicity, the commercial availability of instruments, and the wide ranges of pressure and temperature over which it can be applied. The disadvantage of volumetric measurements is the considerable amount of adsorbent sample needed to overcome adsorption effects on the walls of the vessels. However, this can be a positive aspect when sufficient sample is available: a larger amount of sample results in considerable adsorption and usually provides a larger void space in the adsorption cell, reducing the effect of uncertainty in the "dead space" to a minimum.
In the gravimetric method, the weight change of the adsorbent sample in the gravitational field due to adsorption from the gas phase is recorded. Various types of sensitive microbalance have been developed for this purpose. A continuous-flow gravimetric technique coupled with wavelet rectification allows higher precision, especially in the near-critical region.
Major advantages of the gravimetric method include its sensitivity, accuracy, and the possibility of checking the state of activation of an adsorbent sample. However, buoyancy must be corrected for in gravimetric measurements, and a counterpart is used for this purpose. The solid sample is placed in a sample holder on one arm of the microbalance, while the counterpart is loaded on the other arm. Care must be taken to keep the volumes of the sample and the counterpart as close as possible to reduce the buoyancy effect. The system is evacuated and the balance is zeroed before starting experiments. Buoyancy is measured by introducing helium and pressurizing up to the highest pressure of the experiment; it is assumed that helium does not adsorb, so any weight change (ΔW) is due to buoyancy. Knowing the density of helium ( ρ H e {\displaystyle \rho _{He}} ), one can determine the difference in volume (ΔV) between the sample and the counterpart:
Δ V = Δ W ρ H e ( p , T ) {\displaystyle \Delta V={\frac {\Delta W}{\rho _{He}(p,T)}}}
The measured weight can be corrected for the buoyancy effect at a specified temperature and pressure:
W = W e x p − Δ V ρ b ( p , T ) {\displaystyle W=W_{exp}-\Delta V\rho _{b}(p,T)}
where W e x p {\displaystyle W_{exp}} is the weight reading before correction.
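A minimal sketch of this correction is shown below. The helium-calibration data, densities and weight reading are hypothetical values used only to illustrate the two equations above; the sign convention follows those equations.

```python
def volume_mismatch(delta_w_he, rho_he):
    """Volume difference between sample and counterpart, ΔV = ΔW / ρ_He, from the helium calibration."""
    return delta_w_he / rho_he

def buoyancy_corrected_weight(w_exp, delta_v, rho_bulk):
    """Corrected weight W = W_exp - ΔV * ρ_b(p, T)."""
    return w_exp - delta_v * rho_bulk

# Hypothetical numbers, in SI units (kg, kg m^-3)
delta_w_he = 2.0e-6   # apparent weight change observed under helium pressurisation
rho_he = 1.6          # helium density at the calibration pressure and temperature
rho_bulk = 65.0       # bulk density of the adsorptive gas at the measurement pressure and temperature
w_exp = 1.2345e-3     # balance reading before correction

dV = volume_mismatch(delta_w_he, rho_he)
W = buoyancy_corrected_weight(w_exp, dV, rho_bulk)
print(f"dV = {dV:.3e} m^3, corrected weight = {W:.6e} kg")
```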
Monte Carlo and molecular dynamics approaches have become useful tools for theoretical calculations aimed at predicting adsorption equilibria and diffusivities in small pores of various simple geometries. The interactions between adsorbate molecules are represented by the Lennard-Jones potential:
V ( r ) = 4 ϵ f f [ ( σ f f r ) 12 − ( σ f f r ) 6 ] {\displaystyle V(r)=4\epsilon _{ff}\left[\left({\frac {\sigma _{ff}}{r}}\right)^{12}-\left({\frac {\sigma _{ff}}{r}}\right)^{6}\right]}
where r is the interparticle distance, σ f f {\displaystyle \sigma _{ff}} is the point at which the potential is zero, and ϵ f f {\displaystyle \epsilon _{ff}} is the well depth.
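The sketch below evaluates this 12-6 pair potential directly from the formula. The ε and σ values are illustrative, roughly of the order used for small adsorbate molecules such as methane, and are not taken from any specific study.

```python
import numpy as np

def lennard_jones(r, epsilon_ff, sigma_ff):
    """12-6 Lennard-Jones pair potential V(r)."""
    sr6 = (sigma_ff / r) ** 6
    return 4.0 * epsilon_ff * (sr6 ** 2 - sr6)

# Illustrative fluid-fluid parameters (roughly methane-like)
epsilon_ff = 1.23e-21   # well depth, J
sigma_ff = 3.73e-10     # distance at which the potential crosses zero, m

r = np.linspace(0.9 * sigma_ff, 3.0 * sigma_ff, 200)
V = lennard_jones(r, epsilon_ff, sigma_ff)

# The minimum lies at r = 2^(1/6) * sigma_ff, with depth -epsilon_ff
r_min = 2 ** (1 / 6) * sigma_ff
print(f"V(r_min) = {lennard_jones(r_min, epsilon_ff, sigma_ff):.3e} J (expected {-epsilon_ff:.3e} J)")
```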
Li Zhou and coworkers used a volumetric apparatus to measure the adsorption equilibria of hydrogen and methane on activated carbon (Figures 2 and 3). They also measured the adsorption equilibria of nitrogen on microporous activated carbon (Figure 4) and on a mesoporous silica gel (Figure 5) in both the subcritical and supercritical regions. Figure 6 shows the isotherms of methane on silica gel.
Adsorption of fluids at above-critical temperatures and elevated pressures is a field of growing importance in both science and engineering. It is the physicochemical basis of many engineering processes and potential industrial applications, for example the separation or purification of light hydrocarbons, the storage of fuel gases in microporous solids, adsorption from supercritical gases in extraction processes, and chromatography. In addition, knowledge of gas/solid interface phenomena at high pressures is fundamental to heterogeneous catalysis. However, the limited number of reliable high-pressure adsorption data has hampered progress in theoretical studies.
At least two problems have to be solved before a consistent system of theories for supercritical adsorption can be established. First, how can a thermodynamic standard state be defined for the supercritical adsorbed phase, so that the adsorption potential for supercritical adsorption can be evaluated? Second, how can the total amount in the adsorbed phase be determined from experimentally measured equilibrium data? Determination of the absolute adsorption is needed for establishing a thermodynamic theory because, as a reflection of the statistical behavior of molecules, thermodynamic rules must rely on the total, not part, of the material confined in the system studied.
Recent studies of supercritical adsorption suggest that there is an end point in the high-pressure direction for supercritical adsorption, and the adsorbed-phase density is the decisive factor for the existence of this end point. The state of the adsorbate at this "end" provides the standard state of the supercritical adsorbed phase, just as the saturated liquid provides the end state of the adsorbate in subcritical adsorption. The "end state" therefore has to be precisely defined. To establish a definite relationship for the adsorbed-phase density at the end state, abundant and reliable experimental data are still required. | https://en.wikipedia.org/wiki/Supercritical_adsorption |
Supercritical angle fluorescence microscopy ( SAF ) is a technique to detect and characterize fluorescent species (proteins, biomolecules, pharmaceuticals, etc.) and their behaviour close to, adsorbed on, or linked to surfaces. The method can observe molecules within roughly 100 nanometers of the surface, even in the presence of high concentrations of fluorescent species in the surrounding bulk. An aspheric lens is used to excite the sample with laser light; the fluorescence emitted by the specimen above the critical angle of total internal reflection is collected selectively and directed by parabolic optics onto a detector. The method was invented in 1998 in the laboratories of Stefan Seeger at the University of Regensburg /Germany and later at the University of Zurich /Switzerland.
The principle of SAF microscopy is as follows: a fluorescent specimen close to a surface does not emit fluorescence isotropically; approximately 70% of the emitted fluorescence is directed into the solid phase, and the main part of it enters the solid above the critical angle. [ 1 ] When the emitter is located only 200 nm above the surface, the fluorescent light entering the solid above the critical angle is already dramatically decreased. Hence, SAF microscopy is ideally suited to discriminating between molecules and particles at or close to surfaces and all other species present in the bulk. [ 2 ] [ 3 ]
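The critical angle that separates sub- and supercritical emission follows from Snell's law. The sketch below computes it for an assumed aqueous sample above a glass substrate; the refractive indices are typical textbook values rather than values taken from the references above.

```python
import math

def critical_angle_deg(n_sample, n_substrate):
    """Critical angle of total internal reflection, measured in the denser substrate."""
    if n_sample >= n_substrate:
        raise ValueError("Total internal reflection requires n_substrate > n_sample")
    return math.degrees(math.asin(n_sample / n_substrate))

# Typical values: aqueous sample above a glass coverslip (illustrative)
n_water, n_glass = 1.33, 1.52
theta_c = critical_angle_deg(n_water, n_glass)

# Fluorescence collected above this angle is dominated by emitters very close to the surface
print(f"Critical angle ≈ {theta_c:.1f} degrees")
```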
The typical SAF setup consists of a laser line (typically 450-633 nm), which is reflected into the aspheric lens by a dichroic mirror. The lens focuses the laser beam in the sample, causing the particles to fluoresce. The fluorescent light then passes through a parabolic lens before reaching a detector, typically a photomultiplier tube or avalanche photodiode detector. It is also possible to arrange SAF elements as arrays, and image the output onto a CCD, allowing the detection of multiple analytes. [ 4 ]
| https://en.wikipedia.org/wiki/Supercritical_angle_fluorescence_microscopy |
Supercritical carbon dioxide ( s CO 2 ) is a fluid state of carbon dioxide where it is held at or above its critical temperature and critical pressure .
Carbon dioxide usually behaves as a gas in air at standard temperature and pressure (STP), or as a solid called dry ice when cooled and/or pressurised sufficiently. If the temperature and pressure are both increased from STP to be at or above the critical point for carbon dioxide, it can adopt properties midway between a gas and a liquid . More specifically, it behaves as a supercritical fluid above its critical temperature (304.128 K, 30.9780 °C, 87.7604 °F) [ 1 ] and critical pressure (7.3773 MPa, 72.808 atm, 1,070.0 psi, 73.773 bar), [ 1 ] expanding to fill its container like a gas but with a density like that of a liquid.
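To make this "gas-like expansion with liquid-like density" behaviour concrete, the sketch below queries the density of CO 2 at a few temperatures and pressures around the critical point. It assumes the third-party CoolProp thermophysical property library is installed; any equation of state for CO 2 would serve equally well.

```python
# Density of CO2 near and above its critical point, using the CoolProp property library
# (assumed installed, e.g. via `pip install CoolProp`).
from CoolProp.CoolProp import PropsSI

T_crit = PropsSI("Tcrit", "CO2")   # ~304.13 K
p_crit = PropsSI("pcrit", "CO2")   # ~7.38 MPa
print(f"Critical point: {T_crit:.2f} K, {p_crit / 1e6:.3f} MPa")

# Just above the critical temperature, the density varies strongly with pressure
for T, p in [(310.0, 8.0e6), (310.0, 20.0e6), (400.0, 8.0e6)]:
    rho = PropsSI("D", "T", T, "P", p, "CO2")   # density in kg/m^3
    print(f"T = {T:.0f} K, p = {p / 1e6:.0f} MPa -> rho ≈ {rho:.0f} kg/m^3")
```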
Supercritical CO 2 is becoming an important commercial and industrial solvent due to its role in chemical extraction , in addition to its relatively low toxicity and environmental impact. The relatively low temperature of the process and the stability of CO 2 also allows compounds to be extracted with little damage or denaturing . In addition, the solubility of many extracted compounds in CO 2 varies with pressure, [ 2 ] permitting selective extractions.
Carbon dioxide is gaining popularity among coffee manufacturers looking to move away from classic decaffeinating solvents . s CO 2 is forced through green coffee beans which are then sprayed with water at high pressure to remove the caffeine. The caffeine can then be isolated for resale (e.g., to pharmaceutical or beverage manufacturers) by passing the water through activated charcoal filters or by distillation , crystallization or reverse osmosis . In the herbal supplement industry, supercritical carbon dioxide is used to remove organochloride pesticides and metals from agricultural crops without adulterating the desired constituents of the plant matter. [ 3 ]
Supercritical carbon dioxide can be used as a solvent in dry cleaning . [ 4 ]
Supercritical carbon dioxide is used as the extraction solvent for creation of essential oils and other herbal distillates . [ 5 ] Its main advantages over solvents such as hexane and acetone in this process are that it is non-flammable and does not leave toxic residue. Furthermore, separation of the reaction components from the starting material is much simpler than with traditional organic solvents . The CO 2 can evaporate into the air or be recycled by condensation into a recovery vessel. Its advantage over steam distillation is that it operates at a lower temperature, which can separate the plant waxes from the oils. [ 6 ]
In laboratories , s CO 2 is used as an extraction solvent, for example for determining total recoverable hydrocarbons from soils, sediments, fly-ash, and other media, [ 7 ] and determination of polycyclic aromatic hydrocarbons in soil and solid wastes. [ 8 ] Supercritical fluid extraction has been used in determining hydrocarbon components in water. [ 9 ]
Processes that use s CO 2 to produce micro and nano scale particles, often for pharmaceutical uses, are under development. The gas antisolvent process, rapid expansion of supercritical solutions, and supercritical antisolvent precipitation (as well as several related methods) process a variety of substances into particles. [ 10 ]
Due to its ability to selectively dissolve organic compounds and assist enzyme functioning, s CO 2 has been suggested as a potential solvent to support biological activity on Venus - or super-Earth -type planets. [ 11 ]
Environmentally beneficial, low-cost substitutes for rigid thermoplastic and fired ceramic are made using s CO 2 as a chemical reagent . The s CO 2 in these processes is reacted with the alkaline components of fully hardened hydraulic cement or gypsum plaster to form various carbonates. [ 12 ] The primary byproduct is water.
s CO 2 is used in the foaming of polymers . Supercritical carbon dioxide can saturate the polymer with solvent. Upon depressurization and heating, the carbon dioxide rapidly expands, causing voids within the polymer matrix, i.e., creating a foam . Research is ongoing on microcellular foams.
An electrochemical carboxylation of a para- isobutyl benzyl chloride to ibuprofen is promoted under s CO 2 . [ 13 ]
s CO 2 is chemically stable, reliable, low-cost, non-flammable and readily available, making it a desirable candidate working fluid for transcritical cycles . [ 14 ]
Supercritical CO 2 is used as the working fluid in domestic water heat pumps . Manufactured and widely used, heat pumps are available for domestic and business heating and cooling. [ 14 ] While some of the more common domestic water heat pumps remove heat from the space in which they are located, such as a basement or garage, CO 2 heat pump water heaters are typically located outside, where they remove heat from the outside air. [ 14 ]
The unique properties of s CO 2 present advantages for closed-loop power generation. Power generation systems that use traditional air Brayton and steam Rankine cycles can use s CO 2 to increase efficiency and power output.
The relatively new Allam power cycle uses sCO 2 as the working fluid in combination with fuel and pure oxygen. The CO 2 produced by combustion mixes with the sCO 2 working fluid. A corresponding amount of pure CO 2 must be removed from the process (for industrial use or sequestration). This process reduces atmospheric emissions to zero.
sCO 2 promises substantial efficiency improvements. Due to its high fluid density, sCO 2 enables compact and efficient turbomachinery. It can use simpler, single casing body designs while steam turbines require multiple turbine stages and associated casings, as well as additional inlet and outlet piping. The high density allows more compact, microchannel-based heat exchanger technology. [ 15 ]
For concentrated solar power , the critical temperature of carbon dioxide is not high enough to obtain the maximum energy conversion efficiency. Solar thermal plants are usually located in arid areas, so it is impossible to cool the heat sink down to sub-critical temperatures. Therefore, supercritical carbon dioxide blends , with higher critical temperatures, are in development to improve electricity production from concentrated solar power.
Further, due to its superior thermal stability and non-flammability, direct heat exchange from high temperature sources is possible, permitting higher working fluid temperatures and therefore higher cycle efficiency. Unlike two-phase flow, the single-phase nature of s CO 2 eliminates the necessity of a heat input for phase change that is required for the water to steam conversion, thereby also eliminating associated thermal fatigue and corrosion. [ 16 ]
The use of s CO 2 presents corrosion engineering , material selection and design issues. Materials in power generation components must display resistance to damage caused by high temperature, oxidation and creep . Candidate materials that meet these property and performance goals include incumbent alloys in power generation, such as nickel-based superalloys for turbomachinery components and austenitic stainless steels for piping. Components within s CO 2 Brayton loops suffer from corrosion and erosion, specifically erosion in turbomachinery and recuperative heat exchanger components and intergranular corrosion and pitting in the piping. [ 17 ]
Testing has been conducted on candidate Ni-based alloys, austenitic steels, ferritic steels and ceramics for corrosion resistance in s CO 2 cycles. The interest in these materials derives from their formation of protective surface oxide layers in the presence of carbon dioxide; however, in most cases further evaluation of the reaction mechanics and of the corrosion/erosion kinetics and mechanisms is required, as none of the materials yet meet the necessary goals. [ 18 ] [ 19 ]
In 2016, General Electric announced a sCO 2 -based turbine that enabled 50% efficiency in converting heat energy to electrical energy. In it the CO 2 is heated to 700 °C. It requires less compression and allows heat transfer. It reaches full power in 2 minutes, whereas steam turbines need at least 30 minutes. The prototype generated 10 MW and is approximately 10% the size of a comparable steam turbine. [ 20 ] The 10 MW, US$155-million Supercritical Transformational Electric Power (STEP) pilot plant was completed in 2023 in San Antonio. Its turbine is the size of a desk, and the plant can power around 10,000 homes. [ 21 ]
Work is underway to develop a s CO 2 closed-cycle gas turbine to operate at temperatures near 550 °C. This would have implications for bulk thermal and nuclear generation of electricity, because the supercritical properties of carbon dioxide at above 500 °C and 20 MPa enable thermal efficiencies approaching 45 percent. This could increase the electrical power produced per unit of fuel required by 40 percent or more. Given the volume of carbon fuels used in producing electricity, the environmental impact of cycle efficiency increases would be significant. [ 22 ]
Supercritical CO 2 is an emerging natural refrigerant, used in new, low-carbon solutions for domestic heat pumps . Supercritical CO 2 heat pumps are commercially marketed in Asia. EcoCute systems from Japan, developed by Mayekawa, produce high-temperature domestic hot water with small inputs of electric power by moving heat into the system from the surroundings. [ 23 ]
Supercritical CO 2 has been used since the 1980s to enhance recovery in mature oil fields.
" Clean coal " technologies are emerging that could combine such enhanced recovery methods with carbon sequestration . Using gasifiers instead of conventional furnaces, coal and water is reduced to hydrogen gas, carbon dioxide and ash. This hydrogen gas can be used to produce electrical power In combined cycle gas turbines, CO 2 is captured, compressed to the supercritical state and injected into geological storage, possibly into existing oil fields to improve yields. [ 24 ] [ 25 ] [ 26 ]
Supercritical CO 2 can be used as a working fluid for geothermal electricity generation in both enhanced geothermal systems [ 27 ] [ 28 ] [ 29 ] [ 30 ] and sedimentary geothermal systems (so-called CO 2 Plume Geothermal). [ 31 ] [ 32 ] EGS systems utilize an artificially fractured reservoir in basement rock while CPG systems utilize shallower naturally-permeable sedimentary reservoirs. Possible advantages of using CO 2 in a geologic reservoir, compared to water, include higher energy yield resulting from its lower viscosity, better chemical interaction, and permanent CO 2 storage as the reservoir must be filled with large masses of CO 2 . As of 2011, the concept had not been tested in the field. [ 33 ]
Supercritical carbon dioxide is used in the production of silica, carbon and metal based aerogels . For example, a silicon dioxide gel is formed and then exposed to s CO 2 . When the CO 2 goes supercritical, all surface tension is removed, allowing the liquid to leave the gel and leaving behind nanometer-sized pores. [ 34 ]
Supercritical CO 2 , in combination with the additive peracetic acid (PAA), is an alternative to thermal sterilization of biological materials and medical devices. Supercritical CO 2 alone does not sterilize the media, because it does not kill the spores of microorganisms. Moreover, the process is gentle, as the morphology, ultrastructure and protein profiles of inactivated microbes are preserved. [ 35 ]
Supercritical CO 2 is used in certain industrial cleaning processes . | https://en.wikipedia.org/wiki/Supercritical_carbon_dioxide |
Supercritical drying , also known as critical point drying , is a process to remove liquid in a precise and controlled way. [ 1 ] It is useful in the production of microelectromechanical systems (MEMS), the drying of spices , the production of aerogel , the decaffeination of coffee and in the preparation of biological specimens. [ 2 ]
As the substance in a liquid body crosses the boundary from liquid to gas (see green arrow in phase diagram ), the liquid changes into gas at a finite rate, while the amount of liquid decreases. When this happens within a heterogeneous environment, surface tension in the liquid body pulls against any solid structures the liquid might be in contact with. Delicate structures such as cell walls , the dendrites in silica gel , and the tiny machinery of microelectromechanical devices, tend to be broken apart by this surface tension as the liquid–gas–solid junction moves by.
To avoid this, the sample can be brought via two possible alternate paths from the liquid phase to the gas phase without crossing the liquid–gas boundary on the phase diagram. In freeze-drying , this means going around to the left (low temperature, low pressure; blue arrow). However, some structures are disrupted even by the solid–gas boundary . Supercritical drying, on the other hand, goes around the line to the right, on the high-temperature, high-pressure side (red arrow). This route from liquid to gas does not cross any phase boundary , instead passing through the supercritical region, where the distinction between gas and liquid ceases to apply. The densities of the liquid phase and the vapor phase become equal at the critical point of drying.
Almost all fluids can undergo supercritical drying as a physical chemistry process, but the harsh conditions involved often make it impractical as part of an industrial process. Fluids which do see industrial application of supercritical drying include carbon dioxide ( critical point 304.25 K at 7.39 MPa or 31.1 °C at 1072 psi ) and freon (≈300 K at 3.5–4 MPa or 25–30 °C at 500–600 psi). Nitrous oxide has similar physical behavior to carbon dioxide, but is a powerful oxidizer in its supercritical state. Supercritical water is inconvenient due to possible heat damage to a sample at its critical point temperature (647 K, 374 °C) and the corrosiveness of water at such high temperatures and pressures (22.064 MPa, 3,212 psi).
In most such processes, acetone is first used to wash away all water, exploiting the complete miscibility of these two fluids. The acetone is then washed away with high pressure liquid carbon dioxide, the industry standard now that freon is unavailable. The liquid carbon dioxide is then heated until its temperature goes beyond the critical point, at which time the pressure can be gradually released, allowing the gas to escape and leaving a dried product. | https://en.wikipedia.org/wiki/Supercritical_drying |
A supercritical flow is a flow whose velocity is larger than the wave velocity, that is, the speed at which disturbances (surface waves) propagate through the flow. The analogous condition in gas dynamics is supersonic speed .
According to the website Civil Engineering Terms, supercritical flow is defined as follows:
The flow at which depth of the channel is less than critical depth, velocity of flow is greater than critical velocity and slope of the channel is also greater than the critical slope is known as supercritical flow. [ 1 ]
Information travels at the wave velocity. This is the velocity at which waves travel outwards from a pebble thrown into a lake. The flow velocity is the velocity at which a leaf in the flow travels. If a pebble is thrown into a supercritical flow then the ripples will all move downstream, whereas in a subcritical flow some will travel upstream and some will travel downstream. It is only in supercritical flows that hydraulic jumps ( bores ) can occur. In fluid dynamics , the change from one behaviour to the other is often described by a dimensionless quantity , where the transition occurs whenever this number becomes less than or greater than one. One of these numbers is the Froude number :
F r = U g d {\displaystyle Fr={\frac {U}{\sqrt {gd}}}} where U {\displaystyle U} is the flow velocity, g {\displaystyle g} is the gravitational acceleration and d {\displaystyle d} is the flow depth.
If F r < 1 {\displaystyle Fr<1} , we call the flow subcritical ; if F r > 1 {\displaystyle Fr>1} , we call the flow supercritical . If F r ≈ 1 {\displaystyle Fr\approx 1} , it is critical .
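A minimal sketch of this classification for a wide rectangular channel is given below; it assumes the usual shallow-water wave speed, the square root of g times d, and the velocity and depth values are illustrative only.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(velocity, depth):
    """Fr = U / sqrt(g * d) for a wide rectangular channel of flow depth d."""
    return velocity / math.sqrt(G * depth)

def classify(fr):
    """Classify a flow from its Froude number."""
    if fr > 1.0:
        return "supercritical"
    if fr < 1.0:
        return "subcritical"
    return "critical"

# Illustrative (velocity in m/s, depth in m) pairs
for U, d in [(0.5, 2.0), (3.0, 0.3)]:
    fr = froude_number(U, d)
    print(f"U = {U} m/s, d = {d} m -> Fr = {fr:.2f} ({classify(fr)})")
```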
Chanson, Hubert (1999). The Hydraulics of Open Channel Flow: An Introduction. Physical Modelling of Hydraulics. | https://en.wikipedia.org/wiki/Supercritical_flow |
A supercritical fluid ( SCF ) is a substance at a temperature and pressure above its critical point , where distinct liquid and gas phases do not exist, but below the pressure required to compress it into a solid . [ 1 ] It can effuse through porous solids like a gas, overcoming the mass transfer limitations that slow liquid transport through such materials. SCFs are superior to gases in their ability to dissolve materials like liquids or solids. Near the critical point, small changes in pressure or temperature result in large changes in density , allowing many properties of a supercritical fluid to be "fine-tuned".
Supercritical fluids occur in the atmospheres of the gas giants Jupiter and Saturn , the terrestrial planet Venus , and probably in those of the ice giants Uranus and Neptune . Supercritical water is found on Earth , such as the water issuing from black smokers , a type of hydrothermal vent . [ 2 ] SCFs are used as a substitute for organic solvents in a range of industrial and laboratory processes, most commonly carbon dioxide for decaffeination and water for steam boilers for power generation . Some substances are soluble in the supercritical state of a solvent (e.g., carbon dioxide) but insoluble in the gaseous or liquid state—or vice versa. This can be used to extract a substance and transport it elsewhere in solution before depositing it in the desired place by allowing or inducing a phase transition in the solvent.
Supercritical fluids generally have properties between those of a gas and a liquid. In Table 1, the critical properties are shown for some substances that are commonly used as supercritical fluids.
†Source: International Association for Properties of Water and Steam ( IAPWS ) [ 4 ]
Table 2 shows density, diffusivity and viscosity for typical liquids, gases and supercritical fluids.
Also, there is no surface tension in a supercritical fluid, as there is no liquid/gas phase boundary. By changing the pressure and temperature of the fluid, the properties can be "tuned" to be more liquid-like or more gas-like. One of the most important properties is the solubility of material in the fluid. Solubility in a supercritical fluid tends to increase with density of the fluid (at constant temperature). Since density increases with pressure, solubility tends to increase with pressure. The relationship with temperature is a little more complicated. At constant density, solubility will increase with temperature. However, close to the critical point, the density can drop sharply with a slight increase in temperature. Therefore, close to the critical temperature, solubility often drops with increasing temperature, then rises again. [ 6 ]
Typically, supercritical fluids are completely miscible with each other, so that a binary mixture forms a single gaseous phase if the critical point of the mixture is exceeded. However, exceptions are known in systems where one component is much more volatile than the other, which in some cases form two immiscible gas phases at high pressure and temperatures above the component critical points. This behavior has been found for example in the systems N 2 -NH 3 , NH 3 -CH 4 , SO 2 -N 2 and n-butane-H 2 O. [ 7 ]
The critical point of a binary mixture can be estimated as the mole-fraction-weighted mean of the critical temperatures and pressures of the two components,
T c ( m i x ) = χ 1 T c 1 + χ 2 T c 2 {\displaystyle T_{c(mix)}=\chi _{1}T_{c1}+\chi _{2}T_{c2}} and p c ( m i x ) = χ 1 p c 1 + χ 2 p c 2 {\displaystyle p_{c(mix)}=\chi _{1}p_{c1}+\chi _{2}p_{c2}} , where χ i {\displaystyle \chi _{i}} denotes the mole fraction of component i .
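A minimal sketch of this first-order estimate is given below. The pure-component critical data for CO 2 and ethane are standard tabulated values, and the result is only a rough guide, since accurate mixture critical points require an equation of state as noted next.

```python
def mixture_critical_estimate(components):
    """Mole-fraction-weighted estimate of the critical point of a mixture.

    components: list of (mole_fraction, T_c in K, p_c in Pa) tuples.
    """
    assert abs(sum(x for x, _, _ in components) - 1.0) < 1e-9, "mole fractions must sum to 1"
    T_c_mix = sum(x * Tc for x, Tc, _ in components)
    p_c_mix = sum(x * pc for x, _, pc in components)
    return T_c_mix, p_c_mix

# Example: an equimolar CO2/ethane mixture (pure-component critical data from standard tables)
co2 = (0.5, 304.13, 7.377e6)
ethane = (0.5, 305.32, 4.872e6)

T_c_mix, p_c_mix = mixture_critical_estimate([co2, ethane])
print(f"Estimated mixture critical point: {T_c_mix:.1f} K, {p_c_mix / 1e6:.2f} MPa")
```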
For greater accuracy, the critical point can be calculated using equations of state , such as the Peng–Robinson , or group-contribution methods . Other properties, such as density, can also be calculated using equations of state. [ 8 ]
Figures 1 and 2 show two-dimensional projections of a phase diagram . In the pressure-temperature phase diagram (Fig. 1) the boiling curve separates the gas and liquid region and ends in the critical point, where the liquid and gas phases disappear to become a single supercritical phase.
The appearance of a single phase can also be observed in the density-pressure phase diagram for carbon dioxide (Fig. 2). At well below the critical temperature, e.g., 280 K, as the pressure increases, the gas compresses and eventually (at just over 40 bar ) condenses into a much denser liquid, resulting in the discontinuity in the line (vertical dotted line). The system consists of 2 phases in equilibrium , a dense liquid and a low density gas. As the critical temperature is approached (300 K), the density of the gas at equilibrium becomes higher, and that of the liquid lower. At the critical point, (304.1 K and 7.38 MPa (73.8 bar)), there is no difference in density, and the 2 phases become one fluid phase. Thus, above the critical temperature a gas cannot be liquefied by pressure. At slightly above the critical temperature (310 K), in the vicinity of the critical pressure, the line is almost vertical. A small increase in pressure causes a large increase in the density of the supercritical phase. Many other physical properties also show large gradients with pressure near the critical point, e.g. viscosity , the relative permittivity and the solvent strength, which are all closely related to the density. At higher temperatures, the fluid starts to behave more like an ideal gas, with a more linear density/pressure relationship, as can be seen in Figure 2. For carbon dioxide at 400 K, the density increases almost linearly with pressure.
Many pressurized gases are actually supercritical fluids. For example, nitrogen has a critical point of 126.2 K (−147 °C) and 3.4 MPa (34 bar). Therefore, nitrogen (or compressed air) in a gas cylinder above this pressure is actually a supercritical fluid. These are more often known as permanent gases. At room temperature, they are well above their critical temperature, and therefore behave as a nearly ideal gas, similar to CO 2 at 400 K in the example above. However, they cannot be liquefied by mechanical pressure unless cooled below their critical temperature, requiring gravitational pressure such as within gas giants to produce a liquid or solid at high temperatures. [ citation needed ] Above the critical temperature, elevated pressures can increase the density enough that the SCF exhibits liquid-like density and behaviour. At very high pressures, an SCF can be compressed into a solid because the melting curve extends to the right of the critical point in the P/T phase diagram. While the pressure required to compress supercritical CO 2 into a solid can be, depending on the temperature, as low as 570 MPa, [ 9 ] that required to solidify supercritical water is 14,000 MPa. [ 10 ]
The Fisher–Widom line , the Widom line , and the Frenkel line are thermodynamic concepts that allow liquid-like and gas-like states within the supercritical fluid to be distinguished.
In 1822, Baron Charles Cagniard de la Tour discovered the critical point of a substance in his famous cannon barrel experiments. Listening to discontinuities in the sound of a rolling flint ball in a sealed cannon filled with fluids at various temperatures, he observed the critical temperature. Above this temperature, the densities of the liquid and gas phases become equal and the distinction between them disappears, resulting in a single supercritical fluid phase. [ 11 ]
In recent years, a significant effort has been devoted to investigation of various properties of supercritical fluids. Supercritical fluids have found application in a variety of fields, ranging from the extraction of floral fragrance from flowers to applications in food science such as creating decaffeinated coffee, functional food ingredients, pharmaceuticals, cosmetics, polymers, powders, bio- and functional materials, nano-systems, natural products, biotechnology, fossil and bio-fuels, microelectronics, energy and environment. Much of the excitement and interest of the past decade is due to the enormous progress made in increasing the power of relevant experimental tools. The development of new experimental methods and improvement of existing ones continues to play an important role in this field, with recent research focusing on dynamic properties of fluids. [ citation needed ]
Hydrothermal circulation occurs within the Earth's crust wherever fluid becomes heated and begins to convect . These fluids are thought to reach supercritical conditions under a number of different settings, such as in the formation of porphyry copper deposits or high temperature circulation of seawater in the sea floor. At mid-ocean ridges, this circulation is most evident by the appearance of hydrothermal vents known as "black smokers". These are large (metres high) chimneys of sulfide and sulfate minerals which vent fluids up to 400 °C. The fluids appear like great black billowing clouds of smoke due to the precipitation of dissolved metals in the fluid. It is likely that at that depth many of these vent sites reach supercritical conditions, but most cool sufficiently by the time they reach the sea floor to be subcritical. One particular vent site, Turtle Pits, has displayed a brief period of supercriticality at the vent site. A further site, Beebe , in the Cayman Trough, is thought to display sustained supercriticality at the vent orifice. [ 12 ]
The atmosphere of Venus is 96.5% carbon dioxide and 3.5% nitrogen. The surface pressure is 9.3 megapascals (1,350 psi) and the surface temperature is 735 K (462 °C; 863 °F), above the critical points of both major constituents and making the surface atmosphere a supercritical fluid. [ 13 ]
The interior atmospheres of the Solar System's four giant planets are composed mainly of hydrogen and helium at temperatures well above their critical points. The gaseous outer atmospheres of the gas giants Jupiter and Saturn transition smoothly into the dense liquid interior, while the nature of the transition zones of the ice giants Neptune and Uranus is unknown. [ citation needed ] Theoretical models of extrasolar planet Gliese 876 d have posited an ocean of pressurized, supercritical fluid water with a sheet of solid high pressure water ice at the bottom. [ citation needed ]
The advantages of supercritical fluid extraction (compared with liquid extraction) are that it is relatively rapid because of the low viscosities and high diffusivities associated with supercritical fluids. Alternative solvents to supercritical fluids may be poisonous, flammable or an environmental hazard to a much larger extent than water or carbon dioxide are. The extraction can be selective to some extent by controlling the density of the medium, and the extracted material is easily recovered by simply depressurizing, allowing the supercritical fluid to return to gas phase and evaporate leaving little or no solvent residues. Carbon dioxide is the most common supercritical solvent. It is used on a large scale for the decaffeination of green coffee beans, the extraction of hops for beer production, [ 14 ] and the production of essential oils and pharmaceutical products from plants. [ 15 ] A few laboratory test methods include the use of supercritical fluid extraction as an extraction method instead of using traditional solvents . [ 16 ] [ 17 ] [ 18 ]
Supercritical water can be used to decompose biomass via Supercritical Water Gasification of biomass. [ 19 ] This type of biomass gasification can be used to produce hydrocarbon fuels for use in an efficient combustion device or to produce hydrogen for use in a fuel cell. In the latter case, hydrogen yield can be much higher than the hydrogen content of the biomass due to steam reforming where water is a hydrogen-providing participant in the overall reaction.
Supercritical carbon dioxide (SCD) can be used instead of PERC ( perchloroethylene ) or other undesirable solvents for dry-cleaning . Supercritical carbon dioxide sometimes intercalates into buttons, and, when the SCD is depressurized, the buttons pop, or break apart. Detergents that are soluble in carbon dioxide improve the solvating power of the solvent. [ 20 ] CO 2 -based dry cleaning equipment uses liquid CO 2 , not supercritical CO 2 , to avoid damage to the buttons.
Supercritical fluid chromatography (SFC) can be used on an analytical scale, where it combines many of the advantages of high performance liquid chromatography (HPLC) and gas chromatography (GC). It can be used with non-volatile and thermally labile analytes (unlike GC) and can be used with the universal flame ionization detector (unlike HPLC), as well as producing narrower peaks due to rapid diffusion. In practice, the advantages offered by SFC have not been sufficient to displace the widely used HPLC and GC, except in a few cases such as chiral separations and analysis of high-molecular-weight hydrocarbons. [ 21 ] For manufacturing, efficient preparative simulated moving bed units are available. [ 22 ] The purity of the final products is very high, but the cost makes it suitable only for very high-value materials such as pharmaceuticals.
Changing the conditions of the reaction solvent can allow separation of phases for product removal, or single phase for reaction. Rapid diffusion accelerates diffusion controlled reactions. Temperature and pressure can tune the reaction down preferred pathways, e.g., to improve yield of a particular chiral isomer . [ 23 ] There are also significant environmental benefits over conventional organic solvents. Industrial syntheses that are performed at supercritical conditions include those of polyethylene from supercritical ethene , isopropyl alcohol from supercritical propene , 2-butanol from supercritical butene , and ammonia from a supercritical mix of nitrogen and hydrogen . [ 24 ] Other reactions were, in the past, performed industrially in supercritical conditions, including the synthesis of methanol and thermal (non-catalytic) oil cracking. Because of the development of effective catalysts , the required temperatures of those two processes have been reduced and are no longer supercritical. [ 24 ]
Impregnation is, in essence, the converse of extraction. A substance is dissolved in the supercritical fluid, the solution flowed past a solid substrate, and is deposited on or dissolves in the substrate. Dyeing, which is readily carried out on polymer fibres such as polyester using disperse (non-ionic) dyes , is a special case of this. Carbon dioxide also dissolves in many polymers, considerably swelling and plasticising them and further accelerating the diffusion process.
The formation of small particles of a substance with a narrow size distribution is an important process in the pharmaceutical and other industries. Supercritical fluids provide a number of ways of achieving this by rapidly exceeding the saturation point of a solute by dilution, depressurization or a combination of these. These processes occur faster in supercritical fluids than in liquids, promoting nucleation or spinodal decomposition over crystal growth and yielding very small and regularly sized particles. Recent work with supercritical fluids has demonstrated the ability to produce particles in the range of 5–2000 nm. [ 25 ]
Supercritical fluids act as a new medium for the generation of novel crystalline forms of APIs (Active Pharmaceutical Ingredients) known as pharmaceutical cocrystals. Supercritical fluid technology offers a new platform that allows single-step generation of particles that are difficult or even impossible to obtain by traditional techniques. The generation of pure, dry new cocrystals (crystalline molecular complexes comprising the API and one or more coformers in the crystal lattice) can be achieved by exploiting different supercritical fluid properties: the solvent power of supercritical CO 2 , its anti-solvent effect and its atomization enhancement. [ 26 ] [ 27 ]
Supercritical drying is a method of removing solvent without surface tension effects. As a liquid dries, the surface tension drags on small structures within a solid, causing distortion and shrinkage. Under supercritical conditions there is no surface tension, and the supercritical fluid can be removed without distortion. Supercritical drying is used in the manufacturing process of aerogels and drying of delicate materials such as archaeological samples and biological samples for electron microscopy .
Electrolysis of water in a supercritical state reduces the overpotentials found in other electrolysers, thereby improving the electrical efficiency of the production of oxygen and hydrogen.
Increased temperature reduces thermodynamic barriers and increases kinetics. No bubbles of oxygen or hydrogen form on the electrodes, so no insulating layer forms between the catalyst and the water, reducing ohmic losses. The gas-like properties provide rapid mass transfer.
Supercritical water oxidation uses supercritical water as a medium in which to oxidize hazardous waste, eliminating production of toxic combustion products that burning can produce.
The waste product to be oxidised is dissolved in the supercritical water along with molecular oxygen (or an oxidising agent that gives up oxygen upon decomposition, e.g. hydrogen peroxide ) at which point the oxidation reaction occurs. [ citation needed ]
Supercritical hydrolysis is a method of converting all biomass polysaccharides, as well as the associated lignin, into low-molecular-weight compounds by contact with water alone under supercritical conditions. The supercritical water acts as a solvent, a supplier of bond-breaking thermal energy, a heat transfer agent and a source of hydrogen atoms. All polysaccharides are converted into simple sugars in near-quantitative yield in a second or less. The aliphatic inter-ring linkages of lignin are also readily cleaved into free radicals that are stabilized by hydrogen originating from the water. The aromatic rings of the lignin are unaffected under short reaction times, so the lignin-derived products are low-molecular-weight mixed phenols. To take advantage of the very short reaction times needed for cleavage, a continuous reaction system must be devised; the amount of water heated to a supercritical state is thereby minimized.
Supercritical water gasification is a process of exploiting the beneficial effect of supercritical water to convert aqueous biomass streams into clean water and gases like H 2 , CH 4 , CO 2 , CO etc. [ 28 ]
The solubility of dissolved ions drops precipitously once a fluid becomes supercritical. This effect can be used to precipitate salts from high salinity desalination streams, with solubility of different salts decreasing rapidly as water approaches supercritical temperatures. Complex cycle design can enable selective precipitation and improved heat recovery. Some very saline water sources like produced water also have high hydrocarbon content, which can be oxidized by supercritical desalination [ 29 ]
The efficiency of a heat engine is ultimately dependent on the temperature difference between heat source and sink ( Carnot cycle ). To improve efficiency of power stations the operating temperature must be raised. Using water as the working fluid, this takes it into supercritical conditions. [ 30 ] Efficiencies can be raised from about 39% for subcritical operation to about 45% using current technology. [ 31 ] Many coal-fired supercritical steam generators are operational all over the world. Supercritical carbon dioxide is also proposed as a working fluid, which would have the advantage of lower critical pressure than water, but issues with corrosion are not yet fully solved. [ 32 ] [ 33 ] One proposed application is the Allam cycle .
Supercritical water reactors (SCWRs) are proposed advanced nuclear systems that offer similar thermal efficiency gains. [ 34 ]
Conversion of vegetable oil to biodiesel is via a transesterification reaction, where a triglyceride is converted to the methyl esters (of the fatty acids) plus glycerol . This is usually done using methanol and caustic or acid catalysts, but can be achieved using supercritical methanol without a catalyst. The method of using supercritical methanol for biodiesel production was first studied by Saka and his coworkers. This has the advantage of allowing a greater range and water content of feedstocks (in particular, used cooking oil), the product does not need to be washed to remove catalyst, and is easier to design as a continuous process. [ 35 ]
Supercritical carbon dioxide is used to enhance oil recovery in mature oil fields. At the same time, there is the possibility of using " clean coal technology " to combine enhanced recovery methods with carbon sequestration . The CO 2 is separated from other flue gases , compressed to the supercritical state, and injected into geological storage, possibly into existing oil fields to improve yields.
At present, only schemes isolating fossil CO 2 from natural gas actually use carbon storage, (e.g., Sleipner gas field ), [ 36 ] but there are many plans for future CCS schemes involving pre- or post-combustion CO 2 . [ 37 ] [ 38 ] [ 39 ] [ 40 ] There is also the possibility to reduce the amount of CO 2 in the atmosphere by using biomass to generate power and sequestering the CO 2 produced.
The use of supercritical carbon dioxide, instead of water, has been examined as a geothermal working fluid.
Supercritical carbon dioxide is also emerging as a useful high-temperature refrigerant , being used in new, CFC / HFC -free domestic heat pumps making use of the transcritical cycle . [ 41 ] These systems are undergoing continuous development with supercritical carbon dioxide heat pumps already being successfully marketed in Asia. The EcoCute systems from Japan are some of the first commercially successful high-temperature domestic water heat pumps.
Supercritical fluids can be used to deposit functional nanostructured films and nanometer-size particles of metals onto surfaces. The high diffusivities and concentrations of precursor in the fluid as compared to the vacuum systems used in chemical vapour deposition allow deposition to occur in a surface reaction rate limited regime, providing stable and uniform interfacial growth. [ 42 ] This is crucial in developing more powerful electronic components, and metal particles deposited in this way are also powerful catalysts for chemical synthesis and electrochemical reactions. Additionally, due to the high rates of precursor transport in solution, it is possible to coat high surface area particles which under chemical vapour deposition would exhibit depletion near the outlet of the system and also be likely to result in unstable interfacial growth features such as dendrites . The result is very thin and uniform films deposited at rates much faster than atomic layer deposition , the best other tool for particle coating at this size scale. [ 43 ]
CO 2 at high pressures has antimicrobial properties. [ 44 ] While its effectiveness has been shown for various applications, the mechanisms of inactivation have not been fully understood although they have been investigated for more than 60 years. [ 45 ] | https://en.wikipedia.org/wiki/Supercritical_fluid |
Supercritical fluid chromatography ( SFC ) [ 1 ] is a form of normal phase chromatography that uses a supercritical fluid such as carbon dioxide as the mobile phase. [ 2 ] [ 3 ] It is used for the analysis and purification of low to moderate molecular weight , thermally labile molecules and can also be used for the separation of chiral compounds. Principles are similar to those of high performance liquid chromatography (HPLC); however, SFC typically utilizes carbon dioxide as the mobile phase. Therefore, the entire chromatographic flow path must be pressurized. Because the supercritical phase represents a state whereby bulk liquid and gas properties converge, supercritical fluid chromatography is sometimes called convergence chromatography. [ 4 ] The idea of liquid and gas properties convergence was first envisioned by Giddings. [ 5 ]
SFC has been used primarily for separation of chiral molecules, mainly those which required normal phase conditions. While the mobile phase is a fluid in the supercritical state, the stationary phase is packed inside columns similar to those used in liquid chromatography. Since the use of normal phase mode of chromatography remained less common, so did SFC; therefore it is now commonly used for selected chiral and achiral separations and purification in the pharmaceutical industry. [ 6 ] [ 7 ]
The instrumentation of supercritical fluid chromatography [ 8 ] is similar in its setup to an HPLC instrument. The stationary phases are similar and are packed inside similar column types. However, these systems have special features, because the mobile phase must be kept in the supercritical fluid state over the entire system. Temperature is critical to keeping the fluid supercritical, so the system includes a heat-control tool similar to that of GC. There must also be a precise pressure-control mechanism, a restrictor, to keep the pressure above a certain point, because pressure is the other essential parameter for keeping the mobile phase in a supercritical fluid state, and it is kept at the required minimal level. A microprocessor unit in the SFC instrument collects data on pressure, oven temperature, and detector performance to control the relevant parts of the instrument.
The CO 2 is delivered by dedicated carbon dioxide pumps, which require the incoming CO 2 and the pump heads to be kept cold in order to maintain the carbon dioxide at a temperature and pressure at which it can be effectively metered over the specified flow-rate range. The CO 2 subsequently becomes a supercritical fluid in the injector and the column oven, when the temperature and pressure to which it is subjected are raised above the critical point of the liquid, and thus the supercritical state is achieved.
Supercritical fluids combine useful properties of the gas and liquid phases, as they can behave like both a gas and a liquid in various respects. A supercritical fluid shows gas-like behaviour in that it fills a container and takes its shape, and the motion and kinetics of its molecules are quite similar to those of gas molecules. On the other hand, a supercritical fluid behaves like a liquid because its density is close to that of a liquid, so it shows a dissolving power similar to that of a liquid. The result is that one can load masses similar to those used in HPLC on the column per injection and still maintain a high chromatographic efficiency similar to that attained in GC. Typically, gradient elution is employed in analytical SFC using a polar co-solvent such as methanol, possibly with a weak acid or base at low concentrations (~1%). The apparent plate count per analysis can routinely exceed 500K plates per meter with 5 μm stationary phases. The operator uses software to set the mobile phase flow rate, co-solvent composition, system back pressure and column oven temperature, which must exceed 40 °C for supercritical conditions to be achieved with CO 2 . In addition, SFC provides an additional control parameter – pressure – by using an automated static and dynamic back pressure regulator. From an operational standpoint, SFC is as simple and robust as HPLC , but fraction collection is more convenient because the primary mobile phase evaporates, leaving only the analyte and a small volume of polar co-solvent. If the outlet CO 2 is captured, it can be re-compressed and recycled, allowing for >90% reuse of CO 2 .
Similar to HPLC, SFC uses a variety of detection methods including UV /VIS, mass spectrometry , FID (unlike HPLC) and evaporative light scattering.
A rule-of-thumb is that any molecule that will dissolve in methanol or a less polar solvent is compatible with SFC, including non-volatile polar solutes. CO 2 has polarity similar to n-heptane [ 9 ] at its critical point. The solvent's elution strength can be increased just by increasing density or alternatively, using a polar co-solvent. In practice, when the fraction of the co-solvent is high, the mobile phase might not be truly at supercritical fluid state, but this terminology is used regardless, and the chromatograms show better elution and higher efficiency nevertheless.
The mobile phase is composed primarily of supercritical carbon dioxide , but since CO 2 on its own is too non-polar to effectively elute many analytes, cosolvents are added to modify the mobile phase polarity. Cosolvents are typically simple alcohols like methanol , ethanol , or isopropyl alcohol . Other solvents such as acetonitrile , chloroform , or ethyl acetate can be used as modifiers. For food-grade materials, the selected cosolvent is often ethanol or ethyl acetate, both of which are generally recognized as safe ( GRAS ). The solvent limitations are system and column based.
There have been a few technical issues that have limited adoption of SFC technology in the past. First of all, is the need to keep a high gas pressure in the operating conditions. High-pressure vessels are expensive and bulky, and special materials are often needed to avoid dissolving gaskets and O-rings in the supercritical fluid. A second drawback is difficulty in maintaining pressure constant (by back-pressure regulation). Whereas liquids are nearly incompressible, so their densities are constant regardless of pressure, supercritical fluids are highly compressible and their physical properties change with pressure – such as the pressure drop across a packed-bed column. Currently, automated backpressure regulators can maintain a constant pressure in the column even if flow rate varies, mitigating this problem. A third drawback is difficulty in gas/liquid separation during collection of product. Upon depressurization, the CO 2 rapidly turns into gas and aerosolizes any dissolved analyte in the process. Cyclone separators have lessened difficulties in gas/liquid separations. | https://en.wikipedia.org/wiki/Supercritical_fluid_chromatography |
Supercritical fluid extraction (SFE) is the process of separating one component (the extractant) from another (the matrix) using supercritical fluids as the extracting solvent . Extraction is usually from a solid matrix, but can also be from liquids . SFE can be used as a sample preparation step for analytical purposes, or on a larger scale to either strip unwanted material from a product (e.g. decaffeination ) or collect a desired product (e.g. essential oils ). These essential oils can include limonene and other straight solvents. Carbon dioxide (CO 2 ) is the most used supercritical fluid, sometimes modified by co-solvents such as ethanol or methanol . Extraction conditions for supercritical carbon dioxide are above the critical temperature of 31 °C and critical pressure of 74 bar . Addition of modifiers may slightly alter this. The discussion below will mainly refer to extraction with CO 2 , except where specified.
The properties of the supercritical fluid can be altered by varying the pressure and temperature, allowing selective extraction. For example, volatile oils can be extracted from a plant with low pressures (100 bar), whereas liquid extraction would also remove lipids. Lipids can be removed using pure CO 2 at higher pressures, and then phospholipids can be removed by adding ethanol to the solvent. [ 1 ] The same principle can be used to extract polyphenols and unsaturated fatty acids separately from wine wastes. [ 2 ]
Extraction is a diffusion -based process, in which the solvent is required to diffuse into the matrix and the extracted material to diffuse out of the matrix into the solvent. Diffusivities are much faster in supercritical fluids than in liquids, and therefore extraction can occur faster. In addition, due to the lack of surface tension and negligible viscosities compared to liquids, the solvent can penetrate more into the matrix inaccessible to liquids. An extraction using an organic liquid may take several hours, whereas supercritical fluid extraction can be completed in 10 to 60 minutes. [ 3 ]
The requirement for high pressures increases the cost compared to conventional liquid extraction, so SFE will only be used where there are significant advantages. Carbon dioxide itself is non-polar, and has somewhat limited dissolving power, so cannot always be used as a solvent on its own, particularly for polar solutes. The use of modifiers increases the range of materials which can be extracted. Food grade modifiers such as ethanol can often be used, and can also help in the collection of the extracted material, but reduces some of the benefits of using a solvent which is gaseous at room temperature.
The system must contain a pump for the CO 2 , a pressure cell to contain the sample, a means of maintaining pressure in the system and a collecting vessel. The liquid is pumped to a heating zone, where it is heated to supercritical conditions. It then passes into the extraction vessel, where it rapidly diffuses into the solid matrix and dissolves the material to be extracted. The dissolved material is swept from the extraction cell into a separator at lower pressure, and the extracted material settles out. The CO 2 can then be cooled, re-compressed and recycled, or discharged to atmosphere.
Carbon dioxide (CO 2 ) is usually pumped as a liquid, typically below 5 °C (41 °F) and at a pressure of about 50 bar. The solvent is pumped as a liquid because it is then almost incompressible; if it were pumped as a supercritical fluid, much of the pump stroke would be "used up" in compressing the fluid rather than pumping it. For small scale extractions (up to a few grams per minute), reciprocating CO 2 pumps or syringe pumps are often used. For larger scale extractions, diaphragm pumps are most common. The pump heads usually require cooling, and the CO 2 is also cooled before entering the pump.
Pressure vessels can range from simple tubing to more sophisticated purpose built vessels with quick release fittings. The pressure requirement is at least 74 bar, and most extractions are conducted at under 350 bar. However, sometimes higher pressures will be needed, such as extraction of vegetable oils, where pressures of 800 bar are sometimes required for complete miscibility of the two phases . [ 4 ]
The vessel must be equipped with a means of heating. Small vessels can be placed inside an oven, while larger vessels can be fitted with an oil-heated or electrically heated jacket. Care must be taken if rubber seals are used on the vessel, as the supercritical carbon dioxide may dissolve in the rubber, causing swelling, and the rubber will rupture on depressurization. [ citation needed ]
The pressure in the system must be maintained from the pump right through the pressure vessel. In smaller systems (up to about 10 mL / min) a simple restrictor can be used. This can be either a capillary tube cut to length, or a needle valve which can be adjusted to maintain pressure at different flow rates. In larger systems a back pressure regulator will be used, which maintains pressure upstream of the regulator by means of a spring, compressed air, or electronically driven valve. Whichever is used, heating must be supplied, as the adiabatic expansion of the CO 2 results in significant cooling. This is problematic if water or other extracted material is present in the sample, as this may freeze in the restrictor or valve and cause blockages.
The supercritical solvent is passed into a vessel at lower pressure than the extraction vessel. The density, and hence dissolving power, of supercritical fluids varies sharply with pressure, and hence the solubility in the lower density CO 2 is much lower, and the material precipitates for collection. It is possible to fractionate the dissolved material using a series of vessels at reducing pressure. The CO 2 can be recycled or depressurized to atmospheric pressure and vented. For analytical SFE, the pressure is usually dropped to atmospheric, and the now gaseous carbon dioxide bubbled through a solvent to trap the precipitated components.
Heating and cooling are an important aspect of the process. The fluid is cooled before pumping to maintain liquid conditions, then heated after pressurization. As the fluid is expanded into the separator, heat must be provided to prevent excessive cooling. For small scale extractions, such as for analytical purposes, it is usually sufficient to pre-heat the fluid in a length of tubing inside the oven containing the extraction cell. The restrictor can be electrically heated, or even heated with a hairdryer. For larger systems, the energy required during each stage of the process can be calculated using the thermodynamic properties of the supercritical fluid. [ 5 ]
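As an illustration of this energy bookkeeping, the enthalpy of CO 2 at each stage can be looked up from a property library and differenced. The sketch below assumes the CoolProp package; the stage conditions are illustrative assumptions, not a validated design.

```python
# Rough per-kilogram energy balance for a CO2 extraction loop.
# Assumes the CoolProp property library; stage conditions are illustrative only.
from CoolProp.CoolProp import PropsSI

stages = {
    "after the pump (5 degC, 300 bar)":     (278.15, 300e5),
    "extraction vessel (50 degC, 300 bar)": (323.15, 300e5),
    "separator (40 degC, 50 bar)":          (313.15, 50e5),
}

h = {name: PropsSI("H", "T", T, "P", P, "CO2") for name, (T, P) in stages.items()}

# Pre-heater duty: raise the pumped liquid to extraction conditions.
q_heater = h["extraction vessel (50 degC, 300 bar)"] - h["after the pump (5 degC, 300 bar)"]
# Heat needed around the (ideally isenthalpic) expansion to hold the separator
# at its target state and avoid freezing in the restrictor.
q_reheat = h["separator (40 degC, 50 bar)"] - h["extraction vessel (50 degC, 300 bar)"]

for name, value in h.items():
    print(f"{name:38s} h = {value / 1000:7.1f} kJ/kg")
print(f"pre-heater duty  : {q_heater / 1000:6.1f} kJ/kg of CO2")
print(f"separator reheat : {q_reheat / 1000:6.1f} kJ/kg of CO2")
```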
There are two essential steps to SFE: transport (by diffusion or otherwise) of the extractable material through the solid particle to its surface, and dissolution of that material in the supercritical fluid. Other factors, such as diffusion into the particle by the SF and reversible release such as desorption from an active site, are sometimes significant, but are not dealt with in detail here. Figure 2 shows the stages during extraction from a spherical particle where at the start of the extraction the level of extractant is equal across the whole sphere (Fig. 2a). As extraction commences, material is initially extracted from the edge of the sphere, and the concentration in the center is unchanged (Fig. 2b). As the extraction progresses, the concentration in the center of the sphere drops as the extractant diffuses towards the edge of the sphere (Figure 2c). [ 6 ]
The relative rates of diffusion and dissolution are illustrated by two extreme cases in Figure 3. Figure 3a shows a case where dissolution is fast relative to diffusion. The material is carried away from the edge faster than it can diffuse from the center, so the concentration at the edge drops to zero. The material is carried away as fast as it arrives at the surface, and the extraction is completely diffusion limited. Here the rate of extraction can be increased by increasing diffusion rate, for example raising the temperature, but not by increasing the flow rate of the solvent. Figure 3b shows a case where solubility is low relative to diffusion. The extractant is able to diffuse to the edge faster than it can be carried away by the solvent, and the concentration profile is flat. In this case, the extraction rate can be increased by increasing the rate of dissolution, for example by increasing flow rate of the solvent.
The extraction curve of % recovery against time can be used to elucidate the type of extraction occurring. Figure 4(a) shows a typical diffusion controlled curve. The extraction is initially rapid, until the concentration at the surface drops to zero, and the rate then becomes much slower. The % extracted eventually approaches 100%. Figure 4(b) shows a curve for a solubility limited extraction. The extraction rate is almost constant, and only flattens off towards the end of the extraction. Figure 4(c) shows a curve where there are significant matrix effects, where there is some sort of reversible interaction with the matrix, such as desorption from an active site. The recovery flattens off, and if the 100% value is not known, then it is hard to tell that extraction is less than complete.
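The qualitative shapes of these recovery curves can be reproduced with a toy model: Crank's series solution for diffusion out of a sphere for the diffusion-limited case, and a constant extraction rate capped at 100% for the solubility-limited case. The snippet below is an illustration only; the diffusivity, particle radius and rate constant are arbitrary values chosen to make the two regimes visible on a one- to two-hour timescale.

```python
# Toy recovery-versus-time curves for the two limiting extraction regimes.
# Parameter values are arbitrary and chosen only for illustration.
import numpy as np

def diffusion_limited(t_seconds, D=1e-10, r=1e-3, terms=500):
    """Fractional recovery for diffusion out of a sphere (Crank's series solution)."""
    n = np.arange(1, terms + 1)[:, None]
    remaining = (6 / np.pi**2) * np.sum(
        np.exp(-D * (n * np.pi / r) ** 2 * t_seconds) / n**2, axis=0)
    return 1.0 - remaining

def solubility_limited(t_minutes, rate_per_minute=0.015):
    """Recovery rises at a near-constant rate until the charge is exhausted."""
    return np.minimum(rate_per_minute * t_minutes, 1.0)

t_min = np.arange(0, 121, 20)                      # minutes
diff = diffusion_limited(t_min * 60.0)
solu = solubility_limited(t_min)
for ti, di, si in zip(t_min, diff, solu):
    print(f"t = {ti:3d} min   diffusion-limited: {di:4.2f}   solubility-limited: {si:4.2f}")
```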
The optimum will depend on the purpose of the extraction. For an analytical extraction to determine, say, the antioxidant content of a polymer , the essential factor is complete extraction in the shortest time. However, for production of an essential oil extract from a plant, the quantity of CO 2 used will be a significant cost, and "complete" extraction is not required; a yield of 70–80% may be sufficient to provide economic returns. In another case, the selectivity may be more important, and a reduced rate of extraction will be preferable if it provides greater discrimination. Therefore, few comments can be made which are universally applicable. However, some general principles are outlined below.
The rate of diffusion can be increased by raising the temperature, swelling the matrix, or reducing the particle size. Matrix swelling can sometimes be increased by increasing the pressure of the solvent, and by adding modifiers to the solvent. Some polymers and elastomers in particular are swollen dramatically by CO 2 , with diffusion being increased by several orders of magnitude in some cases. [ 7 ]
Generally, higher pressure will increase solubility. The effect of temperature is less certain, as close to the critical point, increasing the temperature causes decreases in density, and hence dissolving power. At pressures well above the critical pressure , solubility is likely to increase with temperature. [ 8 ] Addition of low levels of modifiers (sometimes called entrainers), such as methanol and ethanol, can also significantly increase solubility, particularly of more polar compounds.
The flow rate of supercritical carbon dioxide should be measured in terms of mass flow rather than by volume because the density of the CO 2 changes according to the temperature both before entering the pump heads and during compression. Coriolis flow meters are well suited to providing this mass-flow measurement. To maximize the rate of extraction, the flow rate should be high enough for the extraction to be completely diffusion limited (but this will be very wasteful of solvent). However, to minimize the amount of solvent used, the extraction should be completely solubility limited (which will take a very long time). Flow rate must therefore be determined depending on the competing factors of time and solvent costs, and also capital costs of pumps, heaters and heat exchangers. The optimum flow rate will probably be somewhere in the region where both solubility and diffusion are significant factors. | https://en.wikipedia.org/wiki/Supercritical_fluid_extraction
Supercritical hydrolysis is a chemical engineering process in which water in the supercritical state can be employed to achieve a variety of reactions within seconds. To cope with the extremely short times of reaction on an industrial scale, the process should be continuous . This continuity enables the ratio of the amount of water to the other reactants to be less than unity which minimizes the energy needed to heat the water above 374 °C (705 °F), the critical temperature . Application of the process to biomass provides simple sugars in near quantitative yield by supercritical hydrolysis of the constituent polysaccharides. The phenolic polymer components of the biomass, usually exemplified by lignins , are converted into a water-insoluble liquid mixture of low molecular phenols ( monomerization ).
A private company, Renmatix, based in King of Prussia, PA , has developed a supercritical hydrolysis technology to convert a range of non-food biomass feedstocks into cellulosic sugars for application in biochemicals and biofuels. It has a demonstration facility in Georgia, currently capable of processing three dry tons of hardwood biomass into cellulosic sugar daily. In Australia, a government-sponsored entity called Licella, is similarly transforming sawdust. Both processes require high ratios of water to the amount of feedstock. This energy profligacy can be avoided by the use of a plastic-type extruder through which the solid, but wet, biomass is conveyed to a small inductively heated reaction zone as shown by Xtrudx Technologies Inc of Seattle.
Supercritical hydrolysis can be considered a broadly applicable green chemistry process that utilizes water simultaneously as a heat transfer agent, a solvent, a reactant, a source of hydrogen and as a char-reduction component.
Ethanol Producers Magazine 2012, 18(3), 70-72
US Patent 7,955,508 June 11, 2011
US Patent 8,057,666 November 15, 2011
US Patent 8,890,143 March 17, 2015 | https://en.wikipedia.org/wiki/Supercritical_hydrolysis
Supercritical liquid–gas boundaries are lines in the pressure-temperature (pT) diagram that delimit more liquid-like and more gas-like states of a supercritical fluid . They comprise the Fisher–Widom line , the Widom line , and the Frenkel line .
According to textbook knowledge, it is possible to transform a liquid continuously into a gas, without undergoing a phase transition , by heating and compressing strongly enough to go around the critical point . However, different criteria still make it possible to distinguish liquid-like and more gas-like states of a supercritical fluid . These criteria result in different boundaries in the pT plane. These lines emanate either from the critical point, or from the liquid–vapor boundary (boiling curve) somewhat below the critical point. They do not correspond to first or second order phase transitions, but to weaker singularities.
The Fisher–Widom line [ 1 ] is the boundary between monotonic and oscillating asymptotics of the pair correlation function G ( r → ) {\displaystyle G({\vec {r}})} .
The Widom line is a generalization thereof, apparently so named by H. Eugene Stanley . [ 2 ] However, it was first measured experimentally in 1956 by Jones and Walker, [ 3 ] and subsequently named the 'hypercritical line' by Bernal in 1964, [ 4 ] who suggested a structural interpretation.
A common criterion for the Widom line is a peak in the isobaric heat capacity. [ 5 ] [ 6 ] In the subcritical region, the phase transition is associated with an effective spike in the heat capacity (i.e., the latent heat ). Approaching the critical point, the latent heat falls to zero but this is accompanied by a gradual rise in heat capacity in the pure phases near phase transition. At the critical point, the latent heat is zero but the heat capacity shows a diverging singularity. Beyond the critical point, there is no divergence, but rather a smooth peak in the heat capacity; the highest point of this peak identifies the Widom line.
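This heat-capacity criterion is easy to illustrate numerically: at a fixed supercritical pressure, scan the temperature and locate the maximum of the isobaric heat capacity. The sketch below assumes the CoolProp property library and uses CO 2 simply as a convenient example fluid.

```python
# Locate the Widom-line temperature (the cp maximum) of CO2 at a few
# supercritical pressures. Assumes the CoolProp property library.
import numpy as np
from CoolProp.CoolProp import PropsSI

T_crit = PropsSI("Tcrit", "CO2")          # ~304 K
P_crit = PropsSI("pcrit", "CO2")          # ~7.4 MPa

temperatures = np.linspace(T_crit + 0.5, T_crit + 60.0, 400)
for pressure in (1.05 * P_crit, 1.2 * P_crit, 1.5 * P_crit):
    cp = [PropsSI("C", "T", T, "P", pressure, "CO2") for T in temperatures]
    T_widom = temperatures[int(np.argmax(cp))]
    print(f"P = {pressure / 1e6:5.2f} MPa -> cp peak (Widom line) near T = {T_widom:6.2f} K")
```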
The Frenkel line is a boundary between "rigid" and "non-rigid" fluids characterized by the onset of transverse sound modes. [ 7 ] One of the criteria for locating the Frenkel line is based on the velocity autocorrelation function (vacf): below the Frenkel line the vacf demonstrates oscillatory behaviour, while above it the vacf monotonically decays to zero. The second criterion is based on the fact that at moderate temperatures liquids can sustain transverse excitations, which disappear upon heating. One further criterion is based on isochoric heat capacity measurements. The isochoric heat capacity per particle of a monatomic liquid near to the melting line is close to 3 k B {\displaystyle 3k_{B}} (where k B {\displaystyle k_{B}} is the Boltzmann constant ). The contribution to the heat capacity due to the potential part of transverse excitations is 1 k B {\displaystyle 1k_{B}} . Therefore at the Frenkel line, where transverse excitations vanish, the isochoric heat capacity per particle should be c V = 2 k B {\displaystyle c_{V}=2k_{B}} , a direct prediction from the phonon theory of liquid thermodynamics. [ 8 ] [ 9 ] [ 10 ]
Anisimov et al. (2004), [ 11 ] without referring to Frenkel, Fisher, or Widom, reviewed thermodynamic derivatives (specific heat, expansion coefficient, compressibility) and transport coefficients (viscosity, speed of sound) in supercritical water, and found pronounced extrema as a function of pressure up to 100 K above the critical temperature. | https://en.wikipedia.org/wiki/Supercritical_liquid–gas_boundaries |
A supercritical steam generator is a type of boiler that operates at supercritical pressure and temperature, frequently used in the production of electric power .
In contrast to a subcritical boiler in which steam bubbles form, a supercritical steam generator operates above the critical pressure – 22 megapascals (3,200 psi ) and temperature 374 °C (705 °F). Under these conditions, the liquid water density decreases smoothly with no phase change, becoming indistinguishable from steam . The water temperature drops below the critical point as it does work in a high pressure turbine and enters the generator's condenser , resulting in slightly less fuel use. The efficiency of power plants with supercritical steam generators is higher than with subcritical steam because thermodynamic efficiency is directly related to the magnitude of their temperature drop. At supercritical pressure the higher temperature steam is converted more efficiently to mechanical energy in the turbine (as given by Carnot's theorem ).
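The temperature argument can be made concrete with Carnot's bound, the ideal upper limit on the fraction of heat that can be converted to work. The sketch below uses illustrative steam temperatures only; real plant efficiencies are well below these ideal limits.

```python
# Carnot upper bounds on thermal efficiency for representative steam
# temperatures (illustrative values only; real plants fall well short).
def carnot_efficiency(t_hot_c, t_cold_c=30.0):
    """Ideal Carnot efficiency between hot and cold reservoirs, in percent."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 100.0 * (1.0 - t_cold / t_hot)

for label, t_steam in [("subcritical", 540.0),
                       ("supercritical", 600.0),
                       ("advanced ultra-supercritical", 700.0)]:
    print(f"{label:29s} {t_steam:5.0f} degC -> Carnot limit {carnot_efficiency(t_steam):4.1f} %")
```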
Technically, the term "boiler" should not be used for a supercritical pressure steam generator as boiling does not occur.
Contemporary supercritical steam generators are sometimes referred to as Benson boilers. [ 1 ] In 1922, Mark Benson was granted a patent for a boiler designed to convert water into steam at high pressure.
Safety was the main concern behind Benson's concept. Earlier steam generators were designed for relatively low pressures of up to about 100 bar (10 MPa ; 1,450 psi ), corresponding to the state of the art in steam turbine development at the time. One of their distinguishing technical characteristics was the riveted water/steam separator drum. These drums were where the water filled tubes were terminated after having passed through the boiler furnace.
These header drums were intended to be partially filled with water and above the water there was a baffle filled space where the boiler's steam and water vapour collected. The entrained water droplets were collected by the baffles and returned to the water pan. The mostly-dry steam was piped out of the drum as the separated steam output of the boiler. These drums were often the source of boiler explosions , usually with catastrophic consequences.
However, this drum could be completely eliminated if the evaporation separation process was avoided altogether. This would happen if water entered the boiler at a pressure above the critical pressure (3,206 pounds per square inch, 22.10 MPa); was heated to a temperature above the critical temperature (706 °F, 374 °C) and then expanded (through a simple nozzle) to dry steam at some lower subcritical pressure. This could be obtained at a throttle valve located downstream of the evaporator section of the boiler.
As development of Benson technology continued, boiler design soon moved away from the original concept introduced by Mark Benson. In 1929, a test boiler that had been built in 1927 began operating in the thermal power plant at Gartenfeld in Berlin for the first time in subcritical mode with a fully open throttle valve. The second Benson boiler began operation in 1930 without a pressurizing valve at pressures between 40 and 180 bar (4 and 18 MPa; 580 and 2,611 psi) at the Berlin cable factory. This application represented the birth of the modern variable-pressure Benson boiler. After that development, the original patent was no longer used. The "Benson boiler" name, however, was retained.
1957: Unit 6 at the Philo Power Plant in Philo, Ohio was the first commercial supercritical steam-electric generating unit in the world, [ 2 ] and it could operate short-term at ultra-supercritical levels. [ 3 ] It took until 2012 for the first US coal-fired plant designed to operate at ultra-supercritical temperatures to be opened, John W. Turk Jr. Coal Plant in Arkansas . [ 4 ]
Two innovations have been projected to improve once-through steam generators [ citation needed ] :
On 3 June 2014, the Australian government's research organization CSIRO announced that they had generated 'supercritical steam' at a pressure of 23.5 MPa (3,410 psi) and 570 °C (1,060 °F) in what it claims is a world record for solar thermal energy. [ 5 ]
These definitions regarding steam generation were found in a report on coal production in China investigated by the Center for American Progress . [ 6 ]
Nuclear power plant steam typically enters turbines at subcritical values – for U-Tube Steam Generators 77 bar (1,117 psi) and 294 °C (561 °F), with comparable temperature and pressure for Once Through Steam Generators type. [ 7 ]
The term "advanced ultra-supercritical" (AUSC) or "700°C technology" is sometimes used to describe generators where the water is above 700 °C (1,292 °F). [ 8 ]
The term High-Efficiency, Low-Emissions ("HELE") has been used by the coal industry to describe supercritical and ultra-supercritical coal generation. [ 9 ] [ 10 ]
Industry leading (as of 2019) Mitsubishi Hitachi Power Systems charts its gas turbine combined cycle power generation efficiency ( lower heating value ) at well under 55% for gas turbine inlet temp of 1,250 °C (2,282 °F), roughly 56% for 1,400 °C (2,552 °F), about 58% for 1,500 °C (2,732 °F), and 64% for 1,600 °C (2,912 °F), all of which far exceed (due to Carnot efficiency) thresholds for AUSC or Ultra-supercritical technology, which are still limited by the steam temperature. [ 11 ] | https://en.wikipedia.org/wiki/Supercritical_steam_generator |
Supercritical water oxidation ( SCWO ) is a process that occurs in water at temperatures and pressures above a mixture's thermodynamic critical point . Under these conditions water becomes a fluid with unique properties that can be used to advantage in the destruction of recalcitrant and hazardous wastes such as polychlorinated biphenyls (PCB) or per- and polyfluoroalkyl substances (PFAS). Supercritical water has a density between that of water vapor and liquid at standard conditions, and exhibits high gas -like diffusion rates along with high liquid -like collision rates. In addition, the behavior of water as a solvent is altered (in comparison to that of subcritical liquid water) - it behaves much less like a polar solvent. As a result, the solubility behavior is "reversed" so that oxygen, and organics such as chlorinated hydrocarbons become soluble in the water, allowing single-phase reaction of aqueous waste with a dissolved oxidizer . The reversed solubility also causes salts to precipitate out of solution, meaning they can be treated using conventional methods for solid-waste residuals. Efficient oxidation reactions occur at low temperature (400-650 °C) with reduced NOx production.
SCWO can be classified as green chemistry or as a clean technology. The elevated pressures and temperatures required for SCWO are routinely encountered in industrial applications such as petroleum refining and chemical synthesis.
A unique addition (mostly of academic interest) to the world of supercritical water (SCW) oxidation is generating high-pressure flames inside the SCW medium. The pioneering work on high-pressure supercritical water flames was carried out by Professor E. U. Franck at the University of Karlsruhe in Germany in the late 1980s. The work was mainly aimed at anticipating conditions which would cause spontaneous generation of undesirable flames in the flameless SCW oxidation process. These flames would cause instabilities in the system and its components. ETH Zurich pursued the investigation of hydrothermal flames in continuously operated reactors. The rising need for waste treatment and destruction methods motivated a Japanese group at the Ebara Corporation to explore SCW flames as an environmental tool. Research on hydrothermal flames has also begun at NASA Glenn Research Center in Cleveland, Ohio.
Basic research on supercritical water oxidation was undertaken in the 1990s at Sandia National Laboratory's Combustion Research Facility (CRF), in Livermore, CA. Originally proposed as a hazardous waste destruction technology in response to the Kyoto protocol , multiple waste streams were studied by Steven F. Rice and Russ Hanush, and hydrothermal (supercritical water) flames were investigated by Richard R. Steeper and Jason D. Aiken. Among the waste streams studied were military dyes and pyrotechnics, [ 1 ] [ 2 ] methanol, [ 3 ] [ 4 ] and isopropyl alcohol. [ 5 ] Hydrogen peroxide was used as an oxidizing agent, and Eric Croiset was tasked with detailed measurements of the decomposition of hydrogen peroxide at supercritical water conditions. [ 6 ]
In mid-1992, Thomas G. McGuinness, PE invented what is now known as the "transpiring-wall SCWO reactor" (TWR) while seconded to Los Alamos National Laboratory on behalf of Summit Research Corporation. McGuinness subsequently received the first US patent for a TWR in early 1995. The TWR was designed to mitigate problems of salt/solids deposition, corrosion and thermal limitations occurring in other SCWO reactor designs (e.g. tubular and vat-type reactors) at the time. The upper part of the vertical reactor incorporates a permeable liner through which a clean fluid permeates to help prevent salts and other solids from accumulating at the inner surface of the liner. The liner also insulates the outer pressure containment vessel from high temperatures within the reaction zone. The liner can be manufactured from a variety of materials resistant to corrosion and high reaction temperatures. The bottom end of the TWR incorporates a "quench cooler" for cooling the reaction byproducts while neutralizing any components that might form acids during transition to subcritical temperature. Proof-of-concept and performance advantages of the TWR for a variety of feedstocks were demonstrated by Eckhard Dinjus and Johannes Abeln at Forschungszentrum Karlsruhe (FZK), via direct comparison between a TWR and an adjacent tubular reactor.
Major engineering challenges were associated with the deposition of salts [ 7 ] and chemical corrosion in these supercritical water reactors. Anthony Lajeunesse led the team investigating these issues. To address these issues Lajeunesse designed a transpiring wall reactor [ 8 ] which introduced a pressure differential through the walls of an inner sleeve filled with pores to continuously rinse the inner walls of the reactor with fresh water. Russ Hanush was charged with the construction and operation of the supercritical fluids reactor (SFR) [ 9 ] used for these studies. Among its design intricacies were the Inconel 625 alloy necessary for operation at such extreme temperatures and pressures, and the design of the high-pressure, high-temperature optical cells used for photometric access to the reacting flows which incorporated 24 carat gold pressure seals and sapphire windows. [ 10 ] [ 11 ]
Several companies in the United States are now working to commercialize supercritical reactors to destroy hazardous wastes . Widespread commercial application of SCWO technology requires a reactor design capable of resisting fouling and corrosion under supercritical conditions. [ 12 ]
In Japan a number of commercial SCWO applications exist, among them one unit for treatment of halogenated waste built by Organo. In Korea two commercial size units have been built by Hanwha . [1]
In Europe, Chematur Engineering AB of Sweden commercialized the SCWO technology for treatment of spent chemical catalysts to recover the precious metal, the AquaCat process. The unit has been built for Johnson Matthey in the UK. It is the only commercial SCWO unit in Europe and with its capacity of 3000 l/h it is the largest SCWO unit in the world. Chematur's Super Critical Fluids technology was acquired by SCFI Group ( Cork, Ireland ) who are actively commercializing the Aqua Critox SCWO process for treatment of sludge, e.g. de-inking sludge and sewage sludge. Many long duration trials on these applications have been made and thanks to the high destruction efficiency of 99.9%+ the solid residue after the SCWO process is well suited for recycling – in the case of de-inking sludge as paper filler and in the case of sewage sludge as phosphorus and coagulant. SCFI Group operate a 250 l/h Aqua Critox demonstration plant in Cork, Ireland.
AquaNova Technologies, Inc. https://aquanovatech.com is actively commercializing their 2nd-generation transpiring-wall SCWO reactor ("TWR") with a focus on waste treatment and renewable energy applications. AquaNova's patent-pending TWR-SCWO technology is projected to treat a broad variety of wastes, including PFAS, while generating electric power with improved system thermal efficiency. AquaNova's paradigm-changing technology is designed to operate at supercritical and sub-critical pressures, and at higher reaction temperatures than traditional SCWO technology. AquaNova is targeting larger-scale industrial applications. AquaNova Technologies was founded by Tom McGuinness, PE, who is the original inventor of the transpiring-wall reactor (TWR) under US patent 5,384,051.
374Water Inc. is a company offering commercial SCWO systems that convert organic wastes to clean water, energy and minerals. It was spun out after more than seven years of research and development funded by the Bill & Melinda Gates Foundation in Prof. Deshusses' laboratory at Duke University. [ 13 ] The founders of 374Water, Prof. Marc Deshusses and Kobe Nagar, hold the waste processing reactor patent relevant to SCWO. [ 14 ] 374Water is actively commercializing its AirSCWO systems for the treatment of biosolids and wastewater sludges, organic chemical wastes, and PFAS wastes including unspent Aqueous Film Forming Foams (AFFFs), rinsates or spent resins and adsorption media. The first commercial sale was announced in February 2022. [ 15 ]
Aquarden Technologies (Skaevinge, Denmark) provides modular SCWO plants for the destruction of hazardous pollutants such as PFAS, pesticides, and other problematic hydrocarbons in industrial wastestreams. [ 16 ] Aquarden is also providing remediation of hazardous energetic wastes and chemical warfare agents with SCWO, where a full-scale SCWO system has been operating for some years in France for the Defense Industry.
Revive Environmental Technology , based in the United States, has commercialized a transportable SCWO-based system known as the PFAS Annihilator® for the destruction of per- and polyfluoroalkyl substances (PFAS). The system has demonstrated 99.99% destruction efficiency across a broad range of PFAS compounds, including long-chain and short-chain variants, in diverse matrices such as landfill leachate, aqueous film-forming foams (AFFFs), and industrial wastewater. The technology has been proven both in laboratory settings and in permitted commercial operations in Columbus, Ohio, and Grand Rapids, Michigan, with third-party laboratories validating its performance through a formal certificate of destruction protocol. [ 17 ] [ 18 ]
There are some research groups working on this topic throughout the world. | https://en.wikipedia.org/wiki/Supercritical_water_oxidation
A supercurrent is a superconducting current, that is, electric current which flows without dissipation in a superconductor . [ 1 ] [ 2 ] [ 3 ] Under certain conditions, an electric current can also flow without dissipation in microscopically small non-superconducting metals. However, currents in such perfect conductors are not called supercurrents, but persistent currents .
| https://en.wikipedia.org/wiki/Supercurrent
In nuclear physics a superdeformed nucleus is a nucleus that is very far from spherical , forming an ellipsoid with axes in ratios of approximately 2:1:1. [ 1 ] Normal deformation is approximately 1.3:1:1. Only some nuclei can exist in superdeformed states.
The first superdeformed states to be observed were the fission isomers , low-spin states of elements in the actinide series. The strong force decays much faster than the Coulomb force , which therefore dominates when nucleons are more than 2.5 femtometers apart. For this reason, these elements undergo spontaneous fission . In the late 1980s, high-spin superdeformed rotational bands were observed in other regions of the periodic table. Specific elements include ruthenium , rhodium , palladium , silver , osmium , iridium , platinum , gold , and mercury .
The existence of superdeformed states occurs because of a combination of macroscopic and microscopic factors, which together lower their energies, and make them stable minima of energy as a function of deformation. Macroscopically, the nucleus can be described by the liquid drop model . The liquid drop's energy as a function of deformation is at a minimum for zero deformation, due to the surface tension term. However, the curve may become soft with respect to high deformations because of the Coulomb repulsion (especially for the fission isomers, which have high Z) and also, in the case of high-spin states, because of the increased moment of inertia. Modulating this macroscopic behavior, the microscopic shell correction creates certain superdeformed magic numbers that are analogous to the spherical magic numbers. For nuclei near these magic numbers, the shell correction creates a second minimum in the energy as a function of deformation.
Even more deformed states (3:1) are called hyperdeformed .
| https://en.wikipedia.org/wiki/Superdeformation
Superdiamagnetism (or perfect diamagnetism ) is a phenomenon occurring in certain materials at low temperatures , characterised by the complete absence of magnetic permeability (i.e. a volume magnetic susceptibility χ V {\displaystyle \chi _{\rm {V}}} = −1) and the exclusion of the interior magnetic field .
Superdiamagnetism established that the superconductivity of a material was a stage of phase transition . Superconducting magnetic levitation is due to superdiamagnetism, which repels a permanent magnet which approaches the superconductor, and flux pinning , which prevents the magnet floating away.
Superdiamagnetism is a feature of superconductivity . It was identified in 1933, by Walther Meissner and Robert Ochsenfeld , but it is considered distinct from the Meissner effect which occurs when the superconductivity first forms, and involves the exclusion of magnetic fields that already penetrate the object.
Fritz London and Heinz London developed the theory that the exclusion of magnetic flux is brought about by electrical screening currents that flow at the surface of the superconducting material and which generate a magnetic field that exactly cancels the externally applied field inside the superconductor. These screening currents are generated whenever a superconducting material is brought inside a magnetic field. This can be understood by the fact that a superconductor has zero electrical resistance, so that eddy currents , induced by the motion of the material inside a magnetic field, will not decay. Fritz London, at the Royal Society in 1935, stated that the thermodynamic state would be described by a single wave function .
"Screening currents" also appear in a situation wherein an initially normal, conducting metal is placed inside a magnetic field. As soon as the metal is cooled below the appropriate transition temperature, it becomes superconducting. This expulsion of magnetic field upon the cooling of the metal cannot be explained any longer by merely assuming zero resistance and is called the Meissner effect . It shows that the superconducting state does not depend on the history of preparation, only upon the present values of temperature , pressure and magnetic field, and therefore is a true thermodynamic state . | https://en.wikipedia.org/wiki/Superdiamagnetism |
In geometry , a superegg is a solid of revolution obtained by rotating an elongated superellipse with exponent greater than 2 around its longest axis . It is a special case of superellipsoid .
Unlike an elongated ellipsoid , an elongated superegg can stand upright on a flat surface, or on top of another superegg. [ 1 ] This is due to its curvature being zero at the tips. The shape was popularized by Danish poet and scientist Piet Hein (1905–1996). Supereggs of various materials, including brass, were sold as novelties or " executive toys " in the 1960s.
The superegg is a superellipsoid whose horizontal cross-sections are circles. It is defined by the inequality
{\displaystyle \left({\frac {\sqrt {x^{2}+y^{2}}}{R}}\right)^{p}+\left({\frac {|z|}{h}}\right)^{p}\leq 1}
where R is the horizontal radius at the "equator" (the widest part as defined by the circles), and h is one half of the height. The exponent p determines the degree of flattening at the tips and equator. Hein's choice was p = 2.5 (the same one he used for the Sergels Torg roundabout), and R / h = 6/5. [ 2 ]
The definition can be changed to have an equality rather than an inequality; this changes the superegg to being a surface of revolution rather than a solid. [ 3 ]
The volume of a superegg can be derived via squigonometry , a generalization of trigonometry to squircles . [ 4 ] It is related to the gamma function :
V = 4 π h R 2 3 p Γ ( 1 / p ) Γ ( 2 / p ) Γ ( 3 / p ) . {\displaystyle V={\frac {4\pi hR^{2}}{3p}}{\frac {\Gamma (1/p)\Gamma (2/p)}{\Gamma (3/p)}}\,.}
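A small numerical sketch of the definitions above: the point-inclusion test from the defining inequality and the volume from the gamma-function formula, using Hein's values p = 2.5 and R/h = 6/5 (here R = 1.2, h = 1).

```python
# Superegg point test and volume, following the inequality and the
# gamma-function volume formula above (Hein's p = 2.5, R/h = 6/5).
from math import gamma, hypot, pi

def inside_superegg(x, y, z, R=1.2, h=1.0, p=2.5):
    """True if (x, y, z) satisfies (sqrt(x^2 + y^2)/R)^p + (|z|/h)^p <= 1."""
    return (hypot(x, y) / R) ** p + (abs(z) / h) ** p <= 1.0

def superegg_volume(R=1.2, h=1.0, p=2.5):
    return (4 * pi * h * R**2 / (3 * p)) * gamma(1 / p) * gamma(2 / p) / gamma(3 / p)

print(inside_superegg(0.0, 0.0, 0.99))     # near the tip -> True
print(inside_superegg(1.19, 0.0, 0.5))     # too wide at this height -> False
print(f"volume = {superegg_volume():.4f}")
# Sanity check: p = 2 reduces to an ellipsoid of volume 4/3*pi*R^2*h.
print(f"p = 2 check: {superegg_volume(p=2):.4f} vs {4 / 3 * pi * 1.2**2 * 1.0:.4f}")
```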
| https://en.wikipedia.org/wiki/Superegg
In mathematics , a superellipsoid (or super-ellipsoid ) is a solid whose horizontal sections are superellipses (Lamé curves) with the same squareness parameter ϵ 2 {\displaystyle \epsilon _{2}} , and whose vertical sections through the center are superellipses with the squareness parameter ϵ 1 {\displaystyle \epsilon _{1}} . It is a generalization of an ellipsoid, which is a special case when ϵ 1 = ϵ 2 = 1 {\displaystyle \epsilon _{1}=\epsilon _{2}=1} . [ 2 ]
Superellipsoids as computer graphics primitives were popularized by Alan H. Barr (who used the name " superquadrics " to refer to both superellipsoids and supertoroids ). [ 2 ] [ 3 ] In modern computer vision and robotics literatures, superquadrics and superellipsoids are used interchangeably, since superellipsoids are the most representative and widely utilized shape among all the superquadrics. [ 4 ] [ 5 ]
Superellipsoids have a rich shape vocabulary, including cuboids, cylinders, ellipsoids, octahedra and their intermediates. [ 6 ] They have become an important geometric primitive widely used in computer vision, [ 6 ] [ 5 ] [ 7 ] robotics, [ 4 ] and physical simulation. [ 8 ] The main advantage of describing objects and environments with superellipsoids is their conciseness and expressiveness in shape. [ 6 ] Furthermore, a closed-form expression of the Minkowski sum between two superellipsoids is available. [ 9 ] This makes them a desirable geometric primitive for robot grasping, collision detection, and motion planning. [ 4 ]
A handful of notable mathematical figures can arise as special cases of superellipsoids given the correct set of values, including spheres, cylinders, cuboids, and octahedra.
Piet Hein 's supereggs are also special cases of superellipsoids.
The basic superellipsoid is defined by the implicit function
{\displaystyle f(x,y,z)=\left(|x|^{\frac {2}{\epsilon _{2}}}+|y|^{\frac {2}{\epsilon _{2}}}\right)^{\frac {\epsilon _{2}}{\epsilon _{1}}}+|z|^{\frac {2}{\epsilon _{1}}}}
The parameters ϵ 1 {\displaystyle \epsilon _{1}} and ϵ 2 {\displaystyle \epsilon _{2}} are positive real numbers that control the squareness of the shape.
The surface of the superellipsoid is defined by the equation:
f ( x , y , z ) = 1 {\displaystyle f(x,y,z)=1}
For any given point ( x , y , z ) ∈ R 3 {\displaystyle (x,y,z)\in \mathbb {R} ^{3}} , the point lies inside the superellipsoid if f ( x , y , z ) < 1 {\displaystyle f(x,y,z)<1} , and outside if f ( x , y , z ) > 1 {\displaystyle f(x,y,z)>1} .
Any " parallel of latitude " of the superellipsoid (a horizontal section at any constant z between -1 and +1) is a Lamé curve with exponent 2 / ϵ 2 {\displaystyle 2/\epsilon _{2}} , scaled by a = ( 1 − z 2 ϵ 1 ) ϵ 1 2 {\displaystyle a=(1-z^{\frac {2}{\epsilon _{1}}})^{\frac {\epsilon _{1}}{2}}} , which is
{\displaystyle \left|{\frac {x}{a}}\right|^{\frac {2}{\epsilon _{2}}}+\left|{\frac {y}{a}}\right|^{\frac {2}{\epsilon _{2}}}=1}
Any " meridian of longitude " (a section by any vertical plane through the origin) is a Lamé curve with exponent 2 / ϵ 1 {\displaystyle 2/\epsilon _{1}} , stretched horizontally by a factor w that depends on the sectioning plane. Namely, if x = u cos θ {\displaystyle x=u\cos \theta } and y = u sin θ {\displaystyle y=u\sin \theta } , for a given θ {\displaystyle \theta } , then the section is
{\displaystyle \left|{\frac {u}{w}}\right|^{\frac {2}{\epsilon _{1}}}+|z|^{\frac {2}{\epsilon _{1}}}=1}
where
{\displaystyle w=\left(|\cos \theta |^{\frac {2}{\epsilon _{2}}}+|\sin \theta |^{\frac {2}{\epsilon _{2}}}\right)^{-{\frac {\epsilon _{2}}{2}}}}
In particular, if ϵ 2 {\displaystyle \epsilon _{2}} is 1, the horizontal cross-sections are circles, and the horizontal stretching w {\displaystyle w} of the vertical sections is 1 for all planes. In that case, the superellipsoid is a solid of revolution , obtained by rotating the Lamé curve with exponent 2 / ϵ 1 {\displaystyle 2/\epsilon _{1}} around the vertical axis.
The basic shape above extends from −1 to +1 along each coordinate axis. The general superellipsoid is obtained by scaling the basic shape along each axis by factors a x {\displaystyle a_{x}} , a y {\displaystyle a_{y}} , a z {\displaystyle a_{z}} , the semi-diameters of the resulting solid. The implicit function is [ 2 ]
{\displaystyle F(x,y,z)=\left(\left|{\frac {x}{a_{x}}}\right|^{\frac {2}{\epsilon _{2}}}+\left|{\frac {y}{a_{y}}}\right|^{\frac {2}{\epsilon _{2}}}\right)^{\frac {\epsilon _{2}}{\epsilon _{1}}}+\left|{\frac {z}{a_{z}}}\right|^{\frac {2}{\epsilon _{1}}}}
Similarly, the surface of the superellipsoid is defined by the equation
F ( x , y , z ) = 1 {\displaystyle F(x,y,z)=1}
For any given point ( x , y , z ) ∈ R 3 {\displaystyle (x,y,z)\in \mathbb {R} ^{3}} , the point lies inside the superellipsoid if F ( x , y , z ) < 1 {\displaystyle F(x,y,z)<1} , and outside if F ( x , y , z ) > 1 {\displaystyle F(x,y,z)>1} .
Therefore, the implicit function is also called the inside-outside function of the superellipsoid. [ 2 ]
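A direct transcription of the inside–outside function into code (a minimal sketch; the parameter names mirror the symbols above):

```python
# Inside-outside function F of an axis-aligned, origin-centred superellipsoid.
# F < 1 inside, F = 1 on the surface, F > 1 outside.
def superellipsoid_F(x, y, z, ax, ay, az, eps1, eps2):
    xy = (abs(x / ax) ** (2 / eps2) + abs(y / ay) ** (2 / eps2)) ** (eps2 / eps1)
    return xy + abs(z / az) ** (2 / eps1)

# A unit sphere is the special case ax = ay = az = 1, eps1 = eps2 = 1.
print(superellipsoid_F(0.5, 0.5, 0.5, 1, 1, 1, 1, 1))          # 0.75 -> inside
print(superellipsoid_F(1.0, 0.0, 0.0, 1, 1, 1, 1, 1))          # 1.0  -> on the surface
# Small squareness parameters give a box-like shape with sharp corners.
print(superellipsoid_F(0.9, 0.9, 0.9, 1, 1, 1, 0.1, 0.1) < 1)  # True: inside the "cube"
```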
The superellipsoid has a parametric representation in terms of surface parameters η ∈ [ − π / 2 , π / 2 ) {\displaystyle \eta \in [-\pi /2,\pi /2)} , ω ∈ [ − π , π ) {\displaystyle \omega \in [-\pi ,\pi )} : [ 3 ]
{\displaystyle {\begin{aligned}x&=a_{x}\,|\cos \eta |^{\epsilon _{1}}\operatorname {sgn}(\cos \omega )\,|\cos \omega |^{\epsilon _{2}}\\y&=a_{y}\,|\cos \eta |^{\epsilon _{1}}\operatorname {sgn}(\sin \omega )\,|\sin \omega |^{\epsilon _{2}}\\z&=a_{z}\operatorname {sgn}(\sin \eta )\,|\sin \eta |^{\epsilon _{1}}\end{aligned}}}
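The parametric form can be checked against the implicit equation by sampling random surface points and evaluating F. The snippet below is a sketch with arbitrarily chosen shape parameters; the signed power function handles the sign conventions of the parametrisation.

```python
# Sample points from the parametric representation and confirm that each one
# satisfies the implicit equation F(x, y, z) = 1. Shape parameters are arbitrary.
import numpy as np

def spow(base, expo):
    """Signed power: sgn(base) * |base|**expo."""
    return np.sign(base) * np.abs(base) ** expo

def surface_point(eta, omega, ax, ay, az, eps1, eps2):
    x = ax * np.abs(np.cos(eta)) ** eps1 * spow(np.cos(omega), eps2)
    y = ay * np.abs(np.cos(eta)) ** eps1 * spow(np.sin(omega), eps2)
    z = az * spow(np.sin(eta), eps1)
    return x, y, z

rng = np.random.default_rng(0)
ax, ay, az, eps1, eps2 = 2.0, 1.0, 0.5, 0.8, 1.3
for _ in range(3):
    eta = rng.uniform(-np.pi / 2, np.pi / 2)
    omega = rng.uniform(-np.pi, np.pi)
    x, y, z = surface_point(eta, omega, ax, ay, az, eps1, eps2)
    F = (abs(x / ax) ** (2 / eps2) + abs(y / ay) ** (2 / eps2)) ** (eps2 / eps1) \
        + abs(z / az) ** (2 / eps1)
    print(round(F, 6))   # ~1.0 for every sampled surface point
```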
In computer vision and robotic applications, a superellipsoid with a general pose in the 3D Euclidean space is usually of more interest. [ 6 ] [ 5 ]
For a given Euclidean transformation of the superellipsoid frame g = [ R ∈ S O ( 3 ) , t ∈ R 3 ] ∈ S E ( 3 ) {\displaystyle g=[\mathbf {R} \in SO(3),\mathbf {t} \in \mathbb {R} ^{3}]\in SE(3)} relative to the world frame, the implicit function of a generally posed superellipsoid surface defined in the world frame is [ 6 ]
F ( g − 1 ∘ ( x , y , z ) ) = 1 {\displaystyle F\left(g^{-1}\circ (x,y,z)\right)=1}
where ∘ {\displaystyle \circ } is the transformation operation that maps the point ( x , y , z ) ∈ R 3 {\displaystyle (x,y,z)\in \mathbb {R} ^{3}} in the world frame into the canonical superellipsoid frame.
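In code, evaluating the posed implicit function amounts to mapping the world-frame point back into the canonical frame with g⁻¹ (applying Rᵀ after subtracting t) before using the axis-aligned formula. The pose and shape parameters below are illustrative.

```python
# Inside-outside value of a posed superellipsoid: apply g^{-1} to the world
# point, then use the canonical-frame formula. Pose/shape values are examples.
import numpy as np

def posed_F(p_world, R, t, ax, ay, az, eps1, eps2):
    x, y, z = R.T @ (np.asarray(p_world, dtype=float) - t)   # g^{-1} o p
    xy = (abs(x / ax) ** (2 / eps2) + abs(y / ay) ** (2 / eps2)) ** (eps2 / eps1)
    return xy + abs(z / az) ** (2 / eps1)

theta = np.deg2rad(30.0)                       # rotation about the world z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 2.0, 0.5])                  # translation of the superellipsoid frame

print(posed_F(t, R, t, 2.0, 1.0, 0.5, 0.8, 1.3))            # centre -> 0.0 (inside)
tip = t + R @ np.array([2.0, 0.0, 0.0])                     # the +x tip in world coordinates
print(posed_F(tip, R, t, 2.0, 1.0, 0.5, 0.8, 1.3))          # -> 1.0 (on the surface)
```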
The volume encompassed by the superellipsoid surface can be expressed in terms of the beta function β ( ⋅ , ⋅ ) {\displaystyle \beta (\cdot ,\cdot )} , [ 10 ]
V ( ϵ 1 , ϵ 2 , a x , a y , a z ) = 2 a x a y a z ϵ 1 ϵ 2 β ( ϵ 1 2 , ϵ 1 + 1 ) β ( ϵ 2 2 , ϵ 2 + 2 2 ) {\displaystyle V(\epsilon _{1},\epsilon _{2},a_{x},a_{y},a_{z})=2a_{x}a_{y}a_{z}\epsilon _{1}\epsilon _{2}\beta ({\frac {\epsilon _{1}}{2}},\epsilon _{1}+1)\beta ({\frac {\epsilon _{2}}{2}},{\frac {\epsilon _{2}+2}{2}})}
or equivalently with the Gamma function Γ ( ⋅ ) {\displaystyle \Gamma (\cdot )} , since
β ( m , n ) = Γ ( m ) Γ ( n ) Γ ( m + n ) {\displaystyle \beta (m,n)={\frac {\Gamma (m)\Gamma (n)}{\Gamma (m+n)}}}
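The volume formula can be evaluated directly with SciPy's beta function; the ellipsoid and box limits provide quick consistency checks. This is a sketch with example parameter values.

```python
# Volume of a superellipsoid from the beta-function formula above.
from math import pi
from scipy.special import beta

def superellipsoid_volume(eps1, eps2, ax, ay, az):
    return (2 * ax * ay * az * eps1 * eps2
            * beta(eps1 / 2, eps1 + 1)
            * beta(eps2 / 2, (eps2 + 2) / 2))

# eps1 = eps2 = 1 is an ellipsoid: volume 4/3 * pi * ax * ay * az.
print(superellipsoid_volume(1, 1, 1, 2, 3), 4 / 3 * pi * 1 * 2 * 3)
# eps1 = eps2 -> 0 approaches a box with semi-axes ax, ay, az: volume 8 * ax * ay * az.
print(superellipsoid_volume(0.01, 0.01, 1, 1, 1))   # close to 8
```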
Recovering the superellipsoid (or superquadrics) representation from raw data (e.g., point cloud, mesh, images, and voxels) is an important task in computer vision, [ 11 ] [ 7 ] [ 6 ] [ 5 ] robotics, [ 4 ] and physical simulation. [ 8 ]
Traditional computational methods model the problem as a least-squares problem. [ 11 ] The goal is to find the optimal set of superellipsoid parameters θ ≐ [ ϵ 1 , ϵ 2 , a x , a y , a z , g ] {\displaystyle \theta \doteq [\epsilon _{1},\epsilon _{2},a_{x},a_{y},a_{z},g]} that minimizes an objective function. In addition to the shape parameters, g ∈ {\displaystyle g\in } SE(3) is the pose of the superellipsoid frame with respect to the world coordinate frame.
There are two commonly used objective functions. [ 12 ] The first one is constructed directly based on the implicit function [ 11 ]
G 1 ( θ ) = a x a y a z ∑ i = 1 N ( F ϵ 1 ( g − 1 ∘ ( x i , y i , z i ) ) − 1 ) 2 {\displaystyle G_{1}(\theta )=a_{x}a_{y}a_{z}\sum _{i=1}^{N}\left(F^{\epsilon _{1}}\left(g^{-1}\circ (x_{i},y_{i},z_{i})\right)-1\right)^{2}}
The minimization of the objective function provides a recovered superellipsoid as close as possible to all the input points { ( x i , y i , z i ) ∈ R 3 , i = 1 , 2 , . . . , N } {\displaystyle \{(x_{i},y_{i},z_{i})\in \mathbb {R} ^{3},i=1,2,...,N\}} . At the same time, the factor a x a y a z {\displaystyle a_{x}a_{y}a_{z}} is proportional to the volume of the superellipsoid, and thus also has the effect of minimizing the volume.
The other objective function tries to minimize the radial distance between the points and the superellipsoid. That is [ 13 ] [ 12 ]
G 2 ( θ ) = ∑ i = 1 N ( | r i | | 1 − F − ϵ 1 2 ( g − 1 ∘ ( x i , y i , z i ) ) | ) 2 {\displaystyle G_{2}(\theta )=\sum _{i=1}^{N}\left(\left|r_{i}\right|\left|1-F^{-{\frac {\epsilon _{1}}{2}}}\left(g^{-1}\circ (x_{i},y_{i},z_{i})\right)\right|\right)^{2}} , where r i = ‖ ( x i , y i , z i ) ‖ 2 {\displaystyle r_{i}=\|(x_{i},y_{i},z_{i})\|_{2}}
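The sketch below shows the shape of such a fit in practice, using residuals in the spirit of the first objective G₁ and SciPy's least-squares solver. To keep it short, the pose g is fixed to the identity (an axis-aligned superellipsoid) and the data are synthetic noisy surface points; a full implementation would also optimize over the pose and might use the radial objective G₂ or the probabilistic formulation discussed next.

```python
# Least-squares recovery of axis-aligned superellipsoid parameters from noisy
# surface points, with residuals modelled on the objective G1 above.
import numpy as np
from scipy.optimize import least_squares

def implicit_F(pts, ax, ay, az, e1, e2):
    x, y, z = np.abs(pts).T
    xy = ((x / ax) ** (2 / e2) + (y / ay) ** (2 / e2)) ** (e2 / e1)
    return xy + (z / az) ** (2 / e1)

def residuals(theta, pts):
    ax, ay, az, e1, e2 = theta
    # sqrt(ax*ay*az) mirrors the volume-penalising factor ax*ay*az in G1.
    return np.sqrt(ax * ay * az) * (implicit_F(pts, ax, ay, az, e1, e2) ** e1 - 1.0)

# Synthetic noisy surface points from a known ground-truth superellipsoid.
rng = np.random.default_rng(1)
truth = (2.0, 1.0, 0.5, 0.8, 1.3)      # ax, ay, az, eps1, eps2
eta = rng.uniform(-np.pi / 2, np.pi / 2, 500)
om = rng.uniform(-np.pi, np.pi, 500)
sp = lambda b, e: np.sign(b) * np.abs(b) ** e
pts = np.stack([truth[0] * sp(np.cos(eta), truth[3]) * sp(np.cos(om), truth[4]),
                truth[1] * sp(np.cos(eta), truth[3]) * sp(np.sin(om), truth[4]),
                truth[2] * sp(np.sin(eta), truth[3])], axis=1)
pts += rng.normal(scale=0.01, size=pts.shape)

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0, 1.0, 1.0], args=(pts,),
                    bounds=([0.01] * 3 + [0.1] * 2, [10.0] * 3 + [1.9] * 2))
print("recovered:", np.round(fit.x, 2), " truth:", truth)
```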
A probabilistic method called EMS is designed to deal with noise and outliers . [ 6 ] In this method, the superellipsoid recovery is reformulated as a maximum likelihood estimation problem, and an optimization method is proposed to avoid local minima utilizing geometric similarities of the superellipsoids.
The method is further extended by modeling with nonparametric Bayesian techniques to recover multiple superellipsoids simultaneously. [ 14 ] | https://en.wikipedia.org/wiki/Superellipsoid
In 1931 RCA introduced a new line of Superette radio receivers. These used the superheterodyne principle but were lower cost than earlier products, in an attempt to maintain sales during the onset of the Great Depression.
Edwin Howard Armstrong invented the superheterodyne receiver in 1918. [ 1 ] Armstrong and RCA (under David Sarnoff ) had a business and technical relationship, that would last into the 1940s.
Funded by RCA, Armstrong designed a radio that could receive stations easily without complex tuning or interference from other stations. Early radio designs by Armstrong and others produced radios that were very sensitive but hard to keep under control due to the nature of radio waves operating at higher frequencies. Armstrong's superheterodyne receiver converted these high frequencies into one lower frequency. This allowed the radio to be more stable and easier to tune, with less interference. [ 2 ]
The result was the RCA Radiola AR-812 and Radiola VIII Superheterodynes in 1924, the world's first consumer superheterodyne receivers. In 1924, these cost $224 and $475 respectively. [ 3 ] Up to 1930, RCA controlled the superheterodyne patent, and any radio manufacturer that wanted to build one had to pay royalties to RCA. In 1928 RCA launched their first AC operated superheterodyne radio, the Radiola 60 ($147 in 1928 dollars). [ 4 ]
All these superhets were large and expensive. In the 1930s the Depression was in full force, and the trend in radios was toward smaller, more compact and lower-cost sets. RCA introduced the Superette line in 1931 with the R-7 table and R-9 console models.
From 1931 RCA produced a range of small mantel radios called the Superette , which at introduction sold for $57.50 not including the vacuum tubes. [ 5 ] [ 6 ] "Super" was derived from superheterodyne . Probably the most well known is the Model R-7, which was produced in several versions.
RCA also produced a console version, the model R-9. The R-7 and R-9 share identical chassis (using RCA tubes 280, 227, 235, 245 and 224). There were several variants of the R-7 table (mantel) model: the R-7A using pentode output tubes (RCA 247), the R-7DC and R-9DC for 110 VDC power, and the R-7 LW for long wave listening. These early superheterodynes had no AVC (automatic volume control), so stronger stations were louder than weaker ones.
RCA produced spinoffs of the Superette during the 1931-32 model year. These models are based on the R-7 design but are not called Superette in RCA's literature. [ 7 ] [ 8 ] "Superette" was reserved for the R-7 and R-9 models. [ 7 ] | https://en.wikipedia.org/wiki/Superette_(radio) |
Superexchange or Kramers–Anderson superexchange interaction , is a prototypical indirect exchange coupling between neighboring magnetic moments (usually next-nearest neighboring cations , see the schematic illustration of MnO below) by virtue of exchanging electrons through a non-magnetic anion known as the superexchange center . In this way, it differs from direct exchange, in which there is direct overlap of electron wave function from nearest neighboring cations not involving an intermediary anion or exchange center. While direct exchange can be either ferromagnetic or antiferromagnetic, the superexchange interaction is usually antiferromagnetic, preferring opposite alignment of the connected magnetic moments. Similar to the direct exchange, superexchange calls for the combined effect of Pauli exclusion principle and Coulomb's repulsion of the electrons. If the superexchange center and the magnetic moments it connects to are non-collinear, namely the atomic bonds are canted, the superexchange will be accompanied by the antisymmetric exchange known as the Dzyaloshinskii–Moriya interaction , which prefers orthogonal alignment of neighboring magnetic moments. In this situation, the symmetric and antisymmetric contributions compete with each other and can result in versatile magnetic spin textures such as magnetic skyrmions .
Superexchange was theoretically proposed by Hendrik Kramers in 1934, when he noticed that in crystals like Manganese(II) oxide (MnO), there are manganese atoms that interact with one another despite having nonmagnetic oxygen atoms between them. [ 1 ] Phillip Anderson later refined Kramers' model in 1950. [ 2 ]
A set of semi-empirical rules were developed by John B. Goodenough and Junjiro Kanamori [ ja ] in the 1950s. [ 3 ] [ 4 ] [ 5 ] These rules, now referred to as the Goodenough–Kanamori rules , have proven highly successful in rationalizing the magnetic properties of a wide range of materials on a qualitative level. They are based on the symmetry relations and electron occupancy of the overlapping atomic orbitals (assuming the localized Heitler–London , or valence-bond , model is more representative of the chemical bonding than is the delocalized, or Hund–Mulliken–Bloch , model). Essentially, the Pauli exclusion principle dictates that between two magnetic ions with half-occupied orbitals, which couple through an intermediary non-magnetic ion (e.g. O 2− ), the superexchange will be strongly anti-ferromagnetic while the coupling between an ion with a filled orbital and one with a half-filled orbital will be ferromagnetic. The coupling between an ion with either a half-filled or filled orbital and one with a vacant orbital can be either antiferromagnetic or ferromagnetic, but generally favors ferromagnetic. [ 6 ] When multiple types of interactions are present simultaneously, the antiferromagnetic one is generally dominant, since it is independent of the intra-atomic exchange term. [ 7 ] For simple cases, the Goodenough–Kanamori rules readily allow the prediction of the net magnetic exchange expected for the coupling between ions. Complications begin to arise in various situations:
Double exchange is a related magnetic coupling interaction proposed by Clarence Zener to account for electrical transport properties. It differs from superexchange in the following manner: in superexchange, the occupancy of the d-shell of the two metal ions is the same or differs by two, and the electrons are localized. For other occupations (double exchange), the electrons are itinerant (delocalized); this results in the material displaying magnetic exchange coupling, as well as metallic conductivity.
The p orbitals from oxygen and d orbitals from manganese can form a direct exchange.
There is antiferromagnetic order because the singlet state is energetically favoured. This configuration allows delocalization of the electrons involved, which lowers their kinetic energy. [ citation needed ]
Quantum-mechanical perturbation theory results in an antiferromagnetic interaction of the spins of neighboring Mn atoms with the energy operator ( Hamiltonian )
{\displaystyle {\hat {H}}\propto {\frac {t_{\text{Mn,O}}^{4}}{U^{3}}}\,{\hat {S}}_{1}\cdot {\hat {S}}_{2},}
where t Mn,O is the so-called hopping energy between a Mn 3 d and the oxygen p orbitals, while U is a so-called Hubbard energy for Mn. The expression S ^ 1 ⋅ S ^ 2 {\displaystyle {\hat {S}}_{1}\cdot {\hat {S}}_{2}} is the scalar product between the Mn spin-vector operators ( Heisenberg model ).
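The sign convention can be checked with a two-spin toy calculation: with an antiferromagnetic coupling constant J > 0 multiplying Ŝ₁·Ŝ₂, the singlet lies below the triplet. The sketch below diagonalizes the operator for two spin-1/2 moments; the Mn ions actually carry larger spins, and spin-1/2 is used here only to keep the matrices small.

```python
# Diagonalize H = J * S1.S2 for two spin-1/2 moments and show that J > 0
# (the antiferromagnetic case) puts the singlet below the triplet.
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
identity = np.eye(2)

s1_dot_s2 = sum(np.kron(s, identity) @ np.kron(identity, s) for s in (sx, sy, sz))

J = 1.0                                       # antiferromagnetic for J > 0
energies = np.linalg.eigvalsh(J * s1_dot_s2)
print(np.round(energies, 3))                  # [-0.75  0.25  0.25  0.25]
# The non-degenerate level at -3J/4 is the singlet; the threefold level at
# +J/4 is the triplet, so antiparallel alignment is favoured for J > 0.
```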
It has been proven that, due to the multiple energy scales present in the model for superexchange, perturbation theory is not in general convergent and is thus not an appropriate method for deriving this interaction between spins, [ 8 ] and that this undoubtedly accounts for the incorrect qualitative characterization of some transition-metal oxide compounds as Mott–Hubbard, rather than charge-transfer, insulators. This is particularly apparent whenever the p - d orbital energy difference is not extremely large compared with the d -electron correlation energy U . | https://en.wikipedia.org/wiki/Superexchange
SUPERFAMILY is a database and search platform of structural and functional annotation for all proteins and genomes. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] It classifies amino acid sequences into known structural domains , especially into SCOP superfamilies. [ 8 ] [ 9 ] Domains are functional, structural, and evolutionary units that form proteins. Domains of common ancestry are grouped into superfamilies. The domains and domain superfamilies are defined and described in SCOP. [ 8 ] [ 10 ] Superfamilies are groups of proteins which have structural evidence to support a common evolutionary ancestor but may not have detectable sequence homology . [ 11 ]
The SUPERFAMILY annotation is based on a collection of hidden Markov models (HMM), which represent structural protein domains at the SCOP superfamily level. [ 12 ] [ 13 ] A superfamily groups together domains which have an evolutionary relationship. The annotation is produced by scanning protein sequences from completely sequenced genomes against the hidden Markov models.
For each protein you can:
For each genome you can:
For each superfamily you can:
All annotation, models and the database dump are freely available for download to everyone.
Sequence Search
Submit a protein or DNA sequence for SCOP superfamily and family level classification using the SUPERFAMILY HMM's. Sequences can be submitted either by raw input or by uploading a file, but all must be in FASTA format . Sequences can be amino acids, a fixed frame nucleotide sequence, or all frames of a submitted nucleotide sequence. Up to 1000 sequences can be run at a time.
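As a practical note on preparing input, a submission larger than the 1000-sequence limit mentioned above can be split into batch files beforehand. The sketch below assumes the Biopython library; the input file name is hypothetical.

```python
# Split a FASTA file into batches of at most 1000 sequences, the per-run limit
# stated above. Assumes Biopython; the input file name is hypothetical.
from Bio import SeqIO

def write_batches(fasta_path, batch_size=1000):
    records = list(SeqIO.parse(fasta_path, "fasta"))
    for i in range(0, len(records), batch_size):
        out_name = f"batch_{i // batch_size:03d}.fasta"
        SeqIO.write(records[i:i + batch_size], out_name, "fasta")
        print(f"{out_name}: {len(records[i:i + batch_size])} sequences")

write_batches("my_proteins.fasta")   # hypothetical input file
```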
Keyword Search
Search the database using a superfamily, family, or species name plus a sequence, SCOP, PDB , or HMM ID's. A successful search yields the class, folds, superfamilies, families, and individual proteins matching the query.
Domain Assignments
The database has domain assignments, alignments, and architectures for completely sequenced eukaryotic and prokaryotic organisms, plus sequence collections.
Comparative Genomics Tools
Browse unusual (over- and under-represented) superfamilies and families, adjacent domain pair lists and graphs, unique domain pairs, domain combinations, domain architecture co-occurrence networks , and domain distribution across taxonomic kingdoms for each organism.
Genome Statistics
For each genome: number of sequences, number of sequences with assignment, percentage of sequences with assignment, percentage total sequence coverage, number of domains assigned, number of superfamilies assigned, number of families assigned, average superfamily size, percentage produced by duplication, average sequence length, average length matched, number of domain pairs, and number of unique domain architectures.
Gene Ontology
Domain-centric Gene Ontology (GO) automatically annotated.
Due to the growing gap between sequenced proteins and known functions of proteins, it is becoming increasingly important to develop a more automated method for functionally annotating proteins, especially for proteins with known domains. SUPERFAMILY uses protein-level GO annotations taken from the Gene Ontology Annotation (GOA) project, which offers high-quality GO annotations directly associated with proteins in UniProtKB over a wide spectrum of species. [ 15 ] SUPERFAMILY has generated GO annotations for evolutionarily close domains (at the SCOP family level) and distant domains (at the SCOP superfamily level).
Phenotype Ontology
Domain-centric phenotype /anatomy ontology including Disease Ontology, Human Phenotype, Mouse Phenotype, Worm Phenotype, Yeast Phenotype, Fly Phenotype, Fly Anatomy, Zebrafish Anatomy, Xenopus Anatomy, and Arabidopsis Plant.
Superfamily Annotation
InterPro abstracts for over 1,000 superfamilies, and Gene Ontology (GO) annotation for over 700 superfamilies. This feature allows for the direct annotation of key features, functions, and structures of a superfamily.
Functional Annotation
Functional annotation of SCOP 1.73 superfamilies.
The SUPERFAMILY database uses a scheme of 50 detailed function categories which map to 7 general function categories, similar to the scheme used in the COG database. [ 16 ] A general function assigned to a superfamily was used to reflect the major function for that superfamily. The general categories of function are:
Each domain superfamily in SCOP classes a to g was manually annotated using this scheme, [ 17 ] [ 18 ] [ 19 ] and the information used was provided by SCOP , [ 10 ] InterPro , [ 20 ] [ 21 ] Pfam , [ 22 ] Swiss-Prot , [ 23 ] and various literature sources.
Phylogenetic Trees
Create custom phylogenetic trees by selecting 3 or more available genomes on the SUPERFAMILY site. Trees are generated using heuristic parsimony methods, and are based on protein domain architecture data for all genomes in SUPERFAMILY. Genome combinations, or specific clades, can be displayed as individual trees.
Similar Domain Architectures
This feature allows the user to find the 10 domain architectures which are most similar to the domain architecture of interest.
Hidden Markov Models
Produce SCOP domain assignments for a sequence using the SUPERFAMILY hidden Markov models .
Profile Comparison
Find remote domain matches when the HMM search fails to find a significant match. Profile comparison (PRC) [ 24 ] is used to align and score two profile HMMs.
Web Services
Distributed Annotation Server and linking to SUPERFAMILY.
Downloads
Sequences, assignments, models, MySQL database, and scripts - updated weekly.
The SUPERFAMILY database has numerous research applications and has been used by many research groups for various studies. It can serve either as a database for proteins that the user wishes to examine with other methods, or as a means of assigning a function and structure to a novel or uncharacterized protein. One study found SUPERFAMILY to be very adept at correctly assigning an appropriate function and structure to a large number of domains of unknown function by comparing them to the database's hidden Markov models. [ 25 ] Another study used SUPERFAMILY to generate a data set of 1,733 fold superfamily domains (FSFs) for a comparison of proteomes and functionomes to identify the origin of cellular diversification. [ 26 ] | https://en.wikipedia.org/wiki/Superfamily_database
Superfecundation is the fertilization of two or more ova from the same menstrual cycle by sperm from the same or different males, whether through separate acts of intercourse or during a single sexual encounter with multiple males (e.g. double penetration). This can potentially result in twin babies that have different biological fathers. [ 1 ] [ 2 ]
The term superfecundation is derived from fecund , meaning able to produce offspring. Homopaternal superfecundation is a form of twinning where fertilization of two separate ova occurs as a result of two or more distinct instances of intercourse or insemination with the same male partner or donor, leading to fraternal twins . [ 3 ] Heteropaternal superfecundation , on the other hand, is an atypical form of twinning that results in twins that are genetically half siblings – sharing the same biological mother, but with different biological fathers.
Sperm cells can live inside a human female's body for up to five days, and once ovulation occurs, the egg remains viable for 12–48 hours before it begins to disintegrate. [ 4 ] Superfecundation most commonly happens within hours or days of the first instance of fertilization with ova released during the same cycle.
Ovulation is normally suspended during pregnancy to prevent further ova becoming fertilized and to help increase the chances of a full-term pregnancy. However, if an ovum is atypically released after the female has already become pregnant from a previous ovulation, a second pregnancy can occur, with the two embryos at different stages of development. This is known as superfetation . [ 5 ]
Heteropaternal superfecundation is common in animals such as cats and dogs. Stray dogs can produce litters in which every puppy has a different sire. Though rare in humans, cases have been documented. In one study on humans, the frequency was 2.4% among dizygotic twins whose parents had been involved in paternity suits . [ 6 ]
In 1982, twins who were born with two different skin colors were discovered to be conceived as a result of heteropaternal superfecundation. [ 5 ] [ 7 ]
In 1995, a young woman gave birth to diamniotic monochorionic twins , who were originally assumed to be monozygotic twins until a paternity suit led to a DNA test. This led to the discovery that the twins had different fathers. [ 3 ]
In 2001, a case of spontaneous monopaternal superfecundation was reported after a woman undergoing IVF treatments gave birth to quintuplets after only two embryos were implanted. Genetic testing supported that the twinning was not a result of the embryos splitting, and that all five boys shared the same father. [ 5 ] [ 8 ]
In 2008, a paternity test conducted on live television on the Maury Show established a case of heteropaternal superfecundation. [ 9 ]
In 2015, a judge in New Jersey ruled that a man should only pay child support for one of two twins, as he was only the biological father to one of the children. [ 10 ]
In 2017, an IVF-implanted surrogate mother gave birth to two children: one genetically unrelated child from an implanted embryo, and a biological child from her own egg and her husband's sperm. [ 11 ]
In 2019, a Chinese woman was reported to have given birth to two babies from different fathers, one of whom was her husband and the other a man with whom she was having an affair during the same period. [ 12 ]
In 2022, a 19-year-old Brazilian from Mineiros gave birth to twins from two different fathers with whom she had sex on the same day. [ 13 ]
Greek mythology contains many accounts of superfecundation. | https://en.wikipedia.org/wiki/Superfecundation
Superferromagnetism is the magnetism of an ensemble of magnetically interacting super-moment-bearing material particles that would be superparamagnetic if they were not interacting. [ 1 ] Nanoparticles of iron oxides, such as ferrihydrite (nominally FeOOH), often cluster and interact magnetically. These interactions change the magnetic behaviours of the nanoparticles (both above and below their blocking temperatures) and lead to an ordered low-temperature phase with non-randomly oriented particle super-moments.
The phenomenon appears to have been first described, and the term "superferromagnetism" introduced, by Bostanjoglo and Röhkel for a metallic film system. [ 2 ] A decade later, the same phenomenon was rediscovered and described as occurring in small-particle systems. [ 3 ] [ 4 ] The discovery is attributed as such in the scientific literature. [ 5 ]
| https://en.wikipedia.org/wiki/Superferromagnetism
Superficial velocity (or superficial flow velocity ), in engineering of multiphase flows and flows in porous media , is a hypothetical (artificial) flow velocity calculated as if the given phase or fluid were the only one flowing or present in a given cross sectional area. Other phases, particles, the skeleton of the porous medium, etc. present in the channel are disregarded.
Superficial velocity is used in many engineering equations because it is the value which is usually readily known and unambiguous, whereas real velocity is often variable from place to place.
Superficial velocity can be expressed as
u s = Q / A {\displaystyle u_{s}={\frac {Q}{A}}}
where:
u s {\displaystyle u_{s}} is the superficial velocity of the considered phase, Q {\displaystyle Q} is the volume flow rate of that phase, and A {\displaystyle A} is the cross-sectional area of the channel.
Using the concept of porosity , the dependence between the advection velocity and the superficial velocity can be expressed as (for one-dimensional flow)
u = u s / φ {\displaystyle u={\frac {u_{s}}{\varphi }}}
where:
u {\displaystyle u} is the advection (interstitial) velocity and φ {\displaystyle \varphi } is the porosity of the medium.
The local physical velocity can still differ from the average fluid velocity because the vector of the local fluid flow does not have to be parallel to that of the average flow. Also, there may be local constrictions in the flow channel. | https://en.wikipedia.org/wiki/Superficial_velocity
Superfluid helium-4 ( helium II or He-II ) is the superfluid form of helium-4 , the most common isotope of the element helium . A superfluid is a state of matter in which matter behaves like a fluid with zero viscosity . The substance, which resembles other liquids such as helium I (conventional, non-superfluid liquid helium), flows without friction past any surface, which allows it to continue to circulate over obstructions and through pores in containers which hold it, subject only to its own inertia . [ 1 ]
The formation of the superfluid is a manifestation of the formation of a Bose–Einstein condensate of helium atoms. This condensation occurs in liquid helium-4 at a far higher temperature (2.17 K) than it does in helium-3 (2.5 mK) because each atom of helium-4 is a boson particle, by virtue of its zero spin . Helium-3, however, is a fermion particle, which can form bosons only by pairing with another helium-3 atom at much lower temperatures, in a weaker process that is similar to the electron pairing in superconductivity . [ 2 ]
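For context (an estimate not taken from the article's text), the transition temperature of an ideal Bose gas is
T c = 2 π ℏ 2 m k B ( n / ζ ( 3 / 2 ) ) 2 / 3 {\displaystyle T_{c}={\frac {2\pi \hbar ^{2}}{mk_{B}}}\left({\frac {n}{\zeta (3/2)}}\right)^{2/3},}
which, evaluated with the atomic mass and number density of liquid helium-4, gives roughly 3.1 K, the same order of magnitude as the observed 2.17 K transition; the difference reflects the strong interatomic interactions in the liquid.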
Known as a major facet in the study of quantum hydrodynamics and macroscopic quantum phenomena , the superfluidity effect was discovered by Pyotr Kapitsa [ 3 ] and, independently, by John F. Allen and Don Misener [ 4 ] in 1937. Onnes possibly observed the superfluid phase transition on August 2, 1911, the same day that he observed superconductivity in mercury. [ 5 ] It has since been described through phenomenological and microscopic theories.
In the 1950s, Hall and Vinen performed experiments establishing the existence of quantized vortex lines in superfluid helium. [ 6 ] In the 1960s, Rayfield and Reif established the existence of quantized vortex rings. [ 7 ] Packard has observed the intersection of vortex lines with the free surface of the fluid, [ 8 ] and Avenel and Varoquaux have studied the Josephson effect in superfluid helium-4. [ 9 ] In 2006, a group at the University of Maryland visualized quantized vortices by using small tracer particles of solid hydrogen . [ 10 ]
In the early 2000s, physicists created a Fermionic condensate from pairs of ultra-cold fermionic atoms. Under certain conditions, fermion pairs form diatomic molecules and undergo Bose–Einstein condensation . At the other limit, the fermions (most notably superconducting electrons) form Cooper pairs which also exhibit superfluidity. This work with ultra-cold atomic gases has allowed scientists to study the region in between these two extremes, known as the BEC-BCS crossover .
Supersolids may also have been discovered in 2004 by physicists at Penn State University . When helium-4 is cooled below about 200 mK under high pressures, a fraction (≈1%) of the solid appears to become superfluid. [ 11 ] [ 12 ] By quench cooling or lengthening the annealing time, thus increasing or decreasing the defect density respectively, it was shown, via torsional oscillator experiment, that the supersolid fraction could be made to range from 20% to completely non-existent. This suggested that the supersolid nature of helium-4 is not intrinsic to helium-4 but a property of helium-4 and disorder. [ 13 ] [ 14 ] Some emerging theories posit that the supersolid signal observed in helium-4 was actually an observation of either a superglass state [ 15 ] or intrinsically superfluid grain boundaries in the helium-4 crystal. [ 16 ]
In the field of chemistry, superfluid helium-4 has been successfully used in spectroscopic techniques as a quantum solvent . Referred to as superfluid helium droplet spectroscopy (SHeDS), it is of great interest in studies of gas molecules: a single molecule solvated in the superfluid medium retains effective rotational freedom, allowing it to behave much as it would in the gas phase. Droplets of superfluid helium also have a characteristic temperature of about 0.4 K, which cools the solvated molecule(s) to its ground or nearly ground rovibronic state.
Superfluids are also used in high-precision devices such as gyroscopes , which allow the measurement of some theoretically predicted gravitational effects (for an example, see Gravity Probe B ).
The Infrared Astronomical Satellite IRAS , launched in January 1983 to gather infrared data, was cooled by 73 kilograms of superfluid helium, maintaining a temperature of 1.6 K (−271.55 °C). When superfluid helium-4 is used in conjunction with helium-3 , temperatures as low as 40 mK are routinely achieved in extreme low temperature experiments. The helium-3, in liquid state at 3.2 K, can be evaporated into the superfluid helium-4, where it acts as a gas due to the latter's properties as a Bose–Einstein condensate. This evaporation pulls energy from the overall system, which can be pumped out in a way completely analogous to normal refrigeration techniques. (See dilution refrigerator .)
Superfluid-helium technology is used to extend the temperature range of cryocoolers to lower temperatures. So far the limit is 1.19 K, but there is a potential to reach 0.7 K. [ 17 ]
Superfluids, such as helium-4 below the lambda point (known, for simplicity, as helium II ), exhibit many unusual properties. A superfluid acts as if it were a mixture of a normal component, with all the properties of a normal fluid, and a superfluid component. The superfluid component has zero viscosity and zero entropy. Application of heat to a spot in superfluid helium results in a flow of the normal component which takes care of the heat transport at relatively high velocity (up to 20 cm/s) which leads to a very high effective thermal conductivity.
Many ordinary liquids, like alcohol or petroleum, creep up solid walls, driven by their surface tension. Liquid helium also has this property, but, in the case of He-II, the flow of the liquid in the layer is restricted not by its viscosity but by a critical velocity which is about 20 cm/s. This is a fairly high velocity, so superfluid helium can flow relatively easily up the wall of containers, over the top, and down to the same level as the surface of the liquid inside the container, in a siphon effect. It was, however, observed that the flow through a nanoporous membrane becomes restricted if the pore diameter is less than 0.7 nm (i.e. roughly three times the classical diameter of a helium atom), suggesting that the unusual hydrodynamic properties of He arise at a larger scale than in classical liquid helium. [ 18 ]
Another fundamental property becomes visible if a superfluid is placed in a rotating container. Instead of rotating uniformly with the container, the rotating state consists of quantized vortices. That is, when the container is rotated at speeds below the first critical angular velocity, the liquid remains perfectly stationary. Once the first critical angular velocity is reached, the superfluid forms a vortex. The vortex strength is quantized, that is, a superfluid can only spin at certain "allowed" values. Rotation in a normal fluid, like water, is not quantized. If the rotation speed is increased, more and more quantized vortices are formed, which arrange in regular patterns similar to the Abrikosov lattice in a superconductor.
Although the phenomenologies of the superfluid states of helium-4 and helium-3 are very similar, the microscopic details of the transitions are very different. Helium-4 atoms are bosons, and their superfluidity can be understood in terms of the Bose–Einstein statistics that they obey. Specifically, the superfluidity of helium-4 can be regarded as a consequence of Bose–Einstein condensation in an interacting system. On the other hand, helium-3 atoms are fermions, and the superfluid transition in this system is described by a generalization of the BCS theory of superconductivity. In it, Cooper pairing takes place between atoms rather than electrons , and the attractive interaction between them is mediated by spin fluctuations rather than phonons . (See fermion condensate .) A unified description of superconductivity and superfluidity is possible in terms of gauge symmetry breaking .
Figure 1 is the phase diagram of 4 He. [ 19 ] It is a pressure-temperature (p-T) diagram indicating the solid and liquid regions, separated by the melting curve, and the liquid and gas regions, separated by the vapor-pressure line. The vapor-pressure line ends at the critical point , where the difference between gas and liquid disappears. The diagram shows the remarkable property that 4 He is liquid even at absolute zero : 4 He is only solid at pressures above 25 bar .
Figure 1 also shows the λ-line. This is the line that separates two fluid regions in the phase diagram indicated by He-I and He-II. In the He-I region the helium behaves like a normal fluid; in the He-II region the helium is superfluid.
The name lambda-line comes from the specific heat – temperature plot which has the shape of the Greek letter λ. [ 20 ] [ 21 ] See figure 2, which shows a peak at 2.172 K, the so-called λ-point of 4 He.
Below the lambda line the liquid can be described by the so-called two-fluid model. It behaves as if it consists of two components: a normal component, which behaves like a normal fluid, and a superfluid component with zero viscosity and zero entropy. The ratios of the respective densities ρ n /ρ and ρ s /ρ, with ρ n (ρ s ) the density of the normal (superfluid) component and ρ the total density, depend on temperature and are represented in figure 3. [ 22 ] By lowering the temperature, the fraction of the superfluid density increases from zero at T λ to one at zero kelvin. Below 1 K the helium is almost completely superfluid.
It is possible to create density waves of the normal component (and hence of the superfluid component since ρ n + ρ s = constant) which are similar to ordinary sound waves. This effect is called second sound . Due to the temperature dependence of ρ n (figure 3) these waves in ρ n are also temperature waves.
The equation of motion for the superfluid component, in a somewhat simplified form, [ 23 ] is given by Newton's law
F → = M 4 d v → s d t . {\displaystyle {\vec {F}}=M_{4}{\frac {\mathrm {d} {\vec {v}}_{s}}{\mathrm {d} t}}.}
The mass M 4 {\textstyle M_{4}} is the molar mass of 4 He, and v → s {\textstyle {\vec {v}}_{s}} is the velocity of the superfluid component. The time derivative is the so-called hydrodynamic derivative, i.e. the rate of increase of the velocity when moving with the fluid. In the case of superfluid 4 He in the gravitational field the force is given by [ 24 ] [ 25 ]
F → = − ∇ → ( μ + M 4 g z ) . {\displaystyle {\vec {F}}=-{\vec {\nabla }}(\mu +M_{4}gz).}
In this expression μ {\textstyle \mu } is the molar chemical potential, g {\textstyle g} the gravitational acceleration, and z {\textstyle z} the vertical coordinate. Combining this force with Newton's law above gives the equation of motion of the superfluid component,
M 4 d v → s d t = − ∇ → ( μ + M 4 g z ) , {\displaystyle M_{4}{\frac {\mathrm {d} {\vec {v}}_{s}}{\mathrm {d} t}}=-{\vec {\nabla }}(\mu +M_{4}gz),} (1)
which is referred to below as Eq. (1).
Eq. (1) only holds if v s {\textstyle v_{s}} is below a certain critical value, which usually is determined by the diameter of the flow channel. [ 26 ] [ 27 ]
In classical mechanics the force is often the gradient of a potential energy. Eq. (1) shows that, in the case of the superfluid component, the force contains a term due to the gradient of the chemical potential . This is the origin of the remarkable properties of He-II such as the fountain effect.
In order to rewrite Eq. (1) in more familiar form we use the general formula
d μ = V m d p − S m d T . {\displaystyle \mathrm {d} \mu =V_{m}\,\mathrm {d} p-S_{m}\,\mathrm {d} T.} (2)
Here S m {\textstyle S_{m}} is the molar entropy and V m {\textstyle V_{m}} the molar volume. With Eq. (2) μ ( p , T ) {\textstyle \mu (p,T)} can be found by a line integration in the p {\textstyle p} – T {\textstyle T} plane. First we integrate from the origin ( 0 , 0 ) {\textstyle (0,0)} to ( p , 0 ) {\textstyle (p,0)} , so at T = 0 {\textstyle T=0} . Next we integrate from ( p , 0 ) {\textstyle (p,0)} to ( p , T ) {\textstyle (p,T)} , so with constant pressure (see figure 6). In the first integral d T = 0 {\textstyle \mathrm {d} T=0} and in the second d p = 0 {\textstyle \mathrm {d} p=0} . With Eq. (2) we obtain
μ ( p , T ) = μ ( 0 , 0 ) + ∫ 0 p V m d p ′ − ∫ 0 T S m d T ′ . {\displaystyle \mu (p,T)=\mu (0,0)+\int _{0}^{p}V_{m}\,\mathrm {d} p'-\int _{0}^{T}S_{m}\,\mathrm {d} T'.} (3)
We are interested only in cases where p {\textstyle p} is small so that V m {\textstyle V_{m}} is practically constant. So
∫ 0 p V m d p ′ ≈ V m 0 p , {\displaystyle \int _{0}^{p}V_{m}\,\mathrm {d} p'\approx V_{m0}\,p,} (4)
where V m 0 {\textstyle V_{m0}} is the molar volume of the liquid at T = 0 {\textstyle T=0} and p = 0 {\textstyle p=0} . The other term in Eq. (3) is also written as a product of V m 0 {\textstyle V_{m0}} and a quantity p f {\displaystyle p_{f}} which has the dimension of pressure
V m 0 p f = ∫ 0 T S m d T ′ . {\displaystyle V_{m0}\,p_{f}=\int _{0}^{T}S_{m}\,\mathrm {d} T'.} (5)
The pressure p f {\textstyle p_{f}} is called the fountain pressure. It can be calculated from the entropy of 4 He which, in turn, can be calculated from the heat capacity. For T = T λ {\textstyle T=T_{\lambda }} the fountain pressure is equal to 0.692 bar. With a density of liquid helium of 125 kg/m 3 and g = 9.8 m/s 2 this corresponds to a liquid-helium column with a height of 56 meters. So, in many experiments, the fountain pressure has a bigger effect on the motion of the superfluid helium than gravity.
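The quoted column height follows from the hydrostatic relation, a quick consistency check using the values given above:
h = p f / ( ρ g ) = 0.692 × 10 5 Pa / ( 125 kg/m 3 × 9.8 m/s 2 ) ≈ 56 m . {\displaystyle h=p_{f}/(\rho g)=0.692\times 10^{5}\,{\text{Pa}}/(125\,{\text{kg/m}}^{3}\times 9.8\,{\text{m/s}}^{2})\approx 56\,{\text{m}}.}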
With Eqs. (4) and (5) , Eq. (3) obtains the form
μ ( p , T ) = μ ( 0 , 0 ) + V m 0 ( p − p f ) . {\displaystyle \mu (p,T)=\mu (0,0)+V_{m0}(p-p_{f}).} (6)
Substitution of Eq. (6) in (1) gives
d v → s d t = − 1 ρ 0 ∇ → ( p − p f ) − g ∇ → z , {\displaystyle {\frac {\mathrm {d} {\vec {v}}_{s}}{\mathrm {d} t}}=-{\frac {1}{\rho _{0}}}{\vec {\nabla }}(p-p_{f})-g{\vec {\nabla }}z,} (7)
with ρ 0 = M 4 / V m 0 {\textstyle \rho _{0}=M_{4}/V_{m0}} the density of liquid 4 He at zero pressure and temperature.
Eq. (7) shows that the superfluid component is accelerated by gradients in the pressure and in the gravitational field, as usual, but also by a gradient in the fountain pressure.
So far Eq. (5) has only mathematical meaning, but in special experimental arrangements p f {\textstyle p_{f}} can show up as a real pressure. Figure 7 shows two vessels both containing He-II. The left vessel is supposed to be at zero kelvin ( T l = 0 {\textstyle T_{l}=0} ) and zero pressure ( p l = 0 {\textstyle p_{l}=0} ). The vessels are connected by a so-called superleak. This is a tube, filled with a very fine powder, so the flow of the normal component is blocked. However, the superfluid component can flow through this superleak without any problem (below a critical velocity of about 20 cm/s). In the steady state v s = 0 {\textstyle v_{s}=0} so Eq. (7) implies
p l − p f l + ρ 0 g z l = p r − p f r + ρ 0 g z r , {\displaystyle p_{l}-p_{fl}+\rho _{0}gz_{l}=p_{r}-p_{fr}+\rho _{0}gz_{r},}
where the indexes l {\textstyle l} and r {\textstyle r} apply to the left and right side of the superleak respectively. In this particular case p l = 0 {\textstyle p_{l}=0} , z l = z r {\textstyle z_{l}=z_{r}} , and p f l = 0 {\textstyle p_{fl}=0} (since T l = 0 {\textstyle T_{l}=0} ). Consequently,
0 = p r − p f r . {\displaystyle 0=p_{r}-p_{fr}.}
This means that the pressure in the right vessel is equal to the fountain pressure at T r {\textstyle T_{r}} .
In an experiment, arranged as in figure 8, a fountain can be created. The fountain effect is used to drive the circulation of 3 He in dilution refrigerators. [ 28 ] [ 29 ]
Figure 9 depicts a heat-conduction experiment between two temperatures T H {\textstyle T_{H}} and T L {\textstyle T_{L}} connected by a tube filled with He-II. When heat is applied to the hot end a pressure builds up at the hot end according to Eq. (7) . This pressure drives the normal component from the hot end to the cold end according to
p H − p L = η n Z V ˙ n . {\displaystyle p_{H}-p_{L}=\eta _{n}Z{\dot {V}}_{n}.}
Here η n {\textstyle \eta _{n}} is the viscosity of the normal component, [ 30 ] Z {\textstyle Z} some geometrical factor, and V ˙ n {\textstyle {\dot {V}}_{n}} the volume flow. The normal flow is balanced by a flow of the superfluid component from the cold to the hot end. At the end sections a normal-to-superfluid conversion takes place and vice versa. So heat is transported, not by heat conduction, but by convection. This kind of heat transport is very effective, so the effective thermal conductivity of He-II is far higher than that of the best heat-conducting materials. The situation is comparable to that in heat pipes , where heat is transported via gas–liquid conversion. The high thermal conductivity of He-II is used to stabilize superconducting magnets such as in the Large Hadron Collider at CERN .
L. D. Landau 's phenomenological and semi-microscopic theory of superfluidity of helium-4 earned him the Nobel Prize in Physics in 1962. Assuming that sound waves are the most important excitations in helium-4 at low temperatures, he showed that helium-4 flowing past a wall would not spontaneously create excitations if the flow velocity was less than the sound velocity. In this model, the sound velocity is the "critical velocity" above which superfluidity is destroyed. (The critical velocity observed in helium-4 is actually lower than the sound velocity, but this model is useful to illustrate the concept.) Landau also showed that the sound wave and other excitations could equilibrate with one another and flow separately from the rest of the helium-4, which is known as the "condensate".
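Landau's stability argument is commonly summarized by a critical-velocity criterion; a standard statement of it, quoted here for context rather than taken from the article's text, is that flow is dissipationless as long as its speed stays below
v c = min p ϵ ( p ) / p , {\displaystyle v_{c}=\min _{p}{\frac {\epsilon (p)}{p}},}
where ϵ ( p ) {\textstyle \epsilon (p)} is the energy of an elementary excitation of momentum p {\textstyle p} ; for a purely phonon-like spectrum ϵ = c p {\textstyle \epsilon =cp} this minimum equals the sound velocity c {\textstyle c} .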
From the momentum and flow velocity of the excitations he could then define a "normal fluid" density, which is zero at zero temperature and increases with temperature. At the so-called Lambda temperature, where the normal fluid density equals the total density, the helium-4 is no longer superfluid.
To explain the early specific heat data on superfluid helium-4, Landau posited the existence of a type of excitation he called a " roton ", but as better data became available he considered that the "roton" was the same as a high momentum version of sound.
The Landau theory does not elaborate on the microscopic structure of the superfluid component of liquid helium. [ 31 ] The first attempts to create a microscopic theory of the superfluid component itself were made by London [ 32 ] and, subsequently, by Tisza. [ 33 ] [ 34 ] Other microscopic models have been proposed by different authors. Their main objective is to derive the form of the inter-particle potential between helium atoms in the superfluid state from first principles of quantum mechanics .
To date, a number of models of this kind have been proposed, including: models with vortex rings, hard-sphere models, and Gaussian cluster theories.
Landau thought that vorticity entered superfluid helium-4 by vortex sheets, but such sheets have since been shown to be unstable. Lars Onsager and, later independently, Feynman showed that vorticity enters by quantized vortex lines. They also developed the idea of quantum vortex rings. Arie Bijl in the 1940s, [ 35 ] and Richard Feynman around 1955, [ 36 ] developed microscopic theories for the roton, which was observed shortly afterwards in inelastic neutron scattering experiments by Palevsky. Later on, Feynman admitted that his model gives only qualitative agreement with experiment. [ 37 ] [ 38 ]
The models are based on the simplified form of the inter-particle potential between helium-4 atoms in the superfluid phase. Namely, the potential is assumed to be of the hard-sphere type. [ 39 ] [ 40 ] [ 41 ] In these models the famous Landau (roton) spectrum of excitations is qualitatively reproduced.
This is a two-scale approach which describes the superfluid component of liquid helium-4. It consists of two nested models linked via parametric space . The short-wavelength part describes the interior structure of the fluid element using a non-perturbative approach based on the logarithmic Schrödinger equation ; it suggests the Gaussian -like behaviour of the element's interior density and interparticle interaction potential. The long-wavelength part is the quantum many-body theory of such elements which deals with their dynamics and interactions. [ 42 ] The approach provides a unified description of the phonon , maxon and roton excitations, and has noteworthy agreement with experiment: with one essential fitting parameter one reproduces with high accuracy the Landau roton spectrum, the sound velocity, and the structure factor of superfluid helium-4. [ 43 ] This model utilizes the general theory of quantum Bose liquids with logarithmic nonlinearities [ 44 ] which is based on introducing a dissipative -type contribution to energy related to the quantum Everett–Hirschman entropy function . [ 45 ] [ 46 ] | https://en.wikipedia.org/wiki/Superfluid_helium-4
Superfluidity is the characteristic property of a fluid with zero viscosity which therefore flows without any loss of kinetic energy . When stirred, a superfluid forms vortices that continue to rotate indefinitely. Superfluidity occurs in two isotopes of helium ( helium-3 and helium-4 ) when they are liquefied by cooling to cryogenic temperatures. It is also a property of various other exotic states of matter theorized to exist in astrophysics , high-energy physics , and theories of quantum gravity . [ 1 ] The theory of superfluidity was developed by Soviet theoretical physicists Lev Landau and Isaak Khalatnikov .
Superfluidity often co-occurs with Bose–Einstein condensation , but neither phenomenon is directly related to the other; not all Bose–Einstein condensates can be regarded as superfluids, and not all superfluids are Bose–Einstein condensates. [ 2 ] Even when superfluidity and condensation co-occur, their magnitudes are not linked: at low temperature, liquid helium has a large superfluid fraction but a low condensate fraction; while a weakly interacting BEC, with almost unity condensate fraction, can display a vanishing superfluid fraction. [ 3 ]
Superfluids have some potential practical uses, such as dissolving substances in a quantum solvent .
Superfluidity was discovered in helium-4 by Pyotr Kapitsa [ 4 ] and independently by John F. Allen and Don Misener [ 5 ] in 1937. Onnes possibly observed the superfluid phase transition on August 2, 1911, the same day that he observed superconductivity in mercury. [ 6 ] It has since been described through phenomenological and microscopic theories .
In liquid helium-4, the superfluidity occurs at far higher temperatures than it does in helium-3 . Each atom of helium-4 is a boson particle, by virtue of its integer spin . A helium-3 atom is a fermion particle; it can form bosons only by pairing with another particle like itself, which occurs at much lower temperatures. The discovery of superfluidity in helium-3 was the basis for the award of the 1996 Nobel Prize in Physics . [ 1 ] This process is similar to the electron pairing in superconductivity .
Superfluidity in an ultracold fermionic gas was experimentally proven by Wolfgang Ketterle and his team who observed quantum vortices in lithium-6 at a temperature of 50 nK at MIT in April 2005. [ 7 ] [ 8 ] Such vortices had previously been observed in an ultracold bosonic gas using rubidium-87 in 2000, [ 9 ] and more recently in two-dimensional gases . [ 10 ] As early as 1999, Lene Hau created such a condensate using sodium atoms [ 11 ] for the purpose of slowing light, and later stopping it completely. [ 12 ] Her team subsequently used this system of compressed light [ 13 ] to generate the superfluid analogue of shock waves and tornadoes: [ 14 ]
These dramatic excitations result in the formation of solitons that in turn decay into quantized vortices —created far out of equilibrium, in pairs of opposite circulation—revealing directly the process of superfluid breakdown in Bose–Einstein condensates. With a double light-roadblock setup, we can generate controlled collisions between shock waves resulting in completely unexpected, nonlinear excitations. We have observed hybrid structures consisting of vortex rings embedded in dark solitonic shells. The vortex rings act as 'phantom propellers' leading to very rich excitation dynamics.
The idea that superfluidity exists inside neutron stars was first proposed by Arkady Migdal . [ 15 ] [ 16 ] By analogy with electrons inside superconductors forming Cooper pairs because of electron–lattice interaction, it is expected that nucleons in a neutron star at sufficiently high density and low temperature can also form Cooper pairs because of the long-range attractive nuclear force and lead to superfluidity and superconductivity. [ 17 ]
Superfluid vacuum theory (SVT) is an approach in theoretical physics and quantum mechanics where the physical vacuum is viewed as superfluid. [ citation needed ]
The ultimate goal of the approach is to develop scientific models that unify quantum mechanics (describing three of the four known fundamental interactions) with gravity . This makes SVT a candidate for the theory of quantum gravity and an extension of the Standard Model . [ citation needed ]
It is hoped that development of such a theory would unify all fundamental interactions into a single consistent model, describing all known interactions and elementary particles as different manifestations of the same entity, the superfluid vacuum. [ citation needed ]
On the macroscopic scale, an analogous phenomenon has been suggested to occur in the murmurations of starlings : the rapidity with which flight patterns change mimics the phase change leading to superfluidity in some liquid states. [ 18 ]
Light behaves like a superfluid in various situations, such as in the formation of Poisson's spot . Like the liquid helium described above, light travels along the surface of an obstacle before continuing along its trajectory. Since light is not affected by local gravity, its "level" is set by its own trajectory and velocity. Another example is how a beam of light travels through the hole of an aperture and along its back side before diffraction. [ citation needed ] | https://en.wikipedia.org/wiki/Superfluidity
The supergalactic coordinate system is a reference frame for the supercluster of galaxies that contains the Milky Way galaxy. It is referenced to a local, relatively flat collection of galaxy clusters that is used to define the supergalactic plane.
The supergalactic plane is more or less perpendicular to the plane of the Milky Way; the angle is 84.5°. As viewed from Earth, the plane traces a great circle across the sky through the following constellations :
In the 1950s the astronomer Gérard de Vaucouleurs recognized the existence of a flattened "local supercluster" in the environment of the Milky Way from the Shapley-Ames Catalog . He noticed that when one plots nearby galaxies in 3D, they lie more or less on a plane. A flattened distribution of nebulae had earlier been noted by William Herschel . Vera Rubin had also identified the supergalactic plane in the 1950s, but her data remained unpublished. [ 1 ] In 1976 the plane delineated by these galaxies was used to define the equator of the supergalactic coordinate system that de Vaucouleurs developed. In the years thereafter, as more observational data became available, de Vaucouleurs' and Rubin's findings about the existence of the plane proved right.
Based on the supergalactic coordinate system of de Vaucouleurs, surveys [ 2 ] in recent years determined the positions of nearby galaxy clusters relative to the supergalactic plane. Amongst others the Virgo cluster , the Norma cluster (including the Great Attractor ), the Coma cluster, the Pisces-Perseus supercluster , the Hydra cluster, the Centaurus cluster, the Pisces-Cetus supercluster and the Shapley Concentration were found to be near the supergalactic plane.
The supergalactic coordinate system is a spherical coordinate system in which the equator lies in the supergalactic plane.
By convention, supergalactic latitude is usually abbreviated SGB, and supergalactic longitude as SGL, by analogy to b and l conventionally used for galactic coordinates .
The transformation from a triple of Cartesian supergalactic coordinates to a triple of galactic coordinates is
The left column in this matrix is the image of the origin of the supergalactic system in the galactic system, the right column in this matrix is the image of the north pole of the supergalactic coordinates in the galactic system, and the middle column is the cross product (to complete the right-handed coordinate system).
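A minimal sketch (not from the article) of converting a direction between the galactic and supergalactic frames, assuming the astropy package is available; the input direction below is chosen arbitrarily for illustration.

import astropy.units as u
from astropy.coordinates import SkyCoord

# A direction given in galactic longitude/latitude (l, b); the values are arbitrary.
direction = SkyCoord(l=137.37 * u.deg, b=0.0 * u.deg, frame="galactic")

# Transform it to the supergalactic frame and read off (SGL, SGB).
sg = direction.transform_to("supergalactic")
print(sg.sgl.deg, sg.sgb.deg)

# Unit vector of the same direction expressed along the Cartesian (SGX, SGY, SGZ) axes.
print(sg.cartesian.xyz)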
The corresponding Cartesian coordinate system allows points to be specified by coordinates (SGX, SGY, SGZ). In this system the supergalactic z-axis points towards the north supergalactic pole, the supergalactic x-axis points towards the zero point, and the supergalactic y-axis is perpendicular to both. | https://en.wikipedia.org/wiki/Supergalactic_coordinate_system
A superglass is a phase of matter which is characterized by superfluidity and a frozen amorphous structure at the same time. [ 1 ]
J.C. Séamus Davis theorised that frozen helium-4 (at 0.2 K and 50 atm) may be a superglass. [ 1 ] [ 2 ] [ 3 ]
| https://en.wikipedia.org/wiki/Superglass